repo_name
stringclasses 1
value | pr_number
int64 4.12k
11.2k
| pr_title
stringlengths 9
107
| pr_description
stringlengths 107
5.48k
| author
stringlengths 4
18
| date_created
unknown | date_merged
unknown | previous_commit
stringlengths 40
40
| pr_commit
stringlengths 40
40
| query
stringlengths 118
5.52k
| before_content
stringlengths 0
7.93M
| after_content
stringlengths 0
7.93M
| label
int64 -1
1
|
---|---|---|---|---|---|---|---|---|---|---|---|---|
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Non-preemptive Shortest Job First
Shortest execution time process is chosen for the next execution.
https://www.guru99.com/shortest-job-first-sjf-scheduling.html
https://en.wikipedia.org/wiki/Shortest_job_next
"""
from __future__ import annotations
from statistics import mean
def calculate_waitingtime(
arrival_time: list[int], burst_time: list[int], no_of_processes: int
) -> list[int]:
"""
Calculate the waiting time of each processes
Return: The waiting time for each process.
>>> calculate_waitingtime([0,1,2], [10, 5, 8], 3)
[0, 9, 13]
>>> calculate_waitingtime([1,2,2,4], [4, 6, 3, 1], 4)
[0, 7, 4, 1]
>>> calculate_waitingtime([0,0,0], [12, 2, 10],3)
[12, 0, 2]
"""
waiting_time = [0] * no_of_processes
remaining_time = [0] * no_of_processes
# Initialize remaining_time to waiting_time.
for i in range(no_of_processes):
remaining_time[i] = burst_time[i]
ready_process: list[int] = []
completed = 0
total_time = 0
# When processes are not completed,
# A process whose arrival time has passed \
# and has remaining execution time is put into the ready_process.
# The shortest process in the ready_process, target_process is executed.
while completed != no_of_processes:
ready_process = []
target_process = -1
for i in range(no_of_processes):
if (arrival_time[i] <= total_time) and (remaining_time[i] > 0):
ready_process.append(i)
if len(ready_process) > 0:
target_process = ready_process[0]
for i in ready_process:
if remaining_time[i] < remaining_time[target_process]:
target_process = i
total_time += burst_time[target_process]
completed += 1
remaining_time[target_process] = 0
waiting_time[target_process] = (
total_time - arrival_time[target_process] - burst_time[target_process]
)
else:
total_time += 1
return waiting_time
def calculate_turnaroundtime(
burst_time: list[int], no_of_processes: int, waiting_time: list[int]
) -> list[int]:
"""
Calculate the turnaround time of each process.
Return: The turnaround time for each process.
>>> calculate_turnaroundtime([0,1,2], 3, [0, 10, 15])
[0, 11, 17]
>>> calculate_turnaroundtime([1,2,2,4], 4, [1, 8, 5, 4])
[2, 10, 7, 8]
>>> calculate_turnaroundtime([0,0,0], 3, [12, 0, 2])
[12, 0, 2]
"""
turn_around_time = [0] * no_of_processes
for i in range(no_of_processes):
turn_around_time[i] = burst_time[i] + waiting_time[i]
return turn_around_time
if __name__ == "__main__":
print("[TEST CASE 01]")
no_of_processes = 4
burst_time = [2, 5, 3, 7]
arrival_time = [0, 0, 0, 0]
waiting_time = calculate_waitingtime(arrival_time, burst_time, no_of_processes)
turn_around_time = calculate_turnaroundtime(
burst_time, no_of_processes, waiting_time
)
# Printing the Result
print("PID\tBurst Time\tArrival Time\tWaiting Time\tTurnaround Time")
for i, process_id in enumerate(list(range(1, 5))):
print(
f"{process_id}\t{burst_time[i]}\t\t\t{arrival_time[i]}\t\t\t\t"
f"{waiting_time[i]}\t\t\t\t{turn_around_time[i]}"
)
print(f"\nAverage waiting time = {mean(waiting_time):.5f}")
print(f"Average turnaround time = {mean(turn_around_time):.5f}")
| """
Non-preemptive Shortest Job First
Shortest execution time process is chosen for the next execution.
https://www.guru99.com/shortest-job-first-sjf-scheduling.html
https://en.wikipedia.org/wiki/Shortest_job_next
"""
from __future__ import annotations
from statistics import mean
def calculate_waitingtime(
arrival_time: list[int], burst_time: list[int], no_of_processes: int
) -> list[int]:
"""
Calculate the waiting time of each processes
Return: The waiting time for each process.
>>> calculate_waitingtime([0,1,2], [10, 5, 8], 3)
[0, 9, 13]
>>> calculate_waitingtime([1,2,2,4], [4, 6, 3, 1], 4)
[0, 7, 4, 1]
>>> calculate_waitingtime([0,0,0], [12, 2, 10],3)
[12, 0, 2]
"""
waiting_time = [0] * no_of_processes
remaining_time = [0] * no_of_processes
# Initialize remaining_time to waiting_time.
for i in range(no_of_processes):
remaining_time[i] = burst_time[i]
ready_process: list[int] = []
completed = 0
total_time = 0
# When processes are not completed,
# A process whose arrival time has passed \
# and has remaining execution time is put into the ready_process.
# The shortest process in the ready_process, target_process is executed.
while completed != no_of_processes:
ready_process = []
target_process = -1
for i in range(no_of_processes):
if (arrival_time[i] <= total_time) and (remaining_time[i] > 0):
ready_process.append(i)
if len(ready_process) > 0:
target_process = ready_process[0]
for i in ready_process:
if remaining_time[i] < remaining_time[target_process]:
target_process = i
total_time += burst_time[target_process]
completed += 1
remaining_time[target_process] = 0
waiting_time[target_process] = (
total_time - arrival_time[target_process] - burst_time[target_process]
)
else:
total_time += 1
return waiting_time
def calculate_turnaroundtime(
burst_time: list[int], no_of_processes: int, waiting_time: list[int]
) -> list[int]:
"""
Calculate the turnaround time of each process.
Return: The turnaround time for each process.
>>> calculate_turnaroundtime([0,1,2], 3, [0, 10, 15])
[0, 11, 17]
>>> calculate_turnaroundtime([1,2,2,4], 4, [1, 8, 5, 4])
[2, 10, 7, 8]
>>> calculate_turnaroundtime([0,0,0], 3, [12, 0, 2])
[12, 0, 2]
"""
turn_around_time = [0] * no_of_processes
for i in range(no_of_processes):
turn_around_time[i] = burst_time[i] + waiting_time[i]
return turn_around_time
if __name__ == "__main__":
print("[TEST CASE 01]")
no_of_processes = 4
burst_time = [2, 5, 3, 7]
arrival_time = [0, 0, 0, 0]
waiting_time = calculate_waitingtime(arrival_time, burst_time, no_of_processes)
turn_around_time = calculate_turnaroundtime(
burst_time, no_of_processes, waiting_time
)
# Printing the Result
print("PID\tBurst Time\tArrival Time\tWaiting Time\tTurnaround Time")
for i, process_id in enumerate(list(range(1, 5))):
print(
f"{process_id}\t{burst_time[i]}\t\t\t{arrival_time[i]}\t\t\t\t"
f"{waiting_time[i]}\t\t\t\t{turn_around_time[i]}"
)
print(f"\nAverage waiting time = {mean(waiting_time):.5f}")
print(f"Average turnaround time = {mean(turn_around_time):.5f}")
| -1 |
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| import string
def decrypt(message: str) -> None:
"""
>>> decrypt('TMDETUX PMDVU')
Decryption using Key #0: TMDETUX PMDVU
Decryption using Key #1: SLCDSTW OLCUT
Decryption using Key #2: RKBCRSV NKBTS
Decryption using Key #3: QJABQRU MJASR
Decryption using Key #4: PIZAPQT LIZRQ
Decryption using Key #5: OHYZOPS KHYQP
Decryption using Key #6: NGXYNOR JGXPO
Decryption using Key #7: MFWXMNQ IFWON
Decryption using Key #8: LEVWLMP HEVNM
Decryption using Key #9: KDUVKLO GDUML
Decryption using Key #10: JCTUJKN FCTLK
Decryption using Key #11: IBSTIJM EBSKJ
Decryption using Key #12: HARSHIL DARJI
Decryption using Key #13: GZQRGHK CZQIH
Decryption using Key #14: FYPQFGJ BYPHG
Decryption using Key #15: EXOPEFI AXOGF
Decryption using Key #16: DWNODEH ZWNFE
Decryption using Key #17: CVMNCDG YVMED
Decryption using Key #18: BULMBCF XULDC
Decryption using Key #19: ATKLABE WTKCB
Decryption using Key #20: ZSJKZAD VSJBA
Decryption using Key #21: YRIJYZC URIAZ
Decryption using Key #22: XQHIXYB TQHZY
Decryption using Key #23: WPGHWXA SPGYX
Decryption using Key #24: VOFGVWZ ROFXW
Decryption using Key #25: UNEFUVY QNEWV
"""
for key in range(len(string.ascii_uppercase)):
translated = ""
for symbol in message:
if symbol in string.ascii_uppercase:
num = string.ascii_uppercase.find(symbol)
num = num - key
if num < 0:
num = num + len(string.ascii_uppercase)
translated = translated + string.ascii_uppercase[num]
else:
translated = translated + symbol
print(f"Decryption using Key #{key}: {translated}")
def main() -> None:
message = input("Encrypted message: ")
message = message.upper()
decrypt(message)
if __name__ == "__main__":
import doctest
doctest.testmod()
main()
| import string
def decrypt(message: str) -> None:
"""
>>> decrypt('TMDETUX PMDVU')
Decryption using Key #0: TMDETUX PMDVU
Decryption using Key #1: SLCDSTW OLCUT
Decryption using Key #2: RKBCRSV NKBTS
Decryption using Key #3: QJABQRU MJASR
Decryption using Key #4: PIZAPQT LIZRQ
Decryption using Key #5: OHYZOPS KHYQP
Decryption using Key #6: NGXYNOR JGXPO
Decryption using Key #7: MFWXMNQ IFWON
Decryption using Key #8: LEVWLMP HEVNM
Decryption using Key #9: KDUVKLO GDUML
Decryption using Key #10: JCTUJKN FCTLK
Decryption using Key #11: IBSTIJM EBSKJ
Decryption using Key #12: HARSHIL DARJI
Decryption using Key #13: GZQRGHK CZQIH
Decryption using Key #14: FYPQFGJ BYPHG
Decryption using Key #15: EXOPEFI AXOGF
Decryption using Key #16: DWNODEH ZWNFE
Decryption using Key #17: CVMNCDG YVMED
Decryption using Key #18: BULMBCF XULDC
Decryption using Key #19: ATKLABE WTKCB
Decryption using Key #20: ZSJKZAD VSJBA
Decryption using Key #21: YRIJYZC URIAZ
Decryption using Key #22: XQHIXYB TQHZY
Decryption using Key #23: WPGHWXA SPGYX
Decryption using Key #24: VOFGVWZ ROFXW
Decryption using Key #25: UNEFUVY QNEWV
"""
for key in range(len(string.ascii_uppercase)):
translated = ""
for symbol in message:
if symbol in string.ascii_uppercase:
num = string.ascii_uppercase.find(symbol)
num = num - key
if num < 0:
num = num + len(string.ascii_uppercase)
translated = translated + string.ascii_uppercase[num]
else:
translated = translated + symbol
print(f"Decryption using Key #{key}: {translated}")
def main() -> None:
message = input("Encrypted message: ")
message = message.upper()
decrypt(message)
if __name__ == "__main__":
import doctest
doctest.testmod()
main()
| -1 |
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| #!/usr/bin/perl
use strict;
use warnings;
use IPC::Open2;
# An example hook script to integrate Watchman
# (https://facebook.github.io/watchman/) with git to speed up detecting
# new and modified files.
#
# The hook is passed a version (currently 1) and a time in nanoseconds
# formatted as a string and outputs to stdout all files that have been
# modified since the given time. Paths must be relative to the root of
# the working tree and separated by a single NUL.
#
# To enable this hook, rename this file to "query-watchman" and set
# 'git config core.fsmonitor .git/hooks/query-watchman'
#
my ($version, $time) = @ARGV;
# Check the hook interface version
if ($version == 1) {
# convert nanoseconds to seconds
# subtract one second to make sure watchman will return all changes
$time = int ($time / 1000000000) - 1;
} else {
die "Unsupported query-fsmonitor hook version '$version'.\n" .
"Falling back to scanning...\n";
}
my $git_work_tree;
if ($^O =~ 'msys' || $^O =~ 'cygwin') {
$git_work_tree = Win32::GetCwd();
$git_work_tree =~ tr/\\/\//;
} else {
require Cwd;
$git_work_tree = Cwd::cwd();
}
my $retry = 1;
launch_watchman();
sub launch_watchman {
my $pid = open2(\*CHLD_OUT, \*CHLD_IN, 'watchman -j --no-pretty')
or die "open2() failed: $!\n" .
"Falling back to scanning...\n";
# In the query expression below we're asking for names of files that
# changed since $time but were not transient (ie created after
# $time but no longer exist).
#
# To accomplish this, we're using the "since" generator to use the
# recency index to select candidate nodes and "fields" to limit the
# output to file names only.
my $query = <<" END";
["query", "$git_work_tree", {
"since": $time,
"fields": ["name"]
}]
END
print CHLD_IN $query;
close CHLD_IN;
my $response = do {local $/; <CHLD_OUT>};
die "Watchman: command returned no output.\n" .
"Falling back to scanning...\n" if $response eq "";
die "Watchman: command returned invalid output: $response\n" .
"Falling back to scanning...\n" unless $response =~ /^\{/;
my $json_pkg;
eval {
require JSON::XS;
$json_pkg = "JSON::XS";
1;
} or do {
require JSON::PP;
$json_pkg = "JSON::PP";
};
my $o = $json_pkg->new->utf8->decode($response);
if ($retry > 0 and $o->{error} and $o->{error} =~ m/unable to resolve root .* directory (.*) is not watched/) {
print STDERR "Adding '$git_work_tree' to watchman's watch list.\n";
$retry--;
qx/watchman watch "$git_work_tree"/;
die "Failed to make watchman watch '$git_work_tree'.\n" .
"Falling back to scanning...\n" if $? != 0;
# Watchman will always return all files on the first query so
# return the fast "everything is dirty" flag to git and do the
# Watchman query just to get it over with now so we won't pay
# the cost in git to look up each individual file.
print "/\0";
eval { launch_watchman() };
exit 0;
}
die "Watchman: $o->{error}.\n" .
"Falling back to scanning...\n" if $o->{error};
binmode STDOUT, ":utf8";
local $, = "\0";
print @{$o->{files}};
}
| #!/usr/bin/perl
use strict;
use warnings;
use IPC::Open2;
# An example hook script to integrate Watchman
# (https://facebook.github.io/watchman/) with git to speed up detecting
# new and modified files.
#
# The hook is passed a version (currently 1) and a time in nanoseconds
# formatted as a string and outputs to stdout all files that have been
# modified since the given time. Paths must be relative to the root of
# the working tree and separated by a single NUL.
#
# To enable this hook, rename this file to "query-watchman" and set
# 'git config core.fsmonitor .git/hooks/query-watchman'
#
my ($version, $time) = @ARGV;
# Check the hook interface version
if ($version == 1) {
# convert nanoseconds to seconds
# subtract one second to make sure watchman will return all changes
$time = int ($time / 1000000000) - 1;
} else {
die "Unsupported query-fsmonitor hook version '$version'.\n" .
"Falling back to scanning...\n";
}
my $git_work_tree;
if ($^O =~ 'msys' || $^O =~ 'cygwin') {
$git_work_tree = Win32::GetCwd();
$git_work_tree =~ tr/\\/\//;
} else {
require Cwd;
$git_work_tree = Cwd::cwd();
}
my $retry = 1;
launch_watchman();
sub launch_watchman {
my $pid = open2(\*CHLD_OUT, \*CHLD_IN, 'watchman -j --no-pretty')
or die "open2() failed: $!\n" .
"Falling back to scanning...\n";
# In the query expression below we're asking for names of files that
# changed since $time but were not transient (ie created after
# $time but no longer exist).
#
# To accomplish this, we're using the "since" generator to use the
# recency index to select candidate nodes and "fields" to limit the
# output to file names only.
my $query = <<" END";
["query", "$git_work_tree", {
"since": $time,
"fields": ["name"]
}]
END
print CHLD_IN $query;
close CHLD_IN;
my $response = do {local $/; <CHLD_OUT>};
die "Watchman: command returned no output.\n" .
"Falling back to scanning...\n" if $response eq "";
die "Watchman: command returned invalid output: $response\n" .
"Falling back to scanning...\n" unless $response =~ /^\{/;
my $json_pkg;
eval {
require JSON::XS;
$json_pkg = "JSON::XS";
1;
} or do {
require JSON::PP;
$json_pkg = "JSON::PP";
};
my $o = $json_pkg->new->utf8->decode($response);
if ($retry > 0 and $o->{error} and $o->{error} =~ m/unable to resolve root .* directory (.*) is not watched/) {
print STDERR "Adding '$git_work_tree' to watchman's watch list.\n";
$retry--;
qx/watchman watch "$git_work_tree"/;
die "Failed to make watchman watch '$git_work_tree'.\n" .
"Falling back to scanning...\n" if $? != 0;
# Watchman will always return all files on the first query so
# return the fast "everything is dirty" flag to git and do the
# Watchman query just to get it over with now so we won't pay
# the cost in git to look up each individual file.
print "/\0";
eval { launch_watchman() };
exit 0;
}
die "Watchman: $o->{error}.\n" .
"Falling back to scanning...\n" if $o->{error};
binmode STDOUT, ":utf8";
local $, = "\0";
print @{$o->{files}};
}
| -1 |
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| import base64
def base32_encode(string: str) -> bytes:
"""
Encodes a given string to base32, returning a bytes-like object
>>> base32_encode("Hello World!")
b'JBSWY3DPEBLW64TMMQQQ===='
>>> base32_encode("123456")
b'GEZDGNBVGY======'
>>> base32_encode("some long complex string")
b'ONXW2ZJANRXW4ZZAMNXW24DMMV4CA43UOJUW4ZY='
"""
# encoded the input (we need a bytes like object)
# then, b32encoded the bytes-like object
return base64.b32encode(string.encode("utf-8"))
def base32_decode(encoded_bytes: bytes) -> str:
"""
Decodes a given bytes-like object to a string, returning a string
>>> base32_decode(b'JBSWY3DPEBLW64TMMQQQ====')
'Hello World!'
>>> base32_decode(b'GEZDGNBVGY======')
'123456'
>>> base32_decode(b'ONXW2ZJANRXW4ZZAMNXW24DMMV4CA43UOJUW4ZY=')
'some long complex string'
"""
# decode the bytes from base32
# then, decode the bytes-like object to return as a string
return base64.b32decode(encoded_bytes).decode("utf-8")
if __name__ == "__main__":
test = "Hello World!"
encoded = base32_encode(test)
print(encoded)
decoded = base32_decode(encoded)
print(decoded)
| import base64
def base32_encode(string: str) -> bytes:
"""
Encodes a given string to base32, returning a bytes-like object
>>> base32_encode("Hello World!")
b'JBSWY3DPEBLW64TMMQQQ===='
>>> base32_encode("123456")
b'GEZDGNBVGY======'
>>> base32_encode("some long complex string")
b'ONXW2ZJANRXW4ZZAMNXW24DMMV4CA43UOJUW4ZY='
"""
# encoded the input (we need a bytes like object)
# then, b32encoded the bytes-like object
return base64.b32encode(string.encode("utf-8"))
def base32_decode(encoded_bytes: bytes) -> str:
"""
Decodes a given bytes-like object to a string, returning a string
>>> base32_decode(b'JBSWY3DPEBLW64TMMQQQ====')
'Hello World!'
>>> base32_decode(b'GEZDGNBVGY======')
'123456'
>>> base32_decode(b'ONXW2ZJANRXW4ZZAMNXW24DMMV4CA43UOJUW4ZY=')
'some long complex string'
"""
# decode the bytes from base32
# then, decode the bytes-like object to return as a string
return base64.b32decode(encoded_bytes).decode("utf-8")
if __name__ == "__main__":
test = "Hello World!"
encoded = base32_encode(test)
print(encoded)
decoded = base32_decode(encoded)
print(decoded)
| -1 |
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| def multiplication_table(number: int, number_of_terms: int) -> str:
"""
Prints the multiplication table of a given number till the given number of terms
>>> print(multiplication_table(3, 5))
3 * 1 = 3
3 * 2 = 6
3 * 3 = 9
3 * 4 = 12
3 * 5 = 15
>>> print(multiplication_table(-4, 6))
-4 * 1 = -4
-4 * 2 = -8
-4 * 3 = -12
-4 * 4 = -16
-4 * 5 = -20
-4 * 6 = -24
"""
return "\n".join(
f"{number} * {i} = {number * i}" for i in range(1, number_of_terms + 1)
)
if __name__ == "__main__":
print(multiplication_table(number=5, number_of_terms=10))
| def multiplication_table(number: int, number_of_terms: int) -> str:
"""
Prints the multiplication table of a given number till the given number of terms
>>> print(multiplication_table(3, 5))
3 * 1 = 3
3 * 2 = 6
3 * 3 = 9
3 * 4 = 12
3 * 5 = 15
>>> print(multiplication_table(-4, 6))
-4 * 1 = -4
-4 * 2 = -8
-4 * 3 = -12
-4 * 4 = -16
-4 * 5 = -20
-4 * 6 = -24
"""
return "\n".join(
f"{number} * {i} = {number * i}" for i in range(1, number_of_terms + 1)
)
if __name__ == "__main__":
print(multiplication_table(number=5, number_of_terms=10))
| -1 |
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| #!/usr/bin/env python3
"""Provide the functionality to manipulate a single bit."""
def set_bit(number: int, position: int) -> int:
"""
Set the bit at position to 1.
Details: perform bitwise or for given number and X.
Where X is a number with all the bits – zeroes and bit on given
position – one.
>>> set_bit(0b1101, 1) # 0b1111
15
>>> set_bit(0b0, 5) # 0b100000
32
>>> set_bit(0b1111, 1) # 0b1111
15
"""
return number | (1 << position)
def clear_bit(number: int, position: int) -> int:
"""
Set the bit at position to 0.
Details: perform bitwise and for given number and X.
Where X is a number with all the bits – ones and bit on given
position – zero.
>>> clear_bit(0b10010, 1) # 0b10000
16
>>> clear_bit(0b0, 5) # 0b0
0
"""
return number & ~(1 << position)
def flip_bit(number: int, position: int) -> int:
"""
Flip the bit at position.
Details: perform bitwise xor for given number and X.
Where X is a number with all the bits – zeroes and bit on given
position – one.
>>> flip_bit(0b101, 1) # 0b111
7
>>> flip_bit(0b101, 0) # 0b100
4
"""
return number ^ (1 << position)
def is_bit_set(number: int, position: int) -> bool:
"""
Is the bit at position set?
Details: Shift the bit at position to be the first (smallest) bit.
Then check if the first bit is set by anding the shifted number with 1.
>>> is_bit_set(0b1010, 0)
False
>>> is_bit_set(0b1010, 1)
True
>>> is_bit_set(0b1010, 2)
False
>>> is_bit_set(0b1010, 3)
True
>>> is_bit_set(0b0, 17)
False
"""
return ((number >> position) & 1) == 1
def get_bit(number: int, position: int) -> int:
"""
Get the bit at the given position
Details: perform bitwise and for the given number and X,
Where X is a number with all the bits – zeroes and bit on given position – one.
If the result is not equal to 0, then the bit on the given position is 1, else 0.
>>> get_bit(0b1010, 0)
0
>>> get_bit(0b1010, 1)
1
>>> get_bit(0b1010, 2)
0
>>> get_bit(0b1010, 3)
1
"""
return int((number & (1 << position)) != 0)
if __name__ == "__main__":
import doctest
doctest.testmod()
| #!/usr/bin/env python3
"""Provide the functionality to manipulate a single bit."""
def set_bit(number: int, position: int) -> int:
"""
Set the bit at position to 1.
Details: perform bitwise or for given number and X.
Where X is a number with all the bits – zeroes and bit on given
position – one.
>>> set_bit(0b1101, 1) # 0b1111
15
>>> set_bit(0b0, 5) # 0b100000
32
>>> set_bit(0b1111, 1) # 0b1111
15
"""
return number | (1 << position)
def clear_bit(number: int, position: int) -> int:
"""
Set the bit at position to 0.
Details: perform bitwise and for given number and X.
Where X is a number with all the bits – ones and bit on given
position – zero.
>>> clear_bit(0b10010, 1) # 0b10000
16
>>> clear_bit(0b0, 5) # 0b0
0
"""
return number & ~(1 << position)
def flip_bit(number: int, position: int) -> int:
"""
Flip the bit at position.
Details: perform bitwise xor for given number and X.
Where X is a number with all the bits – zeroes and bit on given
position – one.
>>> flip_bit(0b101, 1) # 0b111
7
>>> flip_bit(0b101, 0) # 0b100
4
"""
return number ^ (1 << position)
def is_bit_set(number: int, position: int) -> bool:
"""
Is the bit at position set?
Details: Shift the bit at position to be the first (smallest) bit.
Then check if the first bit is set by anding the shifted number with 1.
>>> is_bit_set(0b1010, 0)
False
>>> is_bit_set(0b1010, 1)
True
>>> is_bit_set(0b1010, 2)
False
>>> is_bit_set(0b1010, 3)
True
>>> is_bit_set(0b0, 17)
False
"""
return ((number >> position) & 1) == 1
def get_bit(number: int, position: int) -> int:
"""
Get the bit at the given position
Details: perform bitwise and for the given number and X,
Where X is a number with all the bits – zeroes and bit on given position – one.
If the result is not equal to 0, then the bit on the given position is 1, else 0.
>>> get_bit(0b1010, 0)
0
>>> get_bit(0b1010, 1)
1
>>> get_bit(0b1010, 2)
0
>>> get_bit(0b1010, 3)
1
"""
return int((number & (1 << position)) != 0)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Information on 2's complement: https://en.wikipedia.org/wiki/Two%27s_complement
def twos_complement(number: int) -> str:
"""
Take in a negative integer 'number'.
Return the two's complement representation of 'number'.
>>> twos_complement(0)
'0b0'
>>> twos_complement(-1)
'0b11'
>>> twos_complement(-5)
'0b1011'
>>> twos_complement(-17)
'0b101111'
>>> twos_complement(-207)
'0b100110001'
>>> twos_complement(1)
Traceback (most recent call last):
...
ValueError: input must be a negative integer
"""
if number > 0:
raise ValueError("input must be a negative integer")
binary_number_length = len(bin(number)[3:])
twos_complement_number = bin(abs(number) - (1 << binary_number_length))[3:]
twos_complement_number = (
(
"1"
+ "0" * (binary_number_length - len(twos_complement_number))
+ twos_complement_number
)
if number < 0
else "0"
)
return "0b" + twos_complement_number
if __name__ == "__main__":
import doctest
doctest.testmod()
| # Information on 2's complement: https://en.wikipedia.org/wiki/Two%27s_complement
def twos_complement(number: int) -> str:
"""
Take in a negative integer 'number'.
Return the two's complement representation of 'number'.
>>> twos_complement(0)
'0b0'
>>> twos_complement(-1)
'0b11'
>>> twos_complement(-5)
'0b1011'
>>> twos_complement(-17)
'0b101111'
>>> twos_complement(-207)
'0b100110001'
>>> twos_complement(1)
Traceback (most recent call last):
...
ValueError: input must be a negative integer
"""
if number > 0:
raise ValueError("input must be a negative integer")
binary_number_length = len(bin(number)[3:])
twos_complement_number = bin(abs(number) - (1 << binary_number_length))[3:]
twos_complement_number = (
(
"1"
+ "0" * (binary_number_length - len(twos_complement_number))
+ twos_complement_number
)
if number < 0
else "0"
)
return "0b" + twos_complement_number
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| class MaxFenwickTree:
"""
Maximum Fenwick Tree
More info: https://cp-algorithms.com/data_structures/fenwick.html
---------
>>> ft = MaxFenwickTree(5)
>>> ft.query(0, 5)
0
>>> ft.update(4, 100)
>>> ft.query(0, 5)
100
>>> ft.update(4, 0)
>>> ft.update(2, 20)
>>> ft.query(0, 5)
20
>>> ft.update(4, 10)
>>> ft.query(2, 5)
20
>>> ft.query(1, 5)
20
>>> ft.update(2, 0)
>>> ft.query(0, 5)
10
>>> ft = MaxFenwickTree(10000)
>>> ft.update(255, 30)
>>> ft.query(0, 10000)
30
>>> ft = MaxFenwickTree(6)
>>> ft.update(5, 1)
>>> ft.query(5, 6)
1
>>> ft = MaxFenwickTree(6)
>>> ft.update(0, 1000)
>>> ft.query(0, 1)
1000
"""
def __init__(self, size: int) -> None:
"""
Create empty Maximum Fenwick Tree with specified size
Parameters:
size: size of Array
Returns:
None
"""
self.size = size
self.arr = [0] * size
self.tree = [0] * size
@staticmethod
def get_next(index: int) -> int:
"""
Get next index in O(1)
"""
return index | (index + 1)
@staticmethod
def get_prev(index: int) -> int:
"""
Get previous index in O(1)
"""
return (index & (index + 1)) - 1
def update(self, index: int, value: int) -> None:
"""
Set index to value in O(lg^2 N)
Parameters:
index: index to update
value: value to set
Returns:
None
"""
self.arr[index] = value
while index < self.size:
current_left_border = self.get_prev(index) + 1
if current_left_border == index:
self.tree[index] = value
else:
self.tree[index] = max(value, current_left_border, index)
index = self.get_next(index)
def query(self, left: int, right: int) -> int:
"""
Answer the query of maximum range [l, r) in O(lg^2 N)
Parameters:
left: left index of query range (inclusive)
right: right index of query range (exclusive)
Returns:
Maximum value of range [left, right)
"""
right -= 1 # Because of right is exclusive
result = 0
while left <= right:
current_left = self.get_prev(right)
if left <= current_left:
result = max(result, self.tree[right])
right = current_left
else:
result = max(result, self.arr[right])
right -= 1
return result
if __name__ == "__main__":
import doctest
doctest.testmod()
| class MaxFenwickTree:
"""
Maximum Fenwick Tree
More info: https://cp-algorithms.com/data_structures/fenwick.html
---------
>>> ft = MaxFenwickTree(5)
>>> ft.query(0, 5)
0
>>> ft.update(4, 100)
>>> ft.query(0, 5)
100
>>> ft.update(4, 0)
>>> ft.update(2, 20)
>>> ft.query(0, 5)
20
>>> ft.update(4, 10)
>>> ft.query(2, 5)
20
>>> ft.query(1, 5)
20
>>> ft.update(2, 0)
>>> ft.query(0, 5)
10
>>> ft = MaxFenwickTree(10000)
>>> ft.update(255, 30)
>>> ft.query(0, 10000)
30
>>> ft = MaxFenwickTree(6)
>>> ft.update(5, 1)
>>> ft.query(5, 6)
1
>>> ft = MaxFenwickTree(6)
>>> ft.update(0, 1000)
>>> ft.query(0, 1)
1000
"""
def __init__(self, size: int) -> None:
"""
Create empty Maximum Fenwick Tree with specified size
Parameters:
size: size of Array
Returns:
None
"""
self.size = size
self.arr = [0] * size
self.tree = [0] * size
@staticmethod
def get_next(index: int) -> int:
"""
Get next index in O(1)
"""
return index | (index + 1)
@staticmethod
def get_prev(index: int) -> int:
"""
Get previous index in O(1)
"""
return (index & (index + 1)) - 1
def update(self, index: int, value: int) -> None:
"""
Set index to value in O(lg^2 N)
Parameters:
index: index to update
value: value to set
Returns:
None
"""
self.arr[index] = value
while index < self.size:
current_left_border = self.get_prev(index) + 1
if current_left_border == index:
self.tree[index] = value
else:
self.tree[index] = max(value, current_left_border, index)
index = self.get_next(index)
def query(self, left: int, right: int) -> int:
"""
Answer the query of maximum range [l, r) in O(lg^2 N)
Parameters:
left: left index of query range (inclusive)
right: right index of query range (exclusive)
Returns:
Maximum value of range [left, right)
"""
right -= 1 # Because of right is exclusive
result = 0
while left <= right:
current_left = self.get_prev(right)
if left <= current_left:
result = max(result, self.tree[right])
right = current_left
else:
result = max(result, self.arr[right])
right -= 1
return result
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from decimal import Decimal, getcontext
from math import ceil, factorial
def pi(precision: int) -> str:
"""
The Chudnovsky algorithm is a fast method for calculating the digits of PI,
based on Ramanujan’s PI formulae.
https://en.wikipedia.org/wiki/Chudnovsky_algorithm
PI = constant_term / ((multinomial_term * linear_term) / exponential_term)
where constant_term = 426880 * sqrt(10005)
The linear_term and the exponential_term can be defined iteratively as follows:
L_k+1 = L_k + 545140134 where L_0 = 13591409
X_k+1 = X_k * -262537412640768000 where X_0 = 1
The multinomial_term is defined as follows:
6k! / ((3k)! * (k!) ^ 3)
where k is the k_th iteration.
This algorithm correctly calculates around 14 digits of PI per iteration
>>> pi(10)
'3.14159265'
>>> pi(100)
'3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706'
>>> pi('hello')
Traceback (most recent call last):
...
TypeError: Undefined for non-integers
>>> pi(-1)
Traceback (most recent call last):
...
ValueError: Undefined for non-natural numbers
"""
if not isinstance(precision, int):
raise TypeError("Undefined for non-integers")
elif precision < 1:
raise ValueError("Undefined for non-natural numbers")
getcontext().prec = precision
num_iterations = ceil(precision / 14)
constant_term = 426880 * Decimal(10005).sqrt()
exponential_term = 1
linear_term = 13591409
partial_sum = Decimal(linear_term)
for k in range(1, num_iterations):
multinomial_term = factorial(6 * k) // (factorial(3 * k) * factorial(k) ** 3)
linear_term += 545140134
exponential_term *= -262537412640768000
partial_sum += Decimal(multinomial_term * linear_term) / exponential_term
return str(constant_term / partial_sum)[:-1]
if __name__ == "__main__":
n = 50
print(f"The first {n} digits of pi is: {pi(n)}")
| from decimal import Decimal, getcontext
from math import ceil, factorial
def pi(precision: int) -> str:
"""
The Chudnovsky algorithm is a fast method for calculating the digits of PI,
based on Ramanujan’s PI formulae.
https://en.wikipedia.org/wiki/Chudnovsky_algorithm
PI = constant_term / ((multinomial_term * linear_term) / exponential_term)
where constant_term = 426880 * sqrt(10005)
The linear_term and the exponential_term can be defined iteratively as follows:
L_k+1 = L_k + 545140134 where L_0 = 13591409
X_k+1 = X_k * -262537412640768000 where X_0 = 1
The multinomial_term is defined as follows:
6k! / ((3k)! * (k!) ^ 3)
where k is the k_th iteration.
This algorithm correctly calculates around 14 digits of PI per iteration
>>> pi(10)
'3.14159265'
>>> pi(100)
'3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706'
>>> pi('hello')
Traceback (most recent call last):
...
TypeError: Undefined for non-integers
>>> pi(-1)
Traceback (most recent call last):
...
ValueError: Undefined for non-natural numbers
"""
if not isinstance(precision, int):
raise TypeError("Undefined for non-integers")
elif precision < 1:
raise ValueError("Undefined for non-natural numbers")
getcontext().prec = precision
num_iterations = ceil(precision / 14)
constant_term = 426880 * Decimal(10005).sqrt()
exponential_term = 1
linear_term = 13591409
partial_sum = Decimal(linear_term)
for k in range(1, num_iterations):
multinomial_term = factorial(6 * k) // (factorial(3 * k) * factorial(k) ** 3)
linear_term += 545140134
exponential_term *= -262537412640768000
partial_sum += Decimal(multinomial_term * linear_term) / exponential_term
return str(constant_term / partial_sum)[:-1]
if __name__ == "__main__":
n = 50
print(f"The first {n} digits of pi is: {pi(n)}")
| -1 |
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| #!/usr/bin/env python3
"""
Illustrate how to implement bucket sort algorithm.
Author: OMKAR PATHAK
This program will illustrate how to implement bucket sort algorithm
Wikipedia says: Bucket sort, or bin sort, is a sorting algorithm that works
by distributing the elements of an array into a number of buckets.
Each bucket is then sorted individually, either using a different sorting
algorithm, or by recursively applying the bucket sorting algorithm. It is a
distribution sort, and is a cousin of radix sort in the most to least
significant digit flavour.
Bucket sort is a generalization of pigeonhole sort. Bucket sort can be
implemented with comparisons and therefore can also be considered a
comparison sort algorithm. The computational complexity estimates involve the
number of buckets.
Time Complexity of Solution:
Worst case scenario occurs when all the elements are placed in a single bucket.
The overall performance would then be dominated by the algorithm used to sort each
bucket. In this case, O(n log n), because of TimSort
Average Case O(n + (n^2)/k + k), where k is the number of buckets
If k = O(n), time complexity is O(n)
Source: https://en.wikipedia.org/wiki/Bucket_sort
"""
from __future__ import annotations
def bucket_sort(my_list: list) -> list:
"""
>>> data = [-1, 2, -5, 0]
>>> bucket_sort(data) == sorted(data)
True
>>> data = [9, 8, 7, 6, -12]
>>> bucket_sort(data) == sorted(data)
True
>>> data = [.4, 1.2, .1, .2, -.9]
>>> bucket_sort(data) == sorted(data)
True
>>> bucket_sort([]) == sorted([])
True
>>> import random
>>> collection = random.sample(range(-50, 50), 50)
>>> bucket_sort(collection) == sorted(collection)
True
"""
if len(my_list) == 0:
return []
min_value, max_value = min(my_list), max(my_list)
bucket_count = int(max_value - min_value) + 1
buckets: list[list] = [[] for _ in range(bucket_count)]
for i in my_list:
buckets[int(i - min_value)].append(i)
return [v for bucket in buckets for v in sorted(bucket)]
if __name__ == "__main__":
from doctest import testmod
testmod()
assert bucket_sort([4, 5, 3, 2, 1]) == [1, 2, 3, 4, 5]
assert bucket_sort([0, 1, -10, 15, 2, -2]) == [-10, -2, 0, 1, 2, 15]
| #!/usr/bin/env python3
"""
Illustrate how to implement bucket sort algorithm.
Author: OMKAR PATHAK
This program will illustrate how to implement bucket sort algorithm
Wikipedia says: Bucket sort, or bin sort, is a sorting algorithm that works
by distributing the elements of an array into a number of buckets.
Each bucket is then sorted individually, either using a different sorting
algorithm, or by recursively applying the bucket sorting algorithm. It is a
distribution sort, and is a cousin of radix sort in the most to least
significant digit flavour.
Bucket sort is a generalization of pigeonhole sort. Bucket sort can be
implemented with comparisons and therefore can also be considered a
comparison sort algorithm. The computational complexity estimates involve the
number of buckets.
Time Complexity of Solution:
Worst case scenario occurs when all the elements are placed in a single bucket.
The overall performance would then be dominated by the algorithm used to sort each
bucket. In this case, O(n log n), because of TimSort
Average Case O(n + (n^2)/k + k), where k is the number of buckets
If k = O(n), time complexity is O(n)
Source: https://en.wikipedia.org/wiki/Bucket_sort
"""
from __future__ import annotations
def bucket_sort(my_list: list) -> list:
"""
>>> data = [-1, 2, -5, 0]
>>> bucket_sort(data) == sorted(data)
True
>>> data = [9, 8, 7, 6, -12]
>>> bucket_sort(data) == sorted(data)
True
>>> data = [.4, 1.2, .1, .2, -.9]
>>> bucket_sort(data) == sorted(data)
True
>>> bucket_sort([]) == sorted([])
True
>>> import random
>>> collection = random.sample(range(-50, 50), 50)
>>> bucket_sort(collection) == sorted(collection)
True
"""
if len(my_list) == 0:
return []
min_value, max_value = min(my_list), max(my_list)
bucket_count = int(max_value - min_value) + 1
buckets: list[list] = [[] for _ in range(bucket_count)]
for i in my_list:
buckets[int(i - min_value)].append(i)
return [v for bucket in buckets for v in sorted(bucket)]
if __name__ == "__main__":
from doctest import testmod
testmod()
assert bucket_sort([4, 5, 3, 2, 1]) == [1, 2, 3, 4, 5]
assert bucket_sort([0, 1, -10, 15, 2, -2]) == [-10, -2, 0, 1, 2, 15]
| -1 |
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """ Luhn Algorithm """
from __future__ import annotations
def is_luhn(string: str) -> bool:
"""
Perform Luhn validation on an input string
Algorithm:
* Double every other digit starting from 2nd last digit.
* Subtract 9 if number is greater than 9.
* Sum the numbers
*
>>> test_cases = (79927398710, 79927398711, 79927398712, 79927398713,
... 79927398714, 79927398715, 79927398716, 79927398717, 79927398718,
... 79927398719)
>>> [is_luhn(str(test_case)) for test_case in test_cases]
[False, False, False, True, False, False, False, False, False, False]
"""
check_digit: int
_vector: list[str] = list(string)
__vector, check_digit = _vector[:-1], int(_vector[-1])
vector: list[int] = [int(digit) for digit in __vector]
vector.reverse()
for i, digit in enumerate(vector):
if i & 1 == 0:
doubled: int = digit * 2
if doubled > 9:
doubled -= 9
check_digit += doubled
else:
check_digit += digit
return check_digit % 10 == 0
if __name__ == "__main__":
import doctest
doctest.testmod()
assert is_luhn("79927398713")
assert not is_luhn("79927398714")
| """ Luhn Algorithm """
from __future__ import annotations
def is_luhn(string: str) -> bool:
"""
Perform Luhn validation on an input string
Algorithm:
* Double every other digit starting from 2nd last digit.
* Subtract 9 if number is greater than 9.
* Sum the numbers
*
>>> test_cases = (79927398710, 79927398711, 79927398712, 79927398713,
... 79927398714, 79927398715, 79927398716, 79927398717, 79927398718,
... 79927398719)
>>> [is_luhn(str(test_case)) for test_case in test_cases]
[False, False, False, True, False, False, False, False, False, False]
"""
check_digit: int
_vector: list[str] = list(string)
__vector, check_digit = _vector[:-1], int(_vector[-1])
vector: list[int] = [int(digit) for digit in __vector]
vector.reverse()
for i, digit in enumerate(vector):
if i & 1 == 0:
doubled: int = digit * 2
if doubled > 9:
doubled -= 9
check_digit += doubled
else:
check_digit += digit
return check_digit % 10 == 0
if __name__ == "__main__":
import doctest
doctest.testmod()
assert is_luhn("79927398713")
assert not is_luhn("79927398714")
| -1 |
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 1: https://projecteuler.net/problem=1
Multiples of 3 and 5
If we list all the natural numbers below 10 that are multiples of 3 or 5,
we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.
"""
def solution(n: int = 1000) -> int:
"""
Returns the sum of all the multiples of 3 or 5 below n.
>>> solution(3)
0
>>> solution(4)
3
>>> solution(10)
23
>>> solution(600)
83700
"""
a = 3
result = 0
while a < n:
if a % 3 == 0 or a % 5 == 0:
result += a
elif a % 15 == 0:
result -= a
a += 1
return result
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 1: https://projecteuler.net/problem=1
Multiples of 3 and 5
If we list all the natural numbers below 10 that are multiples of 3 or 5,
we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.
"""
def solution(n: int = 1000) -> int:
"""
Returns the sum of all the multiples of 3 or 5 below n.
>>> solution(3)
0
>>> solution(4)
3
>>> solution(10)
23
>>> solution(600)
83700
"""
a = 3
result = 0
while a < n:
if a % 3 == 0 or a % 5 == 0:
result += a
elif a % 15 == 0:
result -= a
a += 1
return result
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from __future__ import annotations
import random
class Dice:
NUM_SIDES = 6
def __init__(self):
"""Initialize a six sided dice"""
self.sides = list(range(1, Dice.NUM_SIDES + 1))
def roll(self):
return random.choice(self.sides)
def throw_dice(num_throws: int, num_dice: int = 2) -> list[float]:
"""
Return probability list of all possible sums when throwing dice.
>>> random.seed(0)
>>> throw_dice(10, 1)
[10.0, 0.0, 30.0, 50.0, 10.0, 0.0]
>>> throw_dice(100, 1)
[19.0, 17.0, 17.0, 11.0, 23.0, 13.0]
>>> throw_dice(1000, 1)
[18.8, 15.5, 16.3, 17.6, 14.2, 17.6]
>>> throw_dice(10000, 1)
[16.35, 16.89, 16.93, 16.6, 16.52, 16.71]
>>> throw_dice(10000, 2)
[2.74, 5.6, 7.99, 11.26, 13.92, 16.7, 14.44, 10.63, 8.05, 5.92, 2.75]
"""
dices = [Dice() for i in range(num_dice)]
count_of_sum = [0] * (len(dices) * Dice.NUM_SIDES + 1)
for _ in range(num_throws):
count_of_sum[sum(dice.roll() for dice in dices)] += 1
probability = [round((count * 100) / num_throws, 2) for count in count_of_sum]
return probability[num_dice:] # remove probability of sums that never appear
if __name__ == "__main__":
import doctest
doctest.testmod()
| from __future__ import annotations
import random
class Dice:
NUM_SIDES = 6
def __init__(self):
"""Initialize a six sided dice"""
self.sides = list(range(1, Dice.NUM_SIDES + 1))
def roll(self):
return random.choice(self.sides)
def throw_dice(num_throws: int, num_dice: int = 2) -> list[float]:
"""
Return probability list of all possible sums when throwing dice.
>>> random.seed(0)
>>> throw_dice(10, 1)
[10.0, 0.0, 30.0, 50.0, 10.0, 0.0]
>>> throw_dice(100, 1)
[19.0, 17.0, 17.0, 11.0, 23.0, 13.0]
>>> throw_dice(1000, 1)
[18.8, 15.5, 16.3, 17.6, 14.2, 17.6]
>>> throw_dice(10000, 1)
[16.35, 16.89, 16.93, 16.6, 16.52, 16.71]
>>> throw_dice(10000, 2)
[2.74, 5.6, 7.99, 11.26, 13.92, 16.7, 14.44, 10.63, 8.05, 5.92, 2.75]
"""
dices = [Dice() for i in range(num_dice)]
count_of_sum = [0] * (len(dices) * Dice.NUM_SIDES + 1)
for _ in range(num_throws):
count_of_sum[sum(dice.roll() for dice in dices)] += 1
probability = [round((count * 100) / num_throws, 2) for count in count_of_sum]
return probability[num_dice:] # remove probability of sums that never appear
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Linked Lists consists of Nodes.
Nodes contain data and also may link to other nodes:
- Head Node: First node, the address of the
head node gives us access of the complete list
- Last node: points to null
"""
from __future__ import annotations
from typing import Any
class Node:
def __init__(self, item: Any, next: Any) -> None: # noqa: A002
self.item = item
self.next = next
class LinkedList:
def __init__(self) -> None:
self.head: Node | None = None
self.size = 0
def add(self, item: Any) -> None:
self.head = Node(item, self.head)
self.size += 1
def remove(self) -> Any:
# Switched 'self.is_empty()' to 'self.head is None'
# because mypy was considering the possibility that 'self.head'
# can be None in below else part and giving error
if self.head is None:
return None
else:
item = self.head.item
self.head = self.head.next
self.size -= 1
return item
def is_empty(self) -> bool:
return self.head is None
def __str__(self) -> str:
"""
>>> linked_list = LinkedList()
>>> linked_list.add(23)
>>> linked_list.add(14)
>>> linked_list.add(9)
>>> print(linked_list)
9 --> 14 --> 23
"""
if self.is_empty():
return ""
else:
iterate = self.head
item_str = ""
item_list: list[str] = []
while iterate:
item_list.append(str(iterate.item))
iterate = iterate.next
item_str = " --> ".join(item_list)
return item_str
def __len__(self) -> int:
"""
>>> linked_list = LinkedList()
>>> len(linked_list)
0
>>> linked_list.add("a")
>>> len(linked_list)
1
>>> linked_list.add("b")
>>> len(linked_list)
2
>>> _ = linked_list.remove()
>>> len(linked_list)
1
>>> _ = linked_list.remove()
>>> len(linked_list)
0
"""
return self.size
| """
Linked Lists consists of Nodes.
Nodes contain data and also may link to other nodes:
- Head Node: First node, the address of the
head node gives us access of the complete list
- Last node: points to null
"""
from __future__ import annotations
from typing import Any
class Node:
def __init__(self, item: Any, next: Any) -> None: # noqa: A002
self.item = item
self.next = next
class LinkedList:
def __init__(self) -> None:
self.head: Node | None = None
self.size = 0
def add(self, item: Any) -> None:
self.head = Node(item, self.head)
self.size += 1
def remove(self) -> Any:
# Switched 'self.is_empty()' to 'self.head is None'
# because mypy was considering the possibility that 'self.head'
# can be None in below else part and giving error
if self.head is None:
return None
else:
item = self.head.item
self.head = self.head.next
self.size -= 1
return item
def is_empty(self) -> bool:
return self.head is None
def __str__(self) -> str:
"""
>>> linked_list = LinkedList()
>>> linked_list.add(23)
>>> linked_list.add(14)
>>> linked_list.add(9)
>>> print(linked_list)
9 --> 14 --> 23
"""
if self.is_empty():
return ""
else:
iterate = self.head
item_str = ""
item_list: list[str] = []
while iterate:
item_list.append(str(iterate.item))
iterate = iterate.next
item_str = " --> ".join(item_list)
return item_str
def __len__(self) -> int:
"""
>>> linked_list = LinkedList()
>>> len(linked_list)
0
>>> linked_list.add("a")
>>> len(linked_list)
1
>>> linked_list.add("b")
>>> len(linked_list)
2
>>> _ = linked_list.remove()
>>> len(linked_list)
1
>>> _ = linked_list.remove()
>>> len(linked_list)
0
"""
return self.size
| -1 |
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Gamma function is a very useful tool in math and physics.
It helps calculating complex integral in a convenient way.
for more info: https://en.wikipedia.org/wiki/Gamma_function
Python's Standard Library math.gamma() function overflows around gamma(171.624).
"""
from math import pi, sqrt
def gamma(num: float) -> float:
"""
Calculates the value of Gamma function of num
where num is either an integer (1, 2, 3..) or a half-integer (0.5, 1.5, 2.5 ...).
Implemented using recursion
Examples:
>>> from math import isclose, gamma as math_gamma
>>> gamma(0.5)
1.7724538509055159
>>> gamma(2)
1.0
>>> gamma(3.5)
3.3233509704478426
>>> gamma(171.5)
9.483367566824795e+307
>>> all(isclose(gamma(num), math_gamma(num)) for num in (0.5, 2, 3.5, 171.5))
True
>>> gamma(0)
Traceback (most recent call last):
...
ValueError: math domain error
>>> gamma(-1.1)
Traceback (most recent call last):
...
ValueError: math domain error
>>> gamma(-4)
Traceback (most recent call last):
...
ValueError: math domain error
>>> gamma(172)
Traceback (most recent call last):
...
OverflowError: math range error
>>> gamma(1.1)
Traceback (most recent call last):
...
NotImplementedError: num must be an integer or a half-integer
"""
if num <= 0:
raise ValueError("math domain error")
if num > 171.5:
raise OverflowError("math range error")
elif num - int(num) not in (0, 0.5):
raise NotImplementedError("num must be an integer or a half-integer")
elif num == 0.5:
return sqrt(pi)
else:
return 1.0 if num == 1 else (num - 1) * gamma(num - 1)
def test_gamma() -> None:
"""
>>> test_gamma()
"""
assert gamma(0.5) == sqrt(pi)
assert gamma(1) == 1.0
assert gamma(2) == 1.0
if __name__ == "__main__":
from doctest import testmod
testmod()
num = 1.0
while num:
num = float(input("Gamma of: "))
print(f"gamma({num}) = {gamma(num)}")
print("\nEnter 0 to exit...")
| """
Gamma function is a very useful tool in math and physics.
It helps calculating complex integral in a convenient way.
for more info: https://en.wikipedia.org/wiki/Gamma_function
Python's Standard Library math.gamma() function overflows around gamma(171.624).
"""
from math import pi, sqrt
def gamma(num: float) -> float:
"""
Calculates the value of Gamma function of num
where num is either an integer (1, 2, 3..) or a half-integer (0.5, 1.5, 2.5 ...).
Implemented using recursion
Examples:
>>> from math import isclose, gamma as math_gamma
>>> gamma(0.5)
1.7724538509055159
>>> gamma(2)
1.0
>>> gamma(3.5)
3.3233509704478426
>>> gamma(171.5)
9.483367566824795e+307
>>> all(isclose(gamma(num), math_gamma(num)) for num in (0.5, 2, 3.5, 171.5))
True
>>> gamma(0)
Traceback (most recent call last):
...
ValueError: math domain error
>>> gamma(-1.1)
Traceback (most recent call last):
...
ValueError: math domain error
>>> gamma(-4)
Traceback (most recent call last):
...
ValueError: math domain error
>>> gamma(172)
Traceback (most recent call last):
...
OverflowError: math range error
>>> gamma(1.1)
Traceback (most recent call last):
...
NotImplementedError: num must be an integer or a half-integer
"""
if num <= 0:
raise ValueError("math domain error")
if num > 171.5:
raise OverflowError("math range error")
elif num - int(num) not in (0, 0.5):
raise NotImplementedError("num must be an integer or a half-integer")
elif num == 0.5:
return sqrt(pi)
else:
return 1.0 if num == 1 else (num - 1) * gamma(num - 1)
def test_gamma() -> None:
"""
>>> test_gamma()
"""
assert gamma(0.5) == sqrt(pi)
assert gamma(1) == 1.0
assert gamma(2) == 1.0
if __name__ == "__main__":
from doctest import testmod
testmod()
num = 1.0
while num:
num = float(input("Gamma of: "))
print(f"gamma({num}) = {gamma(num)}")
print("\nEnter 0 to exit...")
| -1 |
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
* Author: Manuel Di Lullo (https://github.com/manueldilullo)
* Description: Random graphs generator.
Uses graphs represented with an adjacency list.
URL: https://en.wikipedia.org/wiki/Random_graph
"""
import random
def random_graph(
vertices_number: int, probability: float, directed: bool = False
) -> dict:
"""
Generate a random graph
@input: vertices_number (number of vertices),
probability (probability that a generic edge (u,v) exists),
directed (if True: graph will be a directed graph,
otherwise it will be an undirected graph)
@examples:
>>> random.seed(1)
>>> random_graph(4, 0.5)
{0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2]}
>>> random.seed(1)
>>> random_graph(4, 0.5, True)
{0: [1], 1: [2, 3], 2: [3], 3: []}
"""
graph: dict = {i: [] for i in range(vertices_number)}
# if probability is greater or equal than 1, then generate a complete graph
if probability >= 1:
return complete_graph(vertices_number)
# if probability is lower or equal than 0, then return a graph without edges
if probability <= 0:
return graph
# for each couple of nodes, add an edge from u to v
# if the number randomly generated is greater than probability probability
for i in range(vertices_number):
for j in range(i + 1, vertices_number):
if random.random() < probability:
graph[i].append(j)
if not directed:
# if the graph is undirected, add an edge in from j to i, either
graph[j].append(i)
return graph
def complete_graph(vertices_number: int) -> dict:
"""
Generate a complete graph with vertices_number vertices.
@input: vertices_number (number of vertices),
directed (False if the graph is undirected, True otherwise)
@example:
>>> complete_graph(3)
{0: [1, 2], 1: [0, 2], 2: [0, 1]}
"""
return {
i: [j for j in range(vertices_number) if i != j] for i in range(vertices_number)
}
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
* Author: Manuel Di Lullo (https://github.com/manueldilullo)
* Description: Random graphs generator.
Uses graphs represented with an adjacency list.
URL: https://en.wikipedia.org/wiki/Random_graph
"""
import random
def random_graph(
vertices_number: int, probability: float, directed: bool = False
) -> dict:
"""
Generate a random graph
@input: vertices_number (number of vertices),
probability (probability that a generic edge (u,v) exists),
directed (if True: graph will be a directed graph,
otherwise it will be an undirected graph)
@examples:
>>> random.seed(1)
>>> random_graph(4, 0.5)
{0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2]}
>>> random.seed(1)
>>> random_graph(4, 0.5, True)
{0: [1], 1: [2, 3], 2: [3], 3: []}
"""
graph: dict = {i: [] for i in range(vertices_number)}
# if probability is greater or equal than 1, then generate a complete graph
if probability >= 1:
return complete_graph(vertices_number)
# if probability is lower or equal than 0, then return a graph without edges
if probability <= 0:
return graph
# for each couple of nodes, add an edge from u to v
# if the number randomly generated is greater than probability probability
for i in range(vertices_number):
for j in range(i + 1, vertices_number):
if random.random() < probability:
graph[i].append(j)
if not directed:
# if the graph is undirected, add an edge in from j to i, either
graph[j].append(i)
return graph
def complete_graph(vertices_number: int) -> dict:
"""
Generate a complete graph with vertices_number vertices.
@input: vertices_number (number of vertices),
directed (False if the graph is undirected, True otherwise)
@example:
>>> complete_graph(3)
{0: [1, 2], 1: [0, 2], 2: [0, 1]}
"""
return {
i: [j for j in range(vertices_number) if i != j] for i in range(vertices_number)
}
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 35
https://projecteuler.net/problem=35
Problem Statement:
The number 197 is called a circular prime because all rotations of the digits:
197, 971, and 719, are themselves prime.
There are thirteen such primes below 100: 2, 3, 5, 7, 11, 13, 17, 31, 37, 71, 73,
79, and 97.
How many circular primes are there below one million?
To solve this problem in an efficient manner, we will first mark all the primes
below 1 million using the Seive of Eratosthenes. Then, out of all these primes,
we will rule out the numbers which contain an even digit. After this we will
generate each circular combination of the number and check if all are prime.
"""
from __future__ import annotations
seive = [True] * 1000001
i = 2
while i * i <= 1000000:
if seive[i]:
for j in range(i * i, 1000001, i):
seive[j] = False
i += 1
def is_prime(n: int) -> bool:
"""
For 2 <= n <= 1000000, return True if n is prime.
>>> is_prime(87)
False
>>> is_prime(23)
True
>>> is_prime(25363)
False
"""
return seive[n]
def contains_an_even_digit(n: int) -> bool:
"""
Return True if n contains an even digit.
>>> contains_an_even_digit(0)
True
>>> contains_an_even_digit(975317933)
False
>>> contains_an_even_digit(-245679)
True
"""
return any(digit in "02468" for digit in str(n))
def find_circular_primes(limit: int = 1000000) -> list[int]:
"""
Return circular primes below limit.
>>> len(find_circular_primes(100))
13
>>> len(find_circular_primes(1000000))
55
"""
result = [2] # result already includes the number 2.
for num in range(3, limit + 1, 2):
if is_prime(num) and not contains_an_even_digit(num):
str_num = str(num)
list_nums = [int(str_num[j:] + str_num[:j]) for j in range(len(str_num))]
if all(is_prime(i) for i in list_nums):
result.append(num)
return result
def solution() -> int:
"""
>>> solution()
55
"""
return len(find_circular_primes())
if __name__ == "__main__":
print(f"{len(find_circular_primes()) = }")
| """
Project Euler Problem 35
https://projecteuler.net/problem=35
Problem Statement:
The number 197 is called a circular prime because all rotations of the digits:
197, 971, and 719, are themselves prime.
There are thirteen such primes below 100: 2, 3, 5, 7, 11, 13, 17, 31, 37, 71, 73,
79, and 97.
How many circular primes are there below one million?
To solve this problem in an efficient manner, we will first mark all the primes
below 1 million using the Seive of Eratosthenes. Then, out of all these primes,
we will rule out the numbers which contain an even digit. After this we will
generate each circular combination of the number and check if all are prime.
"""
from __future__ import annotations
seive = [True] * 1000001
i = 2
while i * i <= 1000000:
if seive[i]:
for j in range(i * i, 1000001, i):
seive[j] = False
i += 1
def is_prime(n: int) -> bool:
"""
For 2 <= n <= 1000000, return True if n is prime.
>>> is_prime(87)
False
>>> is_prime(23)
True
>>> is_prime(25363)
False
"""
return seive[n]
def contains_an_even_digit(n: int) -> bool:
"""
Return True if n contains an even digit.
>>> contains_an_even_digit(0)
True
>>> contains_an_even_digit(975317933)
False
>>> contains_an_even_digit(-245679)
True
"""
return any(digit in "02468" for digit in str(n))
def find_circular_primes(limit: int = 1000000) -> list[int]:
"""
Return circular primes below limit.
>>> len(find_circular_primes(100))
13
>>> len(find_circular_primes(1000000))
55
"""
result = [2] # result already includes the number 2.
for num in range(3, limit + 1, 2):
if is_prime(num) and not contains_an_even_digit(num):
str_num = str(num)
list_nums = [int(str_num[j:] + str_num[:j]) for j in range(len(str_num))]
if all(is_prime(i) for i in list_nums):
result.append(num)
return result
def solution() -> int:
"""
>>> solution()
55
"""
return len(find_circular_primes())
if __name__ == "__main__":
print(f"{len(find_circular_primes()) = }")
| -1 |
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from __future__ import annotations
def mean(nums: list) -> float:
"""
Find mean of a list of numbers.
Wiki: https://en.wikipedia.org/wiki/Mean
>>> mean([3, 6, 9, 12, 15, 18, 21])
12.0
>>> mean([5, 10, 15, 20, 25, 30, 35])
20.0
>>> mean([1, 2, 3, 4, 5, 6, 7, 8])
4.5
>>> mean([])
Traceback (most recent call last):
...
ValueError: List is empty
"""
if not nums:
raise ValueError("List is empty")
return sum(nums) / len(nums)
if __name__ == "__main__":
import doctest
doctest.testmod()
| from __future__ import annotations
def mean(nums: list) -> float:
"""
Find mean of a list of numbers.
Wiki: https://en.wikipedia.org/wiki/Mean
>>> mean([3, 6, 9, 12, 15, 18, 21])
12.0
>>> mean([5, 10, 15, 20, 25, 30, 35])
20.0
>>> mean([1, 2, 3, 4, 5, 6, 7, 8])
4.5
>>> mean([])
Traceback (most recent call last):
...
ValueError: List is empty
"""
if not nums:
raise ValueError("List is empty")
return sum(nums) / len(nums)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
render 3d points for 2d surfaces.
"""
from __future__ import annotations
import math
__version__ = "2020.9.26"
__author__ = "xcodz-dot, cclaus, dhruvmanila"
def convert_to_2d(
x: float, y: float, z: float, scale: float, distance: float
) -> tuple[float, float]:
"""
Converts 3d point to a 2d drawable point
>>> convert_to_2d(1.0, 2.0, 3.0, 10.0, 10.0)
(7.6923076923076925, 15.384615384615385)
>>> convert_to_2d(1, 2, 3, 10, 10)
(7.6923076923076925, 15.384615384615385)
>>> convert_to_2d("1", 2, 3, 10, 10) # '1' is str
Traceback (most recent call last):
...
TypeError: Input values must either be float or int: ['1', 2, 3, 10, 10]
"""
if not all(isinstance(val, (float, int)) for val in locals().values()):
raise TypeError(
"Input values must either be float or int: " f"{list(locals().values())}"
)
projected_x = ((x * distance) / (z + distance)) * scale
projected_y = ((y * distance) / (z + distance)) * scale
return projected_x, projected_y
def rotate(
x: float, y: float, z: float, axis: str, angle: float
) -> tuple[float, float, float]:
"""
rotate a point around a certain axis with a certain angle
angle can be any integer between 1, 360 and axis can be any one of
'x', 'y', 'z'
>>> rotate(1.0, 2.0, 3.0, 'y', 90.0)
(3.130524675073759, 2.0, 0.4470070007889556)
>>> rotate(1, 2, 3, "z", 180)
(0.999736015495891, -2.0001319704760485, 3)
>>> rotate('1', 2, 3, "z", 90.0) # '1' is str
Traceback (most recent call last):
...
TypeError: Input values except axis must either be float or int: ['1', 2, 3, 90.0]
>>> rotate(1, 2, 3, "n", 90) # 'n' is not a valid axis
Traceback (most recent call last):
...
ValueError: not a valid axis, choose one of 'x', 'y', 'z'
>>> rotate(1, 2, 3, "x", -90)
(1, -2.5049096187183877, -2.5933429780983657)
>>> rotate(1, 2, 3, "x", 450) # 450 wrap around to 90
(1, 3.5776792428178217, -0.44744970165427644)
"""
if not isinstance(axis, str):
raise TypeError("Axis must be a str")
input_variables = locals()
del input_variables["axis"]
if not all(isinstance(val, (float, int)) for val in input_variables.values()):
raise TypeError(
"Input values except axis must either be float or int: "
f"{list(input_variables.values())}"
)
angle = (angle % 360) / 450 * 180 / math.pi
if axis == "z":
new_x = x * math.cos(angle) - y * math.sin(angle)
new_y = y * math.cos(angle) + x * math.sin(angle)
new_z = z
elif axis == "x":
new_y = y * math.cos(angle) - z * math.sin(angle)
new_z = z * math.cos(angle) + y * math.sin(angle)
new_x = x
elif axis == "y":
new_x = x * math.cos(angle) - z * math.sin(angle)
new_z = z * math.cos(angle) + x * math.sin(angle)
new_y = y
else:
raise ValueError("not a valid axis, choose one of 'x', 'y', 'z'")
return new_x, new_y, new_z
if __name__ == "__main__":
import doctest
doctest.testmod()
print(f"{convert_to_2d(1.0, 2.0, 3.0, 10.0, 10.0) = }")
print(f"{rotate(1.0, 2.0, 3.0, 'y', 90.0) = }")
| """
render 3d points for 2d surfaces.
"""
from __future__ import annotations
import math
__version__ = "2020.9.26"
__author__ = "xcodz-dot, cclaus, dhruvmanila"
def convert_to_2d(
x: float, y: float, z: float, scale: float, distance: float
) -> tuple[float, float]:
"""
Converts 3d point to a 2d drawable point
>>> convert_to_2d(1.0, 2.0, 3.0, 10.0, 10.0)
(7.6923076923076925, 15.384615384615385)
>>> convert_to_2d(1, 2, 3, 10, 10)
(7.6923076923076925, 15.384615384615385)
>>> convert_to_2d("1", 2, 3, 10, 10) # '1' is str
Traceback (most recent call last):
...
TypeError: Input values must either be float or int: ['1', 2, 3, 10, 10]
"""
if not all(isinstance(val, (float, int)) for val in locals().values()):
raise TypeError(
"Input values must either be float or int: " f"{list(locals().values())}"
)
projected_x = ((x * distance) / (z + distance)) * scale
projected_y = ((y * distance) / (z + distance)) * scale
return projected_x, projected_y
def rotate(
x: float, y: float, z: float, axis: str, angle: float
) -> tuple[float, float, float]:
"""
rotate a point around a certain axis with a certain angle
angle can be any integer between 1, 360 and axis can be any one of
'x', 'y', 'z'
>>> rotate(1.0, 2.0, 3.0, 'y', 90.0)
(3.130524675073759, 2.0, 0.4470070007889556)
>>> rotate(1, 2, 3, "z", 180)
(0.999736015495891, -2.0001319704760485, 3)
>>> rotate('1', 2, 3, "z", 90.0) # '1' is str
Traceback (most recent call last):
...
TypeError: Input values except axis must either be float or int: ['1', 2, 3, 90.0]
>>> rotate(1, 2, 3, "n", 90) # 'n' is not a valid axis
Traceback (most recent call last):
...
ValueError: not a valid axis, choose one of 'x', 'y', 'z'
>>> rotate(1, 2, 3, "x", -90)
(1, -2.5049096187183877, -2.5933429780983657)
>>> rotate(1, 2, 3, "x", 450) # 450 wrap around to 90
(1, 3.5776792428178217, -0.44744970165427644)
"""
if not isinstance(axis, str):
raise TypeError("Axis must be a str")
input_variables = locals()
del input_variables["axis"]
if not all(isinstance(val, (float, int)) for val in input_variables.values()):
raise TypeError(
"Input values except axis must either be float or int: "
f"{list(input_variables.values())}"
)
angle = (angle % 360) / 450 * 180 / math.pi
if axis == "z":
new_x = x * math.cos(angle) - y * math.sin(angle)
new_y = y * math.cos(angle) + x * math.sin(angle)
new_z = z
elif axis == "x":
new_y = y * math.cos(angle) - z * math.sin(angle)
new_z = z * math.cos(angle) + y * math.sin(angle)
new_x = x
elif axis == "y":
new_x = x * math.cos(angle) - z * math.sin(angle)
new_z = z * math.cos(angle) + x * math.sin(angle)
new_y = y
else:
raise ValueError("not a valid axis, choose one of 'x', 'y', 'z'")
return new_x, new_y, new_z
if __name__ == "__main__":
import doctest
doctest.testmod()
print(f"{convert_to_2d(1.0, 2.0, 3.0, 10.0, 10.0) = }")
print(f"{rotate(1.0, 2.0, 3.0, 'y', 90.0) = }")
| -1 |
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| #!/usr/bin/env python3
"""
The Bifid Cipher uses a Polybius Square to encipher a message in a way that
makes it fairly difficult to decipher without knowing the secret.
https://www.braingle.com/brainteasers/codes/bifid.php
"""
import numpy as np
SQUARE = [
["a", "b", "c", "d", "e"],
["f", "g", "h", "i", "k"],
["l", "m", "n", "o", "p"],
["q", "r", "s", "t", "u"],
["v", "w", "x", "y", "z"],
]
class BifidCipher:
def __init__(self) -> None:
self.SQUARE = np.array(SQUARE)
def letter_to_numbers(self, letter: str) -> np.ndarray:
"""
Return the pair of numbers that represents the given letter in the
polybius square
>>> np.array_equal(BifidCipher().letter_to_numbers('a'), [1,1])
True
>>> np.array_equal(BifidCipher().letter_to_numbers('u'), [4,5])
True
"""
index1, index2 = np.where(letter == self.SQUARE)
indexes = np.concatenate([index1 + 1, index2 + 1])
return indexes
def numbers_to_letter(self, index1: int, index2: int) -> str:
"""
Return the letter corresponding to the position [index1, index2] in
the polybius square
>>> BifidCipher().numbers_to_letter(4, 5) == "u"
True
>>> BifidCipher().numbers_to_letter(1, 1) == "a"
True
"""
letter = self.SQUARE[index1 - 1, index2 - 1]
return letter
def encode(self, message: str) -> str:
"""
Return the encoded version of message according to the polybius cipher
>>> BifidCipher().encode('testmessage') == 'qtltbdxrxlk'
True
>>> BifidCipher().encode('Test Message') == 'qtltbdxrxlk'
True
>>> BifidCipher().encode('test j') == BifidCipher().encode('test i')
True
"""
message = message.lower()
message = message.replace(" ", "")
message = message.replace("j", "i")
first_step = np.empty((2, len(message)))
for letter_index in range(len(message)):
numbers = self.letter_to_numbers(message[letter_index])
first_step[0, letter_index] = numbers[0]
first_step[1, letter_index] = numbers[1]
second_step = first_step.reshape(2 * len(message))
encoded_message = ""
for numbers_index in range(len(message)):
index1 = int(second_step[numbers_index * 2])
index2 = int(second_step[(numbers_index * 2) + 1])
letter = self.numbers_to_letter(index1, index2)
encoded_message = encoded_message + letter
return encoded_message
def decode(self, message: str) -> str:
"""
Return the decoded version of message according to the polybius cipher
>>> BifidCipher().decode('qtltbdxrxlk') == 'testmessage'
True
"""
message = message.lower()
message.replace(" ", "")
first_step = np.empty(2 * len(message))
for letter_index in range(len(message)):
numbers = self.letter_to_numbers(message[letter_index])
first_step[letter_index * 2] = numbers[0]
first_step[letter_index * 2 + 1] = numbers[1]
second_step = first_step.reshape((2, len(message)))
decoded_message = ""
for numbers_index in range(len(message)):
index1 = int(second_step[0, numbers_index])
index2 = int(second_step[1, numbers_index])
letter = self.numbers_to_letter(index1, index2)
decoded_message = decoded_message + letter
return decoded_message
| #!/usr/bin/env python3
"""
The Bifid Cipher uses a Polybius Square to encipher a message in a way that
makes it fairly difficult to decipher without knowing the secret.
https://www.braingle.com/brainteasers/codes/bifid.php
"""
import numpy as np
SQUARE = [
["a", "b", "c", "d", "e"],
["f", "g", "h", "i", "k"],
["l", "m", "n", "o", "p"],
["q", "r", "s", "t", "u"],
["v", "w", "x", "y", "z"],
]
class BifidCipher:
def __init__(self) -> None:
self.SQUARE = np.array(SQUARE)
def letter_to_numbers(self, letter: str) -> np.ndarray:
"""
Return the pair of numbers that represents the given letter in the
polybius square
>>> np.array_equal(BifidCipher().letter_to_numbers('a'), [1,1])
True
>>> np.array_equal(BifidCipher().letter_to_numbers('u'), [4,5])
True
"""
index1, index2 = np.where(letter == self.SQUARE)
indexes = np.concatenate([index1 + 1, index2 + 1])
return indexes
def numbers_to_letter(self, index1: int, index2: int) -> str:
"""
Return the letter corresponding to the position [index1, index2] in
the polybius square
>>> BifidCipher().numbers_to_letter(4, 5) == "u"
True
>>> BifidCipher().numbers_to_letter(1, 1) == "a"
True
"""
letter = self.SQUARE[index1 - 1, index2 - 1]
return letter
def encode(self, message: str) -> str:
"""
Return the encoded version of message according to the polybius cipher
>>> BifidCipher().encode('testmessage') == 'qtltbdxrxlk'
True
>>> BifidCipher().encode('Test Message') == 'qtltbdxrxlk'
True
>>> BifidCipher().encode('test j') == BifidCipher().encode('test i')
True
"""
message = message.lower()
message = message.replace(" ", "")
message = message.replace("j", "i")
first_step = np.empty((2, len(message)))
for letter_index in range(len(message)):
numbers = self.letter_to_numbers(message[letter_index])
first_step[0, letter_index] = numbers[0]
first_step[1, letter_index] = numbers[1]
second_step = first_step.reshape(2 * len(message))
encoded_message = ""
for numbers_index in range(len(message)):
index1 = int(second_step[numbers_index * 2])
index2 = int(second_step[(numbers_index * 2) + 1])
letter = self.numbers_to_letter(index1, index2)
encoded_message = encoded_message + letter
return encoded_message
def decode(self, message: str) -> str:
"""
Return the decoded version of message according to the polybius cipher
>>> BifidCipher().decode('qtltbdxrxlk') == 'testmessage'
True
"""
message = message.lower()
message.replace(" ", "")
first_step = np.empty(2 * len(message))
for letter_index in range(len(message)):
numbers = self.letter_to_numbers(message[letter_index])
first_step[letter_index * 2] = numbers[0]
first_step[letter_index * 2 + 1] = numbers[1]
second_step = first_step.reshape((2, len(message)))
decoded_message = ""
for numbers_index in range(len(message)):
index1 = int(second_step[0, numbers_index])
index2 = int(second_step[1, numbers_index])
letter = self.numbers_to_letter(index1, index2)
decoded_message = decoded_message + letter
return decoded_message
| -1 |
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Binomial Heap
Reference: Advanced Data Structures, Peter Brass
"""
class Node:
"""
Node in a doubly-linked binomial tree, containing:
- value
- size of left subtree
- link to left, right and parent nodes
"""
def __init__(self, val):
self.val = val
# Number of nodes in left subtree
self.left_tree_size = 0
self.left = None
self.right = None
self.parent = None
def merge_trees(self, other):
"""
In-place merge of two binomial trees of equal size.
Returns the root of the resulting tree
"""
assert self.left_tree_size == other.left_tree_size, "Unequal Sizes of Blocks"
if self.val < other.val:
other.left = self.right
other.parent = None
if self.right:
self.right.parent = other
self.right = other
self.left_tree_size = self.left_tree_size * 2 + 1
return self
else:
self.left = other.right
self.parent = None
if other.right:
other.right.parent = self
other.right = self
other.left_tree_size = other.left_tree_size * 2 + 1
return other
class BinomialHeap:
r"""
Min-oriented priority queue implemented with the Binomial Heap data
structure implemented with the BinomialHeap class. It supports:
- Insert element in a heap with n elements: Guaranteed logn, amoratized 1
- Merge (meld) heaps of size m and n: O(logn + logm)
- Delete Min: O(logn)
- Peek (return min without deleting it): O(1)
Example:
Create a random permutation of 30 integers to be inserted and 19 of them deleted
>>> import numpy as np
>>> permutation = np.random.permutation(list(range(30)))
Create a Heap and insert the 30 integers
__init__() test
>>> first_heap = BinomialHeap()
30 inserts - insert() test
>>> for number in permutation:
... first_heap.insert(number)
Size test
>>> first_heap.size
30
Deleting - delete() test
>>> [first_heap.delete_min() for _ in range(20)]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
Create a new Heap
>>> second_heap = BinomialHeap()
>>> vals = [17, 20, 31, 34]
>>> for value in vals:
... second_heap.insert(value)
The heap should have the following structure:
17
/ \
# 31
/ \
20 34
/ \ / \
# # # #
preOrder() test
>>> " ".join(str(x) for x in second_heap.pre_order())
"(17, 0) ('#', 1) (31, 1) (20, 2) ('#', 3) ('#', 3) (34, 2) ('#', 3) ('#', 3)"
printing Heap - __str__() test
>>> print(second_heap)
17
-#
-31
--20
---#
---#
--34
---#
---#
mergeHeaps() test
>>>
>>> merged = second_heap.merge_heaps(first_heap)
>>> merged.peek()
17
values in merged heap; (merge is inplace)
>>> results = []
>>> while not first_heap.is_empty():
... results.append(first_heap.delete_min())
>>> results
[17, 20, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 31, 34]
"""
def __init__(self, bottom_root=None, min_node=None, heap_size=0):
self.size = heap_size
self.bottom_root = bottom_root
self.min_node = min_node
def merge_heaps(self, other):
"""
In-place merge of two binomial heaps.
Both of them become the resulting merged heap
"""
# Empty heaps corner cases
if other.size == 0:
return None
if self.size == 0:
self.size = other.size
self.bottom_root = other.bottom_root
self.min_node = other.min_node
return None
# Update size
self.size = self.size + other.size
# Update min.node
if self.min_node.val > other.min_node.val:
self.min_node = other.min_node
# Merge
# Order roots by left_subtree_size
combined_roots_list = []
i, j = self.bottom_root, other.bottom_root
while i or j:
if i and ((not j) or i.left_tree_size < j.left_tree_size):
combined_roots_list.append((i, True))
i = i.parent
else:
combined_roots_list.append((j, False))
j = j.parent
# Insert links between them
for i in range(len(combined_roots_list) - 1):
if combined_roots_list[i][1] != combined_roots_list[i + 1][1]:
combined_roots_list[i][0].parent = combined_roots_list[i + 1][0]
combined_roots_list[i + 1][0].left = combined_roots_list[i][0]
# Consecutively merge roots with same left_tree_size
i = combined_roots_list[0][0]
while i.parent:
if (
(i.left_tree_size == i.parent.left_tree_size) and (not i.parent.parent)
) or (
i.left_tree_size == i.parent.left_tree_size
and i.left_tree_size != i.parent.parent.left_tree_size
):
# Neighbouring Nodes
previous_node = i.left
next_node = i.parent.parent
# Merging trees
i = i.merge_trees(i.parent)
# Updating links
i.left = previous_node
i.parent = next_node
if previous_node:
previous_node.parent = i
if next_node:
next_node.left = i
else:
i = i.parent
# Updating self.bottom_root
while i.left:
i = i.left
self.bottom_root = i
# Update other
other.size = self.size
other.bottom_root = self.bottom_root
other.min_node = self.min_node
# Return the merged heap
return self
def insert(self, val):
"""
insert a value in the heap
"""
if self.size == 0:
self.bottom_root = Node(val)
self.size = 1
self.min_node = self.bottom_root
else:
# Create new node
new_node = Node(val)
# Update size
self.size += 1
# update min_node
if val < self.min_node.val:
self.min_node = new_node
# Put new_node as a bottom_root in heap
self.bottom_root.left = new_node
new_node.parent = self.bottom_root
self.bottom_root = new_node
# Consecutively merge roots with same left_tree_size
while (
self.bottom_root.parent
and self.bottom_root.left_tree_size
== self.bottom_root.parent.left_tree_size
):
# Next node
next_node = self.bottom_root.parent.parent
# Merge
self.bottom_root = self.bottom_root.merge_trees(self.bottom_root.parent)
# Update Links
self.bottom_root.parent = next_node
self.bottom_root.left = None
if next_node:
next_node.left = self.bottom_root
def peek(self):
"""
return min element without deleting it
"""
return self.min_node.val
def is_empty(self):
return self.size == 0
def delete_min(self):
"""
delete min element and return it
"""
# assert not self.isEmpty(), "Empty Heap"
# Save minimal value
min_value = self.min_node.val
# Last element in heap corner case
if self.size == 1:
# Update size
self.size = 0
# Update bottom root
self.bottom_root = None
# Update min_node
self.min_node = None
return min_value
# No right subtree corner case
# The structure of the tree implies that this should be the bottom root
# and there is at least one other root
if self.min_node.right is None:
# Update size
self.size -= 1
# Update bottom root
self.bottom_root = self.bottom_root.parent
self.bottom_root.left = None
# Update min_node
self.min_node = self.bottom_root
i = self.bottom_root.parent
while i:
if i.val < self.min_node.val:
self.min_node = i
i = i.parent
return min_value
# General case
# Find the BinomialHeap of the right subtree of min_node
bottom_of_new = self.min_node.right
bottom_of_new.parent = None
min_of_new = bottom_of_new
size_of_new = 1
# Size, min_node and bottom_root
while bottom_of_new.left:
size_of_new = size_of_new * 2 + 1
bottom_of_new = bottom_of_new.left
if bottom_of_new.val < min_of_new.val:
min_of_new = bottom_of_new
# Corner case of single root on top left path
if (not self.min_node.left) and (not self.min_node.parent):
self.size = size_of_new
self.bottom_root = bottom_of_new
self.min_node = min_of_new
# print("Single root, multiple nodes case")
return min_value
# Remaining cases
# Construct heap of right subtree
new_heap = BinomialHeap(
bottom_root=bottom_of_new, min_node=min_of_new, heap_size=size_of_new
)
# Update size
self.size = self.size - 1 - size_of_new
# Neighbour nodes
previous_node = self.min_node.left
next_node = self.min_node.parent
# Initialize new bottom_root and min_node
self.min_node = previous_node or next_node
self.bottom_root = next_node
# Update links of previous_node and search below for new min_node and
# bottom_root
if previous_node:
previous_node.parent = next_node
# Update bottom_root and search for min_node below
self.bottom_root = previous_node
self.min_node = previous_node
while self.bottom_root.left:
self.bottom_root = self.bottom_root.left
if self.bottom_root.val < self.min_node.val:
self.min_node = self.bottom_root
if next_node:
next_node.left = previous_node
# Search for new min_node above min_node
i = next_node
while i:
if i.val < self.min_node.val:
self.min_node = i
i = i.parent
# Merge heaps
self.merge_heaps(new_heap)
return min_value
def pre_order(self):
"""
Returns the Pre-order representation of the heap including
values of nodes plus their level distance from the root;
Empty nodes appear as #
"""
# Find top root
top_root = self.bottom_root
while top_root.parent:
top_root = top_root.parent
# preorder
heap_pre_order = []
self.__traversal(top_root, heap_pre_order)
return heap_pre_order
def __traversal(self, curr_node, preorder, level=0):
"""
Pre-order traversal of nodes
"""
if curr_node:
preorder.append((curr_node.val, level))
self.__traversal(curr_node.left, preorder, level + 1)
self.__traversal(curr_node.right, preorder, level + 1)
else:
preorder.append(("#", level))
def __str__(self):
"""
Overwriting str for a pre-order print of nodes in heap;
Performance is poor, so use only for small examples
"""
if self.is_empty():
return ""
preorder_heap = self.pre_order()
return "\n".join(("-" * level + str(value)) for value, level in preorder_heap)
# Unit Tests
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Binomial Heap
Reference: Advanced Data Structures, Peter Brass
"""
class Node:
"""
Node in a doubly-linked binomial tree, containing:
- value
- size of left subtree
- link to left, right and parent nodes
"""
def __init__(self, val):
self.val = val
# Number of nodes in left subtree
self.left_tree_size = 0
self.left = None
self.right = None
self.parent = None
def merge_trees(self, other):
"""
In-place merge of two binomial trees of equal size.
Returns the root of the resulting tree
"""
assert self.left_tree_size == other.left_tree_size, "Unequal Sizes of Blocks"
if self.val < other.val:
other.left = self.right
other.parent = None
if self.right:
self.right.parent = other
self.right = other
self.left_tree_size = self.left_tree_size * 2 + 1
return self
else:
self.left = other.right
self.parent = None
if other.right:
other.right.parent = self
other.right = self
other.left_tree_size = other.left_tree_size * 2 + 1
return other
class BinomialHeap:
r"""
Min-oriented priority queue implemented with the Binomial Heap data
structure implemented with the BinomialHeap class. It supports:
- Insert element in a heap with n elements: Guaranteed logn, amoratized 1
- Merge (meld) heaps of size m and n: O(logn + logm)
- Delete Min: O(logn)
- Peek (return min without deleting it): O(1)
Example:
Create a random permutation of 30 integers to be inserted and 19 of them deleted
>>> import numpy as np
>>> permutation = np.random.permutation(list(range(30)))
Create a Heap and insert the 30 integers
__init__() test
>>> first_heap = BinomialHeap()
30 inserts - insert() test
>>> for number in permutation:
... first_heap.insert(number)
Size test
>>> first_heap.size
30
Deleting - delete() test
>>> [first_heap.delete_min() for _ in range(20)]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
Create a new Heap
>>> second_heap = BinomialHeap()
>>> vals = [17, 20, 31, 34]
>>> for value in vals:
... second_heap.insert(value)
The heap should have the following structure:
17
/ \
# 31
/ \
20 34
/ \ / \
# # # #
preOrder() test
>>> " ".join(str(x) for x in second_heap.pre_order())
"(17, 0) ('#', 1) (31, 1) (20, 2) ('#', 3) ('#', 3) (34, 2) ('#', 3) ('#', 3)"
printing Heap - __str__() test
>>> print(second_heap)
17
-#
-31
--20
---#
---#
--34
---#
---#
mergeHeaps() test
>>>
>>> merged = second_heap.merge_heaps(first_heap)
>>> merged.peek()
17
values in merged heap; (merge is inplace)
>>> results = []
>>> while not first_heap.is_empty():
... results.append(first_heap.delete_min())
>>> results
[17, 20, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 31, 34]
"""
def __init__(self, bottom_root=None, min_node=None, heap_size=0):
self.size = heap_size
self.bottom_root = bottom_root
self.min_node = min_node
def merge_heaps(self, other):
"""
In-place merge of two binomial heaps.
Both of them become the resulting merged heap
"""
# Empty heaps corner cases
if other.size == 0:
return None
if self.size == 0:
self.size = other.size
self.bottom_root = other.bottom_root
self.min_node = other.min_node
return None
# Update size
self.size = self.size + other.size
# Update min.node
if self.min_node.val > other.min_node.val:
self.min_node = other.min_node
# Merge
# Order roots by left_subtree_size
combined_roots_list = []
i, j = self.bottom_root, other.bottom_root
while i or j:
if i and ((not j) or i.left_tree_size < j.left_tree_size):
combined_roots_list.append((i, True))
i = i.parent
else:
combined_roots_list.append((j, False))
j = j.parent
# Insert links between them
for i in range(len(combined_roots_list) - 1):
if combined_roots_list[i][1] != combined_roots_list[i + 1][1]:
combined_roots_list[i][0].parent = combined_roots_list[i + 1][0]
combined_roots_list[i + 1][0].left = combined_roots_list[i][0]
# Consecutively merge roots with same left_tree_size
i = combined_roots_list[0][0]
while i.parent:
if (
(i.left_tree_size == i.parent.left_tree_size) and (not i.parent.parent)
) or (
i.left_tree_size == i.parent.left_tree_size
and i.left_tree_size != i.parent.parent.left_tree_size
):
# Neighbouring Nodes
previous_node = i.left
next_node = i.parent.parent
# Merging trees
i = i.merge_trees(i.parent)
# Updating links
i.left = previous_node
i.parent = next_node
if previous_node:
previous_node.parent = i
if next_node:
next_node.left = i
else:
i = i.parent
# Updating self.bottom_root
while i.left:
i = i.left
self.bottom_root = i
# Update other
other.size = self.size
other.bottom_root = self.bottom_root
other.min_node = self.min_node
# Return the merged heap
return self
def insert(self, val):
"""
insert a value in the heap
"""
if self.size == 0:
self.bottom_root = Node(val)
self.size = 1
self.min_node = self.bottom_root
else:
# Create new node
new_node = Node(val)
# Update size
self.size += 1
# update min_node
if val < self.min_node.val:
self.min_node = new_node
# Put new_node as a bottom_root in heap
self.bottom_root.left = new_node
new_node.parent = self.bottom_root
self.bottom_root = new_node
# Consecutively merge roots with same left_tree_size
while (
self.bottom_root.parent
and self.bottom_root.left_tree_size
== self.bottom_root.parent.left_tree_size
):
# Next node
next_node = self.bottom_root.parent.parent
# Merge
self.bottom_root = self.bottom_root.merge_trees(self.bottom_root.parent)
# Update Links
self.bottom_root.parent = next_node
self.bottom_root.left = None
if next_node:
next_node.left = self.bottom_root
def peek(self):
"""
return min element without deleting it
"""
return self.min_node.val
def is_empty(self):
return self.size == 0
def delete_min(self):
"""
delete min element and return it
"""
# assert not self.isEmpty(), "Empty Heap"
# Save minimal value
min_value = self.min_node.val
# Last element in heap corner case
if self.size == 1:
# Update size
self.size = 0
# Update bottom root
self.bottom_root = None
# Update min_node
self.min_node = None
return min_value
# No right subtree corner case
# The structure of the tree implies that this should be the bottom root
# and there is at least one other root
if self.min_node.right is None:
# Update size
self.size -= 1
# Update bottom root
self.bottom_root = self.bottom_root.parent
self.bottom_root.left = None
# Update min_node
self.min_node = self.bottom_root
i = self.bottom_root.parent
while i:
if i.val < self.min_node.val:
self.min_node = i
i = i.parent
return min_value
# General case
# Find the BinomialHeap of the right subtree of min_node
bottom_of_new = self.min_node.right
bottom_of_new.parent = None
min_of_new = bottom_of_new
size_of_new = 1
# Size, min_node and bottom_root
while bottom_of_new.left:
size_of_new = size_of_new * 2 + 1
bottom_of_new = bottom_of_new.left
if bottom_of_new.val < min_of_new.val:
min_of_new = bottom_of_new
# Corner case of single root on top left path
if (not self.min_node.left) and (not self.min_node.parent):
self.size = size_of_new
self.bottom_root = bottom_of_new
self.min_node = min_of_new
# print("Single root, multiple nodes case")
return min_value
# Remaining cases
# Construct heap of right subtree
new_heap = BinomialHeap(
bottom_root=bottom_of_new, min_node=min_of_new, heap_size=size_of_new
)
# Update size
self.size = self.size - 1 - size_of_new
# Neighbour nodes
previous_node = self.min_node.left
next_node = self.min_node.parent
# Initialize new bottom_root and min_node
self.min_node = previous_node or next_node
self.bottom_root = next_node
# Update links of previous_node and search below for new min_node and
# bottom_root
if previous_node:
previous_node.parent = next_node
# Update bottom_root and search for min_node below
self.bottom_root = previous_node
self.min_node = previous_node
while self.bottom_root.left:
self.bottom_root = self.bottom_root.left
if self.bottom_root.val < self.min_node.val:
self.min_node = self.bottom_root
if next_node:
next_node.left = previous_node
# Search for new min_node above min_node
i = next_node
while i:
if i.val < self.min_node.val:
self.min_node = i
i = i.parent
# Merge heaps
self.merge_heaps(new_heap)
return min_value
def pre_order(self):
"""
Returns the Pre-order representation of the heap including
values of nodes plus their level distance from the root;
Empty nodes appear as #
"""
# Find top root
top_root = self.bottom_root
while top_root.parent:
top_root = top_root.parent
# preorder
heap_pre_order = []
self.__traversal(top_root, heap_pre_order)
return heap_pre_order
def __traversal(self, curr_node, preorder, level=0):
"""
Pre-order traversal of nodes
"""
if curr_node:
preorder.append((curr_node.val, level))
self.__traversal(curr_node.left, preorder, level + 1)
self.__traversal(curr_node.right, preorder, level + 1)
else:
preorder.append(("#", level))
def __str__(self):
"""
Overwriting str for a pre-order print of nodes in heap;
Performance is poor, so use only for small examples
"""
if self.is_empty():
return ""
preorder_heap = self.pre_order()
return "\n".join(("-" * level + str(value)) for value, level in preorder_heap)
# Unit Tests
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler problem 145: https://projecteuler.net/problem=145
Author: Vineet Rao, Maxim Smolskiy
Problem statement:
Some positive integers n have the property that the sum [ n + reverse(n) ]
consists entirely of odd (decimal) digits.
For instance, 36 + 63 = 99 and 409 + 904 = 1313.
We will call such numbers reversible; so 36, 63, 409, and 904 are reversible.
Leading zeroes are not allowed in either n or reverse(n).
There are 120 reversible numbers below one-thousand.
How many reversible numbers are there below one-billion (10^9)?
"""
EVEN_DIGITS = [0, 2, 4, 6, 8]
ODD_DIGITS = [1, 3, 5, 7, 9]
def reversible_numbers(
remaining_length: int, remainder: int, digits: list[int], length: int
) -> int:
"""
Count the number of reversible numbers of given length.
Iterate over possible digits considering parity of current sum remainder.
>>> reversible_numbers(1, 0, [0], 1)
0
>>> reversible_numbers(2, 0, [0] * 2, 2)
20
>>> reversible_numbers(3, 0, [0] * 3, 3)
100
"""
if remaining_length == 0:
if digits[0] == 0 or digits[-1] == 0:
return 0
for i in range(length // 2 - 1, -1, -1):
remainder += digits[i] + digits[length - i - 1]
if remainder % 2 == 0:
return 0
remainder //= 10
return 1
if remaining_length == 1:
if remainder % 2 == 0:
return 0
result = 0
for digit in range(10):
digits[length // 2] = digit
result += reversible_numbers(
0, (remainder + 2 * digit) // 10, digits, length
)
return result
result = 0
for digit1 in range(10):
digits[(length + remaining_length) // 2 - 1] = digit1
if (remainder + digit1) % 2 == 0:
other_parity_digits = ODD_DIGITS
else:
other_parity_digits = EVEN_DIGITS
for digit2 in other_parity_digits:
digits[(length - remaining_length) // 2] = digit2
result += reversible_numbers(
remaining_length - 2,
(remainder + digit1 + digit2) // 10,
digits,
length,
)
return result
def solution(max_power: int = 9) -> int:
"""
To evaluate the solution, use solution()
>>> solution(3)
120
>>> solution(6)
18720
>>> solution(7)
68720
"""
result = 0
for length in range(1, max_power + 1):
result += reversible_numbers(length, 0, [0] * length, length)
return result
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler problem 145: https://projecteuler.net/problem=145
Author: Vineet Rao, Maxim Smolskiy
Problem statement:
Some positive integers n have the property that the sum [ n + reverse(n) ]
consists entirely of odd (decimal) digits.
For instance, 36 + 63 = 99 and 409 + 904 = 1313.
We will call such numbers reversible; so 36, 63, 409, and 904 are reversible.
Leading zeroes are not allowed in either n or reverse(n).
There are 120 reversible numbers below one-thousand.
How many reversible numbers are there below one-billion (10^9)?
"""
EVEN_DIGITS = [0, 2, 4, 6, 8]
ODD_DIGITS = [1, 3, 5, 7, 9]
def reversible_numbers(
remaining_length: int, remainder: int, digits: list[int], length: int
) -> int:
"""
Count the number of reversible numbers of given length.
Iterate over possible digits considering parity of current sum remainder.
>>> reversible_numbers(1, 0, [0], 1)
0
>>> reversible_numbers(2, 0, [0] * 2, 2)
20
>>> reversible_numbers(3, 0, [0] * 3, 3)
100
"""
if remaining_length == 0:
if digits[0] == 0 or digits[-1] == 0:
return 0
for i in range(length // 2 - 1, -1, -1):
remainder += digits[i] + digits[length - i - 1]
if remainder % 2 == 0:
return 0
remainder //= 10
return 1
if remaining_length == 1:
if remainder % 2 == 0:
return 0
result = 0
for digit in range(10):
digits[length // 2] = digit
result += reversible_numbers(
0, (remainder + 2 * digit) // 10, digits, length
)
return result
result = 0
for digit1 in range(10):
digits[(length + remaining_length) // 2 - 1] = digit1
if (remainder + digit1) % 2 == 0:
other_parity_digits = ODD_DIGITS
else:
other_parity_digits = EVEN_DIGITS
for digit2 in other_parity_digits:
digits[(length - remaining_length) // 2] = digit2
result += reversible_numbers(
remaining_length - 2,
(remainder + digit1 + digit2) // 10,
digits,
length,
)
return result
def solution(max_power: int = 9) -> int:
"""
To evaluate the solution, use solution()
>>> solution(3)
120
>>> solution(6)
18720
>>> solution(7)
68720
"""
result = 0
for length in range(1, max_power + 1):
result += reversible_numbers(length, 0, [0] * length, length)
return result
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Implementation of Circular Queue (using Python lists)
class CircularQueue:
"""Circular FIFO queue with a fixed capacity"""
def __init__(self, n: int):
self.n = n
self.array = [None] * self.n
self.front = 0 # index of the first element
self.rear = 0
self.size = 0
def __len__(self) -> int:
"""
>>> cq = CircularQueue(5)
>>> len(cq)
0
>>> cq.enqueue("A") # doctest: +ELLIPSIS
<data_structures.queue.circular_queue.CircularQueue object at ...
>>> len(cq)
1
"""
return self.size
def is_empty(self) -> bool:
"""
>>> cq = CircularQueue(5)
>>> cq.is_empty()
True
>>> cq.enqueue("A").is_empty()
False
"""
return self.size == 0
def first(self):
"""
>>> cq = CircularQueue(5)
>>> cq.first()
False
>>> cq.enqueue("A").first()
'A'
"""
return False if self.is_empty() else self.array[self.front]
def enqueue(self, data):
"""
This function insert an element in the queue using self.rear value as an index
>>> cq = CircularQueue(5)
>>> cq.enqueue("A") # doctest: +ELLIPSIS
<data_structures.queue.circular_queue.CircularQueue object at ...
>>> (cq.size, cq.first())
(1, 'A')
>>> cq.enqueue("B") # doctest: +ELLIPSIS
<data_structures.queue.circular_queue.CircularQueue object at ...
>>> (cq.size, cq.first())
(2, 'A')
"""
if self.size >= self.n:
raise Exception("QUEUE IS FULL")
self.array[self.rear] = data
self.rear = (self.rear + 1) % self.n
self.size += 1
return self
def dequeue(self):
"""
This function removes an element from the queue using on self.front value as an
index
>>> cq = CircularQueue(5)
>>> cq.dequeue()
Traceback (most recent call last):
...
Exception: UNDERFLOW
>>> cq.enqueue("A").enqueue("B").dequeue()
'A'
>>> (cq.size, cq.first())
(1, 'B')
>>> cq.dequeue()
'B'
>>> cq.dequeue()
Traceback (most recent call last):
...
Exception: UNDERFLOW
"""
if self.size == 0:
raise Exception("UNDERFLOW")
temp = self.array[self.front]
self.array[self.front] = None
self.front = (self.front + 1) % self.n
self.size -= 1
return temp
| # Implementation of Circular Queue (using Python lists)
class CircularQueue:
"""Circular FIFO queue with a fixed capacity"""
def __init__(self, n: int):
self.n = n
self.array = [None] * self.n
self.front = 0 # index of the first element
self.rear = 0
self.size = 0
def __len__(self) -> int:
"""
>>> cq = CircularQueue(5)
>>> len(cq)
0
>>> cq.enqueue("A") # doctest: +ELLIPSIS
<data_structures.queue.circular_queue.CircularQueue object at ...
>>> len(cq)
1
"""
return self.size
def is_empty(self) -> bool:
"""
>>> cq = CircularQueue(5)
>>> cq.is_empty()
True
>>> cq.enqueue("A").is_empty()
False
"""
return self.size == 0
def first(self):
"""
>>> cq = CircularQueue(5)
>>> cq.first()
False
>>> cq.enqueue("A").first()
'A'
"""
return False if self.is_empty() else self.array[self.front]
def enqueue(self, data):
"""
This function insert an element in the queue using self.rear value as an index
>>> cq = CircularQueue(5)
>>> cq.enqueue("A") # doctest: +ELLIPSIS
<data_structures.queue.circular_queue.CircularQueue object at ...
>>> (cq.size, cq.first())
(1, 'A')
>>> cq.enqueue("B") # doctest: +ELLIPSIS
<data_structures.queue.circular_queue.CircularQueue object at ...
>>> (cq.size, cq.first())
(2, 'A')
"""
if self.size >= self.n:
raise Exception("QUEUE IS FULL")
self.array[self.rear] = data
self.rear = (self.rear + 1) % self.n
self.size += 1
return self
def dequeue(self):
"""
This function removes an element from the queue using on self.front value as an
index
>>> cq = CircularQueue(5)
>>> cq.dequeue()
Traceback (most recent call last):
...
Exception: UNDERFLOW
>>> cq.enqueue("A").enqueue("B").dequeue()
'A'
>>> (cq.size, cq.first())
(1, 'B')
>>> cq.dequeue()
'B'
>>> cq.dequeue()
Traceback (most recent call last):
...
Exception: UNDERFLOW
"""
if self.size == 0:
raise Exception("UNDERFLOW")
temp = self.array[self.front]
self.array[self.front] = None
self.front = (self.front + 1) % self.n
self.size -= 1
return temp
| -1 |
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # pack-refs with: peeled fully-peeled sorted
8668f5792dc673f085966f6f90c9c896081f22e9 refs/remotes/origin/Fewer-forward-propogations-to-speed-tests
c1fd8cb9e667ab59ca4446d0dcf216d1696a010c refs/remotes/origin/Python-3.12-on-Debian-bookworm
e093689124ab5f4a0938e4801abad0dbeb5bf881 refs/remotes/origin/cclauss-patch-1
04b896124ac5e76d5d5ed4ded91302557b1bc081 refs/remotes/origin/fix-maclaurin_series-on-Python3.12
672d0b39404444787f1ca3b5a3b6fd29a5a75447 refs/remotes/origin/fuzzy_operations.py-on-Python-3.12
9caf4784aada17dc75348f77cc8c356df503c0f3 refs/remotes/origin/master
01dc64a3a2f397872c759c4cb575ad2be5856d6a refs/remotes/origin/quantum_random.py.disabled
| # pack-refs with: peeled fully-peeled sorted
8668f5792dc673f085966f6f90c9c896081f22e9 refs/remotes/origin/Fewer-forward-propogations-to-speed-tests
c1fd8cb9e667ab59ca4446d0dcf216d1696a010c refs/remotes/origin/Python-3.12-on-Debian-bookworm
e093689124ab5f4a0938e4801abad0dbeb5bf881 refs/remotes/origin/cclauss-patch-1
04b896124ac5e76d5d5ed4ded91302557b1bc081 refs/remotes/origin/fix-maclaurin_series-on-Python3.12
672d0b39404444787f1ca3b5a3b6fd29a5a75447 refs/remotes/origin/fuzzy_operations.py-on-Python-3.12
9caf4784aada17dc75348f77cc8c356df503c0f3 refs/remotes/origin/master
01dc64a3a2f397872c759c4cb575ad2be5856d6a refs/remotes/origin/quantum_random.py.disabled
| -1 |
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # https://en.wikipedia.org/wiki/Simulated_annealing
import math
import random
from typing import Any
from .hill_climbing import SearchProblem
def simulated_annealing(
search_prob,
find_max: bool = True,
max_x: float = math.inf,
min_x: float = -math.inf,
max_y: float = math.inf,
min_y: float = -math.inf,
visualization: bool = False,
start_temperate: float = 100,
rate_of_decrease: float = 0.01,
threshold_temp: float = 1,
) -> Any:
"""
Implementation of the simulated annealing algorithm. We start with a given state,
find all its neighbors. Pick a random neighbor, if that neighbor improves the
solution, we move in that direction, if that neighbor does not improve the solution,
we generate a random real number between 0 and 1, if the number is within a certain
range (calculated using temperature) we move in that direction, else we pick
another neighbor randomly and repeat the process.
Args:
search_prob: The search state at the start.
find_max: If True, the algorithm should find the minimum else the minimum.
max_x, min_x, max_y, min_y: the maximum and minimum bounds of x and y.
visualization: If True, a matplotlib graph is displayed.
start_temperate: the initial temperate of the system when the program starts.
rate_of_decrease: the rate at which the temperate decreases in each iteration.
threshold_temp: the threshold temperature below which we end the search
Returns a search state having the maximum (or minimum) score.
"""
search_end = False
current_state = search_prob
current_temp = start_temperate
scores = []
iterations = 0
best_state = None
while not search_end:
current_score = current_state.score()
if best_state is None or current_score > best_state.score():
best_state = current_state
scores.append(current_score)
iterations += 1
next_state = None
neighbors = current_state.get_neighbors()
while (
next_state is None and neighbors
): # till we do not find a neighbor that we can move to
index = random.randint(0, len(neighbors) - 1) # picking a random neighbor
picked_neighbor = neighbors.pop(index)
change = picked_neighbor.score() - current_score
if (
picked_neighbor.x > max_x
or picked_neighbor.x < min_x
or picked_neighbor.y > max_y
or picked_neighbor.y < min_y
):
continue # neighbor outside our bounds
if not find_max:
change = change * -1 # in case we are finding minimum
if change > 0: # improves the solution
next_state = picked_neighbor
else:
probability = (math.e) ** (
change / current_temp
) # probability generation function
if random.random() < probability: # random number within probability
next_state = picked_neighbor
current_temp = current_temp - (current_temp * rate_of_decrease)
if current_temp < threshold_temp or next_state is None:
# temperature below threshold, or could not find a suitable neighbor
search_end = True
else:
current_state = next_state
if visualization:
from matplotlib import pyplot as plt
plt.plot(range(iterations), scores)
plt.xlabel("Iterations")
plt.ylabel("Function values")
plt.show()
return best_state
if __name__ == "__main__":
def test_f1(x, y):
return (x**2) + (y**2)
# starting the problem with initial coordinates (12, 47)
prob = SearchProblem(x=12, y=47, step_size=1, function_to_optimize=test_f1)
local_min = simulated_annealing(
prob, find_max=False, max_x=100, min_x=5, max_y=50, min_y=-5, visualization=True
)
print(
"The minimum score for f(x, y) = x^2 + y^2 with the domain 100 > x > 5 "
f"and 50 > y > - 5 found via hill climbing: {local_min.score()}"
)
# starting the problem with initial coordinates (12, 47)
prob = SearchProblem(x=12, y=47, step_size=1, function_to_optimize=test_f1)
local_min = simulated_annealing(
prob, find_max=True, max_x=100, min_x=5, max_y=50, min_y=-5, visualization=True
)
print(
"The maximum score for f(x, y) = x^2 + y^2 with the domain 100 > x > 5 "
f"and 50 > y > - 5 found via hill climbing: {local_min.score()}"
)
def test_f2(x, y):
return (3 * x**2) - (6 * y)
prob = SearchProblem(x=3, y=4, step_size=1, function_to_optimize=test_f1)
local_min = simulated_annealing(prob, find_max=False, visualization=True)
print(
"The minimum score for f(x, y) = 3*x^2 - 6*y found via hill climbing: "
f"{local_min.score()}"
)
prob = SearchProblem(x=3, y=4, step_size=1, function_to_optimize=test_f1)
local_min = simulated_annealing(prob, find_max=True, visualization=True)
print(
"The maximum score for f(x, y) = 3*x^2 - 6*y found via hill climbing: "
f"{local_min.score()}"
)
| # https://en.wikipedia.org/wiki/Simulated_annealing
import math
import random
from typing import Any
from .hill_climbing import SearchProblem
def simulated_annealing(
search_prob,
find_max: bool = True,
max_x: float = math.inf,
min_x: float = -math.inf,
max_y: float = math.inf,
min_y: float = -math.inf,
visualization: bool = False,
start_temperate: float = 100,
rate_of_decrease: float = 0.01,
threshold_temp: float = 1,
) -> Any:
"""
Implementation of the simulated annealing algorithm. We start with a given state,
find all its neighbors. Pick a random neighbor, if that neighbor improves the
solution, we move in that direction, if that neighbor does not improve the solution,
we generate a random real number between 0 and 1, if the number is within a certain
range (calculated using temperature) we move in that direction, else we pick
another neighbor randomly and repeat the process.
Args:
search_prob: The search state at the start.
find_max: If True, the algorithm should find the minimum else the minimum.
max_x, min_x, max_y, min_y: the maximum and minimum bounds of x and y.
visualization: If True, a matplotlib graph is displayed.
start_temperate: the initial temperate of the system when the program starts.
rate_of_decrease: the rate at which the temperate decreases in each iteration.
threshold_temp: the threshold temperature below which we end the search
Returns a search state having the maximum (or minimum) score.
"""
search_end = False
current_state = search_prob
current_temp = start_temperate
scores = []
iterations = 0
best_state = None
while not search_end:
current_score = current_state.score()
if best_state is None or current_score > best_state.score():
best_state = current_state
scores.append(current_score)
iterations += 1
next_state = None
neighbors = current_state.get_neighbors()
while (
next_state is None and neighbors
): # till we do not find a neighbor that we can move to
index = random.randint(0, len(neighbors) - 1) # picking a random neighbor
picked_neighbor = neighbors.pop(index)
change = picked_neighbor.score() - current_score
if (
picked_neighbor.x > max_x
or picked_neighbor.x < min_x
or picked_neighbor.y > max_y
or picked_neighbor.y < min_y
):
continue # neighbor outside our bounds
if not find_max:
change = change * -1 # in case we are finding minimum
if change > 0: # improves the solution
next_state = picked_neighbor
else:
probability = (math.e) ** (
change / current_temp
) # probability generation function
if random.random() < probability: # random number within probability
next_state = picked_neighbor
current_temp = current_temp - (current_temp * rate_of_decrease)
if current_temp < threshold_temp or next_state is None:
# temperature below threshold, or could not find a suitable neighbor
search_end = True
else:
current_state = next_state
if visualization:
from matplotlib import pyplot as plt
plt.plot(range(iterations), scores)
plt.xlabel("Iterations")
plt.ylabel("Function values")
plt.show()
return best_state
if __name__ == "__main__":
def test_f1(x, y):
return (x**2) + (y**2)
# starting the problem with initial coordinates (12, 47)
prob = SearchProblem(x=12, y=47, step_size=1, function_to_optimize=test_f1)
local_min = simulated_annealing(
prob, find_max=False, max_x=100, min_x=5, max_y=50, min_y=-5, visualization=True
)
print(
"The minimum score for f(x, y) = x^2 + y^2 with the domain 100 > x > 5 "
f"and 50 > y > - 5 found via hill climbing: {local_min.score()}"
)
# starting the problem with initial coordinates (12, 47)
prob = SearchProblem(x=12, y=47, step_size=1, function_to_optimize=test_f1)
local_min = simulated_annealing(
prob, find_max=True, max_x=100, min_x=5, max_y=50, min_y=-5, visualization=True
)
print(
"The maximum score for f(x, y) = x^2 + y^2 with the domain 100 > x > 5 "
f"and 50 > y > - 5 found via hill climbing: {local_min.score()}"
)
def test_f2(x, y):
return (3 * x**2) - (6 * y)
prob = SearchProblem(x=3, y=4, step_size=1, function_to_optimize=test_f1)
local_min = simulated_annealing(prob, find_max=False, visualization=True)
print(
"The minimum score for f(x, y) = 3*x^2 - 6*y found via hill climbing: "
f"{local_min.score()}"
)
prob = SearchProblem(x=3, y=4, step_size=1, function_to_optimize=test_f1)
local_min = simulated_annealing(prob, find_max=True, visualization=True)
print(
"The maximum score for f(x, y) = 3*x^2 - 6*y found via hill climbing: "
f"{local_min.score()}"
)
| -1 |
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Author : Yvonne
This is a pure Python implementation of Dynamic Programming solution to the
longest_sub_array problem.
The problem is :
Given an array, to find the longest and continuous sub array and get the max sum of the
sub array in the given array.
"""
class SubArray:
def __init__(self, arr):
# we need a list not a string, so do something to change the type
self.array = arr.split(",")
def solve_sub_array(self):
rear = [int(self.array[0])] * len(self.array)
sum_value = [int(self.array[0])] * len(self.array)
for i in range(1, len(self.array)):
sum_value[i] = max(
int(self.array[i]) + sum_value[i - 1], int(self.array[i])
)
rear[i] = max(sum_value[i], rear[i - 1])
return rear[len(self.array) - 1]
if __name__ == "__main__":
whole_array = input("please input some numbers:")
array = SubArray(whole_array)
re = array.solve_sub_array()
print(("the results is:", re))
| """
Author : Yvonne
This is a pure Python implementation of Dynamic Programming solution to the
longest_sub_array problem.
The problem is :
Given an array, to find the longest and continuous sub array and get the max sum of the
sub array in the given array.
"""
class SubArray:
def __init__(self, arr):
# we need a list not a string, so do something to change the type
self.array = arr.split(",")
def solve_sub_array(self):
rear = [int(self.array[0])] * len(self.array)
sum_value = [int(self.array[0])] * len(self.array)
for i in range(1, len(self.array)):
sum_value[i] = max(
int(self.array[i]) + sum_value[i - 1], int(self.array[i])
)
rear[i] = max(sum_value[i], rear[i - 1])
return rear[len(self.array) - 1]
if __name__ == "__main__":
whole_array = input("please input some numbers:")
array = SubArray(whole_array)
re = array.solve_sub_array()
print(("the results is:", re))
| -1 |
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from __future__ import annotations
ELECTRON_CHARGE = 1.6021e-19 # units = C
def electric_conductivity(
conductivity: float,
electron_conc: float,
mobility: float,
) -> tuple[str, float]:
"""
This function can calculate any one of the three -
1. Conductivity
2. Electron Concentration
3. Electron Mobility
This is calculated from the other two provided values
Examples -
>>> electric_conductivity(conductivity=25, electron_conc=100, mobility=0)
('mobility', 1.5604519068722301e+18)
>>> electric_conductivity(conductivity=0, electron_conc=1600, mobility=200)
('conductivity', 5.12672e-14)
>>> electric_conductivity(conductivity=1000, electron_conc=0, mobility=1200)
('electron_conc', 5.201506356240767e+18)
"""
if (conductivity, electron_conc, mobility).count(0) != 1:
raise ValueError("You cannot supply more or less than 2 values")
elif conductivity < 0:
raise ValueError("Conductivity cannot be negative")
elif electron_conc < 0:
raise ValueError("Electron concentration cannot be negative")
elif mobility < 0:
raise ValueError("mobility cannot be negative")
elif conductivity == 0:
return (
"conductivity",
mobility * electron_conc * ELECTRON_CHARGE,
)
elif electron_conc == 0:
return (
"electron_conc",
conductivity / (mobility * ELECTRON_CHARGE),
)
else:
return (
"mobility",
conductivity / (electron_conc * ELECTRON_CHARGE),
)
if __name__ == "__main__":
import doctest
doctest.testmod()
| from __future__ import annotations
ELECTRON_CHARGE = 1.6021e-19 # units = C
def electric_conductivity(
conductivity: float,
electron_conc: float,
mobility: float,
) -> tuple[str, float]:
"""
This function can calculate any one of the three -
1. Conductivity
2. Electron Concentration
3. Electron Mobility
This is calculated from the other two provided values
Examples -
>>> electric_conductivity(conductivity=25, electron_conc=100, mobility=0)
('mobility', 1.5604519068722301e+18)
>>> electric_conductivity(conductivity=0, electron_conc=1600, mobility=200)
('conductivity', 5.12672e-14)
>>> electric_conductivity(conductivity=1000, electron_conc=0, mobility=1200)
('electron_conc', 5.201506356240767e+18)
"""
if (conductivity, electron_conc, mobility).count(0) != 1:
raise ValueError("You cannot supply more or less than 2 values")
elif conductivity < 0:
raise ValueError("Conductivity cannot be negative")
elif electron_conc < 0:
raise ValueError("Electron concentration cannot be negative")
elif mobility < 0:
raise ValueError("mobility cannot be negative")
elif conductivity == 0:
return (
"conductivity",
mobility * electron_conc * ELECTRON_CHARGE,
)
elif electron_conc == 0:
return (
"electron_conc",
conductivity / (mobility * ELECTRON_CHARGE),
)
else:
return (
"mobility",
conductivity / (electron_conc * ELECTRON_CHARGE),
)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| def binary_recursive(decimal: int) -> str:
"""
Take a positive integer value and return its binary equivalent.
>>> binary_recursive(1000)
'1111101000'
>>> binary_recursive("72")
'1001000'
>>> binary_recursive("number")
Traceback (most recent call last):
...
ValueError: invalid literal for int() with base 10: 'number'
"""
decimal = int(decimal)
if decimal in (0, 1): # Exit cases for the recursion
return str(decimal)
div, mod = divmod(decimal, 2)
return binary_recursive(div) + str(mod)
def main(number: str) -> str:
"""
Take an integer value and raise ValueError for wrong inputs,
call the function above and return the output with prefix "0b" & "-0b"
for positive and negative integers respectively.
>>> main(0)
'0b0'
>>> main(40)
'0b101000'
>>> main(-40)
'-0b101000'
>>> main(40.8)
Traceback (most recent call last):
...
ValueError: Input value is not an integer
>>> main("forty")
Traceback (most recent call last):
...
ValueError: Input value is not an integer
"""
number = str(number).strip()
if not number:
raise ValueError("No input value was provided")
negative = "-" if number.startswith("-") else ""
number = number.lstrip("-")
if not number.isnumeric():
raise ValueError("Input value is not an integer")
return f"{negative}0b{binary_recursive(int(number))}"
if __name__ == "__main__":
from doctest import testmod
testmod()
| def binary_recursive(decimal: int) -> str:
"""
Take a positive integer value and return its binary equivalent.
>>> binary_recursive(1000)
'1111101000'
>>> binary_recursive("72")
'1001000'
>>> binary_recursive("number")
Traceback (most recent call last):
...
ValueError: invalid literal for int() with base 10: 'number'
"""
decimal = int(decimal)
if decimal in (0, 1): # Exit cases for the recursion
return str(decimal)
div, mod = divmod(decimal, 2)
return binary_recursive(div) + str(mod)
def main(number: str) -> str:
"""
Take an integer value and raise ValueError for wrong inputs,
call the function above and return the output with prefix "0b" & "-0b"
for positive and negative integers respectively.
>>> main(0)
'0b0'
>>> main(40)
'0b101000'
>>> main(-40)
'-0b101000'
>>> main(40.8)
Traceback (most recent call last):
...
ValueError: Input value is not an integer
>>> main("forty")
Traceback (most recent call last):
...
ValueError: Input value is not an integer
"""
number = str(number).strip()
if not number:
raise ValueError("No input value was provided")
negative = "-" if number.startswith("-") else ""
number = number.lstrip("-")
if not number.isnumeric():
raise ValueError("Input value is not an integer")
return f"{negative}0b{binary_recursive(int(number))}"
if __name__ == "__main__":
from doctest import testmod
testmod()
| -1 |
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,184 | Replace flake8 with ruff | ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-16T11:23:53Z" | "2023-03-16T12:31:30Z" | c96241b5a5052af466894ef90c7a7c749ba872eb | 521fbca61c6bdb84746564eb58c2ef2131260187 | Replace flake8 with ruff. ### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from math import log2
def binary_count_trailing_zeros(a: int) -> int:
"""
Take in 1 integer, return a number that is
the number of trailing zeros in binary representation of that number.
>>> binary_count_trailing_zeros(25)
0
>>> binary_count_trailing_zeros(36)
2
>>> binary_count_trailing_zeros(16)
4
>>> binary_count_trailing_zeros(58)
1
>>> binary_count_trailing_zeros(4294967296)
32
>>> binary_count_trailing_zeros(0)
0
>>> binary_count_trailing_zeros(-10)
Traceback (most recent call last):
...
ValueError: Input value must be a positive integer
>>> binary_count_trailing_zeros(0.8)
Traceback (most recent call last):
...
TypeError: Input value must be a 'int' type
>>> binary_count_trailing_zeros("0")
Traceback (most recent call last):
...
TypeError: '<' not supported between instances of 'str' and 'int'
"""
if a < 0:
raise ValueError("Input value must be a positive integer")
elif isinstance(a, float):
raise TypeError("Input value must be a 'int' type")
return 0 if (a == 0) else int(log2(a & -a))
if __name__ == "__main__":
import doctest
doctest.testmod()
| from math import log2
def binary_count_trailing_zeros(a: int) -> int:
"""
Take in 1 integer, return a number that is
the number of trailing zeros in binary representation of that number.
>>> binary_count_trailing_zeros(25)
0
>>> binary_count_trailing_zeros(36)
2
>>> binary_count_trailing_zeros(16)
4
>>> binary_count_trailing_zeros(58)
1
>>> binary_count_trailing_zeros(4294967296)
32
>>> binary_count_trailing_zeros(0)
0
>>> binary_count_trailing_zeros(-10)
Traceback (most recent call last):
...
ValueError: Input value must be a positive integer
>>> binary_count_trailing_zeros(0.8)
Traceback (most recent call last):
...
TypeError: Input value must be a 'int' type
>>> binary_count_trailing_zeros("0")
Traceback (most recent call last):
...
TypeError: '<' not supported between instances of 'str' and 'int'
"""
if a < 0:
raise ValueError("Input value must be a positive integer")
elif isinstance(a, float):
raise TypeError("Input value must be a 'int' type")
return 0 if (a == 0) else int(log2(a & -a))
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from __future__ import annotations
from collections.abc import Iterator
from typing import Any
class Node:
def __init__(self, data: Any):
self.data: Any = data
self.next: Node | None = None
class CircularLinkedList:
def __init__(self):
self.head = None
self.tail = None
def __iter__(self) -> Iterator[Any]:
node = self.head
while self.head:
yield node.data
node = node.next
if node == self.head:
break
def __len__(self) -> int:
return len(tuple(iter(self)))
def __repr__(self):
return "->".join(str(item) for item in iter(self))
def insert_tail(self, data: Any) -> None:
self.insert_nth(len(self), data)
def insert_head(self, data: Any) -> None:
self.insert_nth(0, data)
def insert_nth(self, index: int, data: Any) -> None:
if index < 0 or index > len(self):
raise IndexError("list index out of range.")
new_node = Node(data)
if self.head is None:
new_node.next = new_node # first node points itself
self.tail = self.head = new_node
elif index == 0: # insert at head
new_node.next = self.head
self.head = self.tail.next = new_node
else:
temp = self.head
for _ in range(index - 1):
temp = temp.next
new_node.next = temp.next
temp.next = new_node
if index == len(self) - 1: # insert at tail
self.tail = new_node
def delete_front(self):
return self.delete_nth(0)
def delete_tail(self) -> Any:
return self.delete_nth(len(self) - 1)
def delete_nth(self, index: int = 0) -> Any:
if not 0 <= index < len(self):
raise IndexError("list index out of range.")
delete_node = self.head
if self.head == self.tail: # just one node
self.head = self.tail = None
elif index == 0: # delete head node
self.tail.next = self.tail.next.next
self.head = self.head.next
else:
temp = self.head
for _ in range(index - 1):
temp = temp.next
delete_node = temp.next
temp.next = temp.next.next
if index == len(self) - 1: # delete at tail
self.tail = temp
return delete_node.data
def is_empty(self) -> bool:
return len(self) == 0
def test_circular_linked_list() -> None:
"""
>>> test_circular_linked_list()
"""
circular_linked_list = CircularLinkedList()
assert len(circular_linked_list) == 0
assert circular_linked_list.is_empty() is True
assert str(circular_linked_list) == ""
try:
circular_linked_list.delete_front()
raise AssertionError() # This should not happen
except IndexError:
assert True # This should happen
try:
circular_linked_list.delete_tail()
raise AssertionError() # This should not happen
except IndexError:
assert True # This should happen
try:
circular_linked_list.delete_nth(-1)
raise AssertionError()
except IndexError:
assert True
try:
circular_linked_list.delete_nth(0)
raise AssertionError()
except IndexError:
assert True
assert circular_linked_list.is_empty() is True
for i in range(5):
assert len(circular_linked_list) == i
circular_linked_list.insert_nth(i, i + 1)
assert str(circular_linked_list) == "->".join(str(i) for i in range(1, 6))
circular_linked_list.insert_tail(6)
assert str(circular_linked_list) == "->".join(str(i) for i in range(1, 7))
circular_linked_list.insert_head(0)
assert str(circular_linked_list) == "->".join(str(i) for i in range(0, 7))
assert circular_linked_list.delete_front() == 0
assert circular_linked_list.delete_tail() == 6
assert str(circular_linked_list) == "->".join(str(i) for i in range(1, 6))
assert circular_linked_list.delete_nth(2) == 3
circular_linked_list.insert_nth(2, 3)
assert str(circular_linked_list) == "->".join(str(i) for i in range(1, 6))
assert circular_linked_list.is_empty() is False
if __name__ == "__main__":
import doctest
doctest.testmod()
| from __future__ import annotations
from collections.abc import Iterator
from typing import Any
class Node:
def __init__(self, data: Any):
self.data: Any = data
self.next: Node | None = None
class CircularLinkedList:
def __init__(self):
self.head = None
self.tail = None
def __iter__(self) -> Iterator[Any]:
node = self.head
while self.head:
yield node.data
node = node.next
if node == self.head:
break
def __len__(self) -> int:
return sum(1 for _ in self)
def __repr__(self):
return "->".join(str(item) for item in iter(self))
def insert_tail(self, data: Any) -> None:
self.insert_nth(len(self), data)
def insert_head(self, data: Any) -> None:
self.insert_nth(0, data)
def insert_nth(self, index: int, data: Any) -> None:
if index < 0 or index > len(self):
raise IndexError("list index out of range.")
new_node = Node(data)
if self.head is None:
new_node.next = new_node # first node points itself
self.tail = self.head = new_node
elif index == 0: # insert at head
new_node.next = self.head
self.head = self.tail.next = new_node
else:
temp = self.head
for _ in range(index - 1):
temp = temp.next
new_node.next = temp.next
temp.next = new_node
if index == len(self) - 1: # insert at tail
self.tail = new_node
def delete_front(self):
return self.delete_nth(0)
def delete_tail(self) -> Any:
return self.delete_nth(len(self) - 1)
def delete_nth(self, index: int = 0) -> Any:
if not 0 <= index < len(self):
raise IndexError("list index out of range.")
delete_node = self.head
if self.head == self.tail: # just one node
self.head = self.tail = None
elif index == 0: # delete head node
self.tail.next = self.tail.next.next
self.head = self.head.next
else:
temp = self.head
for _ in range(index - 1):
temp = temp.next
delete_node = temp.next
temp.next = temp.next.next
if index == len(self) - 1: # delete at tail
self.tail = temp
return delete_node.data
def is_empty(self) -> bool:
return len(self) == 0
def test_circular_linked_list() -> None:
"""
>>> test_circular_linked_list()
"""
circular_linked_list = CircularLinkedList()
assert len(circular_linked_list) == 0
assert circular_linked_list.is_empty() is True
assert str(circular_linked_list) == ""
try:
circular_linked_list.delete_front()
raise AssertionError() # This should not happen
except IndexError:
assert True # This should happen
try:
circular_linked_list.delete_tail()
raise AssertionError() # This should not happen
except IndexError:
assert True # This should happen
try:
circular_linked_list.delete_nth(-1)
raise AssertionError()
except IndexError:
assert True
try:
circular_linked_list.delete_nth(0)
raise AssertionError()
except IndexError:
assert True
assert circular_linked_list.is_empty() is True
for i in range(5):
assert len(circular_linked_list) == i
circular_linked_list.insert_nth(i, i + 1)
assert str(circular_linked_list) == "->".join(str(i) for i in range(1, 6))
circular_linked_list.insert_tail(6)
assert str(circular_linked_list) == "->".join(str(i) for i in range(1, 7))
circular_linked_list.insert_head(0)
assert str(circular_linked_list) == "->".join(str(i) for i in range(0, 7))
assert circular_linked_list.delete_front() == 0
assert circular_linked_list.delete_tail() == 6
assert str(circular_linked_list) == "->".join(str(i) for i in range(1, 6))
assert circular_linked_list.delete_nth(2) == 3
circular_linked_list.insert_nth(2, 3)
assert str(circular_linked_list) == "->".join(str(i) for i in range(1, 6))
assert circular_linked_list.is_empty() is False
if __name__ == "__main__":
import doctest
doctest.testmod()
| 1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
https://en.wikipedia.org/wiki/Doubly_linked_list
"""
class Node:
def __init__(self, data):
self.data = data
self.previous = None
self.next = None
def __str__(self):
return f"{self.data}"
class DoublyLinkedList:
def __init__(self):
self.head = None
self.tail = None
def __iter__(self):
"""
>>> linked_list = DoublyLinkedList()
>>> linked_list.insert_at_head('b')
>>> linked_list.insert_at_head('a')
>>> linked_list.insert_at_tail('c')
>>> tuple(linked_list)
('a', 'b', 'c')
"""
node = self.head
while node:
yield node.data
node = node.next
def __str__(self):
"""
>>> linked_list = DoublyLinkedList()
>>> linked_list.insert_at_tail('a')
>>> linked_list.insert_at_tail('b')
>>> linked_list.insert_at_tail('c')
>>> str(linked_list)
'a->b->c'
"""
return "->".join([str(item) for item in self])
def __len__(self):
"""
>>> linked_list = DoublyLinkedList()
>>> for i in range(0, 5):
... linked_list.insert_at_nth(i, i + 1)
>>> len(linked_list) == 5
True
"""
return len(tuple(iter(self)))
def insert_at_head(self, data):
self.insert_at_nth(0, data)
def insert_at_tail(self, data):
self.insert_at_nth(len(self), data)
def insert_at_nth(self, index: int, data):
"""
>>> linked_list = DoublyLinkedList()
>>> linked_list.insert_at_nth(-1, 666)
Traceback (most recent call last):
....
IndexError: list index out of range
>>> linked_list.insert_at_nth(1, 666)
Traceback (most recent call last):
....
IndexError: list index out of range
>>> linked_list.insert_at_nth(0, 2)
>>> linked_list.insert_at_nth(0, 1)
>>> linked_list.insert_at_nth(2, 4)
>>> linked_list.insert_at_nth(2, 3)
>>> str(linked_list)
'1->2->3->4'
>>> linked_list.insert_at_nth(5, 5)
Traceback (most recent call last):
....
IndexError: list index out of range
"""
if not 0 <= index <= len(self):
raise IndexError("list index out of range")
new_node = Node(data)
if self.head is None:
self.head = self.tail = new_node
elif index == 0:
self.head.previous = new_node
new_node.next = self.head
self.head = new_node
elif index == len(self):
self.tail.next = new_node
new_node.previous = self.tail
self.tail = new_node
else:
temp = self.head
for _ in range(0, index):
temp = temp.next
temp.previous.next = new_node
new_node.previous = temp.previous
new_node.next = temp
temp.previous = new_node
def delete_head(self):
return self.delete_at_nth(0)
def delete_tail(self):
return self.delete_at_nth(len(self) - 1)
def delete_at_nth(self, index: int):
"""
>>> linked_list = DoublyLinkedList()
>>> linked_list.delete_at_nth(0)
Traceback (most recent call last):
....
IndexError: list index out of range
>>> for i in range(0, 5):
... linked_list.insert_at_nth(i, i + 1)
>>> linked_list.delete_at_nth(0) == 1
True
>>> linked_list.delete_at_nth(3) == 5
True
>>> linked_list.delete_at_nth(1) == 3
True
>>> str(linked_list)
'2->4'
>>> linked_list.delete_at_nth(2)
Traceback (most recent call last):
....
IndexError: list index out of range
"""
if not 0 <= index <= len(self) - 1:
raise IndexError("list index out of range")
delete_node = self.head # default first node
if len(self) == 1:
self.head = self.tail = None
elif index == 0:
self.head = self.head.next
self.head.previous = None
elif index == len(self) - 1:
delete_node = self.tail
self.tail = self.tail.previous
self.tail.next = None
else:
temp = self.head
for _ in range(0, index):
temp = temp.next
delete_node = temp
temp.next.previous = temp.previous
temp.previous.next = temp.next
return delete_node.data
def delete(self, data) -> str:
current = self.head
while current.data != data: # Find the position to delete
if current.next:
current = current.next
else: # We have reached the end an no value matches
raise ValueError("No data matching given value")
if current == self.head:
self.delete_head()
elif current == self.tail:
self.delete_tail()
else: # Before: 1 <--> 2(current) <--> 3
current.previous.next = current.next # 1 --> 3
current.next.previous = current.previous # 1 <--> 3
return data
def is_empty(self):
"""
>>> linked_list = DoublyLinkedList()
>>> linked_list.is_empty()
True
>>> linked_list.insert_at_tail(1)
>>> linked_list.is_empty()
False
"""
return len(self) == 0
def test_doubly_linked_list() -> None:
"""
>>> test_doubly_linked_list()
"""
linked_list = DoublyLinkedList()
assert linked_list.is_empty() is True
assert str(linked_list) == ""
try:
linked_list.delete_head()
raise AssertionError() # This should not happen.
except IndexError:
assert True # This should happen.
try:
linked_list.delete_tail()
raise AssertionError() # This should not happen.
except IndexError:
assert True # This should happen.
for i in range(10):
assert len(linked_list) == i
linked_list.insert_at_nth(i, i + 1)
assert str(linked_list) == "->".join(str(i) for i in range(1, 11))
linked_list.insert_at_head(0)
linked_list.insert_at_tail(11)
assert str(linked_list) == "->".join(str(i) for i in range(0, 12))
assert linked_list.delete_head() == 0
assert linked_list.delete_at_nth(9) == 10
assert linked_list.delete_tail() == 11
assert len(linked_list) == 9
assert str(linked_list) == "->".join(str(i) for i in range(1, 10))
if __name__ == "__main__":
from doctest import testmod
testmod()
| """
https://en.wikipedia.org/wiki/Doubly_linked_list
"""
class Node:
def __init__(self, data):
self.data = data
self.previous = None
self.next = None
def __str__(self):
return f"{self.data}"
class DoublyLinkedList:
def __init__(self):
self.head = None
self.tail = None
def __iter__(self):
"""
>>> linked_list = DoublyLinkedList()
>>> linked_list.insert_at_head('b')
>>> linked_list.insert_at_head('a')
>>> linked_list.insert_at_tail('c')
>>> tuple(linked_list)
('a', 'b', 'c')
"""
node = self.head
while node:
yield node.data
node = node.next
def __str__(self):
"""
>>> linked_list = DoublyLinkedList()
>>> linked_list.insert_at_tail('a')
>>> linked_list.insert_at_tail('b')
>>> linked_list.insert_at_tail('c')
>>> str(linked_list)
'a->b->c'
"""
return "->".join([str(item) for item in self])
def __len__(self):
"""
>>> linked_list = DoublyLinkedList()
>>> for i in range(0, 5):
... linked_list.insert_at_nth(i, i + 1)
>>> len(linked_list) == 5
True
"""
return sum(1 for _ in self)
def insert_at_head(self, data):
self.insert_at_nth(0, data)
def insert_at_tail(self, data):
self.insert_at_nth(len(self), data)
def insert_at_nth(self, index: int, data):
"""
>>> linked_list = DoublyLinkedList()
>>> linked_list.insert_at_nth(-1, 666)
Traceback (most recent call last):
....
IndexError: list index out of range
>>> linked_list.insert_at_nth(1, 666)
Traceback (most recent call last):
....
IndexError: list index out of range
>>> linked_list.insert_at_nth(0, 2)
>>> linked_list.insert_at_nth(0, 1)
>>> linked_list.insert_at_nth(2, 4)
>>> linked_list.insert_at_nth(2, 3)
>>> str(linked_list)
'1->2->3->4'
>>> linked_list.insert_at_nth(5, 5)
Traceback (most recent call last):
....
IndexError: list index out of range
"""
if not 0 <= index <= len(self):
raise IndexError("list index out of range")
new_node = Node(data)
if self.head is None:
self.head = self.tail = new_node
elif index == 0:
self.head.previous = new_node
new_node.next = self.head
self.head = new_node
elif index == len(self):
self.tail.next = new_node
new_node.previous = self.tail
self.tail = new_node
else:
temp = self.head
for _ in range(0, index):
temp = temp.next
temp.previous.next = new_node
new_node.previous = temp.previous
new_node.next = temp
temp.previous = new_node
def delete_head(self):
return self.delete_at_nth(0)
def delete_tail(self):
return self.delete_at_nth(len(self) - 1)
def delete_at_nth(self, index: int):
"""
>>> linked_list = DoublyLinkedList()
>>> linked_list.delete_at_nth(0)
Traceback (most recent call last):
....
IndexError: list index out of range
>>> for i in range(0, 5):
... linked_list.insert_at_nth(i, i + 1)
>>> linked_list.delete_at_nth(0) == 1
True
>>> linked_list.delete_at_nth(3) == 5
True
>>> linked_list.delete_at_nth(1) == 3
True
>>> str(linked_list)
'2->4'
>>> linked_list.delete_at_nth(2)
Traceback (most recent call last):
....
IndexError: list index out of range
"""
if not 0 <= index <= len(self) - 1:
raise IndexError("list index out of range")
delete_node = self.head # default first node
if len(self) == 1:
self.head = self.tail = None
elif index == 0:
self.head = self.head.next
self.head.previous = None
elif index == len(self) - 1:
delete_node = self.tail
self.tail = self.tail.previous
self.tail.next = None
else:
temp = self.head
for _ in range(0, index):
temp = temp.next
delete_node = temp
temp.next.previous = temp.previous
temp.previous.next = temp.next
return delete_node.data
def delete(self, data) -> str:
current = self.head
while current.data != data: # Find the position to delete
if current.next:
current = current.next
else: # We have reached the end an no value matches
raise ValueError("No data matching given value")
if current == self.head:
self.delete_head()
elif current == self.tail:
self.delete_tail()
else: # Before: 1 <--> 2(current) <--> 3
current.previous.next = current.next # 1 --> 3
current.next.previous = current.previous # 1 <--> 3
return data
def is_empty(self):
"""
>>> linked_list = DoublyLinkedList()
>>> linked_list.is_empty()
True
>>> linked_list.insert_at_tail(1)
>>> linked_list.is_empty()
False
"""
return len(self) == 0
def test_doubly_linked_list() -> None:
"""
>>> test_doubly_linked_list()
"""
linked_list = DoublyLinkedList()
assert linked_list.is_empty() is True
assert str(linked_list) == ""
try:
linked_list.delete_head()
raise AssertionError() # This should not happen.
except IndexError:
assert True # This should happen.
try:
linked_list.delete_tail()
raise AssertionError() # This should not happen.
except IndexError:
assert True # This should happen.
for i in range(10):
assert len(linked_list) == i
linked_list.insert_at_nth(i, i + 1)
assert str(linked_list) == "->".join(str(i) for i in range(1, 11))
linked_list.insert_at_head(0)
linked_list.insert_at_tail(11)
assert str(linked_list) == "->".join(str(i) for i in range(0, 12))
assert linked_list.delete_head() == 0
assert linked_list.delete_at_nth(9) == 10
assert linked_list.delete_tail() == 11
assert len(linked_list) == 9
assert str(linked_list) == "->".join(str(i) for i in range(1, 10))
if __name__ == "__main__":
from doctest import testmod
testmod()
| 1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Algorithm that merges two sorted linked lists into one sorted linked list.
"""
from __future__ import annotations
from collections.abc import Iterable, Iterator
from dataclasses import dataclass
test_data_odd = (3, 9, -11, 0, 7, 5, 1, -1)
test_data_even = (4, 6, 2, 0, 8, 10, 3, -2)
@dataclass
class Node:
data: int
next_node: Node | None
class SortedLinkedList:
def __init__(self, ints: Iterable[int]) -> None:
self.head: Node | None = None
for i in sorted(ints, reverse=True):
self.head = Node(i, self.head)
def __iter__(self) -> Iterator[int]:
"""
>>> tuple(SortedLinkedList(test_data_odd)) == tuple(sorted(test_data_odd))
True
>>> tuple(SortedLinkedList(test_data_even)) == tuple(sorted(test_data_even))
True
"""
node = self.head
while node:
yield node.data
node = node.next_node
def __len__(self) -> int:
"""
>>> for i in range(3):
... len(SortedLinkedList(range(i))) == i
True
True
True
>>> len(SortedLinkedList(test_data_odd))
8
"""
return len(tuple(iter(self)))
def __str__(self) -> str:
"""
>>> str(SortedLinkedList([]))
''
>>> str(SortedLinkedList(test_data_odd))
'-11 -> -1 -> 0 -> 1 -> 3 -> 5 -> 7 -> 9'
>>> str(SortedLinkedList(test_data_even))
'-2 -> 0 -> 2 -> 3 -> 4 -> 6 -> 8 -> 10'
"""
return " -> ".join([str(node) for node in self])
def merge_lists(
sll_one: SortedLinkedList, sll_two: SortedLinkedList
) -> SortedLinkedList:
"""
>>> SSL = SortedLinkedList
>>> merged = merge_lists(SSL(test_data_odd), SSL(test_data_even))
>>> len(merged)
16
>>> str(merged)
'-11 -> -2 -> -1 -> 0 -> 0 -> 1 -> 2 -> 3 -> 3 -> 4 -> 5 -> 6 -> 7 -> 8 -> 9 -> 10'
>>> list(merged) == list(sorted(test_data_odd + test_data_even))
True
"""
return SortedLinkedList(list(sll_one) + list(sll_two))
if __name__ == "__main__":
import doctest
doctest.testmod()
SSL = SortedLinkedList
print(merge_lists(SSL(test_data_odd), SSL(test_data_even)))
| """
Algorithm that merges two sorted linked lists into one sorted linked list.
"""
from __future__ import annotations
from collections.abc import Iterable, Iterator
from dataclasses import dataclass
test_data_odd = (3, 9, -11, 0, 7, 5, 1, -1)
test_data_even = (4, 6, 2, 0, 8, 10, 3, -2)
@dataclass
class Node:
data: int
next_node: Node | None
class SortedLinkedList:
def __init__(self, ints: Iterable[int]) -> None:
self.head: Node | None = None
for i in sorted(ints, reverse=True):
self.head = Node(i, self.head)
def __iter__(self) -> Iterator[int]:
"""
>>> tuple(SortedLinkedList(test_data_odd)) == tuple(sorted(test_data_odd))
True
>>> tuple(SortedLinkedList(test_data_even)) == tuple(sorted(test_data_even))
True
"""
node = self.head
while node:
yield node.data
node = node.next_node
def __len__(self) -> int:
"""
>>> for i in range(3):
... len(SortedLinkedList(range(i))) == i
True
True
True
>>> len(SortedLinkedList(test_data_odd))
8
"""
return sum(1 for _ in self)
def __str__(self) -> str:
"""
>>> str(SortedLinkedList([]))
''
>>> str(SortedLinkedList(test_data_odd))
'-11 -> -1 -> 0 -> 1 -> 3 -> 5 -> 7 -> 9'
>>> str(SortedLinkedList(test_data_even))
'-2 -> 0 -> 2 -> 3 -> 4 -> 6 -> 8 -> 10'
"""
return " -> ".join([str(node) for node in self])
def merge_lists(
sll_one: SortedLinkedList, sll_two: SortedLinkedList
) -> SortedLinkedList:
"""
>>> SSL = SortedLinkedList
>>> merged = merge_lists(SSL(test_data_odd), SSL(test_data_even))
>>> len(merged)
16
>>> str(merged)
'-11 -> -2 -> -1 -> 0 -> 0 -> 1 -> 2 -> 3 -> 3 -> 4 -> 5 -> 6 -> 7 -> 8 -> 9 -> 10'
>>> list(merged) == list(sorted(test_data_odd + test_data_even))
True
"""
return SortedLinkedList(list(sll_one) + list(sll_two))
if __name__ == "__main__":
import doctest
doctest.testmod()
SSL = SortedLinkedList
print(merge_lists(SSL(test_data_odd), SSL(test_data_even)))
| 1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from typing import Any
class Node:
def __init__(self, data: Any):
"""
Create and initialize Node class instance.
>>> Node(20)
Node(20)
>>> Node("Hello, world!")
Node(Hello, world!)
>>> Node(None)
Node(None)
>>> Node(True)
Node(True)
"""
self.data = data
self.next = None
def __repr__(self) -> str:
"""
Get the string representation of this node.
>>> Node(10).__repr__()
'Node(10)'
"""
return f"Node({self.data})"
class LinkedList:
def __init__(self):
"""
Create and initialize LinkedList class instance.
>>> linked_list = LinkedList()
"""
self.head = None
def __iter__(self) -> Any:
"""
This function is intended for iterators to access
and iterate through data inside linked list.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("tail")
>>> linked_list.insert_tail("tail_1")
>>> linked_list.insert_tail("tail_2")
>>> for node in linked_list: # __iter__ used here.
... node
'tail'
'tail_1'
'tail_2'
"""
node = self.head
while node:
yield node.data
node = node.next
def __len__(self) -> int:
"""
Return length of linked list i.e. number of nodes
>>> linked_list = LinkedList()
>>> len(linked_list)
0
>>> linked_list.insert_tail("tail")
>>> len(linked_list)
1
>>> linked_list.insert_head("head")
>>> len(linked_list)
2
>>> _ = linked_list.delete_tail()
>>> len(linked_list)
1
>>> _ = linked_list.delete_head()
>>> len(linked_list)
0
"""
return len(tuple(iter(self)))
def __repr__(self) -> str:
"""
String representation/visualization of a Linked Lists
>>> linked_list = LinkedList()
>>> linked_list.insert_tail(1)
>>> linked_list.insert_tail(3)
>>> linked_list.__repr__()
'1->3'
"""
return "->".join([str(item) for item in self])
def __getitem__(self, index: int) -> Any:
"""
Indexing Support. Used to get a node at particular position
>>> linked_list = LinkedList()
>>> for i in range(0, 10):
... linked_list.insert_nth(i, i)
>>> all(str(linked_list[i]) == str(i) for i in range(0, 10))
True
>>> linked_list[-10]
Traceback (most recent call last):
...
ValueError: list index out of range.
>>> linked_list[len(linked_list)]
Traceback (most recent call last):
...
ValueError: list index out of range.
"""
if not 0 <= index < len(self):
raise ValueError("list index out of range.")
for i, node in enumerate(self):
if i == index:
return node
return None
# Used to change the data of a particular node
def __setitem__(self, index: int, data: Any) -> None:
"""
>>> linked_list = LinkedList()
>>> for i in range(0, 10):
... linked_list.insert_nth(i, i)
>>> linked_list[0] = 666
>>> linked_list[0]
666
>>> linked_list[5] = -666
>>> linked_list[5]
-666
>>> linked_list[-10] = 666
Traceback (most recent call last):
...
ValueError: list index out of range.
>>> linked_list[len(linked_list)] = 666
Traceback (most recent call last):
...
ValueError: list index out of range.
"""
if not 0 <= index < len(self):
raise ValueError("list index out of range.")
current = self.head
for _ in range(index):
current = current.next
current.data = data
def insert_tail(self, data: Any) -> None:
"""
Insert data to the end of linked list.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("tail")
>>> linked_list
tail
>>> linked_list.insert_tail("tail_2")
>>> linked_list
tail->tail_2
>>> linked_list.insert_tail("tail_3")
>>> linked_list
tail->tail_2->tail_3
"""
self.insert_nth(len(self), data)
def insert_head(self, data: Any) -> None:
"""
Insert data to the beginning of linked list.
>>> linked_list = LinkedList()
>>> linked_list.insert_head("head")
>>> linked_list
head
>>> linked_list.insert_head("head_2")
>>> linked_list
head_2->head
>>> linked_list.insert_head("head_3")
>>> linked_list
head_3->head_2->head
"""
self.insert_nth(0, data)
def insert_nth(self, index: int, data: Any) -> None:
"""
Insert data at given index.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.insert_nth(1, "fourth")
>>> linked_list
first->fourth->second->third
>>> linked_list.insert_nth(3, "fifth")
>>> linked_list
first->fourth->second->fifth->third
"""
if not 0 <= index <= len(self):
raise IndexError("list index out of range")
new_node = Node(data)
if self.head is None:
self.head = new_node
elif index == 0:
new_node.next = self.head # link new_node to head
self.head = new_node
else:
temp = self.head
for _ in range(index - 1):
temp = temp.next
new_node.next = temp.next
temp.next = new_node
def print_list(self) -> None: # print every node data
"""
This method prints every node data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
"""
print(self)
def delete_head(self) -> Any:
"""
Delete the first node and return the
node's data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.delete_head()
'first'
>>> linked_list
second->third
>>> linked_list.delete_head()
'second'
>>> linked_list
third
>>> linked_list.delete_head()
'third'
>>> linked_list.delete_head()
Traceback (most recent call last):
...
IndexError: List index out of range.
"""
return self.delete_nth(0)
def delete_tail(self) -> Any: # delete from tail
"""
Delete the tail end node and return the
node's data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.delete_tail()
'third'
>>> linked_list
first->second
>>> linked_list.delete_tail()
'second'
>>> linked_list
first
>>> linked_list.delete_tail()
'first'
>>> linked_list.delete_tail()
Traceback (most recent call last):
...
IndexError: List index out of range.
"""
return self.delete_nth(len(self) - 1)
def delete_nth(self, index: int = 0) -> Any:
"""
Delete node at given index and return the
node's data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.delete_nth(1) # delete middle
'second'
>>> linked_list
first->third
>>> linked_list.delete_nth(5) # this raises error
Traceback (most recent call last):
...
IndexError: List index out of range.
>>> linked_list.delete_nth(-1) # this also raises error
Traceback (most recent call last):
...
IndexError: List index out of range.
"""
if not 0 <= index <= len(self) - 1: # test if index is valid
raise IndexError("List index out of range.")
delete_node = self.head # default first node
if index == 0:
self.head = self.head.next
else:
temp = self.head
for _ in range(index - 1):
temp = temp.next
delete_node = temp.next
temp.next = temp.next.next
return delete_node.data
def is_empty(self) -> bool:
"""
Check if linked list is empty.
>>> linked_list = LinkedList()
>>> linked_list.is_empty()
True
>>> linked_list.insert_head("first")
>>> linked_list.is_empty()
False
"""
return self.head is None
def reverse(self) -> None:
"""
This reverses the linked list order.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.reverse()
>>> linked_list
third->second->first
"""
prev = None
current = self.head
while current:
# Store the current node's next node.
next_node = current.next
# Make the current node's next point backwards
current.next = prev
# Make the previous node be the current node
prev = current
# Make the current node the next node (to progress iteration)
current = next_node
# Return prev in order to put the head at the end
self.head = prev
def test_singly_linked_list() -> None:
"""
>>> test_singly_linked_list()
"""
linked_list = LinkedList()
assert linked_list.is_empty() is True
assert str(linked_list) == ""
try:
linked_list.delete_head()
raise AssertionError() # This should not happen.
except IndexError:
assert True # This should happen.
try:
linked_list.delete_tail()
raise AssertionError() # This should not happen.
except IndexError:
assert True # This should happen.
for i in range(10):
assert len(linked_list) == i
linked_list.insert_nth(i, i + 1)
assert str(linked_list) == "->".join(str(i) for i in range(1, 11))
linked_list.insert_head(0)
linked_list.insert_tail(11)
assert str(linked_list) == "->".join(str(i) for i in range(0, 12))
assert linked_list.delete_head() == 0
assert linked_list.delete_nth(9) == 10
assert linked_list.delete_tail() == 11
assert len(linked_list) == 9
assert str(linked_list) == "->".join(str(i) for i in range(1, 10))
assert all(linked_list[i] == i + 1 for i in range(0, 9)) is True
for i in range(0, 9):
linked_list[i] = -i
assert all(linked_list[i] == -i for i in range(0, 9)) is True
linked_list.reverse()
assert str(linked_list) == "->".join(str(i) for i in range(-8, 1))
def test_singly_linked_list_2() -> None:
"""
This section of the test used varying data types for input.
>>> test_singly_linked_list_2()
"""
test_input = [
-9,
100,
Node(77345112),
"dlrow olleH",
7,
5555,
0,
-192.55555,
"Hello, world!",
77.9,
Node(10),
None,
None,
12.20,
]
linked_list = LinkedList()
for i in test_input:
linked_list.insert_tail(i)
# Check if it's empty or not
assert linked_list.is_empty() is False
assert (
str(linked_list) == "-9->100->Node(77345112)->dlrow olleH->7->5555->0->"
"-192.55555->Hello, world!->77.9->Node(10)->None->None->12.2"
)
# Delete the head
result = linked_list.delete_head()
assert result == -9
assert (
str(linked_list) == "100->Node(77345112)->dlrow olleH->7->5555->0->-192.55555->"
"Hello, world!->77.9->Node(10)->None->None->12.2"
)
# Delete the tail
result = linked_list.delete_tail()
assert result == 12.2
assert (
str(linked_list) == "100->Node(77345112)->dlrow olleH->7->5555->0->-192.55555->"
"Hello, world!->77.9->Node(10)->None->None"
)
# Delete a node in specific location in linked list
result = linked_list.delete_nth(10)
assert result is None
assert (
str(linked_list) == "100->Node(77345112)->dlrow olleH->7->5555->0->-192.55555->"
"Hello, world!->77.9->Node(10)->None"
)
# Add a Node instance to its head
linked_list.insert_head(Node("Hello again, world!"))
assert (
str(linked_list)
== "Node(Hello again, world!)->100->Node(77345112)->dlrow olleH->"
"7->5555->0->-192.55555->Hello, world!->77.9->Node(10)->None"
)
# Add None to its tail
linked_list.insert_tail(None)
assert (
str(linked_list)
== "Node(Hello again, world!)->100->Node(77345112)->dlrow olleH->"
"7->5555->0->-192.55555->Hello, world!->77.9->Node(10)->None->None"
)
# Reverse the linked list
linked_list.reverse()
assert (
str(linked_list)
== "None->None->Node(10)->77.9->Hello, world!->-192.55555->0->5555->"
"7->dlrow olleH->Node(77345112)->100->Node(Hello again, world!)"
)
def main():
from doctest import testmod
testmod()
linked_list = LinkedList()
linked_list.insert_head(input("Inserting 1st at head ").strip())
linked_list.insert_head(input("Inserting 2nd at head ").strip())
print("\nPrint list:")
linked_list.print_list()
linked_list.insert_tail(input("\nInserting 1st at tail ").strip())
linked_list.insert_tail(input("Inserting 2nd at tail ").strip())
print("\nPrint list:")
linked_list.print_list()
print("\nDelete head")
linked_list.delete_head()
print("Delete tail")
linked_list.delete_tail()
print("\nPrint list:")
linked_list.print_list()
print("\nReverse linked list")
linked_list.reverse()
print("\nPrint list:")
linked_list.print_list()
print("\nString representation of linked list:")
print(linked_list)
print("\nReading/changing Node data using indexing:")
print(f"Element at Position 1: {linked_list[1]}")
linked_list[1] = input("Enter New Value: ").strip()
print("New list:")
print(linked_list)
print(f"length of linked_list is : {len(linked_list)}")
if __name__ == "__main__":
main()
| from typing import Any
class Node:
def __init__(self, data: Any):
"""
Create and initialize Node class instance.
>>> Node(20)
Node(20)
>>> Node("Hello, world!")
Node(Hello, world!)
>>> Node(None)
Node(None)
>>> Node(True)
Node(True)
"""
self.data = data
self.next = None
def __repr__(self) -> str:
"""
Get the string representation of this node.
>>> Node(10).__repr__()
'Node(10)'
"""
return f"Node({self.data})"
class LinkedList:
def __init__(self):
"""
Create and initialize LinkedList class instance.
>>> linked_list = LinkedList()
"""
self.head = None
def __iter__(self) -> Any:
"""
This function is intended for iterators to access
and iterate through data inside linked list.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("tail")
>>> linked_list.insert_tail("tail_1")
>>> linked_list.insert_tail("tail_2")
>>> for node in linked_list: # __iter__ used here.
... node
'tail'
'tail_1'
'tail_2'
"""
node = self.head
while node:
yield node.data
node = node.next
def __len__(self) -> int:
"""
Return length of linked list i.e. number of nodes
>>> linked_list = LinkedList()
>>> len(linked_list)
0
>>> linked_list.insert_tail("tail")
>>> len(linked_list)
1
>>> linked_list.insert_head("head")
>>> len(linked_list)
2
>>> _ = linked_list.delete_tail()
>>> len(linked_list)
1
>>> _ = linked_list.delete_head()
>>> len(linked_list)
0
"""
return sum(1 for _ in self)
def __repr__(self) -> str:
"""
String representation/visualization of a Linked Lists
>>> linked_list = LinkedList()
>>> linked_list.insert_tail(1)
>>> linked_list.insert_tail(3)
>>> linked_list.__repr__()
'1->3'
"""
return "->".join([str(item) for item in self])
def __getitem__(self, index: int) -> Any:
"""
Indexing Support. Used to get a node at particular position
>>> linked_list = LinkedList()
>>> for i in range(0, 10):
... linked_list.insert_nth(i, i)
>>> all(str(linked_list[i]) == str(i) for i in range(0, 10))
True
>>> linked_list[-10]
Traceback (most recent call last):
...
ValueError: list index out of range.
>>> linked_list[len(linked_list)]
Traceback (most recent call last):
...
ValueError: list index out of range.
"""
if not 0 <= index < len(self):
raise ValueError("list index out of range.")
for i, node in enumerate(self):
if i == index:
return node
return None
# Used to change the data of a particular node
def __setitem__(self, index: int, data: Any) -> None:
"""
>>> linked_list = LinkedList()
>>> for i in range(0, 10):
... linked_list.insert_nth(i, i)
>>> linked_list[0] = 666
>>> linked_list[0]
666
>>> linked_list[5] = -666
>>> linked_list[5]
-666
>>> linked_list[-10] = 666
Traceback (most recent call last):
...
ValueError: list index out of range.
>>> linked_list[len(linked_list)] = 666
Traceback (most recent call last):
...
ValueError: list index out of range.
"""
if not 0 <= index < len(self):
raise ValueError("list index out of range.")
current = self.head
for _ in range(index):
current = current.next
current.data = data
def insert_tail(self, data: Any) -> None:
"""
Insert data to the end of linked list.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("tail")
>>> linked_list
tail
>>> linked_list.insert_tail("tail_2")
>>> linked_list
tail->tail_2
>>> linked_list.insert_tail("tail_3")
>>> linked_list
tail->tail_2->tail_3
"""
self.insert_nth(len(self), data)
def insert_head(self, data: Any) -> None:
"""
Insert data to the beginning of linked list.
>>> linked_list = LinkedList()
>>> linked_list.insert_head("head")
>>> linked_list
head
>>> linked_list.insert_head("head_2")
>>> linked_list
head_2->head
>>> linked_list.insert_head("head_3")
>>> linked_list
head_3->head_2->head
"""
self.insert_nth(0, data)
def insert_nth(self, index: int, data: Any) -> None:
"""
Insert data at given index.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.insert_nth(1, "fourth")
>>> linked_list
first->fourth->second->third
>>> linked_list.insert_nth(3, "fifth")
>>> linked_list
first->fourth->second->fifth->third
"""
if not 0 <= index <= len(self):
raise IndexError("list index out of range")
new_node = Node(data)
if self.head is None:
self.head = new_node
elif index == 0:
new_node.next = self.head # link new_node to head
self.head = new_node
else:
temp = self.head
for _ in range(index - 1):
temp = temp.next
new_node.next = temp.next
temp.next = new_node
def print_list(self) -> None: # print every node data
"""
This method prints every node data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
"""
print(self)
def delete_head(self) -> Any:
"""
Delete the first node and return the
node's data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.delete_head()
'first'
>>> linked_list
second->third
>>> linked_list.delete_head()
'second'
>>> linked_list
third
>>> linked_list.delete_head()
'third'
>>> linked_list.delete_head()
Traceback (most recent call last):
...
IndexError: List index out of range.
"""
return self.delete_nth(0)
def delete_tail(self) -> Any: # delete from tail
"""
Delete the tail end node and return the
node's data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.delete_tail()
'third'
>>> linked_list
first->second
>>> linked_list.delete_tail()
'second'
>>> linked_list
first
>>> linked_list.delete_tail()
'first'
>>> linked_list.delete_tail()
Traceback (most recent call last):
...
IndexError: List index out of range.
"""
return self.delete_nth(len(self) - 1)
def delete_nth(self, index: int = 0) -> Any:
"""
Delete node at given index and return the
node's data.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.delete_nth(1) # delete middle
'second'
>>> linked_list
first->third
>>> linked_list.delete_nth(5) # this raises error
Traceback (most recent call last):
...
IndexError: List index out of range.
>>> linked_list.delete_nth(-1) # this also raises error
Traceback (most recent call last):
...
IndexError: List index out of range.
"""
if not 0 <= index <= len(self) - 1: # test if index is valid
raise IndexError("List index out of range.")
delete_node = self.head # default first node
if index == 0:
self.head = self.head.next
else:
temp = self.head
for _ in range(index - 1):
temp = temp.next
delete_node = temp.next
temp.next = temp.next.next
return delete_node.data
def is_empty(self) -> bool:
"""
Check if linked list is empty.
>>> linked_list = LinkedList()
>>> linked_list.is_empty()
True
>>> linked_list.insert_head("first")
>>> linked_list.is_empty()
False
"""
return self.head is None
def reverse(self) -> None:
"""
This reverses the linked list order.
>>> linked_list = LinkedList()
>>> linked_list.insert_tail("first")
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
first->second->third
>>> linked_list.reverse()
>>> linked_list
third->second->first
"""
prev = None
current = self.head
while current:
# Store the current node's next node.
next_node = current.next
# Make the current node's next point backwards
current.next = prev
# Make the previous node be the current node
prev = current
# Make the current node the next node (to progress iteration)
current = next_node
# Return prev in order to put the head at the end
self.head = prev
def test_singly_linked_list() -> None:
"""
>>> test_singly_linked_list()
"""
linked_list = LinkedList()
assert linked_list.is_empty() is True
assert str(linked_list) == ""
try:
linked_list.delete_head()
raise AssertionError() # This should not happen.
except IndexError:
assert True # This should happen.
try:
linked_list.delete_tail()
raise AssertionError() # This should not happen.
except IndexError:
assert True # This should happen.
for i in range(10):
assert len(linked_list) == i
linked_list.insert_nth(i, i + 1)
assert str(linked_list) == "->".join(str(i) for i in range(1, 11))
linked_list.insert_head(0)
linked_list.insert_tail(11)
assert str(linked_list) == "->".join(str(i) for i in range(0, 12))
assert linked_list.delete_head() == 0
assert linked_list.delete_nth(9) == 10
assert linked_list.delete_tail() == 11
assert len(linked_list) == 9
assert str(linked_list) == "->".join(str(i) for i in range(1, 10))
assert all(linked_list[i] == i + 1 for i in range(0, 9)) is True
for i in range(0, 9):
linked_list[i] = -i
assert all(linked_list[i] == -i for i in range(0, 9)) is True
linked_list.reverse()
assert str(linked_list) == "->".join(str(i) for i in range(-8, 1))
def test_singly_linked_list_2() -> None:
"""
This section of the test used varying data types for input.
>>> test_singly_linked_list_2()
"""
test_input = [
-9,
100,
Node(77345112),
"dlrow olleH",
7,
5555,
0,
-192.55555,
"Hello, world!",
77.9,
Node(10),
None,
None,
12.20,
]
linked_list = LinkedList()
for i in test_input:
linked_list.insert_tail(i)
# Check if it's empty or not
assert linked_list.is_empty() is False
assert (
str(linked_list) == "-9->100->Node(77345112)->dlrow olleH->7->5555->0->"
"-192.55555->Hello, world!->77.9->Node(10)->None->None->12.2"
)
# Delete the head
result = linked_list.delete_head()
assert result == -9
assert (
str(linked_list) == "100->Node(77345112)->dlrow olleH->7->5555->0->-192.55555->"
"Hello, world!->77.9->Node(10)->None->None->12.2"
)
# Delete the tail
result = linked_list.delete_tail()
assert result == 12.2
assert (
str(linked_list) == "100->Node(77345112)->dlrow olleH->7->5555->0->-192.55555->"
"Hello, world!->77.9->Node(10)->None->None"
)
# Delete a node in specific location in linked list
result = linked_list.delete_nth(10)
assert result is None
assert (
str(linked_list) == "100->Node(77345112)->dlrow olleH->7->5555->0->-192.55555->"
"Hello, world!->77.9->Node(10)->None"
)
# Add a Node instance to its head
linked_list.insert_head(Node("Hello again, world!"))
assert (
str(linked_list)
== "Node(Hello again, world!)->100->Node(77345112)->dlrow olleH->"
"7->5555->0->-192.55555->Hello, world!->77.9->Node(10)->None"
)
# Add None to its tail
linked_list.insert_tail(None)
assert (
str(linked_list)
== "Node(Hello again, world!)->100->Node(77345112)->dlrow olleH->"
"7->5555->0->-192.55555->Hello, world!->77.9->Node(10)->None->None"
)
# Reverse the linked list
linked_list.reverse()
assert (
str(linked_list)
== "None->None->Node(10)->77.9->Hello, world!->-192.55555->0->5555->"
"7->dlrow olleH->Node(77345112)->100->Node(Hello again, world!)"
)
def main():
from doctest import testmod
testmod()
linked_list = LinkedList()
linked_list.insert_head(input("Inserting 1st at head ").strip())
linked_list.insert_head(input("Inserting 2nd at head ").strip())
print("\nPrint list:")
linked_list.print_list()
linked_list.insert_tail(input("\nInserting 1st at tail ").strip())
linked_list.insert_tail(input("Inserting 2nd at tail ").strip())
print("\nPrint list:")
linked_list.print_list()
print("\nDelete head")
linked_list.delete_head()
print("Delete tail")
linked_list.delete_tail()
print("\nPrint list:")
linked_list.print_list()
print("\nReverse linked list")
linked_list.reverse()
print("\nPrint list:")
linked_list.print_list()
print("\nString representation of linked list:")
print(linked_list)
print("\nReading/changing Node data using indexing:")
print(f"Element at Position 1: {linked_list[1]}")
linked_list[1] = input("Enter New Value: ").strip()
print("New list:")
print(linked_list)
print(f"length of linked_list is : {len(linked_list)}")
if __name__ == "__main__":
main()
| 1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
The Jaccard similarity coefficient is a commonly used indicator of the
similarity between two sets. Let U be a set and A and B be subsets of U,
then the Jaccard index/similarity is defined to be the ratio of the number
of elements of their intersection and the number of elements of their union.
Inspired from Wikipedia and
the book Mining of Massive Datasets [MMDS 2nd Edition, Chapter 3]
https://en.wikipedia.org/wiki/Jaccard_index
https://mmds.org
Jaccard similarity is widely used with MinHashing.
"""
def jaccard_similarity(set_a, set_b, alternative_union=False):
"""
Finds the jaccard similarity between two sets.
Essentially, its intersection over union.
The alternative way to calculate this is to take union as sum of the
number of items in the two sets. This will lead to jaccard similarity
of a set with itself be 1/2 instead of 1. [MMDS 2nd Edition, Page 77]
Parameters:
:set_a (set,list,tuple): A non-empty set/list
:set_b (set,list,tuple): A non-empty set/list
:alternativeUnion (boolean): If True, use sum of number of
items as union
Output:
(float) The jaccard similarity between the two sets.
Examples:
>>> set_a = {'a', 'b', 'c', 'd', 'e'}
>>> set_b = {'c', 'd', 'e', 'f', 'h', 'i'}
>>> jaccard_similarity(set_a, set_b)
0.375
>>> jaccard_similarity(set_a, set_a)
1.0
>>> jaccard_similarity(set_a, set_a, True)
0.5
>>> set_a = ['a', 'b', 'c', 'd', 'e']
>>> set_b = ('c', 'd', 'e', 'f', 'h', 'i')
>>> jaccard_similarity(set_a, set_b)
0.375
"""
if isinstance(set_a, set) and isinstance(set_b, set):
intersection = len(set_a.intersection(set_b))
if alternative_union:
union = len(set_a) + len(set_b)
else:
union = len(set_a.union(set_b))
return intersection / union
if isinstance(set_a, (list, tuple)) and isinstance(set_b, (list, tuple)):
intersection = [element for element in set_a if element in set_b]
if alternative_union:
union = len(set_a) + len(set_b)
return len(intersection) / union
else:
union = set_a + [element for element in set_b if element not in set_a]
return len(intersection) / len(union)
return len(intersection) / len(union)
return None
if __name__ == "__main__":
set_a = {"a", "b", "c", "d", "e"}
set_b = {"c", "d", "e", "f", "h", "i"}
print(jaccard_similarity(set_a, set_b))
| """
The Jaccard similarity coefficient is a commonly used indicator of the
similarity between two sets. Let U be a set and A and B be subsets of U,
then the Jaccard index/similarity is defined to be the ratio of the number
of elements of their intersection and the number of elements of their union.
Inspired from Wikipedia and
the book Mining of Massive Datasets [MMDS 2nd Edition, Chapter 3]
https://en.wikipedia.org/wiki/Jaccard_index
https://mmds.org
Jaccard similarity is widely used with MinHashing.
"""
def jaccard_similarity(set_a, set_b, alternative_union=False):
"""
Finds the jaccard similarity between two sets.
Essentially, its intersection over union.
The alternative way to calculate this is to take union as sum of the
number of items in the two sets. This will lead to jaccard similarity
of a set with itself be 1/2 instead of 1. [MMDS 2nd Edition, Page 77]
Parameters:
:set_a (set,list,tuple): A non-empty set/list
:set_b (set,list,tuple): A non-empty set/list
:alternativeUnion (boolean): If True, use sum of number of
items as union
Output:
(float) The jaccard similarity between the two sets.
Examples:
>>> set_a = {'a', 'b', 'c', 'd', 'e'}
>>> set_b = {'c', 'd', 'e', 'f', 'h', 'i'}
>>> jaccard_similarity(set_a, set_b)
0.375
>>> jaccard_similarity(set_a, set_a)
1.0
>>> jaccard_similarity(set_a, set_a, True)
0.5
>>> set_a = ['a', 'b', 'c', 'd', 'e']
>>> set_b = ('c', 'd', 'e', 'f', 'h', 'i')
>>> jaccard_similarity(set_a, set_b)
0.375
"""
if isinstance(set_a, set) and isinstance(set_b, set):
intersection = len(set_a.intersection(set_b))
if alternative_union:
union = len(set_a) + len(set_b)
else:
union = len(set_a.union(set_b))
return intersection / union
if isinstance(set_a, (list, tuple)) and isinstance(set_b, (list, tuple)):
intersection = [element for element in set_a if element in set_b]
if alternative_union:
union = len(set_a) + len(set_b)
return len(intersection) / union
else:
union = set_a + [element for element in set_b if element not in set_a]
return len(intersection) / len(union)
return len(intersection) / len(union)
return None
if __name__ == "__main__":
set_a = {"a", "b", "c", "d", "e"}
set_b = {"c", "d", "e", "f", "h", "i"}
print(jaccard_similarity(set_a, set_b))
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Problem 45: https://projecteuler.net/problem=45
Triangle, pentagonal, and hexagonal numbers are generated by the following formulae:
Triangle T(n) = (n * (n + 1)) / 2 1, 3, 6, 10, 15, ...
Pentagonal P(n) = (n * (3 * n − 1)) / 2 1, 5, 12, 22, 35, ...
Hexagonal H(n) = n * (2 * n − 1) 1, 6, 15, 28, 45, ...
It can be verified that T(285) = P(165) = H(143) = 40755.
Find the next triangle number that is also pentagonal and hexagonal.
All triangle numbers are hexagonal numbers.
T(2n-1) = n * (2 * n - 1) = H(n)
So we shall check only for hexagonal numbers which are also pentagonal.
"""
def hexagonal_num(n: int) -> int:
"""
Returns nth hexagonal number
>>> hexagonal_num(143)
40755
>>> hexagonal_num(21)
861
>>> hexagonal_num(10)
190
"""
return n * (2 * n - 1)
def is_pentagonal(n: int) -> bool:
"""
Returns True if n is pentagonal, False otherwise.
>>> is_pentagonal(330)
True
>>> is_pentagonal(7683)
False
>>> is_pentagonal(2380)
True
"""
root = (1 + 24 * n) ** 0.5
return ((1 + root) / 6) % 1 == 0
def solution(start: int = 144) -> int:
"""
Returns the next number which is triangular, pentagonal and hexagonal.
>>> solution(144)
1533776805
"""
n = start
num = hexagonal_num(n)
while not is_pentagonal(num):
n += 1
num = hexagonal_num(n)
return num
if __name__ == "__main__":
print(f"{solution()} = ")
| """
Problem 45: https://projecteuler.net/problem=45
Triangle, pentagonal, and hexagonal numbers are generated by the following formulae:
Triangle T(n) = (n * (n + 1)) / 2 1, 3, 6, 10, 15, ...
Pentagonal P(n) = (n * (3 * n − 1)) / 2 1, 5, 12, 22, 35, ...
Hexagonal H(n) = n * (2 * n − 1) 1, 6, 15, 28, 45, ...
It can be verified that T(285) = P(165) = H(143) = 40755.
Find the next triangle number that is also pentagonal and hexagonal.
All triangle numbers are hexagonal numbers.
T(2n-1) = n * (2 * n - 1) = H(n)
So we shall check only for hexagonal numbers which are also pentagonal.
"""
def hexagonal_num(n: int) -> int:
"""
Returns nth hexagonal number
>>> hexagonal_num(143)
40755
>>> hexagonal_num(21)
861
>>> hexagonal_num(10)
190
"""
return n * (2 * n - 1)
def is_pentagonal(n: int) -> bool:
"""
Returns True if n is pentagonal, False otherwise.
>>> is_pentagonal(330)
True
>>> is_pentagonal(7683)
False
>>> is_pentagonal(2380)
True
"""
root = (1 + 24 * n) ** 0.5
return ((1 + root) / 6) % 1 == 0
def solution(start: int = 144) -> int:
"""
Returns the next number which is triangular, pentagonal and hexagonal.
>>> solution(144)
1533776805
"""
n = start
num = hexagonal_num(n)
while not is_pentagonal(num):
n += 1
num = hexagonal_num(n)
return num
if __name__ == "__main__":
print(f"{solution()} = ")
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 89: https://projecteuler.net/problem=89
For a number written in Roman numerals to be considered valid there are basic rules
which must be followed. Even though the rules allow some numbers to be expressed in
more than one way there is always a "best" way of writing a particular number.
For example, it would appear that there are at least six ways of writing the number
sixteen:
IIIIIIIIIIIIIIII
VIIIIIIIIIII
VVIIIIII
XIIIIII
VVVI
XVI
However, according to the rules only XIIIIII and XVI are valid, and the last example
is considered to be the most efficient, as it uses the least number of numerals.
The 11K text file, roman.txt (right click and 'Save Link/Target As...'), contains one
thousand numbers written in valid, but not necessarily minimal, Roman numerals; see
About... Roman Numerals for the definitive rules for this problem.
Find the number of characters saved by writing each of these in their minimal form.
Note: You can assume that all the Roman numerals in the file contain no more than four
consecutive identical units.
"""
import os
SYMBOLS = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
def parse_roman_numerals(numerals: str) -> int:
"""
Converts a string of roman numerals to an integer.
e.g.
>>> parse_roman_numerals("LXXXIX")
89
>>> parse_roman_numerals("IIII")
4
"""
total_value = 0
index = 0
while index < len(numerals) - 1:
current_value = SYMBOLS[numerals[index]]
next_value = SYMBOLS[numerals[index + 1]]
if current_value < next_value:
total_value -= current_value
else:
total_value += current_value
index += 1
total_value += SYMBOLS[numerals[index]]
return total_value
def generate_roman_numerals(num: int) -> str:
"""
Generates a string of roman numerals for a given integer.
e.g.
>>> generate_roman_numerals(89)
'LXXXIX'
>>> generate_roman_numerals(4)
'IV'
"""
numerals = ""
m_count = num // 1000
numerals += m_count * "M"
num %= 1000
c_count = num // 100
if c_count == 9:
numerals += "CM"
c_count -= 9
elif c_count == 4:
numerals += "CD"
c_count -= 4
if c_count >= 5:
numerals += "D"
c_count -= 5
numerals += c_count * "C"
num %= 100
x_count = num // 10
if x_count == 9:
numerals += "XC"
x_count -= 9
elif x_count == 4:
numerals += "XL"
x_count -= 4
if x_count >= 5:
numerals += "L"
x_count -= 5
numerals += x_count * "X"
num %= 10
if num == 9:
numerals += "IX"
num -= 9
elif num == 4:
numerals += "IV"
num -= 4
if num >= 5:
numerals += "V"
num -= 5
numerals += num * "I"
return numerals
def solution(roman_numerals_filename: str = "/p089_roman.txt") -> int:
"""
Calculates and returns the answer to project euler problem 89.
>>> solution("/numeralcleanup_test.txt")
16
"""
savings = 0
with open(os.path.dirname(__file__) + roman_numerals_filename) as file1:
lines = file1.readlines()
for line in lines:
original = line.strip()
num = parse_roman_numerals(original)
shortened = generate_roman_numerals(num)
savings += len(original) - len(shortened)
return savings
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 89: https://projecteuler.net/problem=89
For a number written in Roman numerals to be considered valid there are basic rules
which must be followed. Even though the rules allow some numbers to be expressed in
more than one way there is always a "best" way of writing a particular number.
For example, it would appear that there are at least six ways of writing the number
sixteen:
IIIIIIIIIIIIIIII
VIIIIIIIIIII
VVIIIIII
XIIIIII
VVVI
XVI
However, according to the rules only XIIIIII and XVI are valid, and the last example
is considered to be the most efficient, as it uses the least number of numerals.
The 11K text file, roman.txt (right click and 'Save Link/Target As...'), contains one
thousand numbers written in valid, but not necessarily minimal, Roman numerals; see
About... Roman Numerals for the definitive rules for this problem.
Find the number of characters saved by writing each of these in their minimal form.
Note: You can assume that all the Roman numerals in the file contain no more than four
consecutive identical units.
"""
import os
SYMBOLS = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
def parse_roman_numerals(numerals: str) -> int:
"""
Converts a string of roman numerals to an integer.
e.g.
>>> parse_roman_numerals("LXXXIX")
89
>>> parse_roman_numerals("IIII")
4
"""
total_value = 0
index = 0
while index < len(numerals) - 1:
current_value = SYMBOLS[numerals[index]]
next_value = SYMBOLS[numerals[index + 1]]
if current_value < next_value:
total_value -= current_value
else:
total_value += current_value
index += 1
total_value += SYMBOLS[numerals[index]]
return total_value
def generate_roman_numerals(num: int) -> str:
"""
Generates a string of roman numerals for a given integer.
e.g.
>>> generate_roman_numerals(89)
'LXXXIX'
>>> generate_roman_numerals(4)
'IV'
"""
numerals = ""
m_count = num // 1000
numerals += m_count * "M"
num %= 1000
c_count = num // 100
if c_count == 9:
numerals += "CM"
c_count -= 9
elif c_count == 4:
numerals += "CD"
c_count -= 4
if c_count >= 5:
numerals += "D"
c_count -= 5
numerals += c_count * "C"
num %= 100
x_count = num // 10
if x_count == 9:
numerals += "XC"
x_count -= 9
elif x_count == 4:
numerals += "XL"
x_count -= 4
if x_count >= 5:
numerals += "L"
x_count -= 5
numerals += x_count * "X"
num %= 10
if num == 9:
numerals += "IX"
num -= 9
elif num == 4:
numerals += "IV"
num -= 4
if num >= 5:
numerals += "V"
num -= 5
numerals += num * "I"
return numerals
def solution(roman_numerals_filename: str = "/p089_roman.txt") -> int:
"""
Calculates and returns the answer to project euler problem 89.
>>> solution("/numeralcleanup_test.txt")
16
"""
savings = 0
with open(os.path.dirname(__file__) + roman_numerals_filename) as file1:
lines = file1.readlines()
for line in lines:
original = line.strip()
num = parse_roman_numerals(original)
shortened = generate_roman_numerals(num)
savings += len(original) - len(shortened)
return savings
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
A binary search Tree
"""
from collections.abc import Iterable
from typing import Any
class Node:
def __init__(self, value: int | None = None):
self.value = value
self.parent: Node | None = None # Added in order to delete a node easier
self.left: Node | None = None
self.right: Node | None = None
def __repr__(self) -> str:
from pprint import pformat
if self.left is None and self.right is None:
return str(self.value)
return pformat({f"{self.value}": (self.left, self.right)}, indent=1)
class BinarySearchTree:
def __init__(self, root: Node | None = None):
self.root = root
def __str__(self) -> str:
"""
Return a string of all the Nodes using in order traversal
"""
return str(self.root)
def __reassign_nodes(self, node: Node, new_children: Node | None) -> None:
if new_children is not None: # reset its kids
new_children.parent = node.parent
if node.parent is not None: # reset its parent
if self.is_right(node): # If it is the right children
node.parent.right = new_children
else:
node.parent.left = new_children
else:
self.root = None
def is_right(self, node: Node) -> bool:
if node.parent and node.parent.right:
return node == node.parent.right
return False
def empty(self) -> bool:
return self.root is None
def __insert(self, value) -> None:
"""
Insert a new node in Binary Search Tree with value label
"""
new_node = Node(value) # create a new Node
if self.empty(): # if Tree is empty
self.root = new_node # set its root
else: # Tree is not empty
parent_node = self.root # from root
if parent_node is None:
return
while True: # While we don't get to a leaf
if value < parent_node.value: # We go left
if parent_node.left is None:
parent_node.left = new_node # We insert the new node in a leaf
break
else:
parent_node = parent_node.left
else:
if parent_node.right is None:
parent_node.right = new_node
break
else:
parent_node = parent_node.right
new_node.parent = parent_node
def insert(self, *values) -> None:
for value in values:
self.__insert(value)
def search(self, value) -> Node | None:
if self.empty():
raise IndexError("Warning: Tree is empty! please use another.")
else:
node = self.root
# use lazy evaluation here to avoid NoneType Attribute error
while node is not None and node.value is not value:
node = node.left if value < node.value else node.right
return node
def get_max(self, node: Node | None = None) -> Node | None:
"""
We go deep on the right branch
"""
if node is None:
if self.root is None:
return None
node = self.root
if not self.empty():
while node.right is not None:
node = node.right
return node
def get_min(self, node: Node | None = None) -> Node | None:
"""
We go deep on the left branch
"""
if node is None:
node = self.root
if self.root is None:
return None
if not self.empty():
node = self.root
while node.left is not None:
node = node.left
return node
def remove(self, value: int) -> None:
node = self.search(value) # Look for the node with that label
if node is not None:
if node.left is None and node.right is None: # If it has no children
self.__reassign_nodes(node, None)
elif node.left is None: # Has only right children
self.__reassign_nodes(node, node.right)
elif node.right is None: # Has only left children
self.__reassign_nodes(node, node.left)
else:
tmp_node = self.get_max(
node.left
) # Gets the max value of the left branch
self.remove(tmp_node.value) # type: ignore
node.value = (
tmp_node.value # type: ignore
) # Assigns the value to the node to delete and keep tree structure
def preorder_traverse(self, node: Node | None) -> Iterable:
if node is not None:
yield node # Preorder Traversal
yield from self.preorder_traverse(node.left)
yield from self.preorder_traverse(node.right)
def traversal_tree(self, traversal_function=None) -> Any:
"""
This function traversal the tree.
You can pass a function to traversal the tree as needed by client code
"""
if traversal_function is None:
return self.preorder_traverse(self.root)
else:
return traversal_function(self.root)
def inorder(self, arr: list, node: Node | None) -> None:
"""Perform an inorder traversal and append values of the nodes to
a list named arr"""
if node:
self.inorder(arr, node.left)
arr.append(node.value)
self.inorder(arr, node.right)
def find_kth_smallest(self, k: int, node: Node) -> int:
"""Return the kth smallest element in a binary search tree"""
arr: list[int] = []
self.inorder(arr, node) # append all values to list using inorder traversal
return arr[k - 1]
def postorder(curr_node: Node | None) -> list[Node]:
"""
postOrder (left, right, self)
"""
node_list = []
if curr_node is not None:
node_list = postorder(curr_node.left) + postorder(curr_node.right) + [curr_node]
return node_list
def binary_search_tree() -> None:
r"""
Example
8
/ \
3 10
/ \ \
1 6 14
/ \ /
4 7 13
>>> t = BinarySearchTree()
>>> t.insert(8, 3, 6, 1, 10, 14, 13, 4, 7)
>>> print(" ".join(repr(i.value) for i in t.traversal_tree()))
8 3 1 6 4 7 10 14 13
>>> print(" ".join(repr(i.value) for i in t.traversal_tree(postorder)))
1 4 7 6 3 13 14 10 8
>>> BinarySearchTree().search(6)
Traceback (most recent call last):
...
IndexError: Warning: Tree is empty! please use another.
"""
testlist = (8, 3, 6, 1, 10, 14, 13, 4, 7)
t = BinarySearchTree()
for i in testlist:
t.insert(i)
# Prints all the elements of the list in order traversal
print(t)
if t.search(6) is not None:
print("The value 6 exists")
else:
print("The value 6 doesn't exist")
if t.search(-1) is not None:
print("The value -1 exists")
else:
print("The value -1 doesn't exist")
if not t.empty():
print("Max Value: ", t.get_max().value) # type: ignore
print("Min Value: ", t.get_min().value) # type: ignore
for i in testlist:
t.remove(i)
print(t)
if __name__ == "__main__":
import doctest
doctest.testmod(verbose=True)
| """
A binary search Tree
"""
from collections.abc import Iterable
from typing import Any
class Node:
def __init__(self, value: int | None = None):
self.value = value
self.parent: Node | None = None # Added in order to delete a node easier
self.left: Node | None = None
self.right: Node | None = None
def __repr__(self) -> str:
from pprint import pformat
if self.left is None and self.right is None:
return str(self.value)
return pformat({f"{self.value}": (self.left, self.right)}, indent=1)
class BinarySearchTree:
def __init__(self, root: Node | None = None):
self.root = root
def __str__(self) -> str:
"""
Return a string of all the Nodes using in order traversal
"""
return str(self.root)
def __reassign_nodes(self, node: Node, new_children: Node | None) -> None:
if new_children is not None: # reset its kids
new_children.parent = node.parent
if node.parent is not None: # reset its parent
if self.is_right(node): # If it is the right children
node.parent.right = new_children
else:
node.parent.left = new_children
else:
self.root = None
def is_right(self, node: Node) -> bool:
if node.parent and node.parent.right:
return node == node.parent.right
return False
def empty(self) -> bool:
return self.root is None
def __insert(self, value) -> None:
"""
Insert a new node in Binary Search Tree with value label
"""
new_node = Node(value) # create a new Node
if self.empty(): # if Tree is empty
self.root = new_node # set its root
else: # Tree is not empty
parent_node = self.root # from root
if parent_node is None:
return
while True: # While we don't get to a leaf
if value < parent_node.value: # We go left
if parent_node.left is None:
parent_node.left = new_node # We insert the new node in a leaf
break
else:
parent_node = parent_node.left
else:
if parent_node.right is None:
parent_node.right = new_node
break
else:
parent_node = parent_node.right
new_node.parent = parent_node
def insert(self, *values) -> None:
for value in values:
self.__insert(value)
def search(self, value) -> Node | None:
if self.empty():
raise IndexError("Warning: Tree is empty! please use another.")
else:
node = self.root
# use lazy evaluation here to avoid NoneType Attribute error
while node is not None and node.value is not value:
node = node.left if value < node.value else node.right
return node
def get_max(self, node: Node | None = None) -> Node | None:
"""
We go deep on the right branch
"""
if node is None:
if self.root is None:
return None
node = self.root
if not self.empty():
while node.right is not None:
node = node.right
return node
def get_min(self, node: Node | None = None) -> Node | None:
"""
We go deep on the left branch
"""
if node is None:
node = self.root
if self.root is None:
return None
if not self.empty():
node = self.root
while node.left is not None:
node = node.left
return node
def remove(self, value: int) -> None:
node = self.search(value) # Look for the node with that label
if node is not None:
if node.left is None and node.right is None: # If it has no children
self.__reassign_nodes(node, None)
elif node.left is None: # Has only right children
self.__reassign_nodes(node, node.right)
elif node.right is None: # Has only left children
self.__reassign_nodes(node, node.left)
else:
tmp_node = self.get_max(
node.left
) # Gets the max value of the left branch
self.remove(tmp_node.value) # type: ignore
node.value = (
tmp_node.value # type: ignore
) # Assigns the value to the node to delete and keep tree structure
def preorder_traverse(self, node: Node | None) -> Iterable:
if node is not None:
yield node # Preorder Traversal
yield from self.preorder_traverse(node.left)
yield from self.preorder_traverse(node.right)
def traversal_tree(self, traversal_function=None) -> Any:
"""
This function traversal the tree.
You can pass a function to traversal the tree as needed by client code
"""
if traversal_function is None:
return self.preorder_traverse(self.root)
else:
return traversal_function(self.root)
def inorder(self, arr: list, node: Node | None) -> None:
"""Perform an inorder traversal and append values of the nodes to
a list named arr"""
if node:
self.inorder(arr, node.left)
arr.append(node.value)
self.inorder(arr, node.right)
def find_kth_smallest(self, k: int, node: Node) -> int:
"""Return the kth smallest element in a binary search tree"""
arr: list[int] = []
self.inorder(arr, node) # append all values to list using inorder traversal
return arr[k - 1]
def postorder(curr_node: Node | None) -> list[Node]:
"""
postOrder (left, right, self)
"""
node_list = []
if curr_node is not None:
node_list = postorder(curr_node.left) + postorder(curr_node.right) + [curr_node]
return node_list
def binary_search_tree() -> None:
r"""
Example
8
/ \
3 10
/ \ \
1 6 14
/ \ /
4 7 13
>>> t = BinarySearchTree()
>>> t.insert(8, 3, 6, 1, 10, 14, 13, 4, 7)
>>> print(" ".join(repr(i.value) for i in t.traversal_tree()))
8 3 1 6 4 7 10 14 13
>>> print(" ".join(repr(i.value) for i in t.traversal_tree(postorder)))
1 4 7 6 3 13 14 10 8
>>> BinarySearchTree().search(6)
Traceback (most recent call last):
...
IndexError: Warning: Tree is empty! please use another.
"""
testlist = (8, 3, 6, 1, 10, 14, 13, 4, 7)
t = BinarySearchTree()
for i in testlist:
t.insert(i)
# Prints all the elements of the list in order traversal
print(t)
if t.search(6) is not None:
print("The value 6 exists")
else:
print("The value 6 doesn't exist")
if t.search(-1) is not None:
print("The value -1 exists")
else:
print("The value -1 doesn't exist")
if not t.empty():
print("Max Value: ", t.get_max().value) # type: ignore
print("Min Value: ", t.get_min().value) # type: ignore
for i in testlist:
t.remove(i)
print(t)
if __name__ == "__main__":
import doctest
doctest.testmod(verbose=True)
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from __future__ import annotations
def collatz_sequence(n: int) -> list[int]:
"""
Collatz conjecture: start with any positive integer n. The next term is
obtained as follows:
If n term is even, the next term is: n / 2 .
If n is odd, the next term is: 3 * n + 1.
The conjecture states the sequence will always reach 1 for any starting value n.
Example:
>>> collatz_sequence(2.1)
Traceback (most recent call last):
...
Exception: Sequence only defined for natural numbers
>>> collatz_sequence(0)
Traceback (most recent call last):
...
Exception: Sequence only defined for natural numbers
>>> collatz_sequence(43) # doctest: +NORMALIZE_WHITESPACE
[43, 130, 65, 196, 98, 49, 148, 74, 37, 112, 56, 28, 14, 7,
22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
"""
if not isinstance(n, int) or n < 1:
raise Exception("Sequence only defined for natural numbers")
sequence = [n]
while n != 1:
n = 3 * n + 1 if n & 1 else n // 2
sequence.append(n)
return sequence
def main():
n = 43
sequence = collatz_sequence(n)
print(sequence)
print(f"collatz sequence from {n} took {len(sequence)} steps.")
if __name__ == "__main__":
main()
| from __future__ import annotations
def collatz_sequence(n: int) -> list[int]:
"""
Collatz conjecture: start with any positive integer n. The next term is
obtained as follows:
If n term is even, the next term is: n / 2 .
If n is odd, the next term is: 3 * n + 1.
The conjecture states the sequence will always reach 1 for any starting value n.
Example:
>>> collatz_sequence(2.1)
Traceback (most recent call last):
...
Exception: Sequence only defined for natural numbers
>>> collatz_sequence(0)
Traceback (most recent call last):
...
Exception: Sequence only defined for natural numbers
>>> collatz_sequence(43) # doctest: +NORMALIZE_WHITESPACE
[43, 130, 65, 196, 98, 49, 148, 74, 37, 112, 56, 28, 14, 7,
22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
"""
if not isinstance(n, int) or n < 1:
raise Exception("Sequence only defined for natural numbers")
sequence = [n]
while n != 1:
n = 3 * n + 1 if n & 1 else n // 2
sequence.append(n)
return sequence
def main():
n = 43
sequence = collatz_sequence(n)
print(sequence)
print(f"collatz sequence from {n} took {len(sequence)} steps.")
if __name__ == "__main__":
main()
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from scipy.constants import g
"""
Finding the gravitational potential energy of an object with reference
to the earth,by taking its mass and height above the ground as input
Description : Gravitational energy or gravitational potential energy
is the potential energy a massive object has in relation to another
massive object due to gravity. It is the potential energy associated
with the gravitational field, which is released (converted into
kinetic energy) when the objects fall towards each other.
Gravitational potential energy increases when two objects
are brought further apart.
For two pairwise interacting point particles, the gravitational
potential energy U is given by
U=-GMm/R
where M and m are the masses of the two particles, R is the distance
between them, and G is the gravitational constant.
Close to the Earth's surface, the gravitational field is approximately
constant, and the gravitational potential energy of an object reduces to
U=mgh
where m is the object's mass, g=GM/R² is the gravity of Earth, and h is
the height of the object's center of mass above a chosen reference level.
Reference : "https://en.m.wikipedia.org/wiki/Gravitational_energy"
"""
def potential_energy(mass: float, height: float) -> float:
# function will accept mass and height as parameters and return potential energy
"""
>>> potential_energy(10,10)
980.665
>>> potential_energy(0,5)
0.0
>>> potential_energy(8,0)
0.0
>>> potential_energy(10,5)
490.3325
>>> potential_energy(0,0)
0.0
>>> potential_energy(2,8)
156.9064
>>> potential_energy(20,100)
19613.3
"""
if mass < 0:
# handling of negative values of mass
raise ValueError("The mass of a body cannot be negative")
if height < 0:
# handling of negative values of height
raise ValueError("The height above the ground cannot be negative")
return mass * g * height
if __name__ == "__main__":
from doctest import testmod
testmod(name="potential_energy")
| from scipy.constants import g
"""
Finding the gravitational potential energy of an object with reference
to the earth,by taking its mass and height above the ground as input
Description : Gravitational energy or gravitational potential energy
is the potential energy a massive object has in relation to another
massive object due to gravity. It is the potential energy associated
with the gravitational field, which is released (converted into
kinetic energy) when the objects fall towards each other.
Gravitational potential energy increases when two objects
are brought further apart.
For two pairwise interacting point particles, the gravitational
potential energy U is given by
U=-GMm/R
where M and m are the masses of the two particles, R is the distance
between them, and G is the gravitational constant.
Close to the Earth's surface, the gravitational field is approximately
constant, and the gravitational potential energy of an object reduces to
U=mgh
where m is the object's mass, g=GM/R² is the gravity of Earth, and h is
the height of the object's center of mass above a chosen reference level.
Reference : "https://en.m.wikipedia.org/wiki/Gravitational_energy"
"""
def potential_energy(mass: float, height: float) -> float:
# function will accept mass and height as parameters and return potential energy
"""
>>> potential_energy(10,10)
980.665
>>> potential_energy(0,5)
0.0
>>> potential_energy(8,0)
0.0
>>> potential_energy(10,5)
490.3325
>>> potential_energy(0,0)
0.0
>>> potential_energy(2,8)
156.9064
>>> potential_energy(20,100)
19613.3
"""
if mass < 0:
# handling of negative values of mass
raise ValueError("The mass of a body cannot be negative")
if height < 0:
# handling of negative values of height
raise ValueError("The height above the ground cannot be negative")
return mass * g * height
if __name__ == "__main__":
from doctest import testmod
testmod(name="potential_energy")
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Author : Yvonne
This is a pure Python implementation of Dynamic Programming solution to the
longest_sub_array problem.
The problem is :
Given an array, to find the longest and continuous sub array and get the max sum of the
sub array in the given array.
"""
class SubArray:
def __init__(self, arr):
# we need a list not a string, so do something to change the type
self.array = arr.split(",")
def solve_sub_array(self):
rear = [int(self.array[0])] * len(self.array)
sum_value = [int(self.array[0])] * len(self.array)
for i in range(1, len(self.array)):
sum_value[i] = max(
int(self.array[i]) + sum_value[i - 1], int(self.array[i])
)
rear[i] = max(sum_value[i], rear[i - 1])
return rear[len(self.array) - 1]
if __name__ == "__main__":
whole_array = input("please input some numbers:")
array = SubArray(whole_array)
re = array.solve_sub_array()
print(("the results is:", re))
| """
Author : Yvonne
This is a pure Python implementation of Dynamic Programming solution to the
longest_sub_array problem.
The problem is :
Given an array, to find the longest and continuous sub array and get the max sum of the
sub array in the given array.
"""
class SubArray:
def __init__(self, arr):
# we need a list not a string, so do something to change the type
self.array = arr.split(",")
def solve_sub_array(self):
rear = [int(self.array[0])] * len(self.array)
sum_value = [int(self.array[0])] * len(self.array)
for i in range(1, len(self.array)):
sum_value[i] = max(
int(self.array[i]) + sum_value[i - 1], int(self.array[i])
)
rear[i] = max(sum_value[i], rear[i - 1])
return rear[len(self.array) - 1]
if __name__ == "__main__":
whole_array = input("please input some numbers:")
array = SubArray(whole_array)
re = array.solve_sub_array()
print(("the results is:", re))
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Check if three points are collinear in 3D.
In short, the idea is that we are able to create a triangle using three points,
and the area of that triangle can determine if the three points are collinear or not.
First, we create two vectors with the same initial point from the three points,
then we will calculate the cross-product of them.
The length of the cross vector is numerically equal to the area of a parallelogram.
Finally, the area of the triangle is equal to half of the area of the parallelogram.
Since we are only differentiating between zero and anything else,
we can get rid of the square root when calculating the length of the vector,
and also the division by two at the end.
From a second perspective, if the two vectors are parallel and overlapping,
we can't get a nonzero perpendicular vector,
since there will be an infinite number of orthogonal vectors.
To simplify the solution we will not calculate the length,
but we will decide directly from the vector whether it is equal to (0, 0, 0) or not.
Read More:
https://math.stackexchange.com/a/1951650
"""
Vector3d = tuple[float, float, float]
Point3d = tuple[float, float, float]
def create_vector(end_point1: Point3d, end_point2: Point3d) -> Vector3d:
"""
Pass two points to get the vector from them in the form (x, y, z).
>>> create_vector((0, 0, 0), (1, 1, 1))
(1, 1, 1)
>>> create_vector((45, 70, 24), (47, 32, 1))
(2, -38, -23)
>>> create_vector((-14, -1, -8), (-7, 6, 4))
(7, 7, 12)
"""
x = end_point2[0] - end_point1[0]
y = end_point2[1] - end_point1[1]
z = end_point2[2] - end_point1[2]
return (x, y, z)
def get_3d_vectors_cross(ab: Vector3d, ac: Vector3d) -> Vector3d:
"""
Get the cross of the two vectors AB and AC.
I used determinant of 2x2 to get the determinant of the 3x3 matrix in the process.
Read More:
https://en.wikipedia.org/wiki/Cross_product
https://en.wikipedia.org/wiki/Determinant
>>> get_3d_vectors_cross((3, 4, 7), (4, 9, 2))
(-55, 22, 11)
>>> get_3d_vectors_cross((1, 1, 1), (1, 1, 1))
(0, 0, 0)
>>> get_3d_vectors_cross((-4, 3, 0), (3, -9, -12))
(-36, -48, 27)
>>> get_3d_vectors_cross((17.67, 4.7, 6.78), (-9.5, 4.78, -19.33))
(-123.2594, 277.15110000000004, 129.11260000000001)
"""
x = ab[1] * ac[2] - ab[2] * ac[1] # *i
y = (ab[0] * ac[2] - ab[2] * ac[0]) * -1 # *j
z = ab[0] * ac[1] - ab[1] * ac[0] # *k
return (x, y, z)
def is_zero_vector(vector: Vector3d, accuracy: int) -> bool:
"""
Check if vector is equal to (0, 0, 0) of not.
Sine the algorithm is very accurate, we will never get a zero vector,
so we need to round the vector axis,
because we want a result that is either True or False.
In other applications, we can return a float that represents the collinearity ratio.
>>> is_zero_vector((0, 0, 0), accuracy=10)
True
>>> is_zero_vector((15, 74, 32), accuracy=10)
False
>>> is_zero_vector((-15, -74, -32), accuracy=10)
False
"""
return tuple(round(x, accuracy) for x in vector) == (0, 0, 0)
def are_collinear(a: Point3d, b: Point3d, c: Point3d, accuracy: int = 10) -> bool:
"""
Check if three points are collinear or not.
1- Create tow vectors AB and AC.
2- Get the cross vector of the tow vectors.
3- Calcolate the length of the cross vector.
4- If the length is zero then the points are collinear, else they are not.
The use of the accuracy parameter is explained in is_zero_vector docstring.
>>> are_collinear((4.802293498137402, 3.536233125455244, 0),
... (-2.186788107953106, -9.24561398001649, 7.141509524846482),
... (1.530169574640268, -2.447927606600034, 3.343487096469054))
True
>>> are_collinear((-6, -2, 6),
... (6.200213806439997, -4.930157614926678, -4.482371908289856),
... (-4.085171149525941, -2.459889509029438, 4.354787180795383))
True
>>> are_collinear((2.399001826862445, -2.452009976680793, 4.464656666157666),
... (-3.682816335934376, 5.753788986533145, 9.490993909044244),
... (1.962903518985307, 3.741415730125627, 7))
False
>>> are_collinear((1.875375340689544, -7.268426006071538, 7.358196269835993),
... (-3.546599383667157, -4.630005261513976, 3.208784032924246),
... (-2.564606140206386, 3.937845170672183, 7))
False
"""
ab = create_vector(a, b)
ac = create_vector(a, c)
return is_zero_vector(get_3d_vectors_cross(ab, ac), accuracy)
| """
Check if three points are collinear in 3D.
In short, the idea is that we are able to create a triangle using three points,
and the area of that triangle can determine if the three points are collinear or not.
First, we create two vectors with the same initial point from the three points,
then we will calculate the cross-product of them.
The length of the cross vector is numerically equal to the area of a parallelogram.
Finally, the area of the triangle is equal to half of the area of the parallelogram.
Since we are only differentiating between zero and anything else,
we can get rid of the square root when calculating the length of the vector,
and also the division by two at the end.
From a second perspective, if the two vectors are parallel and overlapping,
we can't get a nonzero perpendicular vector,
since there will be an infinite number of orthogonal vectors.
To simplify the solution we will not calculate the length,
but we will decide directly from the vector whether it is equal to (0, 0, 0) or not.
Read More:
https://math.stackexchange.com/a/1951650
"""
Vector3d = tuple[float, float, float]
Point3d = tuple[float, float, float]
def create_vector(end_point1: Point3d, end_point2: Point3d) -> Vector3d:
"""
Pass two points to get the vector from them in the form (x, y, z).
>>> create_vector((0, 0, 0), (1, 1, 1))
(1, 1, 1)
>>> create_vector((45, 70, 24), (47, 32, 1))
(2, -38, -23)
>>> create_vector((-14, -1, -8), (-7, 6, 4))
(7, 7, 12)
"""
x = end_point2[0] - end_point1[0]
y = end_point2[1] - end_point1[1]
z = end_point2[2] - end_point1[2]
return (x, y, z)
def get_3d_vectors_cross(ab: Vector3d, ac: Vector3d) -> Vector3d:
"""
Get the cross of the two vectors AB and AC.
I used determinant of 2x2 to get the determinant of the 3x3 matrix in the process.
Read More:
https://en.wikipedia.org/wiki/Cross_product
https://en.wikipedia.org/wiki/Determinant
>>> get_3d_vectors_cross((3, 4, 7), (4, 9, 2))
(-55, 22, 11)
>>> get_3d_vectors_cross((1, 1, 1), (1, 1, 1))
(0, 0, 0)
>>> get_3d_vectors_cross((-4, 3, 0), (3, -9, -12))
(-36, -48, 27)
>>> get_3d_vectors_cross((17.67, 4.7, 6.78), (-9.5, 4.78, -19.33))
(-123.2594, 277.15110000000004, 129.11260000000001)
"""
x = ab[1] * ac[2] - ab[2] * ac[1] # *i
y = (ab[0] * ac[2] - ab[2] * ac[0]) * -1 # *j
z = ab[0] * ac[1] - ab[1] * ac[0] # *k
return (x, y, z)
def is_zero_vector(vector: Vector3d, accuracy: int) -> bool:
"""
Check if vector is equal to (0, 0, 0) of not.
Sine the algorithm is very accurate, we will never get a zero vector,
so we need to round the vector axis,
because we want a result that is either True or False.
In other applications, we can return a float that represents the collinearity ratio.
>>> is_zero_vector((0, 0, 0), accuracy=10)
True
>>> is_zero_vector((15, 74, 32), accuracy=10)
False
>>> is_zero_vector((-15, -74, -32), accuracy=10)
False
"""
return tuple(round(x, accuracy) for x in vector) == (0, 0, 0)
def are_collinear(a: Point3d, b: Point3d, c: Point3d, accuracy: int = 10) -> bool:
"""
Check if three points are collinear or not.
1- Create tow vectors AB and AC.
2- Get the cross vector of the tow vectors.
3- Calcolate the length of the cross vector.
4- If the length is zero then the points are collinear, else they are not.
The use of the accuracy parameter is explained in is_zero_vector docstring.
>>> are_collinear((4.802293498137402, 3.536233125455244, 0),
... (-2.186788107953106, -9.24561398001649, 7.141509524846482),
... (1.530169574640268, -2.447927606600034, 3.343487096469054))
True
>>> are_collinear((-6, -2, 6),
... (6.200213806439997, -4.930157614926678, -4.482371908289856),
... (-4.085171149525941, -2.459889509029438, 4.354787180795383))
True
>>> are_collinear((2.399001826862445, -2.452009976680793, 4.464656666157666),
... (-3.682816335934376, 5.753788986533145, 9.490993909044244),
... (1.962903518985307, 3.741415730125627, 7))
False
>>> are_collinear((1.875375340689544, -7.268426006071538, 7.358196269835993),
... (-3.546599383667157, -4.630005261513976, 3.208784032924246),
... (-2.564606140206386, 3.937845170672183, 7))
False
"""
ab = create_vector(a, b)
ac = create_vector(a, c)
return is_zero_vector(get_3d_vectors_cross(ab, ac), accuracy)
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Non-preemptive Shortest Job First
Shortest execution time process is chosen for the next execution.
https://www.guru99.com/shortest-job-first-sjf-scheduling.html
https://en.wikipedia.org/wiki/Shortest_job_next
"""
from __future__ import annotations
from statistics import mean
def calculate_waitingtime(
arrival_time: list[int], burst_time: list[int], no_of_processes: int
) -> list[int]:
"""
Calculate the waiting time of each processes
Return: The waiting time for each process.
>>> calculate_waitingtime([0,1,2], [10, 5, 8], 3)
[0, 9, 13]
>>> calculate_waitingtime([1,2,2,4], [4, 6, 3, 1], 4)
[0, 7, 4, 1]
>>> calculate_waitingtime([0,0,0], [12, 2, 10],3)
[12, 0, 2]
"""
waiting_time = [0] * no_of_processes
remaining_time = [0] * no_of_processes
# Initialize remaining_time to waiting_time.
for i in range(no_of_processes):
remaining_time[i] = burst_time[i]
ready_process: list[int] = []
completed = 0
total_time = 0
# When processes are not completed,
# A process whose arrival time has passed \
# and has remaining execution time is put into the ready_process.
# The shortest process in the ready_process, target_process is executed.
while completed != no_of_processes:
ready_process = []
target_process = -1
for i in range(no_of_processes):
if (arrival_time[i] <= total_time) and (remaining_time[i] > 0):
ready_process.append(i)
if len(ready_process) > 0:
target_process = ready_process[0]
for i in ready_process:
if remaining_time[i] < remaining_time[target_process]:
target_process = i
total_time += burst_time[target_process]
completed += 1
remaining_time[target_process] = 0
waiting_time[target_process] = (
total_time - arrival_time[target_process] - burst_time[target_process]
)
else:
total_time += 1
return waiting_time
def calculate_turnaroundtime(
burst_time: list[int], no_of_processes: int, waiting_time: list[int]
) -> list[int]:
"""
Calculate the turnaround time of each process.
Return: The turnaround time for each process.
>>> calculate_turnaroundtime([0,1,2], 3, [0, 10, 15])
[0, 11, 17]
>>> calculate_turnaroundtime([1,2,2,4], 4, [1, 8, 5, 4])
[2, 10, 7, 8]
>>> calculate_turnaroundtime([0,0,0], 3, [12, 0, 2])
[12, 0, 2]
"""
turn_around_time = [0] * no_of_processes
for i in range(no_of_processes):
turn_around_time[i] = burst_time[i] + waiting_time[i]
return turn_around_time
if __name__ == "__main__":
print("[TEST CASE 01]")
no_of_processes = 4
burst_time = [2, 5, 3, 7]
arrival_time = [0, 0, 0, 0]
waiting_time = calculate_waitingtime(arrival_time, burst_time, no_of_processes)
turn_around_time = calculate_turnaroundtime(
burst_time, no_of_processes, waiting_time
)
# Printing the Result
print("PID\tBurst Time\tArrival Time\tWaiting Time\tTurnaround Time")
for i, process_id in enumerate(list(range(1, 5))):
print(
f"{process_id}\t{burst_time[i]}\t\t\t{arrival_time[i]}\t\t\t\t"
f"{waiting_time[i]}\t\t\t\t{turn_around_time[i]}"
)
print(f"\nAverage waiting time = {mean(waiting_time):.5f}")
print(f"Average turnaround time = {mean(turn_around_time):.5f}")
| """
Non-preemptive Shortest Job First
Shortest execution time process is chosen for the next execution.
https://www.guru99.com/shortest-job-first-sjf-scheduling.html
https://en.wikipedia.org/wiki/Shortest_job_next
"""
from __future__ import annotations
from statistics import mean
def calculate_waitingtime(
arrival_time: list[int], burst_time: list[int], no_of_processes: int
) -> list[int]:
"""
Calculate the waiting time of each processes
Return: The waiting time for each process.
>>> calculate_waitingtime([0,1,2], [10, 5, 8], 3)
[0, 9, 13]
>>> calculate_waitingtime([1,2,2,4], [4, 6, 3, 1], 4)
[0, 7, 4, 1]
>>> calculate_waitingtime([0,0,0], [12, 2, 10],3)
[12, 0, 2]
"""
waiting_time = [0] * no_of_processes
remaining_time = [0] * no_of_processes
# Initialize remaining_time to waiting_time.
for i in range(no_of_processes):
remaining_time[i] = burst_time[i]
ready_process: list[int] = []
completed = 0
total_time = 0
# When processes are not completed,
# A process whose arrival time has passed \
# and has remaining execution time is put into the ready_process.
# The shortest process in the ready_process, target_process is executed.
while completed != no_of_processes:
ready_process = []
target_process = -1
for i in range(no_of_processes):
if (arrival_time[i] <= total_time) and (remaining_time[i] > 0):
ready_process.append(i)
if len(ready_process) > 0:
target_process = ready_process[0]
for i in ready_process:
if remaining_time[i] < remaining_time[target_process]:
target_process = i
total_time += burst_time[target_process]
completed += 1
remaining_time[target_process] = 0
waiting_time[target_process] = (
total_time - arrival_time[target_process] - burst_time[target_process]
)
else:
total_time += 1
return waiting_time
def calculate_turnaroundtime(
burst_time: list[int], no_of_processes: int, waiting_time: list[int]
) -> list[int]:
"""
Calculate the turnaround time of each process.
Return: The turnaround time for each process.
>>> calculate_turnaroundtime([0,1,2], 3, [0, 10, 15])
[0, 11, 17]
>>> calculate_turnaroundtime([1,2,2,4], 4, [1, 8, 5, 4])
[2, 10, 7, 8]
>>> calculate_turnaroundtime([0,0,0], 3, [12, 0, 2])
[12, 0, 2]
"""
turn_around_time = [0] * no_of_processes
for i in range(no_of_processes):
turn_around_time[i] = burst_time[i] + waiting_time[i]
return turn_around_time
if __name__ == "__main__":
print("[TEST CASE 01]")
no_of_processes = 4
burst_time = [2, 5, 3, 7]
arrival_time = [0, 0, 0, 0]
waiting_time = calculate_waitingtime(arrival_time, burst_time, no_of_processes)
turn_around_time = calculate_turnaroundtime(
burst_time, no_of_processes, waiting_time
)
# Printing the Result
print("PID\tBurst Time\tArrival Time\tWaiting Time\tTurnaround Time")
for i, process_id in enumerate(list(range(1, 5))):
print(
f"{process_id}\t{burst_time[i]}\t\t\t{arrival_time[i]}\t\t\t\t"
f"{waiting_time[i]}\t\t\t\t{turn_around_time[i]}"
)
print(f"\nAverage waiting time = {mean(waiting_time):.5f}")
print(f"Average turnaround time = {mean(turn_around_time):.5f}")
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| import unittest
import numpy as np
def schur_complement(
mat_a: np.ndarray,
mat_b: np.ndarray,
mat_c: np.ndarray,
pseudo_inv: np.ndarray | None = None,
) -> np.ndarray:
"""
Schur complement of a symmetric matrix X given as a 2x2 block matrix
consisting of matrices A, B and C.
Matrix A must be quadratic and non-singular.
In case A is singular, a pseudo-inverse may be provided using
the pseudo_inv argument.
Link to Wiki: https://en.wikipedia.org/wiki/Schur_complement
See also Convex Optimization – Boyd and Vandenberghe, A.5.5
>>> import numpy as np
>>> a = np.array([[1, 2], [2, 1]])
>>> b = np.array([[0, 3], [3, 0]])
>>> c = np.array([[2, 1], [6, 3]])
>>> schur_complement(a, b, c)
array([[ 5., -5.],
[ 0., 6.]])
"""
shape_a = np.shape(mat_a)
shape_b = np.shape(mat_b)
shape_c = np.shape(mat_c)
if shape_a[0] != shape_b[0]:
raise ValueError(
f"Expected the same number of rows for A and B. \
Instead found A of size {shape_a} and B of size {shape_b}"
)
if shape_b[1] != shape_c[1]:
raise ValueError(
f"Expected the same number of columns for B and C. \
Instead found B of size {shape_b} and C of size {shape_c}"
)
a_inv = pseudo_inv
if a_inv is None:
try:
a_inv = np.linalg.inv(mat_a)
except np.linalg.LinAlgError:
raise ValueError(
"Input matrix A is not invertible. Cannot compute Schur complement."
)
return mat_c - mat_b.T @ a_inv @ mat_b
class TestSchurComplement(unittest.TestCase):
def test_schur_complement(self) -> None:
a = np.array([[1, 2, 1], [2, 1, 2], [3, 2, 4]])
b = np.array([[0, 3], [3, 0], [2, 3]])
c = np.array([[2, 1], [6, 3]])
s = schur_complement(a, b, c)
input_matrix = np.block([[a, b], [b.T, c]])
det_x = np.linalg.det(input_matrix)
det_a = np.linalg.det(a)
det_s = np.linalg.det(s)
self.assertAlmostEqual(det_x, det_a * det_s)
def test_improper_a_b_dimensions(self) -> None:
a = np.array([[1, 2, 1], [2, 1, 2], [3, 2, 4]])
b = np.array([[0, 3], [3, 0], [2, 3]])
c = np.array([[2, 1], [6, 3]])
with self.assertRaises(ValueError):
schur_complement(a, b, c)
def test_improper_b_c_dimensions(self) -> None:
a = np.array([[1, 2, 1], [2, 1, 2], [3, 2, 4]])
b = np.array([[0, 3], [3, 0], [2, 3]])
c = np.array([[2, 1, 3], [6, 3, 5]])
with self.assertRaises(ValueError):
schur_complement(a, b, c)
if __name__ == "__main__":
import doctest
doctest.testmod()
unittest.main()
| import unittest
import numpy as np
def schur_complement(
mat_a: np.ndarray,
mat_b: np.ndarray,
mat_c: np.ndarray,
pseudo_inv: np.ndarray | None = None,
) -> np.ndarray:
"""
Schur complement of a symmetric matrix X given as a 2x2 block matrix
consisting of matrices A, B and C.
Matrix A must be quadratic and non-singular.
In case A is singular, a pseudo-inverse may be provided using
the pseudo_inv argument.
Link to Wiki: https://en.wikipedia.org/wiki/Schur_complement
See also Convex Optimization – Boyd and Vandenberghe, A.5.5
>>> import numpy as np
>>> a = np.array([[1, 2], [2, 1]])
>>> b = np.array([[0, 3], [3, 0]])
>>> c = np.array([[2, 1], [6, 3]])
>>> schur_complement(a, b, c)
array([[ 5., -5.],
[ 0., 6.]])
"""
shape_a = np.shape(mat_a)
shape_b = np.shape(mat_b)
shape_c = np.shape(mat_c)
if shape_a[0] != shape_b[0]:
raise ValueError(
f"Expected the same number of rows for A and B. \
Instead found A of size {shape_a} and B of size {shape_b}"
)
if shape_b[1] != shape_c[1]:
raise ValueError(
f"Expected the same number of columns for B and C. \
Instead found B of size {shape_b} and C of size {shape_c}"
)
a_inv = pseudo_inv
if a_inv is None:
try:
a_inv = np.linalg.inv(mat_a)
except np.linalg.LinAlgError:
raise ValueError(
"Input matrix A is not invertible. Cannot compute Schur complement."
)
return mat_c - mat_b.T @ a_inv @ mat_b
class TestSchurComplement(unittest.TestCase):
def test_schur_complement(self) -> None:
a = np.array([[1, 2, 1], [2, 1, 2], [3, 2, 4]])
b = np.array([[0, 3], [3, 0], [2, 3]])
c = np.array([[2, 1], [6, 3]])
s = schur_complement(a, b, c)
input_matrix = np.block([[a, b], [b.T, c]])
det_x = np.linalg.det(input_matrix)
det_a = np.linalg.det(a)
det_s = np.linalg.det(s)
self.assertAlmostEqual(det_x, det_a * det_s)
def test_improper_a_b_dimensions(self) -> None:
a = np.array([[1, 2, 1], [2, 1, 2], [3, 2, 4]])
b = np.array([[0, 3], [3, 0], [2, 3]])
c = np.array([[2, 1], [6, 3]])
with self.assertRaises(ValueError):
schur_complement(a, b, c)
def test_improper_b_c_dimensions(self) -> None:
a = np.array([[1, 2, 1], [2, 1, 2], [3, 2, 4]])
b = np.array([[0, 3], [3, 0], [2, 3]])
c = np.array([[2, 1, 3], [6, 3, 5]])
with self.assertRaises(ValueError):
schur_complement(a, b, c)
if __name__ == "__main__":
import doctest
doctest.testmod()
unittest.main()
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """ A Stack using a linked list like structure """
from __future__ import annotations
from collections.abc import Iterator
from typing import Generic, TypeVar
T = TypeVar("T")
class Node(Generic[T]):
def __init__(self, data: T):
self.data = data
self.next: Node[T] | None = None
def __str__(self) -> str:
return f"{self.data}"
class LinkedStack(Generic[T]):
"""
Linked List Stack implementing push (to top),
pop (from top) and is_empty
>>> stack = LinkedStack()
>>> stack.is_empty()
True
>>> stack.push(5)
>>> stack.push(9)
>>> stack.push('python')
>>> stack.is_empty()
False
>>> stack.pop()
'python'
>>> stack.push('algorithms')
>>> stack.pop()
'algorithms'
>>> stack.pop()
9
>>> stack.pop()
5
>>> stack.is_empty()
True
>>> stack.pop()
Traceback (most recent call last):
...
IndexError: pop from empty stack
"""
def __init__(self) -> None:
self.top: Node[T] | None = None
def __iter__(self) -> Iterator[T]:
node = self.top
while node:
yield node.data
node = node.next
def __str__(self) -> str:
"""
>>> stack = LinkedStack()
>>> stack.push("c")
>>> stack.push("b")
>>> stack.push("a")
>>> str(stack)
'a->b->c'
"""
return "->".join([str(item) for item in self])
def __len__(self) -> int:
"""
>>> stack = LinkedStack()
>>> len(stack) == 0
True
>>> stack.push("c")
>>> stack.push("b")
>>> stack.push("a")
>>> len(stack) == 3
True
"""
return len(tuple(iter(self)))
def is_empty(self) -> bool:
"""
>>> stack = LinkedStack()
>>> stack.is_empty()
True
>>> stack.push(1)
>>> stack.is_empty()
False
"""
return self.top is None
def push(self, item: T) -> None:
"""
>>> stack = LinkedStack()
>>> stack.push("Python")
>>> stack.push("Java")
>>> stack.push("C")
>>> str(stack)
'C->Java->Python'
"""
node = Node(item)
if not self.is_empty():
node.next = self.top
self.top = node
def pop(self) -> T:
"""
>>> stack = LinkedStack()
>>> stack.pop()
Traceback (most recent call last):
...
IndexError: pop from empty stack
>>> stack.push("c")
>>> stack.push("b")
>>> stack.push("a")
>>> stack.pop() == 'a'
True
>>> stack.pop() == 'b'
True
>>> stack.pop() == 'c'
True
"""
if self.is_empty():
raise IndexError("pop from empty stack")
assert isinstance(self.top, Node)
pop_node = self.top
self.top = self.top.next
return pop_node.data
def peek(self) -> T:
"""
>>> stack = LinkedStack()
>>> stack.push("Java")
>>> stack.push("C")
>>> stack.push("Python")
>>> stack.peek()
'Python'
"""
if self.is_empty():
raise IndexError("peek from empty stack")
assert self.top is not None
return self.top.data
def clear(self) -> None:
"""
>>> stack = LinkedStack()
>>> stack.push("Java")
>>> stack.push("C")
>>> stack.push("Python")
>>> str(stack)
'Python->C->Java'
>>> stack.clear()
>>> len(stack) == 0
True
"""
self.top = None
if __name__ == "__main__":
from doctest import testmod
testmod()
| """ A Stack using a linked list like structure """
from __future__ import annotations
from collections.abc import Iterator
from typing import Generic, TypeVar
T = TypeVar("T")
class Node(Generic[T]):
def __init__(self, data: T):
self.data = data
self.next: Node[T] | None = None
def __str__(self) -> str:
return f"{self.data}"
class LinkedStack(Generic[T]):
"""
Linked List Stack implementing push (to top),
pop (from top) and is_empty
>>> stack = LinkedStack()
>>> stack.is_empty()
True
>>> stack.push(5)
>>> stack.push(9)
>>> stack.push('python')
>>> stack.is_empty()
False
>>> stack.pop()
'python'
>>> stack.push('algorithms')
>>> stack.pop()
'algorithms'
>>> stack.pop()
9
>>> stack.pop()
5
>>> stack.is_empty()
True
>>> stack.pop()
Traceback (most recent call last):
...
IndexError: pop from empty stack
"""
def __init__(self) -> None:
self.top: Node[T] | None = None
def __iter__(self) -> Iterator[T]:
node = self.top
while node:
yield node.data
node = node.next
def __str__(self) -> str:
"""
>>> stack = LinkedStack()
>>> stack.push("c")
>>> stack.push("b")
>>> stack.push("a")
>>> str(stack)
'a->b->c'
"""
return "->".join([str(item) for item in self])
def __len__(self) -> int:
"""
>>> stack = LinkedStack()
>>> len(stack) == 0
True
>>> stack.push("c")
>>> stack.push("b")
>>> stack.push("a")
>>> len(stack) == 3
True
"""
return len(tuple(iter(self)))
def is_empty(self) -> bool:
"""
>>> stack = LinkedStack()
>>> stack.is_empty()
True
>>> stack.push(1)
>>> stack.is_empty()
False
"""
return self.top is None
def push(self, item: T) -> None:
"""
>>> stack = LinkedStack()
>>> stack.push("Python")
>>> stack.push("Java")
>>> stack.push("C")
>>> str(stack)
'C->Java->Python'
"""
node = Node(item)
if not self.is_empty():
node.next = self.top
self.top = node
def pop(self) -> T:
"""
>>> stack = LinkedStack()
>>> stack.pop()
Traceback (most recent call last):
...
IndexError: pop from empty stack
>>> stack.push("c")
>>> stack.push("b")
>>> stack.push("a")
>>> stack.pop() == 'a'
True
>>> stack.pop() == 'b'
True
>>> stack.pop() == 'c'
True
"""
if self.is_empty():
raise IndexError("pop from empty stack")
assert isinstance(self.top, Node)
pop_node = self.top
self.top = self.top.next
return pop_node.data
def peek(self) -> T:
"""
>>> stack = LinkedStack()
>>> stack.push("Java")
>>> stack.push("C")
>>> stack.push("Python")
>>> stack.peek()
'Python'
"""
if self.is_empty():
raise IndexError("peek from empty stack")
assert self.top is not None
return self.top.data
def clear(self) -> None:
"""
>>> stack = LinkedStack()
>>> stack.push("Java")
>>> stack.push("C")
>>> stack.push("Python")
>>> str(stack)
'Python->C->Java'
>>> stack.clear()
>>> len(stack) == 0
True
"""
self.top = None
if __name__ == "__main__":
from doctest import testmod
testmod()
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
https://en.wikipedia.org/wiki/Infix_notation
https://en.wikipedia.org/wiki/Reverse_Polish_notation
https://en.wikipedia.org/wiki/Shunting-yard_algorithm
"""
from .balanced_parentheses import balanced_parentheses
from .stack import Stack
def precedence(char: str) -> int:
"""
Return integer value representing an operator's precedence, or
order of operation.
https://en.wikipedia.org/wiki/Order_of_operations
"""
return {"+": 1, "-": 1, "*": 2, "/": 2, "^": 3}.get(char, -1)
def infix_to_postfix(expression_str: str) -> str:
"""
>>> infix_to_postfix("(1*(2+3)+4))")
Traceback (most recent call last):
...
ValueError: Mismatched parentheses
>>> infix_to_postfix("")
''
>>> infix_to_postfix("3+2")
'3 2 +'
>>> infix_to_postfix("(3+4)*5-6")
'3 4 + 5 * 6 -'
>>> infix_to_postfix("(1+2)*3/4-5")
'1 2 + 3 * 4 / 5 -'
>>> infix_to_postfix("a+b*c+(d*e+f)*g")
'a b c * + d e * f + g * +'
>>> infix_to_postfix("x^y/(5*z)+2")
'x y ^ 5 z * / 2 +'
"""
if not balanced_parentheses(expression_str):
raise ValueError("Mismatched parentheses")
stack: Stack[str] = Stack()
postfix = []
for char in expression_str:
if char.isalpha() or char.isdigit():
postfix.append(char)
elif char == "(":
stack.push(char)
elif char == ")":
while not stack.is_empty() and stack.peek() != "(":
postfix.append(stack.pop())
stack.pop()
else:
while not stack.is_empty() and precedence(char) <= precedence(stack.peek()):
postfix.append(stack.pop())
stack.push(char)
while not stack.is_empty():
postfix.append(stack.pop())
return " ".join(postfix)
if __name__ == "__main__":
from doctest import testmod
testmod()
expression = "a+b*(c^d-e)^(f+g*h)-i"
print("Infix to Postfix Notation demonstration:\n")
print("Infix notation: " + expression)
print("Postfix notation: " + infix_to_postfix(expression))
| """
https://en.wikipedia.org/wiki/Infix_notation
https://en.wikipedia.org/wiki/Reverse_Polish_notation
https://en.wikipedia.org/wiki/Shunting-yard_algorithm
"""
from .balanced_parentheses import balanced_parentheses
from .stack import Stack
def precedence(char: str) -> int:
"""
Return integer value representing an operator's precedence, or
order of operation.
https://en.wikipedia.org/wiki/Order_of_operations
"""
return {"+": 1, "-": 1, "*": 2, "/": 2, "^": 3}.get(char, -1)
def infix_to_postfix(expression_str: str) -> str:
"""
>>> infix_to_postfix("(1*(2+3)+4))")
Traceback (most recent call last):
...
ValueError: Mismatched parentheses
>>> infix_to_postfix("")
''
>>> infix_to_postfix("3+2")
'3 2 +'
>>> infix_to_postfix("(3+4)*5-6")
'3 4 + 5 * 6 -'
>>> infix_to_postfix("(1+2)*3/4-5")
'1 2 + 3 * 4 / 5 -'
>>> infix_to_postfix("a+b*c+(d*e+f)*g")
'a b c * + d e * f + g * +'
>>> infix_to_postfix("x^y/(5*z)+2")
'x y ^ 5 z * / 2 +'
"""
if not balanced_parentheses(expression_str):
raise ValueError("Mismatched parentheses")
stack: Stack[str] = Stack()
postfix = []
for char in expression_str:
if char.isalpha() or char.isdigit():
postfix.append(char)
elif char == "(":
stack.push(char)
elif char == ")":
while not stack.is_empty() and stack.peek() != "(":
postfix.append(stack.pop())
stack.pop()
else:
while not stack.is_empty() and precedence(char) <= precedence(stack.peek()):
postfix.append(stack.pop())
stack.push(char)
while not stack.is_empty():
postfix.append(stack.pop())
return " ".join(postfix)
if __name__ == "__main__":
from doctest import testmod
testmod()
expression = "a+b*(c^d-e)^(f+g*h)-i"
print("Infix to Postfix Notation demonstration:\n")
print("Infix notation: " + expression)
print("Postfix notation: " + infix_to_postfix(expression))
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Minimum cut on Ford_Fulkerson algorithm.
test_graph = [
[0, 16, 13, 0, 0, 0],
[0, 0, 10, 12, 0, 0],
[0, 4, 0, 0, 14, 0],
[0, 0, 9, 0, 0, 20],
[0, 0, 0, 7, 0, 4],
[0, 0, 0, 0, 0, 0],
]
def bfs(graph, s, t, parent):
# Return True if there is node that has not iterated.
visited = [False] * len(graph)
queue = [s]
visited[s] = True
while queue:
u = queue.pop(0)
for ind in range(len(graph[u])):
if visited[ind] is False and graph[u][ind] > 0:
queue.append(ind)
visited[ind] = True
parent[ind] = u
return visited[t]
def mincut(graph, source, sink):
"""This array is filled by BFS and to store path
>>> mincut(test_graph, source=0, sink=5)
[(1, 3), (4, 3), (4, 5)]
"""
parent = [-1] * (len(graph))
max_flow = 0
res = []
temp = [i[:] for i in graph] # Record original cut, copy.
while bfs(graph, source, sink, parent):
path_flow = float("Inf")
s = sink
while s != source:
# Find the minimum value in select path
path_flow = min(path_flow, graph[parent[s]][s])
s = parent[s]
max_flow += path_flow
v = sink
while v != source:
u = parent[v]
graph[u][v] -= path_flow
graph[v][u] += path_flow
v = parent[v]
for i in range(len(graph)):
for j in range(len(graph[0])):
if graph[i][j] == 0 and temp[i][j] > 0:
res.append((i, j))
return res
if __name__ == "__main__":
print(mincut(test_graph, source=0, sink=5))
| # Minimum cut on Ford_Fulkerson algorithm.
test_graph = [
[0, 16, 13, 0, 0, 0],
[0, 0, 10, 12, 0, 0],
[0, 4, 0, 0, 14, 0],
[0, 0, 9, 0, 0, 20],
[0, 0, 0, 7, 0, 4],
[0, 0, 0, 0, 0, 0],
]
def bfs(graph, s, t, parent):
# Return True if there is node that has not iterated.
visited = [False] * len(graph)
queue = [s]
visited[s] = True
while queue:
u = queue.pop(0)
for ind in range(len(graph[u])):
if visited[ind] is False and graph[u][ind] > 0:
queue.append(ind)
visited[ind] = True
parent[ind] = u
return visited[t]
def mincut(graph, source, sink):
"""This array is filled by BFS and to store path
>>> mincut(test_graph, source=0, sink=5)
[(1, 3), (4, 3), (4, 5)]
"""
parent = [-1] * (len(graph))
max_flow = 0
res = []
temp = [i[:] for i in graph] # Record original cut, copy.
while bfs(graph, source, sink, parent):
path_flow = float("Inf")
s = sink
while s != source:
# Find the minimum value in select path
path_flow = min(path_flow, graph[parent[s]][s])
s = parent[s]
max_flow += path_flow
v = sink
while v != source:
u = parent[v]
graph[u][v] -= path_flow
graph[v][u] += path_flow
v = parent[v]
for i in range(len(graph)):
for j in range(len(graph[0])):
if graph[i][j] == 0 and temp[i][j] > 0:
res.append((i, j))
return res
if __name__ == "__main__":
print(mincut(test_graph, source=0, sink=5))
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| import math
class SegmentTree:
def __init__(self, a):
self.N = len(a)
self.st = [0] * (
4 * self.N
) # approximate the overall size of segment tree with array N
self.build(1, 0, self.N - 1)
def left(self, idx):
return idx * 2
def right(self, idx):
return idx * 2 + 1
def build(self, idx, l, r): # noqa: E741
if l == r:
self.st[idx] = A[l]
else:
mid = (l + r) // 2
self.build(self.left(idx), l, mid)
self.build(self.right(idx), mid + 1, r)
self.st[idx] = max(self.st[self.left(idx)], self.st[self.right(idx)])
def update(self, a, b, val):
return self.update_recursive(1, 0, self.N - 1, a - 1, b - 1, val)
def update_recursive(self, idx, l, r, a, b, val): # noqa: E741
"""
update(1, 1, N, a, b, v) for update val v to [a,b]
"""
if r < a or l > b:
return True
if l == r:
self.st[idx] = val
return True
mid = (l + r) // 2
self.update_recursive(self.left(idx), l, mid, a, b, val)
self.update_recursive(self.right(idx), mid + 1, r, a, b, val)
self.st[idx] = max(self.st[self.left(idx)], self.st[self.right(idx)])
return True
def query(self, a, b):
return self.query_recursive(1, 0, self.N - 1, a - 1, b - 1)
def query_recursive(self, idx, l, r, a, b): # noqa: E741
"""
query(1, 1, N, a, b) for query max of [a,b]
"""
if r < a or l > b:
return -math.inf
if l >= a and r <= b:
return self.st[idx]
mid = (l + r) // 2
q1 = self.query_recursive(self.left(idx), l, mid, a, b)
q2 = self.query_recursive(self.right(idx), mid + 1, r, a, b)
return max(q1, q2)
def show_data(self):
show_list = []
for i in range(1, N + 1):
show_list += [self.query(i, i)]
print(show_list)
if __name__ == "__main__":
A = [1, 2, -4, 7, 3, -5, 6, 11, -20, 9, 14, 15, 5, 2, -8]
N = 15
segt = SegmentTree(A)
print(segt.query(4, 6))
print(segt.query(7, 11))
print(segt.query(7, 12))
segt.update(1, 3, 111)
print(segt.query(1, 15))
segt.update(7, 8, 235)
segt.show_data()
| import math
class SegmentTree:
def __init__(self, a):
self.N = len(a)
self.st = [0] * (
4 * self.N
) # approximate the overall size of segment tree with array N
self.build(1, 0, self.N - 1)
def left(self, idx):
return idx * 2
def right(self, idx):
return idx * 2 + 1
def build(self, idx, l, r): # noqa: E741
if l == r:
self.st[idx] = A[l]
else:
mid = (l + r) // 2
self.build(self.left(idx), l, mid)
self.build(self.right(idx), mid + 1, r)
self.st[idx] = max(self.st[self.left(idx)], self.st[self.right(idx)])
def update(self, a, b, val):
return self.update_recursive(1, 0, self.N - 1, a - 1, b - 1, val)
def update_recursive(self, idx, l, r, a, b, val): # noqa: E741
"""
update(1, 1, N, a, b, v) for update val v to [a,b]
"""
if r < a or l > b:
return True
if l == r:
self.st[idx] = val
return True
mid = (l + r) // 2
self.update_recursive(self.left(idx), l, mid, a, b, val)
self.update_recursive(self.right(idx), mid + 1, r, a, b, val)
self.st[idx] = max(self.st[self.left(idx)], self.st[self.right(idx)])
return True
def query(self, a, b):
return self.query_recursive(1, 0, self.N - 1, a - 1, b - 1)
def query_recursive(self, idx, l, r, a, b): # noqa: E741
"""
query(1, 1, N, a, b) for query max of [a,b]
"""
if r < a or l > b:
return -math.inf
if l >= a and r <= b:
return self.st[idx]
mid = (l + r) // 2
q1 = self.query_recursive(self.left(idx), l, mid, a, b)
q2 = self.query_recursive(self.right(idx), mid + 1, r, a, b)
return max(q1, q2)
def show_data(self):
show_list = []
for i in range(1, N + 1):
show_list += [self.query(i, i)]
print(show_list)
if __name__ == "__main__":
A = [1, 2, -4, 7, 3, -5, 6, 11, -20, 9, 14, 15, 5, 2, -8]
N = 15
segt = SegmentTree(A)
print(segt.query(4, 6))
print(segt.query(7, 11))
print(segt.query(7, 12))
segt.update(1, 3, 111)
print(segt.query(1, 15))
segt.update(7, 8, 235)
segt.show_data()
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
We shall say that an n-digit number is pandigital if it makes use of all the
digits 1 to n exactly once; for example, the 5-digit number, 15234, is 1 through
5 pandigital.
The product 7254 is unusual, as the identity, 39 × 186 = 7254, containing
multiplicand, multiplier, and product is 1 through 9 pandigital.
Find the sum of all products whose multiplicand/multiplier/product identity can
be written as a 1 through 9 pandigital.
HINT: Some products can be obtained in more than one way so be sure to only
include it once in your sum.
"""
import itertools
def is_combination_valid(combination):
"""
Checks if a combination (a tuple of 9 digits)
is a valid product equation.
>>> is_combination_valid(('3', '9', '1', '8', '6', '7', '2', '5', '4'))
True
>>> is_combination_valid(('1', '2', '3', '4', '5', '6', '7', '8', '9'))
False
"""
return (
int("".join(combination[0:2])) * int("".join(combination[2:5]))
== int("".join(combination[5:9]))
) or (
int("".join(combination[0])) * int("".join(combination[1:5]))
== int("".join(combination[5:9]))
)
def solution():
"""
Finds the sum of all products whose multiplicand/multiplier/product identity
can be written as a 1 through 9 pandigital
>>> solution()
45228
"""
return sum(
{
int("".join(pandigital[5:9]))
for pandigital in itertools.permutations("123456789")
if is_combination_valid(pandigital)
}
)
if __name__ == "__main__":
print(solution())
| """
We shall say that an n-digit number is pandigital if it makes use of all the
digits 1 to n exactly once; for example, the 5-digit number, 15234, is 1 through
5 pandigital.
The product 7254 is unusual, as the identity, 39 × 186 = 7254, containing
multiplicand, multiplier, and product is 1 through 9 pandigital.
Find the sum of all products whose multiplicand/multiplier/product identity can
be written as a 1 through 9 pandigital.
HINT: Some products can be obtained in more than one way so be sure to only
include it once in your sum.
"""
import itertools
def is_combination_valid(combination):
"""
Checks if a combination (a tuple of 9 digits)
is a valid product equation.
>>> is_combination_valid(('3', '9', '1', '8', '6', '7', '2', '5', '4'))
True
>>> is_combination_valid(('1', '2', '3', '4', '5', '6', '7', '8', '9'))
False
"""
return (
int("".join(combination[0:2])) * int("".join(combination[2:5]))
== int("".join(combination[5:9]))
) or (
int("".join(combination[0])) * int("".join(combination[1:5]))
== int("".join(combination[5:9]))
)
def solution():
"""
Finds the sum of all products whose multiplicand/multiplier/product identity
can be written as a 1 through 9 pandigital
>>> solution()
45228
"""
return sum(
{
int("".join(pandigital[5:9]))
for pandigital in itertools.permutations("123456789")
if is_combination_valid(pandigital)
}
)
if __name__ == "__main__":
print(solution())
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 38: https://projecteuler.net/problem=38
Take the number 192 and multiply it by each of 1, 2, and 3:
192 × 1 = 192
192 × 2 = 384
192 × 3 = 576
By concatenating each product we get the 1 to 9 pandigital, 192384576. We will call
192384576 the concatenated product of 192 and (1,2,3)
The same can be achieved by starting with 9 and multiplying by 1, 2, 3, 4, and 5,
giving the pandigital, 918273645, which is the concatenated product of 9 and
(1,2,3,4,5).
What is the largest 1 to 9 pandigital 9-digit number that can be formed as the
concatenated product of an integer with (1,2, ... , n) where n > 1?
Solution:
Since n>1, the largest candidate for the solution will be a concactenation of
a 4-digit number and its double, a 5-digit number.
Let a be the 4-digit number.
a has 4 digits => 1000 <= a < 10000
2a has 5 digits => 10000 <= 2a < 100000
=> 5000 <= a < 10000
The concatenation of a with 2a = a * 10^5 + 2a
so our candidate for a given a is 100002 * a.
We iterate through the search space 5000 <= a < 10000 in reverse order,
calculating the candidates for each a and checking if they are 1-9 pandigital.
In case there are no 4-digit numbers that satisfy this property, we check
the 3-digit numbers with a similar formula (the example a=192 gives a lower
bound on the length of a):
a has 3 digits, etc...
=> 100 <= a < 334, candidate = a * 10^6 + 2a * 10^3 + 3a
= 1002003 * a
"""
from __future__ import annotations
def is_9_pandigital(n: int) -> bool:
"""
Checks whether n is a 9-digit 1 to 9 pandigital number.
>>> is_9_pandigital(12345)
False
>>> is_9_pandigital(156284973)
True
>>> is_9_pandigital(1562849733)
False
"""
s = str(n)
return len(s) == 9 and set(s) == set("123456789")
def solution() -> int | None:
"""
Return the largest 1 to 9 pandigital 9-digital number that can be formed as the
concatenated product of an integer with (1,2,...,n) where n > 1.
"""
for base_num in range(9999, 4999, -1):
candidate = 100002 * base_num
if is_9_pandigital(candidate):
return candidate
for base_num in range(333, 99, -1):
candidate = 1002003 * base_num
if is_9_pandigital(candidate):
return candidate
return None
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 38: https://projecteuler.net/problem=38
Take the number 192 and multiply it by each of 1, 2, and 3:
192 × 1 = 192
192 × 2 = 384
192 × 3 = 576
By concatenating each product we get the 1 to 9 pandigital, 192384576. We will call
192384576 the concatenated product of 192 and (1,2,3)
The same can be achieved by starting with 9 and multiplying by 1, 2, 3, 4, and 5,
giving the pandigital, 918273645, which is the concatenated product of 9 and
(1,2,3,4,5).
What is the largest 1 to 9 pandigital 9-digit number that can be formed as the
concatenated product of an integer with (1,2, ... , n) where n > 1?
Solution:
Since n>1, the largest candidate for the solution will be a concactenation of
a 4-digit number and its double, a 5-digit number.
Let a be the 4-digit number.
a has 4 digits => 1000 <= a < 10000
2a has 5 digits => 10000 <= 2a < 100000
=> 5000 <= a < 10000
The concatenation of a with 2a = a * 10^5 + 2a
so our candidate for a given a is 100002 * a.
We iterate through the search space 5000 <= a < 10000 in reverse order,
calculating the candidates for each a and checking if they are 1-9 pandigital.
In case there are no 4-digit numbers that satisfy this property, we check
the 3-digit numbers with a similar formula (the example a=192 gives a lower
bound on the length of a):
a has 3 digits, etc...
=> 100 <= a < 334, candidate = a * 10^6 + 2a * 10^3 + 3a
= 1002003 * a
"""
from __future__ import annotations
def is_9_pandigital(n: int) -> bool:
"""
Checks whether n is a 9-digit 1 to 9 pandigital number.
>>> is_9_pandigital(12345)
False
>>> is_9_pandigital(156284973)
True
>>> is_9_pandigital(1562849733)
False
"""
s = str(n)
return len(s) == 9 and set(s) == set("123456789")
def solution() -> int | None:
"""
Return the largest 1 to 9 pandigital 9-digital number that can be formed as the
concatenated product of an integer with (1,2,...,n) where n > 1.
"""
for base_num in range(9999, 4999, -1):
candidate = 100002 * base_num
if is_9_pandigital(candidate):
return candidate
for base_num in range(333, 99, -1):
candidate = 1002003 * base_num
if is_9_pandigital(candidate):
return candidate
return None
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Implementation of double ended queue.
"""
from __future__ import annotations
from collections.abc import Iterable
from dataclasses import dataclass
from typing import Any
class Deque:
"""
Deque data structure.
Operations
----------
append(val: Any) -> None
appendleft(val: Any) -> None
extend(iterable: Iterable) -> None
extendleft(iterable: Iterable) -> None
pop() -> Any
popleft() -> Any
Observers
---------
is_empty() -> bool
Attributes
----------
_front: _Node
front of the deque a.k.a. the first element
_back: _Node
back of the element a.k.a. the last element
_len: int
the number of nodes
"""
__slots__ = ["_front", "_back", "_len"]
@dataclass
class _Node:
"""
Representation of a node.
Contains a value and a pointer to the next node as well as to the previous one.
"""
val: Any = None
next_node: Deque._Node | None = None
prev_node: Deque._Node | None = None
class _Iterator:
"""
Helper class for iteration. Will be used to implement iteration.
Attributes
----------
_cur: _Node
the current node of the iteration.
"""
__slots__ = ["_cur"]
def __init__(self, cur: Deque._Node | None) -> None:
self._cur = cur
def __iter__(self) -> Deque._Iterator:
"""
>>> our_deque = Deque([1, 2, 3])
>>> iterator = iter(our_deque)
"""
return self
def __next__(self) -> Any:
"""
>>> our_deque = Deque([1, 2, 3])
>>> iterator = iter(our_deque)
>>> next(iterator)
1
>>> next(iterator)
2
>>> next(iterator)
3
"""
if self._cur is None:
# finished iterating
raise StopIteration
val = self._cur.val
self._cur = self._cur.next_node
return val
def __init__(self, iterable: Iterable[Any] | None = None) -> None:
self._front: Any = None
self._back: Any = None
self._len: int = 0
if iterable is not None:
# append every value to the deque
for val in iterable:
self.append(val)
def append(self, val: Any) -> None:
"""
Adds val to the end of the deque.
Time complexity: O(1)
>>> our_deque_1 = Deque([1, 2, 3])
>>> our_deque_1.append(4)
>>> our_deque_1
[1, 2, 3, 4]
>>> our_deque_2 = Deque('ab')
>>> our_deque_2.append('c')
>>> our_deque_2
['a', 'b', 'c']
>>> from collections import deque
>>> deque_collections_1 = deque([1, 2, 3])
>>> deque_collections_1.append(4)
>>> deque_collections_1
deque([1, 2, 3, 4])
>>> deque_collections_2 = deque('ab')
>>> deque_collections_2.append('c')
>>> deque_collections_2
deque(['a', 'b', 'c'])
>>> list(our_deque_1) == list(deque_collections_1)
True
>>> list(our_deque_2) == list(deque_collections_2)
True
"""
node = self._Node(val, None, None)
if self.is_empty():
# front = back
self._front = self._back = node
self._len = 1
else:
# connect nodes
self._back.next_node = node
node.prev_node = self._back
self._back = node # assign new back to the new node
self._len += 1
# make sure there were no errors
assert not self.is_empty(), "Error on appending value."
def appendleft(self, val: Any) -> None:
"""
Adds val to the beginning of the deque.
Time complexity: O(1)
>>> our_deque_1 = Deque([2, 3])
>>> our_deque_1.appendleft(1)
>>> our_deque_1
[1, 2, 3]
>>> our_deque_2 = Deque('bc')
>>> our_deque_2.appendleft('a')
>>> our_deque_2
['a', 'b', 'c']
>>> from collections import deque
>>> deque_collections_1 = deque([2, 3])
>>> deque_collections_1.appendleft(1)
>>> deque_collections_1
deque([1, 2, 3])
>>> deque_collections_2 = deque('bc')
>>> deque_collections_2.appendleft('a')
>>> deque_collections_2
deque(['a', 'b', 'c'])
>>> list(our_deque_1) == list(deque_collections_1)
True
>>> list(our_deque_2) == list(deque_collections_2)
True
"""
node = self._Node(val, None, None)
if self.is_empty():
# front = back
self._front = self._back = node
self._len = 1
else:
# connect nodes
node.next_node = self._front
self._front.prev_node = node
self._front = node # assign new front to the new node
self._len += 1
# make sure there were no errors
assert not self.is_empty(), "Error on appending value."
def extend(self, iterable: Iterable[Any]) -> None:
"""
Appends every value of iterable to the end of the deque.
Time complexity: O(n)
>>> our_deque_1 = Deque([1, 2, 3])
>>> our_deque_1.extend([4, 5])
>>> our_deque_1
[1, 2, 3, 4, 5]
>>> our_deque_2 = Deque('ab')
>>> our_deque_2.extend('cd')
>>> our_deque_2
['a', 'b', 'c', 'd']
>>> from collections import deque
>>> deque_collections_1 = deque([1, 2, 3])
>>> deque_collections_1.extend([4, 5])
>>> deque_collections_1
deque([1, 2, 3, 4, 5])
>>> deque_collections_2 = deque('ab')
>>> deque_collections_2.extend('cd')
>>> deque_collections_2
deque(['a', 'b', 'c', 'd'])
>>> list(our_deque_1) == list(deque_collections_1)
True
>>> list(our_deque_2) == list(deque_collections_2)
True
"""
for val in iterable:
self.append(val)
def extendleft(self, iterable: Iterable[Any]) -> None:
"""
Appends every value of iterable to the beginning of the deque.
Time complexity: O(n)
>>> our_deque_1 = Deque([1, 2, 3])
>>> our_deque_1.extendleft([0, -1])
>>> our_deque_1
[-1, 0, 1, 2, 3]
>>> our_deque_2 = Deque('cd')
>>> our_deque_2.extendleft('ba')
>>> our_deque_2
['a', 'b', 'c', 'd']
>>> from collections import deque
>>> deque_collections_1 = deque([1, 2, 3])
>>> deque_collections_1.extendleft([0, -1])
>>> deque_collections_1
deque([-1, 0, 1, 2, 3])
>>> deque_collections_2 = deque('cd')
>>> deque_collections_2.extendleft('ba')
>>> deque_collections_2
deque(['a', 'b', 'c', 'd'])
>>> list(our_deque_1) == list(deque_collections_1)
True
>>> list(our_deque_2) == list(deque_collections_2)
True
"""
for val in iterable:
self.appendleft(val)
def pop(self) -> Any:
"""
Removes the last element of the deque and returns it.
Time complexity: O(1)
@returns topop.val: the value of the node to pop.
>>> our_deque = Deque([1, 2, 3, 15182])
>>> our_popped = our_deque.pop()
>>> our_popped
15182
>>> our_deque
[1, 2, 3]
>>> from collections import deque
>>> deque_collections = deque([1, 2, 3, 15182])
>>> collections_popped = deque_collections.pop()
>>> collections_popped
15182
>>> deque_collections
deque([1, 2, 3])
>>> list(our_deque) == list(deque_collections)
True
>>> our_popped == collections_popped
True
"""
# make sure the deque has elements to pop
assert not self.is_empty(), "Deque is empty."
topop = self._back
self._back = self._back.prev_node # set new back
# drop the last node - python will deallocate memory automatically
self._back.next_node = None
self._len -= 1
return topop.val
def popleft(self) -> Any:
"""
Removes the first element of the deque and returns it.
Time complexity: O(1)
@returns topop.val: the value of the node to pop.
>>> our_deque = Deque([15182, 1, 2, 3])
>>> our_popped = our_deque.popleft()
>>> our_popped
15182
>>> our_deque
[1, 2, 3]
>>> from collections import deque
>>> deque_collections = deque([15182, 1, 2, 3])
>>> collections_popped = deque_collections.popleft()
>>> collections_popped
15182
>>> deque_collections
deque([1, 2, 3])
>>> list(our_deque) == list(deque_collections)
True
>>> our_popped == collections_popped
True
"""
# make sure the deque has elements to pop
assert not self.is_empty(), "Deque is empty."
topop = self._front
self._front = self._front.next_node # set new front and drop the first node
self._front.prev_node = None
self._len -= 1
return topop.val
def is_empty(self) -> bool:
"""
Checks if the deque is empty.
Time complexity: O(1)
>>> our_deque = Deque([1, 2, 3])
>>> our_deque.is_empty()
False
>>> our_empty_deque = Deque()
>>> our_empty_deque.is_empty()
True
>>> from collections import deque
>>> empty_deque_collections = deque()
>>> list(our_empty_deque) == list(empty_deque_collections)
True
"""
return self._front is None
def __len__(self) -> int:
"""
Implements len() function. Returns the length of the deque.
Time complexity: O(1)
>>> our_deque = Deque([1, 2, 3])
>>> len(our_deque)
3
>>> our_empty_deque = Deque()
>>> len(our_empty_deque)
0
>>> from collections import deque
>>> deque_collections = deque([1, 2, 3])
>>> len(deque_collections)
3
>>> empty_deque_collections = deque()
>>> len(empty_deque_collections)
0
>>> len(our_empty_deque) == len(empty_deque_collections)
True
"""
return self._len
def __eq__(self, other: object) -> bool:
"""
Implements "==" operator. Returns if *self* is equal to *other*.
Time complexity: O(n)
>>> our_deque_1 = Deque([1, 2, 3])
>>> our_deque_2 = Deque([1, 2, 3])
>>> our_deque_1 == our_deque_2
True
>>> our_deque_3 = Deque([1, 2])
>>> our_deque_1 == our_deque_3
False
>>> from collections import deque
>>> deque_collections_1 = deque([1, 2, 3])
>>> deque_collections_2 = deque([1, 2, 3])
>>> deque_collections_1 == deque_collections_2
True
>>> deque_collections_3 = deque([1, 2])
>>> deque_collections_1 == deque_collections_3
False
>>> (our_deque_1 == our_deque_2) == (deque_collections_1 == deque_collections_2)
True
>>> (our_deque_1 == our_deque_3) == (deque_collections_1 == deque_collections_3)
True
"""
if not isinstance(other, Deque):
return NotImplemented
me = self._front
oth = other._front
# if the length of the dequeues are not the same, they are not equal
if len(self) != len(other):
return False
while me is not None and oth is not None:
# compare every value
if me.val != oth.val:
return False
me = me.next_node
oth = oth.next_node
return True
def __iter__(self) -> Deque._Iterator:
"""
Implements iteration.
Time complexity: O(1)
>>> our_deque = Deque([1, 2, 3])
>>> for v in our_deque:
... print(v)
1
2
3
>>> from collections import deque
>>> deque_collections = deque([1, 2, 3])
>>> for v in deque_collections:
... print(v)
1
2
3
"""
return Deque._Iterator(self._front)
def __repr__(self) -> str:
"""
Implements representation of the deque.
Represents it as a list, with its values between '[' and ']'.
Time complexity: O(n)
>>> our_deque = Deque([1, 2, 3])
>>> our_deque
[1, 2, 3]
"""
values_list = []
aux = self._front
while aux is not None:
# append the values in a list to display
values_list.append(aux.val)
aux = aux.next_node
return f"[{', '.join(repr(val) for val in values_list)}]"
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Implementation of double ended queue.
"""
from __future__ import annotations
from collections.abc import Iterable
from dataclasses import dataclass
from typing import Any
class Deque:
"""
Deque data structure.
Operations
----------
append(val: Any) -> None
appendleft(val: Any) -> None
extend(iterable: Iterable) -> None
extendleft(iterable: Iterable) -> None
pop() -> Any
popleft() -> Any
Observers
---------
is_empty() -> bool
Attributes
----------
_front: _Node
front of the deque a.k.a. the first element
_back: _Node
back of the element a.k.a. the last element
_len: int
the number of nodes
"""
__slots__ = ["_front", "_back", "_len"]
@dataclass
class _Node:
"""
Representation of a node.
Contains a value and a pointer to the next node as well as to the previous one.
"""
val: Any = None
next_node: Deque._Node | None = None
prev_node: Deque._Node | None = None
class _Iterator:
"""
Helper class for iteration. Will be used to implement iteration.
Attributes
----------
_cur: _Node
the current node of the iteration.
"""
__slots__ = ["_cur"]
def __init__(self, cur: Deque._Node | None) -> None:
self._cur = cur
def __iter__(self) -> Deque._Iterator:
"""
>>> our_deque = Deque([1, 2, 3])
>>> iterator = iter(our_deque)
"""
return self
def __next__(self) -> Any:
"""
>>> our_deque = Deque([1, 2, 3])
>>> iterator = iter(our_deque)
>>> next(iterator)
1
>>> next(iterator)
2
>>> next(iterator)
3
"""
if self._cur is None:
# finished iterating
raise StopIteration
val = self._cur.val
self._cur = self._cur.next_node
return val
def __init__(self, iterable: Iterable[Any] | None = None) -> None:
self._front: Any = None
self._back: Any = None
self._len: int = 0
if iterable is not None:
# append every value to the deque
for val in iterable:
self.append(val)
def append(self, val: Any) -> None:
"""
Adds val to the end of the deque.
Time complexity: O(1)
>>> our_deque_1 = Deque([1, 2, 3])
>>> our_deque_1.append(4)
>>> our_deque_1
[1, 2, 3, 4]
>>> our_deque_2 = Deque('ab')
>>> our_deque_2.append('c')
>>> our_deque_2
['a', 'b', 'c']
>>> from collections import deque
>>> deque_collections_1 = deque([1, 2, 3])
>>> deque_collections_1.append(4)
>>> deque_collections_1
deque([1, 2, 3, 4])
>>> deque_collections_2 = deque('ab')
>>> deque_collections_2.append('c')
>>> deque_collections_2
deque(['a', 'b', 'c'])
>>> list(our_deque_1) == list(deque_collections_1)
True
>>> list(our_deque_2) == list(deque_collections_2)
True
"""
node = self._Node(val, None, None)
if self.is_empty():
# front = back
self._front = self._back = node
self._len = 1
else:
# connect nodes
self._back.next_node = node
node.prev_node = self._back
self._back = node # assign new back to the new node
self._len += 1
# make sure there were no errors
assert not self.is_empty(), "Error on appending value."
def appendleft(self, val: Any) -> None:
"""
Adds val to the beginning of the deque.
Time complexity: O(1)
>>> our_deque_1 = Deque([2, 3])
>>> our_deque_1.appendleft(1)
>>> our_deque_1
[1, 2, 3]
>>> our_deque_2 = Deque('bc')
>>> our_deque_2.appendleft('a')
>>> our_deque_2
['a', 'b', 'c']
>>> from collections import deque
>>> deque_collections_1 = deque([2, 3])
>>> deque_collections_1.appendleft(1)
>>> deque_collections_1
deque([1, 2, 3])
>>> deque_collections_2 = deque('bc')
>>> deque_collections_2.appendleft('a')
>>> deque_collections_2
deque(['a', 'b', 'c'])
>>> list(our_deque_1) == list(deque_collections_1)
True
>>> list(our_deque_2) == list(deque_collections_2)
True
"""
node = self._Node(val, None, None)
if self.is_empty():
# front = back
self._front = self._back = node
self._len = 1
else:
# connect nodes
node.next_node = self._front
self._front.prev_node = node
self._front = node # assign new front to the new node
self._len += 1
# make sure there were no errors
assert not self.is_empty(), "Error on appending value."
def extend(self, iterable: Iterable[Any]) -> None:
"""
Appends every value of iterable to the end of the deque.
Time complexity: O(n)
>>> our_deque_1 = Deque([1, 2, 3])
>>> our_deque_1.extend([4, 5])
>>> our_deque_1
[1, 2, 3, 4, 5]
>>> our_deque_2 = Deque('ab')
>>> our_deque_2.extend('cd')
>>> our_deque_2
['a', 'b', 'c', 'd']
>>> from collections import deque
>>> deque_collections_1 = deque([1, 2, 3])
>>> deque_collections_1.extend([4, 5])
>>> deque_collections_1
deque([1, 2, 3, 4, 5])
>>> deque_collections_2 = deque('ab')
>>> deque_collections_2.extend('cd')
>>> deque_collections_2
deque(['a', 'b', 'c', 'd'])
>>> list(our_deque_1) == list(deque_collections_1)
True
>>> list(our_deque_2) == list(deque_collections_2)
True
"""
for val in iterable:
self.append(val)
def extendleft(self, iterable: Iterable[Any]) -> None:
"""
Appends every value of iterable to the beginning of the deque.
Time complexity: O(n)
>>> our_deque_1 = Deque([1, 2, 3])
>>> our_deque_1.extendleft([0, -1])
>>> our_deque_1
[-1, 0, 1, 2, 3]
>>> our_deque_2 = Deque('cd')
>>> our_deque_2.extendleft('ba')
>>> our_deque_2
['a', 'b', 'c', 'd']
>>> from collections import deque
>>> deque_collections_1 = deque([1, 2, 3])
>>> deque_collections_1.extendleft([0, -1])
>>> deque_collections_1
deque([-1, 0, 1, 2, 3])
>>> deque_collections_2 = deque('cd')
>>> deque_collections_2.extendleft('ba')
>>> deque_collections_2
deque(['a', 'b', 'c', 'd'])
>>> list(our_deque_1) == list(deque_collections_1)
True
>>> list(our_deque_2) == list(deque_collections_2)
True
"""
for val in iterable:
self.appendleft(val)
def pop(self) -> Any:
"""
Removes the last element of the deque and returns it.
Time complexity: O(1)
@returns topop.val: the value of the node to pop.
>>> our_deque = Deque([1, 2, 3, 15182])
>>> our_popped = our_deque.pop()
>>> our_popped
15182
>>> our_deque
[1, 2, 3]
>>> from collections import deque
>>> deque_collections = deque([1, 2, 3, 15182])
>>> collections_popped = deque_collections.pop()
>>> collections_popped
15182
>>> deque_collections
deque([1, 2, 3])
>>> list(our_deque) == list(deque_collections)
True
>>> our_popped == collections_popped
True
"""
# make sure the deque has elements to pop
assert not self.is_empty(), "Deque is empty."
topop = self._back
self._back = self._back.prev_node # set new back
# drop the last node - python will deallocate memory automatically
self._back.next_node = None
self._len -= 1
return topop.val
def popleft(self) -> Any:
"""
Removes the first element of the deque and returns it.
Time complexity: O(1)
@returns topop.val: the value of the node to pop.
>>> our_deque = Deque([15182, 1, 2, 3])
>>> our_popped = our_deque.popleft()
>>> our_popped
15182
>>> our_deque
[1, 2, 3]
>>> from collections import deque
>>> deque_collections = deque([15182, 1, 2, 3])
>>> collections_popped = deque_collections.popleft()
>>> collections_popped
15182
>>> deque_collections
deque([1, 2, 3])
>>> list(our_deque) == list(deque_collections)
True
>>> our_popped == collections_popped
True
"""
# make sure the deque has elements to pop
assert not self.is_empty(), "Deque is empty."
topop = self._front
self._front = self._front.next_node # set new front and drop the first node
self._front.prev_node = None
self._len -= 1
return topop.val
def is_empty(self) -> bool:
"""
Checks if the deque is empty.
Time complexity: O(1)
>>> our_deque = Deque([1, 2, 3])
>>> our_deque.is_empty()
False
>>> our_empty_deque = Deque()
>>> our_empty_deque.is_empty()
True
>>> from collections import deque
>>> empty_deque_collections = deque()
>>> list(our_empty_deque) == list(empty_deque_collections)
True
"""
return self._front is None
def __len__(self) -> int:
"""
Implements len() function. Returns the length of the deque.
Time complexity: O(1)
>>> our_deque = Deque([1, 2, 3])
>>> len(our_deque)
3
>>> our_empty_deque = Deque()
>>> len(our_empty_deque)
0
>>> from collections import deque
>>> deque_collections = deque([1, 2, 3])
>>> len(deque_collections)
3
>>> empty_deque_collections = deque()
>>> len(empty_deque_collections)
0
>>> len(our_empty_deque) == len(empty_deque_collections)
True
"""
return self._len
def __eq__(self, other: object) -> bool:
"""
Implements "==" operator. Returns if *self* is equal to *other*.
Time complexity: O(n)
>>> our_deque_1 = Deque([1, 2, 3])
>>> our_deque_2 = Deque([1, 2, 3])
>>> our_deque_1 == our_deque_2
True
>>> our_deque_3 = Deque([1, 2])
>>> our_deque_1 == our_deque_3
False
>>> from collections import deque
>>> deque_collections_1 = deque([1, 2, 3])
>>> deque_collections_2 = deque([1, 2, 3])
>>> deque_collections_1 == deque_collections_2
True
>>> deque_collections_3 = deque([1, 2])
>>> deque_collections_1 == deque_collections_3
False
>>> (our_deque_1 == our_deque_2) == (deque_collections_1 == deque_collections_2)
True
>>> (our_deque_1 == our_deque_3) == (deque_collections_1 == deque_collections_3)
True
"""
if not isinstance(other, Deque):
return NotImplemented
me = self._front
oth = other._front
# if the length of the dequeues are not the same, they are not equal
if len(self) != len(other):
return False
while me is not None and oth is not None:
# compare every value
if me.val != oth.val:
return False
me = me.next_node
oth = oth.next_node
return True
def __iter__(self) -> Deque._Iterator:
"""
Implements iteration.
Time complexity: O(1)
>>> our_deque = Deque([1, 2, 3])
>>> for v in our_deque:
... print(v)
1
2
3
>>> from collections import deque
>>> deque_collections = deque([1, 2, 3])
>>> for v in deque_collections:
... print(v)
1
2
3
"""
return Deque._Iterator(self._front)
def __repr__(self) -> str:
"""
Implements representation of the deque.
Represents it as a list, with its values between '[' and ']'.
Time complexity: O(n)
>>> our_deque = Deque([1, 2, 3])
>>> our_deque
[1, 2, 3]
"""
values_list = []
aux = self._front
while aux is not None:
# append the values in a list to display
values_list.append(aux.val)
aux = aux.next_node
return f"[{', '.join(repr(val) for val in values_list)}]"
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from sklearn.neural_network import MLPClassifier
X = [[0.0, 0.0], [1.0, 1.0], [1.0, 0.0], [0.0, 1.0]]
y = [0, 1, 0, 0]
clf = MLPClassifier(
solver="lbfgs", alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1
)
clf.fit(X, y)
test = [[0.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
Y = clf.predict(test)
def wrapper(y):
"""
>>> wrapper(Y)
[0, 0, 1]
"""
return list(y)
if __name__ == "__main__":
import doctest
doctest.testmod()
| from sklearn.neural_network import MLPClassifier
X = [[0.0, 0.0], [1.0, 1.0], [1.0, 0.0], [0.0, 1.0]]
y = [0, 1, 0, 0]
clf = MLPClassifier(
solver="lbfgs", alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1
)
clf.fit(X, y)
test = [[0.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
Y = clf.predict(test)
def wrapper(y):
"""
>>> wrapper(Y)
[0, 0, 1]
"""
return list(y)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Resources:
- https://en.wikipedia.org/wiki/Conjugate_gradient_method
- https://en.wikipedia.org/wiki/Definite_symmetric_matrix
"""
from typing import Any
import numpy as np
def _is_matrix_spd(matrix: np.ndarray) -> bool:
"""
Returns True if input matrix is symmetric positive definite.
Returns False otherwise.
For a matrix to be SPD, all eigenvalues must be positive.
>>> import numpy as np
>>> matrix = np.array([
... [4.12401784, -5.01453636, -0.63865857],
... [-5.01453636, 12.33347422, -3.40493586],
... [-0.63865857, -3.40493586, 5.78591885]])
>>> _is_matrix_spd(matrix)
True
>>> matrix = np.array([
... [0.34634879, 1.96165514, 2.18277744],
... [0.74074469, -1.19648894, -1.34223498],
... [-0.7687067 , 0.06018373, -1.16315631]])
>>> _is_matrix_spd(matrix)
False
"""
# Ensure matrix is square.
assert np.shape(matrix)[0] == np.shape(matrix)[1]
# If matrix not symmetric, exit right away.
if np.allclose(matrix, matrix.T) is False:
return False
# Get eigenvalues and eignevectors for a symmetric matrix.
eigen_values, _ = np.linalg.eigh(matrix)
# Check sign of all eigenvalues.
# np.all returns a value of type np.bool_
return bool(np.all(eigen_values > 0))
def _create_spd_matrix(dimension: int) -> Any:
"""
Returns a symmetric positive definite matrix given a dimension.
Input:
dimension gives the square matrix dimension.
Output:
spd_matrix is an diminesion x dimensions symmetric positive definite (SPD) matrix.
>>> import numpy as np
>>> dimension = 3
>>> spd_matrix = _create_spd_matrix(dimension)
>>> _is_matrix_spd(spd_matrix)
True
"""
random_matrix = np.random.randn(dimension, dimension)
spd_matrix = np.dot(random_matrix, random_matrix.T)
assert _is_matrix_spd(spd_matrix)
return spd_matrix
def conjugate_gradient(
spd_matrix: np.ndarray,
load_vector: np.ndarray,
max_iterations: int = 1000,
tol: float = 1e-8,
) -> Any:
"""
Returns solution to the linear system np.dot(spd_matrix, x) = b.
Input:
spd_matrix is an NxN Symmetric Positive Definite (SPD) matrix.
load_vector is an Nx1 vector.
Output:
x is an Nx1 vector that is the solution vector.
>>> import numpy as np
>>> spd_matrix = np.array([
... [8.73256573, -5.02034289, -2.68709226],
... [-5.02034289, 3.78188322, 0.91980451],
... [-2.68709226, 0.91980451, 1.94746467]])
>>> b = np.array([
... [-5.80872761],
... [ 3.23807431],
... [ 1.95381422]])
>>> conjugate_gradient(spd_matrix, b)
array([[-0.63114139],
[-0.01561498],
[ 0.13979294]])
"""
# Ensure proper dimensionality.
assert np.shape(spd_matrix)[0] == np.shape(spd_matrix)[1]
assert np.shape(load_vector)[0] == np.shape(spd_matrix)[0]
assert _is_matrix_spd(spd_matrix)
# Initialize solution guess, residual, search direction.
x0 = np.zeros((np.shape(load_vector)[0], 1))
r0 = np.copy(load_vector)
p0 = np.copy(r0)
# Set initial errors in solution guess and residual.
error_residual = 1e9
error_x_solution = 1e9
error = 1e9
# Set iteration counter to threshold number of iterations.
iterations = 0
while error > tol:
# Save this value so we only calculate the matrix-vector product once.
w = np.dot(spd_matrix, p0)
# The main algorithm.
# Update search direction magnitude.
alpha = np.dot(r0.T, r0) / np.dot(p0.T, w)
# Update solution guess.
x = x0 + alpha * p0
# Calculate new residual.
r = r0 - alpha * w
# Calculate new Krylov subspace scale.
beta = np.dot(r.T, r) / np.dot(r0.T, r0)
# Calculate new A conjuage search direction.
p = r + beta * p0
# Calculate errors.
error_residual = np.linalg.norm(r - r0)
error_x_solution = np.linalg.norm(x - x0)
error = np.maximum(error_residual, error_x_solution)
# Update variables.
x0 = np.copy(x)
r0 = np.copy(r)
p0 = np.copy(p)
# Update number of iterations.
iterations += 1
if iterations > max_iterations:
break
return x
def test_conjugate_gradient() -> None:
"""
>>> test_conjugate_gradient() # self running tests
"""
# Create linear system with SPD matrix and known solution x_true.
dimension = 3
spd_matrix = _create_spd_matrix(dimension)
x_true = np.random.randn(dimension, 1)
b = np.dot(spd_matrix, x_true)
# Numpy solution.
x_numpy = np.linalg.solve(spd_matrix, b)
# Our implementation.
x_conjugate_gradient = conjugate_gradient(spd_matrix, b)
# Ensure both solutions are close to x_true (and therefore one another).
assert np.linalg.norm(x_numpy - x_true) <= 1e-6
assert np.linalg.norm(x_conjugate_gradient - x_true) <= 1e-6
if __name__ == "__main__":
import doctest
doctest.testmod()
test_conjugate_gradient()
| """
Resources:
- https://en.wikipedia.org/wiki/Conjugate_gradient_method
- https://en.wikipedia.org/wiki/Definite_symmetric_matrix
"""
from typing import Any
import numpy as np
def _is_matrix_spd(matrix: np.ndarray) -> bool:
"""
Returns True if input matrix is symmetric positive definite.
Returns False otherwise.
For a matrix to be SPD, all eigenvalues must be positive.
>>> import numpy as np
>>> matrix = np.array([
... [4.12401784, -5.01453636, -0.63865857],
... [-5.01453636, 12.33347422, -3.40493586],
... [-0.63865857, -3.40493586, 5.78591885]])
>>> _is_matrix_spd(matrix)
True
>>> matrix = np.array([
... [0.34634879, 1.96165514, 2.18277744],
... [0.74074469, -1.19648894, -1.34223498],
... [-0.7687067 , 0.06018373, -1.16315631]])
>>> _is_matrix_spd(matrix)
False
"""
# Ensure matrix is square.
assert np.shape(matrix)[0] == np.shape(matrix)[1]
# If matrix not symmetric, exit right away.
if np.allclose(matrix, matrix.T) is False:
return False
# Get eigenvalues and eignevectors for a symmetric matrix.
eigen_values, _ = np.linalg.eigh(matrix)
# Check sign of all eigenvalues.
# np.all returns a value of type np.bool_
return bool(np.all(eigen_values > 0))
def _create_spd_matrix(dimension: int) -> Any:
"""
Returns a symmetric positive definite matrix given a dimension.
Input:
dimension gives the square matrix dimension.
Output:
spd_matrix is an diminesion x dimensions symmetric positive definite (SPD) matrix.
>>> import numpy as np
>>> dimension = 3
>>> spd_matrix = _create_spd_matrix(dimension)
>>> _is_matrix_spd(spd_matrix)
True
"""
random_matrix = np.random.randn(dimension, dimension)
spd_matrix = np.dot(random_matrix, random_matrix.T)
assert _is_matrix_spd(spd_matrix)
return spd_matrix
def conjugate_gradient(
spd_matrix: np.ndarray,
load_vector: np.ndarray,
max_iterations: int = 1000,
tol: float = 1e-8,
) -> Any:
"""
Returns solution to the linear system np.dot(spd_matrix, x) = b.
Input:
spd_matrix is an NxN Symmetric Positive Definite (SPD) matrix.
load_vector is an Nx1 vector.
Output:
x is an Nx1 vector that is the solution vector.
>>> import numpy as np
>>> spd_matrix = np.array([
... [8.73256573, -5.02034289, -2.68709226],
... [-5.02034289, 3.78188322, 0.91980451],
... [-2.68709226, 0.91980451, 1.94746467]])
>>> b = np.array([
... [-5.80872761],
... [ 3.23807431],
... [ 1.95381422]])
>>> conjugate_gradient(spd_matrix, b)
array([[-0.63114139],
[-0.01561498],
[ 0.13979294]])
"""
# Ensure proper dimensionality.
assert np.shape(spd_matrix)[0] == np.shape(spd_matrix)[1]
assert np.shape(load_vector)[0] == np.shape(spd_matrix)[0]
assert _is_matrix_spd(spd_matrix)
# Initialize solution guess, residual, search direction.
x0 = np.zeros((np.shape(load_vector)[0], 1))
r0 = np.copy(load_vector)
p0 = np.copy(r0)
# Set initial errors in solution guess and residual.
error_residual = 1e9
error_x_solution = 1e9
error = 1e9
# Set iteration counter to threshold number of iterations.
iterations = 0
while error > tol:
# Save this value so we only calculate the matrix-vector product once.
w = np.dot(spd_matrix, p0)
# The main algorithm.
# Update search direction magnitude.
alpha = np.dot(r0.T, r0) / np.dot(p0.T, w)
# Update solution guess.
x = x0 + alpha * p0
# Calculate new residual.
r = r0 - alpha * w
# Calculate new Krylov subspace scale.
beta = np.dot(r.T, r) / np.dot(r0.T, r0)
# Calculate new A conjuage search direction.
p = r + beta * p0
# Calculate errors.
error_residual = np.linalg.norm(r - r0)
error_x_solution = np.linalg.norm(x - x0)
error = np.maximum(error_residual, error_x_solution)
# Update variables.
x0 = np.copy(x)
r0 = np.copy(r)
p0 = np.copy(p)
# Update number of iterations.
iterations += 1
if iterations > max_iterations:
break
return x
def test_conjugate_gradient() -> None:
"""
>>> test_conjugate_gradient() # self running tests
"""
# Create linear system with SPD matrix and known solution x_true.
dimension = 3
spd_matrix = _create_spd_matrix(dimension)
x_true = np.random.randn(dimension, 1)
b = np.dot(spd_matrix, x_true)
# Numpy solution.
x_numpy = np.linalg.solve(spd_matrix, b)
# Our implementation.
x_conjugate_gradient = conjugate_gradient(spd_matrix, b)
# Ensure both solutions are close to x_true (and therefore one another).
assert np.linalg.norm(x_numpy - x_true) <= 1e-6
assert np.linalg.norm(x_conjugate_gradient - x_true) <= 1e-6
if __name__ == "__main__":
import doctest
doctest.testmod()
test_conjugate_gradient()
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Pure Python implementation of the jump search algorithm.
This algorithm iterates through a sorted collection with a step of n^(1/2),
until the element compared is bigger than the one searched.
It will then perform a linear search until it matches the wanted number.
If not found, it returns -1.
"""
import math
def jump_search(arr: list, x: int) -> int:
"""
Pure Python implementation of the jump search algorithm.
Examples:
>>> jump_search([0, 1, 2, 3, 4, 5], 3)
3
>>> jump_search([-5, -2, -1], -1)
2
>>> jump_search([0, 5, 10, 20], 8)
-1
>>> jump_search([0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610], 55)
10
"""
n = len(arr)
step = int(math.floor(math.sqrt(n)))
prev = 0
while arr[min(step, n) - 1] < x:
prev = step
step += int(math.floor(math.sqrt(n)))
if prev >= n:
return -1
while arr[prev] < x:
prev = prev + 1
if prev == min(step, n):
return -1
if arr[prev] == x:
return prev
return -1
if __name__ == "__main__":
user_input = input("Enter numbers separated by a comma:\n").strip()
arr = [int(item) for item in user_input.split(",")]
x = int(input("Enter the number to be searched:\n"))
res = jump_search(arr, x)
if res == -1:
print("Number not found!")
else:
print(f"Number {x} is at index {res}")
| """
Pure Python implementation of the jump search algorithm.
This algorithm iterates through a sorted collection with a step of n^(1/2),
until the element compared is bigger than the one searched.
It will then perform a linear search until it matches the wanted number.
If not found, it returns -1.
"""
import math
def jump_search(arr: list, x: int) -> int:
"""
Pure Python implementation of the jump search algorithm.
Examples:
>>> jump_search([0, 1, 2, 3, 4, 5], 3)
3
>>> jump_search([-5, -2, -1], -1)
2
>>> jump_search([0, 5, 10, 20], 8)
-1
>>> jump_search([0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610], 55)
10
"""
n = len(arr)
step = int(math.floor(math.sqrt(n)))
prev = 0
while arr[min(step, n) - 1] < x:
prev = step
step += int(math.floor(math.sqrt(n)))
if prev >= n:
return -1
while arr[prev] < x:
prev = prev + 1
if prev == min(step, n):
return -1
if arr[prev] == x:
return prev
return -1
if __name__ == "__main__":
user_input = input("Enter numbers separated by a comma:\n").strip()
arr = [int(item) for item in user_input.split(",")]
x = int(input("Enter the number to be searched:\n"))
res = jump_search(arr, x)
if res == -1:
print("Number not found!")
else:
print(f"Number {x} is at index {res}")
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Implementing Newton Raphson method in Python
# Author: Syed Haseeb Shah (github.com/QuantumNovice)
# The Newton-Raphson method (also known as Newton's method) is a way to
# quickly find a good approximation for the root of a real-valued function
from __future__ import annotations
from decimal import Decimal
from math import * # noqa: F403
from sympy import diff
def newton_raphson(
func: str, a: float | Decimal, precision: float = 10**-10
) -> float:
"""Finds root from the point 'a' onwards by Newton-Raphson method
>>> newton_raphson("sin(x)", 2)
3.1415926536808043
>>> newton_raphson("x**2 - 5*x +2", 0.4)
0.4384471871911695
>>> newton_raphson("x**2 - 5", 0.1)
2.23606797749979
>>> newton_raphson("log(x)- 1", 2)
2.718281828458938
"""
x = a
while True:
x = Decimal(x) - (Decimal(eval(func)) / Decimal(eval(str(diff(func)))))
# This number dictates the accuracy of the answer
if abs(eval(func)) < precision:
return float(x)
# Let's Execute
if __name__ == "__main__":
# Find root of trigonometric function
# Find value of pi
print(f"The root of sin(x) = 0 is {newton_raphson('sin(x)', 2)}")
# Find root of polynomial
print(f"The root of x**2 - 5*x + 2 = 0 is {newton_raphson('x**2 - 5*x + 2', 0.4)}")
# Find Square Root of 5
print(f"The root of log(x) - 1 = 0 is {newton_raphson('log(x) - 1', 2)}")
# Exponential Roots
print(f"The root of exp(x) - 1 = 0 is {newton_raphson('exp(x) - 1', 0)}")
| # Implementing Newton Raphson method in Python
# Author: Syed Haseeb Shah (github.com/QuantumNovice)
# The Newton-Raphson method (also known as Newton's method) is a way to
# quickly find a good approximation for the root of a real-valued function
from __future__ import annotations
from decimal import Decimal
from math import * # noqa: F403
from sympy import diff
def newton_raphson(
func: str, a: float | Decimal, precision: float = 10**-10
) -> float:
"""Finds root from the point 'a' onwards by Newton-Raphson method
>>> newton_raphson("sin(x)", 2)
3.1415926536808043
>>> newton_raphson("x**2 - 5*x +2", 0.4)
0.4384471871911695
>>> newton_raphson("x**2 - 5", 0.1)
2.23606797749979
>>> newton_raphson("log(x)- 1", 2)
2.718281828458938
"""
x = a
while True:
x = Decimal(x) - (Decimal(eval(func)) / Decimal(eval(str(diff(func)))))
# This number dictates the accuracy of the answer
if abs(eval(func)) < precision:
return float(x)
# Let's Execute
if __name__ == "__main__":
# Find root of trigonometric function
# Find value of pi
print(f"The root of sin(x) = 0 is {newton_raphson('sin(x)', 2)}")
# Find root of polynomial
print(f"The root of x**2 - 5*x + 2 = 0 is {newton_raphson('x**2 - 5*x + 2', 0.4)}")
# Find Square Root of 5
print(f"The root of log(x) - 1 = 0 is {newton_raphson('log(x) - 1', 2)}")
# Exponential Roots
print(f"The root of exp(x) - 1 = 0 is {newton_raphson('exp(x) - 1', 0)}")
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Implementation of a basic regression decision tree.
Input data set: The input data set must be 1-dimensional with continuous labels.
Output: The decision tree maps a real number input to a real number output.
"""
import numpy as np
class DecisionTree:
def __init__(self, depth=5, min_leaf_size=5):
self.depth = depth
self.decision_boundary = 0
self.left = None
self.right = None
self.min_leaf_size = min_leaf_size
self.prediction = None
def mean_squared_error(self, labels, prediction):
"""
mean_squared_error:
@param labels: a one dimensional numpy array
@param prediction: a floating point value
return value: mean_squared_error calculates the error if prediction is used to
estimate the labels
>>> tester = DecisionTree()
>>> test_labels = np.array([1,2,3,4,5,6,7,8,9,10])
>>> test_prediction = float(6)
>>> tester.mean_squared_error(test_labels, test_prediction) == (
... TestDecisionTree.helper_mean_squared_error_test(test_labels,
... test_prediction))
True
>>> test_labels = np.array([1,2,3])
>>> test_prediction = float(2)
>>> tester.mean_squared_error(test_labels, test_prediction) == (
... TestDecisionTree.helper_mean_squared_error_test(test_labels,
... test_prediction))
True
"""
if labels.ndim != 1:
print("Error: Input labels must be one dimensional")
return np.mean((labels - prediction) ** 2)
def train(self, x, y):
"""
train:
@param x: a one dimensional numpy array
@param y: a one dimensional numpy array.
The contents of y are the labels for the corresponding X values
train does not have a return value
"""
"""
this section is to check that the inputs conform to our dimensionality
constraints
"""
if x.ndim != 1:
print("Error: Input data set must be one dimensional")
return
if len(x) != len(y):
print("Error: X and y have different lengths")
return
if y.ndim != 1:
print("Error: Data set labels must be one dimensional")
return
if len(x) < 2 * self.min_leaf_size:
self.prediction = np.mean(y)
return
if self.depth == 1:
self.prediction = np.mean(y)
return
best_split = 0
min_error = self.mean_squared_error(x, np.mean(y)) * 2
"""
loop over all possible splits for the decision tree. find the best split.
if no split exists that is less than 2 * error for the entire array
then the data set is not split and the average for the entire array is used as
the predictor
"""
for i in range(len(x)):
if len(x[:i]) < self.min_leaf_size:
continue
elif len(x[i:]) < self.min_leaf_size:
continue
else:
error_left = self.mean_squared_error(x[:i], np.mean(y[:i]))
error_right = self.mean_squared_error(x[i:], np.mean(y[i:]))
error = error_left + error_right
if error < min_error:
best_split = i
min_error = error
if best_split != 0:
left_x = x[:best_split]
left_y = y[:best_split]
right_x = x[best_split:]
right_y = y[best_split:]
self.decision_boundary = x[best_split]
self.left = DecisionTree(
depth=self.depth - 1, min_leaf_size=self.min_leaf_size
)
self.right = DecisionTree(
depth=self.depth - 1, min_leaf_size=self.min_leaf_size
)
self.left.train(left_x, left_y)
self.right.train(right_x, right_y)
else:
self.prediction = np.mean(y)
return
def predict(self, x):
"""
predict:
@param x: a floating point value to predict the label of
the prediction function works by recursively calling the predict function
of the appropriate subtrees based on the tree's decision boundary
"""
if self.prediction is not None:
return self.prediction
elif self.left or self.right is not None:
if x >= self.decision_boundary:
return self.right.predict(x)
else:
return self.left.predict(x)
else:
print("Error: Decision tree not yet trained")
return None
class TestDecisionTree:
"""Decision Tres test class"""
@staticmethod
def helper_mean_squared_error_test(labels, prediction):
"""
helper_mean_squared_error_test:
@param labels: a one dimensional numpy array
@param prediction: a floating point value
return value: helper_mean_squared_error_test calculates the mean squared error
"""
squared_error_sum = float(0)
for label in labels:
squared_error_sum += (label - prediction) ** 2
return float(squared_error_sum / labels.size)
def main():
"""
In this demonstration we're generating a sample data set from the sin function in
numpy. We then train a decision tree on the data set and use the decision tree to
predict the label of 10 different test values. Then the mean squared error over
this test is displayed.
"""
x = np.arange(-1.0, 1.0, 0.005)
y = np.sin(x)
tree = DecisionTree(depth=10, min_leaf_size=10)
tree.train(x, y)
test_cases = (np.random.rand(10) * 2) - 1
predictions = np.array([tree.predict(x) for x in test_cases])
avg_error = np.mean((predictions - test_cases) ** 2)
print("Test values: " + str(test_cases))
print("Predictions: " + str(predictions))
print("Average error: " + str(avg_error))
if __name__ == "__main__":
main()
import doctest
doctest.testmod(name="mean_squarred_error", verbose=True)
| """
Implementation of a basic regression decision tree.
Input data set: The input data set must be 1-dimensional with continuous labels.
Output: The decision tree maps a real number input to a real number output.
"""
import numpy as np
class DecisionTree:
def __init__(self, depth=5, min_leaf_size=5):
self.depth = depth
self.decision_boundary = 0
self.left = None
self.right = None
self.min_leaf_size = min_leaf_size
self.prediction = None
def mean_squared_error(self, labels, prediction):
"""
mean_squared_error:
@param labels: a one dimensional numpy array
@param prediction: a floating point value
return value: mean_squared_error calculates the error if prediction is used to
estimate the labels
>>> tester = DecisionTree()
>>> test_labels = np.array([1,2,3,4,5,6,7,8,9,10])
>>> test_prediction = float(6)
>>> tester.mean_squared_error(test_labels, test_prediction) == (
... TestDecisionTree.helper_mean_squared_error_test(test_labels,
... test_prediction))
True
>>> test_labels = np.array([1,2,3])
>>> test_prediction = float(2)
>>> tester.mean_squared_error(test_labels, test_prediction) == (
... TestDecisionTree.helper_mean_squared_error_test(test_labels,
... test_prediction))
True
"""
if labels.ndim != 1:
print("Error: Input labels must be one dimensional")
return np.mean((labels - prediction) ** 2)
def train(self, x, y):
"""
train:
@param x: a one dimensional numpy array
@param y: a one dimensional numpy array.
The contents of y are the labels for the corresponding X values
train does not have a return value
"""
"""
this section is to check that the inputs conform to our dimensionality
constraints
"""
if x.ndim != 1:
print("Error: Input data set must be one dimensional")
return
if len(x) != len(y):
print("Error: X and y have different lengths")
return
if y.ndim != 1:
print("Error: Data set labels must be one dimensional")
return
if len(x) < 2 * self.min_leaf_size:
self.prediction = np.mean(y)
return
if self.depth == 1:
self.prediction = np.mean(y)
return
best_split = 0
min_error = self.mean_squared_error(x, np.mean(y)) * 2
"""
loop over all possible splits for the decision tree. find the best split.
if no split exists that is less than 2 * error for the entire array
then the data set is not split and the average for the entire array is used as
the predictor
"""
for i in range(len(x)):
if len(x[:i]) < self.min_leaf_size:
continue
elif len(x[i:]) < self.min_leaf_size:
continue
else:
error_left = self.mean_squared_error(x[:i], np.mean(y[:i]))
error_right = self.mean_squared_error(x[i:], np.mean(y[i:]))
error = error_left + error_right
if error < min_error:
best_split = i
min_error = error
if best_split != 0:
left_x = x[:best_split]
left_y = y[:best_split]
right_x = x[best_split:]
right_y = y[best_split:]
self.decision_boundary = x[best_split]
self.left = DecisionTree(
depth=self.depth - 1, min_leaf_size=self.min_leaf_size
)
self.right = DecisionTree(
depth=self.depth - 1, min_leaf_size=self.min_leaf_size
)
self.left.train(left_x, left_y)
self.right.train(right_x, right_y)
else:
self.prediction = np.mean(y)
return
def predict(self, x):
"""
predict:
@param x: a floating point value to predict the label of
the prediction function works by recursively calling the predict function
of the appropriate subtrees based on the tree's decision boundary
"""
if self.prediction is not None:
return self.prediction
elif self.left or self.right is not None:
if x >= self.decision_boundary:
return self.right.predict(x)
else:
return self.left.predict(x)
else:
print("Error: Decision tree not yet trained")
return None
class TestDecisionTree:
"""Decision Tres test class"""
@staticmethod
def helper_mean_squared_error_test(labels, prediction):
"""
helper_mean_squared_error_test:
@param labels: a one dimensional numpy array
@param prediction: a floating point value
return value: helper_mean_squared_error_test calculates the mean squared error
"""
squared_error_sum = float(0)
for label in labels:
squared_error_sum += (label - prediction) ** 2
return float(squared_error_sum / labels.size)
def main():
"""
In this demonstration we're generating a sample data set from the sin function in
numpy. We then train a decision tree on the data set and use the decision tree to
predict the label of 10 different test values. Then the mean squared error over
this test is displayed.
"""
x = np.arange(-1.0, 1.0, 0.005)
y = np.sin(x)
tree = DecisionTree(depth=10, min_leaf_size=10)
tree.train(x, y)
test_cases = (np.random.rand(10) * 2) - 1
predictions = np.array([tree.predict(x) for x in test_cases])
avg_error = np.mean((predictions - test_cases) ** 2)
print("Test values: " + str(test_cases))
print("Predictions: " + str(predictions))
print("Average error: " + str(avg_error))
if __name__ == "__main__":
main()
import doctest
doctest.testmod(name="mean_squarred_error", verbose=True)
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Three distinct points are plotted at random on a Cartesian plane,
for which -1000 ≤ x, y ≤ 1000, such that a triangle is formed.
Consider the following two triangles:
A(-340,495), B(-153,-910), C(835,-947)
X(-175,41), Y(-421,-714), Z(574,-645)
It can be verified that triangle ABC contains the origin, whereas
triangle XYZ does not.
Using triangles.txt (right click and 'Save Link/Target As...'), a 27K text
file containing the coordinates of one thousand "random" triangles, find
the number of triangles for which the interior contains the origin.
NOTE: The first two examples in the file represent the triangles in the
example given above.
"""
from __future__ import annotations
from pathlib import Path
def vector_product(point1: tuple[int, int], point2: tuple[int, int]) -> int:
"""
Return the 2-d vector product of two vectors.
>>> vector_product((1, 2), (-5, 0))
10
>>> vector_product((3, 1), (6, 10))
24
"""
return point1[0] * point2[1] - point1[1] * point2[0]
def contains_origin(x1: int, y1: int, x2: int, y2: int, x3: int, y3: int) -> bool:
"""
Check if the triangle given by the points A(x1, y1), B(x2, y2), C(x3, y3)
contains the origin.
>>> contains_origin(-340, 495, -153, -910, 835, -947)
True
>>> contains_origin(-175, 41, -421, -714, 574, -645)
False
"""
point_a: tuple[int, int] = (x1, y1)
point_a_to_b: tuple[int, int] = (x2 - x1, y2 - y1)
point_a_to_c: tuple[int, int] = (x3 - x1, y3 - y1)
a: float = -vector_product(point_a, point_a_to_b) / vector_product(
point_a_to_c, point_a_to_b
)
b: float = +vector_product(point_a, point_a_to_c) / vector_product(
point_a_to_c, point_a_to_b
)
return a > 0 and b > 0 and a + b < 1
def solution(filename: str = "p102_triangles.txt") -> int:
"""
Find the number of triangles whose interior contains the origin.
>>> solution("test_triangles.txt")
1
"""
data: str = Path(__file__).parent.joinpath(filename).read_text(encoding="utf-8")
triangles: list[list[int]] = []
for line in data.strip().split("\n"):
triangles.append([int(number) for number in line.split(",")])
ret: int = 0
triangle: list[int]
for triangle in triangles:
ret += contains_origin(*triangle)
return ret
if __name__ == "__main__":
print(f"{solution() = }")
| """
Three distinct points are plotted at random on a Cartesian plane,
for which -1000 ≤ x, y ≤ 1000, such that a triangle is formed.
Consider the following two triangles:
A(-340,495), B(-153,-910), C(835,-947)
X(-175,41), Y(-421,-714), Z(574,-645)
It can be verified that triangle ABC contains the origin, whereas
triangle XYZ does not.
Using triangles.txt (right click and 'Save Link/Target As...'), a 27K text
file containing the coordinates of one thousand "random" triangles, find
the number of triangles for which the interior contains the origin.
NOTE: The first two examples in the file represent the triangles in the
example given above.
"""
from __future__ import annotations
from pathlib import Path
def vector_product(point1: tuple[int, int], point2: tuple[int, int]) -> int:
"""
Return the 2-d vector product of two vectors.
>>> vector_product((1, 2), (-5, 0))
10
>>> vector_product((3, 1), (6, 10))
24
"""
return point1[0] * point2[1] - point1[1] * point2[0]
def contains_origin(x1: int, y1: int, x2: int, y2: int, x3: int, y3: int) -> bool:
"""
Check if the triangle given by the points A(x1, y1), B(x2, y2), C(x3, y3)
contains the origin.
>>> contains_origin(-340, 495, -153, -910, 835, -947)
True
>>> contains_origin(-175, 41, -421, -714, 574, -645)
False
"""
point_a: tuple[int, int] = (x1, y1)
point_a_to_b: tuple[int, int] = (x2 - x1, y2 - y1)
point_a_to_c: tuple[int, int] = (x3 - x1, y3 - y1)
a: float = -vector_product(point_a, point_a_to_b) / vector_product(
point_a_to_c, point_a_to_b
)
b: float = +vector_product(point_a, point_a_to_c) / vector_product(
point_a_to_c, point_a_to_b
)
return a > 0 and b > 0 and a + b < 1
def solution(filename: str = "p102_triangles.txt") -> int:
"""
Find the number of triangles whose interior contains the origin.
>>> solution("test_triangles.txt")
1
"""
data: str = Path(__file__).parent.joinpath(filename).read_text(encoding="utf-8")
triangles: list[list[int]] = []
for line in data.strip().split("\n"):
triangles.append([int(number) for number in line.split(",")])
ret: int = 0
triangle: list[int]
for triangle in triangles:
ret += contains_origin(*triangle)
return ret
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Author : Alexander Pantyukhin
Date : November 1, 2022
Task:
Given a positive int number. Return True if this number is power of 2
or False otherwise.
Implementation notes: Use bit manipulation.
For example if the number is the power of two it's bits representation:
n = 0..100..00
n - 1 = 0..011..11
n & (n - 1) - no intersections = 0
"""
def is_power_of_two(number: int) -> bool:
"""
Return True if this number is power of 2 or False otherwise.
>>> is_power_of_two(0)
True
>>> is_power_of_two(1)
True
>>> is_power_of_two(2)
True
>>> is_power_of_two(4)
True
>>> is_power_of_two(6)
False
>>> is_power_of_two(8)
True
>>> is_power_of_two(17)
False
>>> is_power_of_two(-1)
Traceback (most recent call last):
...
ValueError: number must not be negative
>>> is_power_of_two(1.2)
Traceback (most recent call last):
...
TypeError: unsupported operand type(s) for &: 'float' and 'float'
# Test all powers of 2 from 0 to 10,000
>>> all(is_power_of_two(int(2 ** i)) for i in range(10000))
True
"""
if number < 0:
raise ValueError("number must not be negative")
return number & (number - 1) == 0
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Author : Alexander Pantyukhin
Date : November 1, 2022
Task:
Given a positive int number. Return True if this number is power of 2
or False otherwise.
Implementation notes: Use bit manipulation.
For example if the number is the power of two it's bits representation:
n = 0..100..00
n - 1 = 0..011..11
n & (n - 1) - no intersections = 0
"""
def is_power_of_two(number: int) -> bool:
"""
Return True if this number is power of 2 or False otherwise.
>>> is_power_of_two(0)
True
>>> is_power_of_two(1)
True
>>> is_power_of_two(2)
True
>>> is_power_of_two(4)
True
>>> is_power_of_two(6)
False
>>> is_power_of_two(8)
True
>>> is_power_of_two(17)
False
>>> is_power_of_two(-1)
Traceback (most recent call last):
...
ValueError: number must not be negative
>>> is_power_of_two(1.2)
Traceback (most recent call last):
...
TypeError: unsupported operand type(s) for &: 'float' and 'float'
# Test all powers of 2 from 0 to 10,000
>>> all(is_power_of_two(int(2 ** i)) for i in range(10000))
True
"""
if number < 0:
raise ValueError("number must not be negative")
return number & (number - 1) == 0
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from __future__ import annotations
import requests
def get_hackernews_story(story_id: str) -> dict:
url = f"https://hacker-news.firebaseio.com/v0/item/{story_id}.json?print=pretty"
return requests.get(url).json()
def hackernews_top_stories(max_stories: int = 10) -> list[dict]:
"""
Get the top max_stories posts from HackerNews - https://news.ycombinator.com/
"""
url = "https://hacker-news.firebaseio.com/v0/topstories.json?print=pretty"
story_ids = requests.get(url).json()[:max_stories]
return [get_hackernews_story(story_id) for story_id in story_ids]
def hackernews_top_stories_as_markdown(max_stories: int = 10) -> str:
stories = hackernews_top_stories(max_stories)
return "\n".join("* [{title}]({url})".format(**story) for story in stories)
if __name__ == "__main__":
print(hackernews_top_stories_as_markdown())
| from __future__ import annotations
import requests
def get_hackernews_story(story_id: str) -> dict:
url = f"https://hacker-news.firebaseio.com/v0/item/{story_id}.json?print=pretty"
return requests.get(url).json()
def hackernews_top_stories(max_stories: int = 10) -> list[dict]:
"""
Get the top max_stories posts from HackerNews - https://news.ycombinator.com/
"""
url = "https://hacker-news.firebaseio.com/v0/topstories.json?print=pretty"
story_ids = requests.get(url).json()[:max_stories]
return [get_hackernews_story(story_id) for story_id in story_ids]
def hackernews_top_stories_as_markdown(max_stories: int = 10) -> str:
stories = hackernews_top_stories(max_stories)
return "\n".join("* [{title}]({url})".format(**story) for story in stories)
if __name__ == "__main__":
print(hackernews_top_stories_as_markdown())
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Conversion of pressure units.
Available Units:- Pascal,Bar,Kilopascal,Megapascal,psi(pound per square inch),
inHg(in mercury column),torr,atm
USAGE :
-> Import this file into their respective project.
-> Use the function pressure_conversion() for conversion of pressure units.
-> Parameters :
-> value : The number of from units you want to convert
-> from_type : From which type you want to convert
-> to_type : To which type you want to convert
REFERENCES :
-> Wikipedia reference: https://en.wikipedia.org/wiki/Pascal_(unit)
-> Wikipedia reference: https://en.wikipedia.org/wiki/Pound_per_square_inch
-> Wikipedia reference: https://en.wikipedia.org/wiki/Inch_of_mercury
-> Wikipedia reference: https://en.wikipedia.org/wiki/Torr
-> https://en.wikipedia.org/wiki/Standard_atmosphere_(unit)
-> https://msestudent.com/what-are-the-units-of-pressure/
-> https://www.unitconverters.net/pressure-converter.html
"""
from collections import namedtuple
from_to = namedtuple("from_to", "from_ to")
PRESSURE_CONVERSION = {
"atm": from_to(1, 1),
"pascal": from_to(0.0000098, 101325),
"bar": from_to(0.986923, 1.01325),
"kilopascal": from_to(0.00986923, 101.325),
"megapascal": from_to(9.86923, 0.101325),
"psi": from_to(0.068046, 14.6959),
"inHg": from_to(0.0334211, 29.9213),
"torr": from_to(0.00131579, 760),
}
def pressure_conversion(value: float, from_type: str, to_type: str) -> float:
"""
Conversion between pressure units.
>>> pressure_conversion(4, "atm", "pascal")
405300
>>> pressure_conversion(1, "pascal", "psi")
0.00014401981999999998
>>> pressure_conversion(1, "bar", "atm")
0.986923
>>> pressure_conversion(3, "kilopascal", "bar")
0.029999991892499998
>>> pressure_conversion(2, "megapascal", "psi")
290.074434314
>>> pressure_conversion(4, "psi", "torr")
206.85984
>>> pressure_conversion(1, "inHg", "atm")
0.0334211
>>> pressure_conversion(1, "torr", "psi")
0.019336718261000002
>>> pressure_conversion(4, "wrongUnit", "atm")
Traceback (most recent call last):
...
ValueError: Invalid 'from_type' value: 'wrongUnit' Supported values are:
atm, pascal, bar, kilopascal, megapascal, psi, inHg, torr
"""
if from_type not in PRESSURE_CONVERSION:
raise ValueError(
f"Invalid 'from_type' value: {from_type!r} Supported values are:\n"
+ ", ".join(PRESSURE_CONVERSION)
)
if to_type not in PRESSURE_CONVERSION:
raise ValueError(
f"Invalid 'to_type' value: {to_type!r}. Supported values are:\n"
+ ", ".join(PRESSURE_CONVERSION)
)
return (
value * PRESSURE_CONVERSION[from_type].from_ * PRESSURE_CONVERSION[to_type].to
)
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Conversion of pressure units.
Available Units:- Pascal,Bar,Kilopascal,Megapascal,psi(pound per square inch),
inHg(in mercury column),torr,atm
USAGE :
-> Import this file into their respective project.
-> Use the function pressure_conversion() for conversion of pressure units.
-> Parameters :
-> value : The number of from units you want to convert
-> from_type : From which type you want to convert
-> to_type : To which type you want to convert
REFERENCES :
-> Wikipedia reference: https://en.wikipedia.org/wiki/Pascal_(unit)
-> Wikipedia reference: https://en.wikipedia.org/wiki/Pound_per_square_inch
-> Wikipedia reference: https://en.wikipedia.org/wiki/Inch_of_mercury
-> Wikipedia reference: https://en.wikipedia.org/wiki/Torr
-> https://en.wikipedia.org/wiki/Standard_atmosphere_(unit)
-> https://msestudent.com/what-are-the-units-of-pressure/
-> https://www.unitconverters.net/pressure-converter.html
"""
from collections import namedtuple
from_to = namedtuple("from_to", "from_ to")
PRESSURE_CONVERSION = {
"atm": from_to(1, 1),
"pascal": from_to(0.0000098, 101325),
"bar": from_to(0.986923, 1.01325),
"kilopascal": from_to(0.00986923, 101.325),
"megapascal": from_to(9.86923, 0.101325),
"psi": from_to(0.068046, 14.6959),
"inHg": from_to(0.0334211, 29.9213),
"torr": from_to(0.00131579, 760),
}
def pressure_conversion(value: float, from_type: str, to_type: str) -> float:
"""
Conversion between pressure units.
>>> pressure_conversion(4, "atm", "pascal")
405300
>>> pressure_conversion(1, "pascal", "psi")
0.00014401981999999998
>>> pressure_conversion(1, "bar", "atm")
0.986923
>>> pressure_conversion(3, "kilopascal", "bar")
0.029999991892499998
>>> pressure_conversion(2, "megapascal", "psi")
290.074434314
>>> pressure_conversion(4, "psi", "torr")
206.85984
>>> pressure_conversion(1, "inHg", "atm")
0.0334211
>>> pressure_conversion(1, "torr", "psi")
0.019336718261000002
>>> pressure_conversion(4, "wrongUnit", "atm")
Traceback (most recent call last):
...
ValueError: Invalid 'from_type' value: 'wrongUnit' Supported values are:
atm, pascal, bar, kilopascal, megapascal, psi, inHg, torr
"""
if from_type not in PRESSURE_CONVERSION:
raise ValueError(
f"Invalid 'from_type' value: {from_type!r} Supported values are:\n"
+ ", ".join(PRESSURE_CONVERSION)
)
if to_type not in PRESSURE_CONVERSION:
raise ValueError(
f"Invalid 'to_type' value: {to_type!r}. Supported values are:\n"
+ ", ".join(PRESSURE_CONVERSION)
)
return (
value * PRESSURE_CONVERSION[from_type].from_ * PRESSURE_CONVERSION[to_type].to
)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Pandigital prime
Problem 41: https://projecteuler.net/problem=41
We shall say that an n-digit number is pandigital if it makes use of all the digits
1 to n exactly once. For example, 2143 is a 4-digit pandigital and is also prime.
What is the largest n-digit pandigital prime that exists?
All pandigital numbers except for 1, 4 ,7 pandigital numbers are divisible by 3.
So we will check only 7 digit pandigital numbers to obtain the largest possible
pandigital prime.
"""
from __future__ import annotations
import math
from itertools import permutations
def is_prime(number: int) -> bool:
"""Checks to see if a number is a prime in O(sqrt(n)).
A number is prime if it has exactly two factors: 1 and itself.
>>> is_prime(0)
False
>>> is_prime(1)
False
>>> is_prime(2)
True
>>> is_prime(3)
True
>>> is_prime(27)
False
>>> is_prime(87)
False
>>> is_prime(563)
True
>>> is_prime(2999)
True
>>> is_prime(67483)
False
"""
if 1 < number < 4:
# 2 and 3 are primes
return True
elif number < 2 or number % 2 == 0 or number % 3 == 0:
# Negatives, 0, 1, all even numbers, all multiples of 3 are not primes
return False
# All primes number are in format of 6k +/- 1
for i in range(5, int(math.sqrt(number) + 1), 6):
if number % i == 0 or number % (i + 2) == 0:
return False
return True
def solution(n: int = 7) -> int:
"""
Returns the maximum pandigital prime number of length n.
If there are none, then it will return 0.
>>> solution(2)
0
>>> solution(4)
4231
>>> solution(7)
7652413
"""
pandigital_str = "".join(str(i) for i in range(1, n + 1))
perm_list = [int("".join(i)) for i in permutations(pandigital_str, n)]
pandigitals = [num for num in perm_list if is_prime(num)]
return max(pandigitals) if pandigitals else 0
if __name__ == "__main__":
print(f"{solution() = }")
| """
Pandigital prime
Problem 41: https://projecteuler.net/problem=41
We shall say that an n-digit number is pandigital if it makes use of all the digits
1 to n exactly once. For example, 2143 is a 4-digit pandigital and is also prime.
What is the largest n-digit pandigital prime that exists?
All pandigital numbers except for 1, 4 ,7 pandigital numbers are divisible by 3.
So we will check only 7 digit pandigital numbers to obtain the largest possible
pandigital prime.
"""
from __future__ import annotations
import math
from itertools import permutations
def is_prime(number: int) -> bool:
"""Checks to see if a number is a prime in O(sqrt(n)).
A number is prime if it has exactly two factors: 1 and itself.
>>> is_prime(0)
False
>>> is_prime(1)
False
>>> is_prime(2)
True
>>> is_prime(3)
True
>>> is_prime(27)
False
>>> is_prime(87)
False
>>> is_prime(563)
True
>>> is_prime(2999)
True
>>> is_prime(67483)
False
"""
if 1 < number < 4:
# 2 and 3 are primes
return True
elif number < 2 or number % 2 == 0 or number % 3 == 0:
# Negatives, 0, 1, all even numbers, all multiples of 3 are not primes
return False
# All primes number are in format of 6k +/- 1
for i in range(5, int(math.sqrt(number) + 1), 6):
if number % i == 0 or number % (i + 2) == 0:
return False
return True
def solution(n: int = 7) -> int:
"""
Returns the maximum pandigital prime number of length n.
If there are none, then it will return 0.
>>> solution(2)
0
>>> solution(4)
4231
>>> solution(7)
7652413
"""
pandigital_str = "".join(str(i) for i in range(1, n + 1))
perm_list = [int("".join(i)) for i in permutations(pandigital_str, n)]
pandigitals = [num for num in perm_list if is_prime(num)]
return max(pandigitals) if pandigitals else 0
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Problem 120 Square remainders: https://projecteuler.net/problem=120
Description:
Let r be the remainder when (a−1)^n + (a+1)^n is divided by a^2.
For example, if a = 7 and n = 3, then r = 42: 6^3 + 8^3 = 728 ≡ 42 mod 49.
And as n varies, so too will r, but for a = 7 it turns out that r_max = 42.
For 3 ≤ a ≤ 1000, find ∑ r_max.
Solution:
On expanding the terms, we get 2 if n is even and 2an if n is odd.
For maximizing the value, 2an < a*a => n <= (a - 1)/2 (integer division)
"""
def solution(n: int = 1000) -> int:
"""
Returns ∑ r_max for 3 <= a <= n as explained above
>>> solution(10)
300
>>> solution(100)
330750
>>> solution(1000)
333082500
"""
return sum(2 * a * ((a - 1) // 2) for a in range(3, n + 1))
if __name__ == "__main__":
print(solution())
| """
Problem 120 Square remainders: https://projecteuler.net/problem=120
Description:
Let r be the remainder when (a−1)^n + (a+1)^n is divided by a^2.
For example, if a = 7 and n = 3, then r = 42: 6^3 + 8^3 = 728 ≡ 42 mod 49.
And as n varies, so too will r, but for a = 7 it turns out that r_max = 42.
For 3 ≤ a ≤ 1000, find ∑ r_max.
Solution:
On expanding the terms, we get 2 if n is even and 2an if n is odd.
For maximizing the value, 2an < a*a => n <= (a - 1)/2 (integer division)
"""
def solution(n: int = 1000) -> int:
"""
Returns ∑ r_max for 3 <= a <= n as explained above
>>> solution(10)
300
>>> solution(100)
330750
>>> solution(1000)
333082500
"""
return sum(2 * a * ((a - 1) // 2) for a in range(3, n + 1))
if __name__ == "__main__":
print(solution())
| -1 |
TheAlgorithms/Python | 8,183 | change space complexity of linked list's __len__ from O(n) to O(1) | ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| amirsoroush | "2023-03-15T20:50:15Z" | "2023-04-01T06:26:43Z" | dc4f603dad22eab31892855555999b552e97e9d8 | e4d90e2d5b92fdcff558f1848843dfbe20d81035 | change space complexity of linked list's __len__ from O(n) to O(1). ### Describe your change:
Following #5315 and #5320, I was convinced that it's better to calculate `__len__` each time on demand. But current implementation has "*space*" complexity problem when dealing with a huge linked list. That temporary `tuple` is unnecessary. A one-line change using built-in `sum()` and a generator expression would solve it without breaking existing codes.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Gaussian elimination method for solving a system of linear equations.
Gaussian elimination - https://en.wikipedia.org/wiki/Gaussian_elimination
"""
import numpy as np
from numpy import float64
from numpy.typing import NDArray
def retroactive_resolution(
coefficients: NDArray[float64], vector: NDArray[float64]
) -> NDArray[float64]:
"""
This function performs a retroactive linear system resolution
for triangular matrix
Examples:
2x1 + 2x2 - 1x3 = 5 2x1 + 2x2 = -1
0x1 - 2x2 - 1x3 = -7 0x1 - 2x2 = -1
0x1 + 0x2 + 5x3 = 15
>>> gaussian_elimination([[2, 2, -1], [0, -2, -1], [0, 0, 5]], [[5], [-7], [15]])
array([[2.],
[2.],
[3.]])
>>> gaussian_elimination([[2, 2], [0, -2]], [[-1], [-1]])
array([[-1. ],
[ 0.5]])
"""
rows, columns = np.shape(coefficients)
x: NDArray[float64] = np.zeros((rows, 1), dtype=float)
for row in reversed(range(rows)):
total = 0
for col in range(row + 1, columns):
total += coefficients[row, col] * x[col]
x[row, 0] = (vector[row] - total) / coefficients[row, row]
return x
def gaussian_elimination(
coefficients: NDArray[float64], vector: NDArray[float64]
) -> NDArray[float64]:
"""
This function performs Gaussian elimination method
Examples:
1x1 - 4x2 - 2x3 = -2 1x1 + 2x2 = 5
5x1 + 2x2 - 2x3 = -3 5x1 + 2x2 = 5
1x1 - 1x2 + 0x3 = 4
>>> gaussian_elimination([[1, -4, -2], [5, 2, -2], [1, -1, 0]], [[-2], [-3], [4]])
array([[ 2.3 ],
[-1.7 ],
[ 5.55]])
>>> gaussian_elimination([[1, 2], [5, 2]], [[5], [5]])
array([[0. ],
[2.5]])
"""
# coefficients must to be a square matrix so we need to check first
rows, columns = np.shape(coefficients)
if rows != columns:
return np.array((), dtype=float)
# augmented matrix
augmented_mat: NDArray[float64] = np.concatenate((coefficients, vector), axis=1)
augmented_mat = augmented_mat.astype("float64")
# scale the matrix leaving it triangular
for row in range(rows - 1):
pivot = augmented_mat[row, row]
for col in range(row + 1, columns):
factor = augmented_mat[col, row] / pivot
augmented_mat[col, :] -= factor * augmented_mat[row, :]
x = retroactive_resolution(
augmented_mat[:, 0:columns], augmented_mat[:, columns : columns + 1]
)
return x
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Gaussian elimination method for solving a system of linear equations.
Gaussian elimination - https://en.wikipedia.org/wiki/Gaussian_elimination
"""
import numpy as np
from numpy import float64
from numpy.typing import NDArray
def retroactive_resolution(
coefficients: NDArray[float64], vector: NDArray[float64]
) -> NDArray[float64]:
"""
This function performs a retroactive linear system resolution
for triangular matrix
Examples:
2x1 + 2x2 - 1x3 = 5 2x1 + 2x2 = -1
0x1 - 2x2 - 1x3 = -7 0x1 - 2x2 = -1
0x1 + 0x2 + 5x3 = 15
>>> gaussian_elimination([[2, 2, -1], [0, -2, -1], [0, 0, 5]], [[5], [-7], [15]])
array([[2.],
[2.],
[3.]])
>>> gaussian_elimination([[2, 2], [0, -2]], [[-1], [-1]])
array([[-1. ],
[ 0.5]])
"""
rows, columns = np.shape(coefficients)
x: NDArray[float64] = np.zeros((rows, 1), dtype=float)
for row in reversed(range(rows)):
total = 0
for col in range(row + 1, columns):
total += coefficients[row, col] * x[col]
x[row, 0] = (vector[row] - total) / coefficients[row, row]
return x
def gaussian_elimination(
coefficients: NDArray[float64], vector: NDArray[float64]
) -> NDArray[float64]:
"""
This function performs Gaussian elimination method
Examples:
1x1 - 4x2 - 2x3 = -2 1x1 + 2x2 = 5
5x1 + 2x2 - 2x3 = -3 5x1 + 2x2 = 5
1x1 - 1x2 + 0x3 = 4
>>> gaussian_elimination([[1, -4, -2], [5, 2, -2], [1, -1, 0]], [[-2], [-3], [4]])
array([[ 2.3 ],
[-1.7 ],
[ 5.55]])
>>> gaussian_elimination([[1, 2], [5, 2]], [[5], [5]])
array([[0. ],
[2.5]])
"""
# coefficients must to be a square matrix so we need to check first
rows, columns = np.shape(coefficients)
if rows != columns:
return np.array((), dtype=float)
# augmented matrix
augmented_mat: NDArray[float64] = np.concatenate((coefficients, vector), axis=1)
augmented_mat = augmented_mat.astype("float64")
# scale the matrix leaving it triangular
for row in range(rows - 1):
pivot = augmented_mat[row, row]
for col in range(row + 1, columns):
factor = augmented_mat[col, row] / pivot
augmented_mat[col, :] -= factor * augmented_mat[row, :]
x = retroactive_resolution(
augmented_mat[:, 0:columns], augmented_mat[:, columns : columns + 1]
)
return x
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 8,178 | Replace bandit, flake8, isort, and pyupgrade with ruff | ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-14T00:17:59Z" | "2023-03-15T12:58:26Z" | adc3ccdabede375df5cff62c3c8f06d8a191a803 | c96241b5a5052af466894ef90c7a7c749ba872eb | Replace bandit, flake8, isort, and pyupgrade with ruff. ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
- id: check-executables-have-shebangs
- id: check-yaml
- id: end-of-file-fixer
types: [python]
- id: trailing-whitespace
- id: requirements-txt-fixer
- repo: https://github.com/MarcoGorelli/auto-walrus
rev: v0.2.2
hooks:
- id: auto-walrus
- repo: https://github.com/psf/black
rev: 23.1.0
hooks:
- id: black
- repo: https://github.com/PyCQA/isort
rev: 5.12.0
hooks:
- id: isort
args:
- --profile=black
- repo: https://github.com/tox-dev/pyproject-fmt
rev: "0.9.2"
hooks:
- id: pyproject-fmt
- repo: https://github.com/abravalheri/validate-pyproject
rev: v0.12.1
hooks:
- id: validate-pyproject
- repo: https://github.com/asottile/pyupgrade
rev: v3.3.1
hooks:
- id: pyupgrade
args:
- --py311-plus
- repo: https://github.com/charliermarsh/ruff-pre-commit
rev: v0.0.255
hooks:
- id: ruff
args:
- --ignore=E741
- repo: https://github.com/PyCQA/flake8
rev: 6.0.0
hooks:
- id: flake8 # See .flake8 for args
additional_dependencies: &flake8-plugins
- flake8-bugbear
- flake8-builtins
# - flake8-broken-line
- flake8-comprehensions
- pep8-naming
- repo: https://github.com/asottile/yesqa
rev: v1.4.0
hooks:
- id: yesqa
additional_dependencies:
*flake8-plugins
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v1.1.1
hooks:
- id: mypy
args:
- --ignore-missing-imports
- --install-types # See mirrors-mypy README.md
- --non-interactive
additional_dependencies: [types-requests]
- repo: https://github.com/codespell-project/codespell
rev: v2.2.4
hooks:
- id: codespell
args:
- --ignore-words-list=3rt,ans,crate,damon,fo,followings,hist,iff,kwanza,mater,secant,som,sur,tim,zar
exclude: |
(?x)^(
ciphers/prehistoric_men.txt |
strings/dictionary.txt |
strings/words.txt |
project_euler/problem_022/p022_names.txt
)$
- repo: local
hooks:
- id: validate-filenames
name: Validate filenames
entry: ./scripts/validate_filenames.py
language: script
pass_filenames: false
| repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
- id: check-executables-have-shebangs
- id: check-toml
- id: check-yaml
- id: end-of-file-fixer
types: [python]
- id: trailing-whitespace
- id: requirements-txt-fixer
- repo: https://github.com/MarcoGorelli/auto-walrus
rev: v0.2.2
hooks:
- id: auto-walrus
- repo: https://github.com/charliermarsh/ruff-pre-commit
rev: v0.0.255
hooks:
- id: ruff
- repo: https://github.com/psf/black
rev: 23.1.0
hooks:
- id: black
- repo: https://github.com/codespell-project/codespell
rev: v2.2.4
hooks:
- id: codespell
additional_dependencies:
- tomli
- repo: https://github.com/tox-dev/pyproject-fmt
rev: "0.9.2"
hooks:
- id: pyproject-fmt
- repo: local
hooks:
- id: validate-filenames
name: Validate filenames
entry: ./scripts/validate_filenames.py
language: script
pass_filenames: false
- repo: https://github.com/abravalheri/validate-pyproject
rev: v0.12.1
hooks:
- id: validate-pyproject
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v1.1.1
hooks:
- id: mypy
args:
- --ignore-missing-imports
- --install-types # See mirrors-mypy README.md
- --non-interactive
additional_dependencies: [types-requests]
| 1 |
TheAlgorithms/Python | 8,178 | Replace bandit, flake8, isort, and pyupgrade with ruff | ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-14T00:17:59Z" | "2023-03-15T12:58:26Z" | adc3ccdabede375df5cff62c3c8f06d8a191a803 | c96241b5a5052af466894ef90c7a7c749ba872eb | Replace bandit, flake8, isort, and pyupgrade with ruff. ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
|
## Arithmetic Analysis
* [Bisection](arithmetic_analysis/bisection.py)
* [Gaussian Elimination](arithmetic_analysis/gaussian_elimination.py)
* [In Static Equilibrium](arithmetic_analysis/in_static_equilibrium.py)
* [Intersection](arithmetic_analysis/intersection.py)
* [Jacobi Iteration Method](arithmetic_analysis/jacobi_iteration_method.py)
* [Lu Decomposition](arithmetic_analysis/lu_decomposition.py)
* [Newton Forward Interpolation](arithmetic_analysis/newton_forward_interpolation.py)
* [Newton Method](arithmetic_analysis/newton_method.py)
* [Newton Raphson](arithmetic_analysis/newton_raphson.py)
* [Newton Raphson New](arithmetic_analysis/newton_raphson_new.py)
* [Secant Method](arithmetic_analysis/secant_method.py)
## Audio Filters
* [Butterworth Filter](audio_filters/butterworth_filter.py)
* [Iir Filter](audio_filters/iir_filter.py)
* [Show Response](audio_filters/show_response.py)
## Backtracking
* [All Combinations](backtracking/all_combinations.py)
* [All Permutations](backtracking/all_permutations.py)
* [All Subsequences](backtracking/all_subsequences.py)
* [Coloring](backtracking/coloring.py)
* [Combination Sum](backtracking/combination_sum.py)
* [Hamiltonian Cycle](backtracking/hamiltonian_cycle.py)
* [Knight Tour](backtracking/knight_tour.py)
* [Minimax](backtracking/minimax.py)
* [Minmax](backtracking/minmax.py)
* [N Queens](backtracking/n_queens.py)
* [N Queens Math](backtracking/n_queens_math.py)
* [Rat In Maze](backtracking/rat_in_maze.py)
* [Sudoku](backtracking/sudoku.py)
* [Sum Of Subsets](backtracking/sum_of_subsets.py)
* [Word Search](backtracking/word_search.py)
## Bit Manipulation
* [Binary And Operator](bit_manipulation/binary_and_operator.py)
* [Binary Count Setbits](bit_manipulation/binary_count_setbits.py)
* [Binary Count Trailing Zeros](bit_manipulation/binary_count_trailing_zeros.py)
* [Binary Or Operator](bit_manipulation/binary_or_operator.py)
* [Binary Shifts](bit_manipulation/binary_shifts.py)
* [Binary Twos Complement](bit_manipulation/binary_twos_complement.py)
* [Binary Xor Operator](bit_manipulation/binary_xor_operator.py)
* [Count 1S Brian Kernighan Method](bit_manipulation/count_1s_brian_kernighan_method.py)
* [Count Number Of One Bits](bit_manipulation/count_number_of_one_bits.py)
* [Gray Code Sequence](bit_manipulation/gray_code_sequence.py)
* [Highest Set Bit](bit_manipulation/highest_set_bit.py)
* [Index Of Rightmost Set Bit](bit_manipulation/index_of_rightmost_set_bit.py)
* [Is Even](bit_manipulation/is_even.py)
* [Is Power Of Two](bit_manipulation/is_power_of_two.py)
* [Numbers Different Signs](bit_manipulation/numbers_different_signs.py)
* [Reverse Bits](bit_manipulation/reverse_bits.py)
* [Single Bit Manipulation Operations](bit_manipulation/single_bit_manipulation_operations.py)
## Blockchain
* [Chinese Remainder Theorem](blockchain/chinese_remainder_theorem.py)
* [Diophantine Equation](blockchain/diophantine_equation.py)
* [Modular Division](blockchain/modular_division.py)
## Boolean Algebra
* [And Gate](boolean_algebra/and_gate.py)
* [Nand Gate](boolean_algebra/nand_gate.py)
* [Norgate](boolean_algebra/norgate.py)
* [Not Gate](boolean_algebra/not_gate.py)
* [Or Gate](boolean_algebra/or_gate.py)
* [Quine Mc Cluskey](boolean_algebra/quine_mc_cluskey.py)
* [Xnor Gate](boolean_algebra/xnor_gate.py)
* [Xor Gate](boolean_algebra/xor_gate.py)
## Cellular Automata
* [Conways Game Of Life](cellular_automata/conways_game_of_life.py)
* [Game Of Life](cellular_automata/game_of_life.py)
* [Nagel Schrekenberg](cellular_automata/nagel_schrekenberg.py)
* [One Dimensional](cellular_automata/one_dimensional.py)
## Ciphers
* [A1Z26](ciphers/a1z26.py)
* [Affine Cipher](ciphers/affine_cipher.py)
* [Atbash](ciphers/atbash.py)
* [Autokey](ciphers/autokey.py)
* [Baconian Cipher](ciphers/baconian_cipher.py)
* [Base16](ciphers/base16.py)
* [Base32](ciphers/base32.py)
* [Base64](ciphers/base64.py)
* [Base85](ciphers/base85.py)
* [Beaufort Cipher](ciphers/beaufort_cipher.py)
* [Bifid](ciphers/bifid.py)
* [Brute Force Caesar Cipher](ciphers/brute_force_caesar_cipher.py)
* [Caesar Cipher](ciphers/caesar_cipher.py)
* [Cryptomath Module](ciphers/cryptomath_module.py)
* [Decrypt Caesar With Chi Squared](ciphers/decrypt_caesar_with_chi_squared.py)
* [Deterministic Miller Rabin](ciphers/deterministic_miller_rabin.py)
* [Diffie](ciphers/diffie.py)
* [Diffie Hellman](ciphers/diffie_hellman.py)
* [Elgamal Key Generator](ciphers/elgamal_key_generator.py)
* [Enigma Machine2](ciphers/enigma_machine2.py)
* [Hill Cipher](ciphers/hill_cipher.py)
* [Mixed Keyword Cypher](ciphers/mixed_keyword_cypher.py)
* [Mono Alphabetic Ciphers](ciphers/mono_alphabetic_ciphers.py)
* [Morse Code](ciphers/morse_code.py)
* [Onepad Cipher](ciphers/onepad_cipher.py)
* [Playfair Cipher](ciphers/playfair_cipher.py)
* [Polybius](ciphers/polybius.py)
* [Porta Cipher](ciphers/porta_cipher.py)
* [Rabin Miller](ciphers/rabin_miller.py)
* [Rail Fence Cipher](ciphers/rail_fence_cipher.py)
* [Rot13](ciphers/rot13.py)
* [Rsa Cipher](ciphers/rsa_cipher.py)
* [Rsa Factorization](ciphers/rsa_factorization.py)
* [Rsa Key Generator](ciphers/rsa_key_generator.py)
* [Shuffled Shift Cipher](ciphers/shuffled_shift_cipher.py)
* [Simple Keyword Cypher](ciphers/simple_keyword_cypher.py)
* [Simple Substitution Cipher](ciphers/simple_substitution_cipher.py)
* [Trafid Cipher](ciphers/trafid_cipher.py)
* [Transposition Cipher](ciphers/transposition_cipher.py)
* [Transposition Cipher Encrypt Decrypt File](ciphers/transposition_cipher_encrypt_decrypt_file.py)
* [Vigenere Cipher](ciphers/vigenere_cipher.py)
* [Xor Cipher](ciphers/xor_cipher.py)
## Compression
* [Burrows Wheeler](compression/burrows_wheeler.py)
* [Huffman](compression/huffman.py)
* [Lempel Ziv](compression/lempel_ziv.py)
* [Lempel Ziv Decompress](compression/lempel_ziv_decompress.py)
* [Lz77](compression/lz77.py)
* [Peak Signal To Noise Ratio](compression/peak_signal_to_noise_ratio.py)
* [Run Length Encoding](compression/run_length_encoding.py)
## Computer Vision
* [Cnn Classification](computer_vision/cnn_classification.py)
* [Flip Augmentation](computer_vision/flip_augmentation.py)
* [Harris Corner](computer_vision/harris_corner.py)
* [Horn Schunck](computer_vision/horn_schunck.py)
* [Mean Threshold](computer_vision/mean_threshold.py)
* [Mosaic Augmentation](computer_vision/mosaic_augmentation.py)
* [Pooling Functions](computer_vision/pooling_functions.py)
## Conversions
* [Astronomical Length Scale Conversion](conversions/astronomical_length_scale_conversion.py)
* [Binary To Decimal](conversions/binary_to_decimal.py)
* [Binary To Hexadecimal](conversions/binary_to_hexadecimal.py)
* [Binary To Octal](conversions/binary_to_octal.py)
* [Decimal To Any](conversions/decimal_to_any.py)
* [Decimal To Binary](conversions/decimal_to_binary.py)
* [Decimal To Binary Recursion](conversions/decimal_to_binary_recursion.py)
* [Decimal To Hexadecimal](conversions/decimal_to_hexadecimal.py)
* [Decimal To Octal](conversions/decimal_to_octal.py)
* [Excel Title To Column](conversions/excel_title_to_column.py)
* [Hex To Bin](conversions/hex_to_bin.py)
* [Hexadecimal To Decimal](conversions/hexadecimal_to_decimal.py)
* [Length Conversion](conversions/length_conversion.py)
* [Molecular Chemistry](conversions/molecular_chemistry.py)
* [Octal To Decimal](conversions/octal_to_decimal.py)
* [Prefix Conversions](conversions/prefix_conversions.py)
* [Prefix Conversions String](conversions/prefix_conversions_string.py)
* [Pressure Conversions](conversions/pressure_conversions.py)
* [Rgb Hsv Conversion](conversions/rgb_hsv_conversion.py)
* [Roman Numerals](conversions/roman_numerals.py)
* [Speed Conversions](conversions/speed_conversions.py)
* [Temperature Conversions](conversions/temperature_conversions.py)
* [Volume Conversions](conversions/volume_conversions.py)
* [Weight Conversion](conversions/weight_conversion.py)
## Data Structures
* Arrays
* [Permutations](data_structures/arrays/permutations.py)
* [Prefix Sum](data_structures/arrays/prefix_sum.py)
* Binary Tree
* [Avl Tree](data_structures/binary_tree/avl_tree.py)
* [Basic Binary Tree](data_structures/binary_tree/basic_binary_tree.py)
* [Binary Search Tree](data_structures/binary_tree/binary_search_tree.py)
* [Binary Search Tree Recursive](data_structures/binary_tree/binary_search_tree_recursive.py)
* [Binary Tree Mirror](data_structures/binary_tree/binary_tree_mirror.py)
* [Binary Tree Node Sum](data_structures/binary_tree/binary_tree_node_sum.py)
* [Binary Tree Path Sum](data_structures/binary_tree/binary_tree_path_sum.py)
* [Binary Tree Traversals](data_structures/binary_tree/binary_tree_traversals.py)
* [Diff Views Of Binary Tree](data_structures/binary_tree/diff_views_of_binary_tree.py)
* [Distribute Coins](data_structures/binary_tree/distribute_coins.py)
* [Fenwick Tree](data_structures/binary_tree/fenwick_tree.py)
* [Inorder Tree Traversal 2022](data_structures/binary_tree/inorder_tree_traversal_2022.py)
* [Is Bst](data_structures/binary_tree/is_bst.py)
* [Lazy Segment Tree](data_structures/binary_tree/lazy_segment_tree.py)
* [Lowest Common Ancestor](data_structures/binary_tree/lowest_common_ancestor.py)
* [Maximum Fenwick Tree](data_structures/binary_tree/maximum_fenwick_tree.py)
* [Merge Two Binary Trees](data_structures/binary_tree/merge_two_binary_trees.py)
* [Non Recursive Segment Tree](data_structures/binary_tree/non_recursive_segment_tree.py)
* [Number Of Possible Binary Trees](data_structures/binary_tree/number_of_possible_binary_trees.py)
* [Red Black Tree](data_structures/binary_tree/red_black_tree.py)
* [Segment Tree](data_structures/binary_tree/segment_tree.py)
* [Segment Tree Other](data_structures/binary_tree/segment_tree_other.py)
* [Treap](data_structures/binary_tree/treap.py)
* [Wavelet Tree](data_structures/binary_tree/wavelet_tree.py)
* Disjoint Set
* [Alternate Disjoint Set](data_structures/disjoint_set/alternate_disjoint_set.py)
* [Disjoint Set](data_structures/disjoint_set/disjoint_set.py)
* Hashing
* [Double Hash](data_structures/hashing/double_hash.py)
* [Hash Map](data_structures/hashing/hash_map.py)
* [Hash Table](data_structures/hashing/hash_table.py)
* [Hash Table With Linked List](data_structures/hashing/hash_table_with_linked_list.py)
* Number Theory
* [Prime Numbers](data_structures/hashing/number_theory/prime_numbers.py)
* [Quadratic Probing](data_structures/hashing/quadratic_probing.py)
* Tests
* [Test Hash Map](data_structures/hashing/tests/test_hash_map.py)
* Heap
* [Binomial Heap](data_structures/heap/binomial_heap.py)
* [Heap](data_structures/heap/heap.py)
* [Heap Generic](data_structures/heap/heap_generic.py)
* [Max Heap](data_structures/heap/max_heap.py)
* [Min Heap](data_structures/heap/min_heap.py)
* [Randomized Heap](data_structures/heap/randomized_heap.py)
* [Skew Heap](data_structures/heap/skew_heap.py)
* Linked List
* [Circular Linked List](data_structures/linked_list/circular_linked_list.py)
* [Deque Doubly](data_structures/linked_list/deque_doubly.py)
* [Doubly Linked List](data_structures/linked_list/doubly_linked_list.py)
* [Doubly Linked List Two](data_structures/linked_list/doubly_linked_list_two.py)
* [From Sequence](data_structures/linked_list/from_sequence.py)
* [Has Loop](data_structures/linked_list/has_loop.py)
* [Is Palindrome](data_structures/linked_list/is_palindrome.py)
* [Merge Two Lists](data_structures/linked_list/merge_two_lists.py)
* [Middle Element Of Linked List](data_structures/linked_list/middle_element_of_linked_list.py)
* [Print Reverse](data_structures/linked_list/print_reverse.py)
* [Singly Linked List](data_structures/linked_list/singly_linked_list.py)
* [Skip List](data_structures/linked_list/skip_list.py)
* [Swap Nodes](data_structures/linked_list/swap_nodes.py)
* Queue
* [Circular Queue](data_structures/queue/circular_queue.py)
* [Circular Queue Linked List](data_structures/queue/circular_queue_linked_list.py)
* [Double Ended Queue](data_structures/queue/double_ended_queue.py)
* [Linked Queue](data_structures/queue/linked_queue.py)
* [Priority Queue Using List](data_structures/queue/priority_queue_using_list.py)
* [Queue On List](data_structures/queue/queue_on_list.py)
* [Queue On Pseudo Stack](data_structures/queue/queue_on_pseudo_stack.py)
* Stacks
* [Balanced Parentheses](data_structures/stacks/balanced_parentheses.py)
* [Dijkstras Two Stack Algorithm](data_structures/stacks/dijkstras_two_stack_algorithm.py)
* [Evaluate Postfix Notations](data_structures/stacks/evaluate_postfix_notations.py)
* [Infix To Postfix Conversion](data_structures/stacks/infix_to_postfix_conversion.py)
* [Infix To Prefix Conversion](data_structures/stacks/infix_to_prefix_conversion.py)
* [Next Greater Element](data_structures/stacks/next_greater_element.py)
* [Postfix Evaluation](data_structures/stacks/postfix_evaluation.py)
* [Prefix Evaluation](data_structures/stacks/prefix_evaluation.py)
* [Stack](data_structures/stacks/stack.py)
* [Stack With Doubly Linked List](data_structures/stacks/stack_with_doubly_linked_list.py)
* [Stack With Singly Linked List](data_structures/stacks/stack_with_singly_linked_list.py)
* [Stock Span Problem](data_structures/stacks/stock_span_problem.py)
* Trie
* [Radix Tree](data_structures/trie/radix_tree.py)
* [Trie](data_structures/trie/trie.py)
## Digital Image Processing
* [Change Brightness](digital_image_processing/change_brightness.py)
* [Change Contrast](digital_image_processing/change_contrast.py)
* [Convert To Negative](digital_image_processing/convert_to_negative.py)
* Dithering
* [Burkes](digital_image_processing/dithering/burkes.py)
* Edge Detection
* [Canny](digital_image_processing/edge_detection/canny.py)
* Filters
* [Bilateral Filter](digital_image_processing/filters/bilateral_filter.py)
* [Convolve](digital_image_processing/filters/convolve.py)
* [Gabor Filter](digital_image_processing/filters/gabor_filter.py)
* [Gaussian Filter](digital_image_processing/filters/gaussian_filter.py)
* [Local Binary Pattern](digital_image_processing/filters/local_binary_pattern.py)
* [Median Filter](digital_image_processing/filters/median_filter.py)
* [Sobel Filter](digital_image_processing/filters/sobel_filter.py)
* Histogram Equalization
* [Histogram Stretch](digital_image_processing/histogram_equalization/histogram_stretch.py)
* [Index Calculation](digital_image_processing/index_calculation.py)
* Morphological Operations
* [Dilation Operation](digital_image_processing/morphological_operations/dilation_operation.py)
* [Erosion Operation](digital_image_processing/morphological_operations/erosion_operation.py)
* Resize
* [Resize](digital_image_processing/resize/resize.py)
* Rotation
* [Rotation](digital_image_processing/rotation/rotation.py)
* [Sepia](digital_image_processing/sepia.py)
* [Test Digital Image Processing](digital_image_processing/test_digital_image_processing.py)
## Divide And Conquer
* [Closest Pair Of Points](divide_and_conquer/closest_pair_of_points.py)
* [Convex Hull](divide_and_conquer/convex_hull.py)
* [Heaps Algorithm](divide_and_conquer/heaps_algorithm.py)
* [Heaps Algorithm Iterative](divide_and_conquer/heaps_algorithm_iterative.py)
* [Inversions](divide_and_conquer/inversions.py)
* [Kth Order Statistic](divide_and_conquer/kth_order_statistic.py)
* [Max Difference Pair](divide_and_conquer/max_difference_pair.py)
* [Max Subarray Sum](divide_and_conquer/max_subarray_sum.py)
* [Mergesort](divide_and_conquer/mergesort.py)
* [Peak](divide_and_conquer/peak.py)
* [Power](divide_and_conquer/power.py)
* [Strassen Matrix Multiplication](divide_and_conquer/strassen_matrix_multiplication.py)
## Dynamic Programming
* [Abbreviation](dynamic_programming/abbreviation.py)
* [All Construct](dynamic_programming/all_construct.py)
* [Bitmask](dynamic_programming/bitmask.py)
* [Catalan Numbers](dynamic_programming/catalan_numbers.py)
* [Climbing Stairs](dynamic_programming/climbing_stairs.py)
* [Combination Sum Iv](dynamic_programming/combination_sum_iv.py)
* [Edit Distance](dynamic_programming/edit_distance.py)
* [Factorial](dynamic_programming/factorial.py)
* [Fast Fibonacci](dynamic_programming/fast_fibonacci.py)
* [Fibonacci](dynamic_programming/fibonacci.py)
* [Fizz Buzz](dynamic_programming/fizz_buzz.py)
* [Floyd Warshall](dynamic_programming/floyd_warshall.py)
* [Integer Partition](dynamic_programming/integer_partition.py)
* [Iterating Through Submasks](dynamic_programming/iterating_through_submasks.py)
* [Knapsack](dynamic_programming/knapsack.py)
* [Longest Common Subsequence](dynamic_programming/longest_common_subsequence.py)
* [Longest Common Substring](dynamic_programming/longest_common_substring.py)
* [Longest Increasing Subsequence](dynamic_programming/longest_increasing_subsequence.py)
* [Longest Increasing Subsequence O(Nlogn)](dynamic_programming/longest_increasing_subsequence_o(nlogn).py)
* [Longest Sub Array](dynamic_programming/longest_sub_array.py)
* [Matrix Chain Order](dynamic_programming/matrix_chain_order.py)
* [Max Non Adjacent Sum](dynamic_programming/max_non_adjacent_sum.py)
* [Max Sub Array](dynamic_programming/max_sub_array.py)
* [Max Sum Contiguous Subsequence](dynamic_programming/max_sum_contiguous_subsequence.py)
* [Min Distance Up Bottom](dynamic_programming/min_distance_up_bottom.py)
* [Minimum Coin Change](dynamic_programming/minimum_coin_change.py)
* [Minimum Cost Path](dynamic_programming/minimum_cost_path.py)
* [Minimum Partition](dynamic_programming/minimum_partition.py)
* [Minimum Squares To Represent A Number](dynamic_programming/minimum_squares_to_represent_a_number.py)
* [Minimum Steps To One](dynamic_programming/minimum_steps_to_one.py)
* [Minimum Tickets Cost](dynamic_programming/minimum_tickets_cost.py)
* [Optimal Binary Search Tree](dynamic_programming/optimal_binary_search_tree.py)
* [Palindrome Partitioning](dynamic_programming/palindrome_partitioning.py)
* [Rod Cutting](dynamic_programming/rod_cutting.py)
* [Subset Generation](dynamic_programming/subset_generation.py)
* [Sum Of Subset](dynamic_programming/sum_of_subset.py)
* [Viterbi](dynamic_programming/viterbi.py)
* [Word Break](dynamic_programming/word_break.py)
## Electronics
* [Builtin Voltage](electronics/builtin_voltage.py)
* [Carrier Concentration](electronics/carrier_concentration.py)
* [Circular Convolution](electronics/circular_convolution.py)
* [Coulombs Law](electronics/coulombs_law.py)
* [Electric Conductivity](electronics/electric_conductivity.py)
* [Electric Power](electronics/electric_power.py)
* [Electrical Impedance](electronics/electrical_impedance.py)
* [Ind Reactance](electronics/ind_reactance.py)
* [Ohms Law](electronics/ohms_law.py)
* [Resistor Equivalence](electronics/resistor_equivalence.py)
* [Resonant Frequency](electronics/resonant_frequency.py)
## File Transfer
* [Receive File](file_transfer/receive_file.py)
* [Send File](file_transfer/send_file.py)
* Tests
* [Test Send File](file_transfer/tests/test_send_file.py)
## Financial
* [Equated Monthly Installments](financial/equated_monthly_installments.py)
* [Interest](financial/interest.py)
* [Price Plus Tax](financial/price_plus_tax.py)
## Fractals
* [Julia Sets](fractals/julia_sets.py)
* [Koch Snowflake](fractals/koch_snowflake.py)
* [Mandelbrot](fractals/mandelbrot.py)
* [Sierpinski Triangle](fractals/sierpinski_triangle.py)
## Fuzzy Logic
* [Fuzzy Operations](fuzzy_logic/fuzzy_operations.py)
## Genetic Algorithm
* [Basic String](genetic_algorithm/basic_string.py)
## Geodesy
* [Haversine Distance](geodesy/haversine_distance.py)
* [Lamberts Ellipsoidal Distance](geodesy/lamberts_ellipsoidal_distance.py)
## Graphics
* [Bezier Curve](graphics/bezier_curve.py)
* [Vector3 For 2D Rendering](graphics/vector3_for_2d_rendering.py)
## Graphs
* [A Star](graphs/a_star.py)
* [Articulation Points](graphs/articulation_points.py)
* [Basic Graphs](graphs/basic_graphs.py)
* [Bellman Ford](graphs/bellman_ford.py)
* [Bi Directional Dijkstra](graphs/bi_directional_dijkstra.py)
* [Bidirectional A Star](graphs/bidirectional_a_star.py)
* [Bidirectional Breadth First Search](graphs/bidirectional_breadth_first_search.py)
* [Boruvka](graphs/boruvka.py)
* [Breadth First Search](graphs/breadth_first_search.py)
* [Breadth First Search 2](graphs/breadth_first_search_2.py)
* [Breadth First Search Shortest Path](graphs/breadth_first_search_shortest_path.py)
* [Breadth First Search Shortest Path 2](graphs/breadth_first_search_shortest_path_2.py)
* [Breadth First Search Zero One Shortest Path](graphs/breadth_first_search_zero_one_shortest_path.py)
* [Check Bipartite Graph Bfs](graphs/check_bipartite_graph_bfs.py)
* [Check Bipartite Graph Dfs](graphs/check_bipartite_graph_dfs.py)
* [Check Cycle](graphs/check_cycle.py)
* [Connected Components](graphs/connected_components.py)
* [Depth First Search](graphs/depth_first_search.py)
* [Depth First Search 2](graphs/depth_first_search_2.py)
* [Dijkstra](graphs/dijkstra.py)
* [Dijkstra 2](graphs/dijkstra_2.py)
* [Dijkstra Algorithm](graphs/dijkstra_algorithm.py)
* [Dijkstra Alternate](graphs/dijkstra_alternate.py)
* [Dinic](graphs/dinic.py)
* [Directed And Undirected (Weighted) Graph](graphs/directed_and_undirected_(weighted)_graph.py)
* [Edmonds Karp Multiple Source And Sink](graphs/edmonds_karp_multiple_source_and_sink.py)
* [Eulerian Path And Circuit For Undirected Graph](graphs/eulerian_path_and_circuit_for_undirected_graph.py)
* [Even Tree](graphs/even_tree.py)
* [Finding Bridges](graphs/finding_bridges.py)
* [Frequent Pattern Graph Miner](graphs/frequent_pattern_graph_miner.py)
* [G Topological Sort](graphs/g_topological_sort.py)
* [Gale Shapley Bigraph](graphs/gale_shapley_bigraph.py)
* [Graph List](graphs/graph_list.py)
* [Graph Matrix](graphs/graph_matrix.py)
* [Graphs Floyd Warshall](graphs/graphs_floyd_warshall.py)
* [Greedy Best First](graphs/greedy_best_first.py)
* [Greedy Min Vertex Cover](graphs/greedy_min_vertex_cover.py)
* [Kahns Algorithm Long](graphs/kahns_algorithm_long.py)
* [Kahns Algorithm Topo](graphs/kahns_algorithm_topo.py)
* [Karger](graphs/karger.py)
* [Markov Chain](graphs/markov_chain.py)
* [Matching Min Vertex Cover](graphs/matching_min_vertex_cover.py)
* [Minimum Path Sum](graphs/minimum_path_sum.py)
* [Minimum Spanning Tree Boruvka](graphs/minimum_spanning_tree_boruvka.py)
* [Minimum Spanning Tree Kruskal](graphs/minimum_spanning_tree_kruskal.py)
* [Minimum Spanning Tree Kruskal2](graphs/minimum_spanning_tree_kruskal2.py)
* [Minimum Spanning Tree Prims](graphs/minimum_spanning_tree_prims.py)
* [Minimum Spanning Tree Prims2](graphs/minimum_spanning_tree_prims2.py)
* [Multi Heuristic Astar](graphs/multi_heuristic_astar.py)
* [Page Rank](graphs/page_rank.py)
* [Prim](graphs/prim.py)
* [Random Graph Generator](graphs/random_graph_generator.py)
* [Scc Kosaraju](graphs/scc_kosaraju.py)
* [Strongly Connected Components](graphs/strongly_connected_components.py)
* [Tarjans Scc](graphs/tarjans_scc.py)
* Tests
* [Test Min Spanning Tree Kruskal](graphs/tests/test_min_spanning_tree_kruskal.py)
* [Test Min Spanning Tree Prim](graphs/tests/test_min_spanning_tree_prim.py)
## Greedy Methods
* [Fractional Knapsack](greedy_methods/fractional_knapsack.py)
* [Fractional Knapsack 2](greedy_methods/fractional_knapsack_2.py)
* [Optimal Merge Pattern](greedy_methods/optimal_merge_pattern.py)
## Hashes
* [Adler32](hashes/adler32.py)
* [Chaos Machine](hashes/chaos_machine.py)
* [Djb2](hashes/djb2.py)
* [Elf](hashes/elf.py)
* [Enigma Machine](hashes/enigma_machine.py)
* [Hamming Code](hashes/hamming_code.py)
* [Luhn](hashes/luhn.py)
* [Md5](hashes/md5.py)
* [Sdbm](hashes/sdbm.py)
* [Sha1](hashes/sha1.py)
* [Sha256](hashes/sha256.py)
## Knapsack
* [Greedy Knapsack](knapsack/greedy_knapsack.py)
* [Knapsack](knapsack/knapsack.py)
* [Recursive Approach Knapsack](knapsack/recursive_approach_knapsack.py)
* Tests
* [Test Greedy Knapsack](knapsack/tests/test_greedy_knapsack.py)
* [Test Knapsack](knapsack/tests/test_knapsack.py)
## Linear Algebra
* Src
* [Conjugate Gradient](linear_algebra/src/conjugate_gradient.py)
* [Lib](linear_algebra/src/lib.py)
* [Polynom For Points](linear_algebra/src/polynom_for_points.py)
* [Power Iteration](linear_algebra/src/power_iteration.py)
* [Rayleigh Quotient](linear_algebra/src/rayleigh_quotient.py)
* [Schur Complement](linear_algebra/src/schur_complement.py)
* [Test Linear Algebra](linear_algebra/src/test_linear_algebra.py)
* [Transformations 2D](linear_algebra/src/transformations_2d.py)
## Machine Learning
* [Astar](machine_learning/astar.py)
* [Data Transformations](machine_learning/data_transformations.py)
* [Decision Tree](machine_learning/decision_tree.py)
* Forecasting
* [Run](machine_learning/forecasting/run.py)
* [Gradient Descent](machine_learning/gradient_descent.py)
* [K Means Clust](machine_learning/k_means_clust.py)
* [K Nearest Neighbours](machine_learning/k_nearest_neighbours.py)
* [Knn Sklearn](machine_learning/knn_sklearn.py)
* [Linear Discriminant Analysis](machine_learning/linear_discriminant_analysis.py)
* [Linear Regression](machine_learning/linear_regression.py)
* Local Weighted Learning
* [Local Weighted Learning](machine_learning/local_weighted_learning/local_weighted_learning.py)
* [Logistic Regression](machine_learning/logistic_regression.py)
* Lstm
* [Lstm Prediction](machine_learning/lstm/lstm_prediction.py)
* [Multilayer Perceptron Classifier](machine_learning/multilayer_perceptron_classifier.py)
* [Polymonial Regression](machine_learning/polymonial_regression.py)
* [Scoring Functions](machine_learning/scoring_functions.py)
* [Self Organizing Map](machine_learning/self_organizing_map.py)
* [Sequential Minimum Optimization](machine_learning/sequential_minimum_optimization.py)
* [Similarity Search](machine_learning/similarity_search.py)
* [Support Vector Machines](machine_learning/support_vector_machines.py)
* [Word Frequency Functions](machine_learning/word_frequency_functions.py)
* [Xgboost Classifier](machine_learning/xgboost_classifier.py)
* [Xgboost Regressor](machine_learning/xgboost_regressor.py)
## Maths
* [3N Plus 1](maths/3n_plus_1.py)
* [Abs](maths/abs.py)
* [Add](maths/add.py)
* [Addition Without Arithmetic](maths/addition_without_arithmetic.py)
* [Aliquot Sum](maths/aliquot_sum.py)
* [Allocation Number](maths/allocation_number.py)
* [Arc Length](maths/arc_length.py)
* [Area](maths/area.py)
* [Area Under Curve](maths/area_under_curve.py)
* [Armstrong Numbers](maths/armstrong_numbers.py)
* [Automorphic Number](maths/automorphic_number.py)
* [Average Absolute Deviation](maths/average_absolute_deviation.py)
* [Average Mean](maths/average_mean.py)
* [Average Median](maths/average_median.py)
* [Average Mode](maths/average_mode.py)
* [Bailey Borwein Plouffe](maths/bailey_borwein_plouffe.py)
* [Basic Maths](maths/basic_maths.py)
* [Binary Exp Mod](maths/binary_exp_mod.py)
* [Binary Exponentiation](maths/binary_exponentiation.py)
* [Binary Exponentiation 2](maths/binary_exponentiation_2.py)
* [Binary Exponentiation 3](maths/binary_exponentiation_3.py)
* [Binomial Coefficient](maths/binomial_coefficient.py)
* [Binomial Distribution](maths/binomial_distribution.py)
* [Bisection](maths/bisection.py)
* [Carmichael Number](maths/carmichael_number.py)
* [Catalan Number](maths/catalan_number.py)
* [Ceil](maths/ceil.py)
* [Check Polygon](maths/check_polygon.py)
* [Chudnovsky Algorithm](maths/chudnovsky_algorithm.py)
* [Collatz Sequence](maths/collatz_sequence.py)
* [Combinations](maths/combinations.py)
* [Decimal Isolate](maths/decimal_isolate.py)
* [Decimal To Fraction](maths/decimal_to_fraction.py)
* [Dodecahedron](maths/dodecahedron.py)
* [Double Factorial Iterative](maths/double_factorial_iterative.py)
* [Double Factorial Recursive](maths/double_factorial_recursive.py)
* [Entropy](maths/entropy.py)
* [Euclidean Distance](maths/euclidean_distance.py)
* [Euclidean Gcd](maths/euclidean_gcd.py)
* [Euler Method](maths/euler_method.py)
* [Euler Modified](maths/euler_modified.py)
* [Eulers Totient](maths/eulers_totient.py)
* [Extended Euclidean Algorithm](maths/extended_euclidean_algorithm.py)
* [Factorial](maths/factorial.py)
* [Factors](maths/factors.py)
* [Fermat Little Theorem](maths/fermat_little_theorem.py)
* [Fibonacci](maths/fibonacci.py)
* [Find Max](maths/find_max.py)
* [Find Max Recursion](maths/find_max_recursion.py)
* [Find Min](maths/find_min.py)
* [Find Min Recursion](maths/find_min_recursion.py)
* [Floor](maths/floor.py)
* [Gamma](maths/gamma.py)
* [Gamma Recursive](maths/gamma_recursive.py)
* [Gaussian](maths/gaussian.py)
* [Gaussian Error Linear Unit](maths/gaussian_error_linear_unit.py)
* [Gcd Of N Numbers](maths/gcd_of_n_numbers.py)
* [Greatest Common Divisor](maths/greatest_common_divisor.py)
* [Greedy Coin Change](maths/greedy_coin_change.py)
* [Hamming Numbers](maths/hamming_numbers.py)
* [Hardy Ramanujanalgo](maths/hardy_ramanujanalgo.py)
* [Hexagonal Number](maths/hexagonal_number.py)
* [Integration By Simpson Approx](maths/integration_by_simpson_approx.py)
* [Is Ip V4 Address Valid](maths/is_ip_v4_address_valid.py)
* [Is Square Free](maths/is_square_free.py)
* [Jaccard Similarity](maths/jaccard_similarity.py)
* [Juggler Sequence](maths/juggler_sequence.py)
* [Kadanes](maths/kadanes.py)
* [Karatsuba](maths/karatsuba.py)
* [Krishnamurthy Number](maths/krishnamurthy_number.py)
* [Kth Lexicographic Permutation](maths/kth_lexicographic_permutation.py)
* [Largest Of Very Large Numbers](maths/largest_of_very_large_numbers.py)
* [Largest Subarray Sum](maths/largest_subarray_sum.py)
* [Least Common Multiple](maths/least_common_multiple.py)
* [Line Length](maths/line_length.py)
* [Liouville Lambda](maths/liouville_lambda.py)
* [Lucas Lehmer Primality Test](maths/lucas_lehmer_primality_test.py)
* [Lucas Series](maths/lucas_series.py)
* [Maclaurin Series](maths/maclaurin_series.py)
* [Manhattan Distance](maths/manhattan_distance.py)
* [Matrix Exponentiation](maths/matrix_exponentiation.py)
* [Max Sum Sliding Window](maths/max_sum_sliding_window.py)
* [Median Of Two Arrays](maths/median_of_two_arrays.py)
* [Miller Rabin](maths/miller_rabin.py)
* [Mobius Function](maths/mobius_function.py)
* [Modular Exponential](maths/modular_exponential.py)
* [Monte Carlo](maths/monte_carlo.py)
* [Monte Carlo Dice](maths/monte_carlo_dice.py)
* [Nevilles Method](maths/nevilles_method.py)
* [Newton Raphson](maths/newton_raphson.py)
* [Number Of Digits](maths/number_of_digits.py)
* [Numerical Integration](maths/numerical_integration.py)
* [Perfect Cube](maths/perfect_cube.py)
* [Perfect Number](maths/perfect_number.py)
* [Perfect Square](maths/perfect_square.py)
* [Persistence](maths/persistence.py)
* [Pi Monte Carlo Estimation](maths/pi_monte_carlo_estimation.py)
* [Points Are Collinear 3D](maths/points_are_collinear_3d.py)
* [Pollard Rho](maths/pollard_rho.py)
* [Polynomial Evaluation](maths/polynomial_evaluation.py)
* Polynomials
* [Single Indeterminate Operations](maths/polynomials/single_indeterminate_operations.py)
* [Power Using Recursion](maths/power_using_recursion.py)
* [Prime Check](maths/prime_check.py)
* [Prime Factors](maths/prime_factors.py)
* [Prime Numbers](maths/prime_numbers.py)
* [Prime Sieve Eratosthenes](maths/prime_sieve_eratosthenes.py)
* [Primelib](maths/primelib.py)
* [Print Multiplication Table](maths/print_multiplication_table.py)
* [Pronic Number](maths/pronic_number.py)
* [Proth Number](maths/proth_number.py)
* [Pythagoras](maths/pythagoras.py)
* [Qr Decomposition](maths/qr_decomposition.py)
* [Quadratic Equations Complex Numbers](maths/quadratic_equations_complex_numbers.py)
* [Radians](maths/radians.py)
* [Radix2 Fft](maths/radix2_fft.py)
* [Relu](maths/relu.py)
* [Runge Kutta](maths/runge_kutta.py)
* [Segmented Sieve](maths/segmented_sieve.py)
* Series
* [Arithmetic](maths/series/arithmetic.py)
* [Geometric](maths/series/geometric.py)
* [Geometric Series](maths/series/geometric_series.py)
* [Harmonic](maths/series/harmonic.py)
* [Harmonic Series](maths/series/harmonic_series.py)
* [Hexagonal Numbers](maths/series/hexagonal_numbers.py)
* [P Series](maths/series/p_series.py)
* [Sieve Of Eratosthenes](maths/sieve_of_eratosthenes.py)
* [Sigmoid](maths/sigmoid.py)
* [Sigmoid Linear Unit](maths/sigmoid_linear_unit.py)
* [Signum](maths/signum.py)
* [Simpson Rule](maths/simpson_rule.py)
* [Sin](maths/sin.py)
* [Sock Merchant](maths/sock_merchant.py)
* [Softmax](maths/softmax.py)
* [Square Root](maths/square_root.py)
* [Sum Of Arithmetic Series](maths/sum_of_arithmetic_series.py)
* [Sum Of Digits](maths/sum_of_digits.py)
* [Sum Of Geometric Progression](maths/sum_of_geometric_progression.py)
* [Sum Of Harmonic Series](maths/sum_of_harmonic_series.py)
* [Sumset](maths/sumset.py)
* [Sylvester Sequence](maths/sylvester_sequence.py)
* [Test Prime Check](maths/test_prime_check.py)
* [Trapezoidal Rule](maths/trapezoidal_rule.py)
* [Triplet Sum](maths/triplet_sum.py)
* [Twin Prime](maths/twin_prime.py)
* [Two Pointer](maths/two_pointer.py)
* [Two Sum](maths/two_sum.py)
* [Ugly Numbers](maths/ugly_numbers.py)
* [Volume](maths/volume.py)
* [Weird Number](maths/weird_number.py)
* [Zellers Congruence](maths/zellers_congruence.py)
## Matrix
* [Binary Search Matrix](matrix/binary_search_matrix.py)
* [Count Islands In Matrix](matrix/count_islands_in_matrix.py)
* [Count Paths](matrix/count_paths.py)
* [Cramers Rule 2X2](matrix/cramers_rule_2x2.py)
* [Inverse Of Matrix](matrix/inverse_of_matrix.py)
* [Largest Square Area In Matrix](matrix/largest_square_area_in_matrix.py)
* [Matrix Class](matrix/matrix_class.py)
* [Matrix Operation](matrix/matrix_operation.py)
* [Max Area Of Island](matrix/max_area_of_island.py)
* [Nth Fibonacci Using Matrix Exponentiation](matrix/nth_fibonacci_using_matrix_exponentiation.py)
* [Pascal Triangle](matrix/pascal_triangle.py)
* [Rotate Matrix](matrix/rotate_matrix.py)
* [Searching In Sorted Matrix](matrix/searching_in_sorted_matrix.py)
* [Sherman Morrison](matrix/sherman_morrison.py)
* [Spiral Print](matrix/spiral_print.py)
* Tests
* [Test Matrix Operation](matrix/tests/test_matrix_operation.py)
## Networking Flow
* [Ford Fulkerson](networking_flow/ford_fulkerson.py)
* [Minimum Cut](networking_flow/minimum_cut.py)
## Neural Network
* [2 Hidden Layers Neural Network](neural_network/2_hidden_layers_neural_network.py)
* [Back Propagation Neural Network](neural_network/back_propagation_neural_network.py)
* [Convolution Neural Network](neural_network/convolution_neural_network.py)
* [Perceptron](neural_network/perceptron.py)
* [Simple Neural Network](neural_network/simple_neural_network.py)
## Other
* [Activity Selection](other/activity_selection.py)
* [Alternative List Arrange](other/alternative_list_arrange.py)
* [Davisb Putnamb Logemannb Loveland](other/davisb_putnamb_logemannb_loveland.py)
* [Dijkstra Bankers Algorithm](other/dijkstra_bankers_algorithm.py)
* [Doomsday](other/doomsday.py)
* [Fischer Yates Shuffle](other/fischer_yates_shuffle.py)
* [Gauss Easter](other/gauss_easter.py)
* [Graham Scan](other/graham_scan.py)
* [Greedy](other/greedy.py)
* [Least Recently Used](other/least_recently_used.py)
* [Lfu Cache](other/lfu_cache.py)
* [Linear Congruential Generator](other/linear_congruential_generator.py)
* [Lru Cache](other/lru_cache.py)
* [Magicdiamondpattern](other/magicdiamondpattern.py)
* [Maximum Subarray](other/maximum_subarray.py)
* [Nested Brackets](other/nested_brackets.py)
* [Password](other/password.py)
* [Quine](other/quine.py)
* [Scoring Algorithm](other/scoring_algorithm.py)
* [Sdes](other/sdes.py)
* [Tower Of Hanoi](other/tower_of_hanoi.py)
## Physics
* [Archimedes Principle](physics/archimedes_principle.py)
* [Casimir Effect](physics/casimir_effect.py)
* [Centripetal Force](physics/centripetal_force.py)
* [Horizontal Projectile Motion](physics/horizontal_projectile_motion.py)
* [Hubble Parameter](physics/hubble_parameter.py)
* [Ideal Gas Law](physics/ideal_gas_law.py)
* [Kinetic Energy](physics/kinetic_energy.py)
* [Lorentz Transformation Four Vector](physics/lorentz_transformation_four_vector.py)
* [Malus Law](physics/malus_law.py)
* [N Body Simulation](physics/n_body_simulation.py)
* [Newtons Law Of Gravitation](physics/newtons_law_of_gravitation.py)
* [Newtons Second Law Of Motion](physics/newtons_second_law_of_motion.py)
* [Potential Energy](physics/potential_energy.py)
* [Rms Speed Of Molecule](physics/rms_speed_of_molecule.py)
* [Shear Stress](physics/shear_stress.py)
## Project Euler
* Problem 001
* [Sol1](project_euler/problem_001/sol1.py)
* [Sol2](project_euler/problem_001/sol2.py)
* [Sol3](project_euler/problem_001/sol3.py)
* [Sol4](project_euler/problem_001/sol4.py)
* [Sol5](project_euler/problem_001/sol5.py)
* [Sol6](project_euler/problem_001/sol6.py)
* [Sol7](project_euler/problem_001/sol7.py)
* Problem 002
* [Sol1](project_euler/problem_002/sol1.py)
* [Sol2](project_euler/problem_002/sol2.py)
* [Sol3](project_euler/problem_002/sol3.py)
* [Sol4](project_euler/problem_002/sol4.py)
* [Sol5](project_euler/problem_002/sol5.py)
* Problem 003
* [Sol1](project_euler/problem_003/sol1.py)
* [Sol2](project_euler/problem_003/sol2.py)
* [Sol3](project_euler/problem_003/sol3.py)
* Problem 004
* [Sol1](project_euler/problem_004/sol1.py)
* [Sol2](project_euler/problem_004/sol2.py)
* Problem 005
* [Sol1](project_euler/problem_005/sol1.py)
* [Sol2](project_euler/problem_005/sol2.py)
* Problem 006
* [Sol1](project_euler/problem_006/sol1.py)
* [Sol2](project_euler/problem_006/sol2.py)
* [Sol3](project_euler/problem_006/sol3.py)
* [Sol4](project_euler/problem_006/sol4.py)
* Problem 007
* [Sol1](project_euler/problem_007/sol1.py)
* [Sol2](project_euler/problem_007/sol2.py)
* [Sol3](project_euler/problem_007/sol3.py)
* Problem 008
* [Sol1](project_euler/problem_008/sol1.py)
* [Sol2](project_euler/problem_008/sol2.py)
* [Sol3](project_euler/problem_008/sol3.py)
* Problem 009
* [Sol1](project_euler/problem_009/sol1.py)
* [Sol2](project_euler/problem_009/sol2.py)
* [Sol3](project_euler/problem_009/sol3.py)
* Problem 010
* [Sol1](project_euler/problem_010/sol1.py)
* [Sol2](project_euler/problem_010/sol2.py)
* [Sol3](project_euler/problem_010/sol3.py)
* Problem 011
* [Sol1](project_euler/problem_011/sol1.py)
* [Sol2](project_euler/problem_011/sol2.py)
* Problem 012
* [Sol1](project_euler/problem_012/sol1.py)
* [Sol2](project_euler/problem_012/sol2.py)
* Problem 013
* [Sol1](project_euler/problem_013/sol1.py)
* Problem 014
* [Sol1](project_euler/problem_014/sol1.py)
* [Sol2](project_euler/problem_014/sol2.py)
* Problem 015
* [Sol1](project_euler/problem_015/sol1.py)
* Problem 016
* [Sol1](project_euler/problem_016/sol1.py)
* [Sol2](project_euler/problem_016/sol2.py)
* Problem 017
* [Sol1](project_euler/problem_017/sol1.py)
* Problem 018
* [Solution](project_euler/problem_018/solution.py)
* Problem 019
* [Sol1](project_euler/problem_019/sol1.py)
* Problem 020
* [Sol1](project_euler/problem_020/sol1.py)
* [Sol2](project_euler/problem_020/sol2.py)
* [Sol3](project_euler/problem_020/sol3.py)
* [Sol4](project_euler/problem_020/sol4.py)
* Problem 021
* [Sol1](project_euler/problem_021/sol1.py)
* Problem 022
* [Sol1](project_euler/problem_022/sol1.py)
* [Sol2](project_euler/problem_022/sol2.py)
* Problem 023
* [Sol1](project_euler/problem_023/sol1.py)
* Problem 024
* [Sol1](project_euler/problem_024/sol1.py)
* Problem 025
* [Sol1](project_euler/problem_025/sol1.py)
* [Sol2](project_euler/problem_025/sol2.py)
* [Sol3](project_euler/problem_025/sol3.py)
* Problem 026
* [Sol1](project_euler/problem_026/sol1.py)
* Problem 027
* [Sol1](project_euler/problem_027/sol1.py)
* Problem 028
* [Sol1](project_euler/problem_028/sol1.py)
* Problem 029
* [Sol1](project_euler/problem_029/sol1.py)
* Problem 030
* [Sol1](project_euler/problem_030/sol1.py)
* Problem 031
* [Sol1](project_euler/problem_031/sol1.py)
* [Sol2](project_euler/problem_031/sol2.py)
* Problem 032
* [Sol32](project_euler/problem_032/sol32.py)
* Problem 033
* [Sol1](project_euler/problem_033/sol1.py)
* Problem 034
* [Sol1](project_euler/problem_034/sol1.py)
* Problem 035
* [Sol1](project_euler/problem_035/sol1.py)
* Problem 036
* [Sol1](project_euler/problem_036/sol1.py)
* Problem 037
* [Sol1](project_euler/problem_037/sol1.py)
* Problem 038
* [Sol1](project_euler/problem_038/sol1.py)
* Problem 039
* [Sol1](project_euler/problem_039/sol1.py)
* Problem 040
* [Sol1](project_euler/problem_040/sol1.py)
* Problem 041
* [Sol1](project_euler/problem_041/sol1.py)
* Problem 042
* [Solution42](project_euler/problem_042/solution42.py)
* Problem 043
* [Sol1](project_euler/problem_043/sol1.py)
* Problem 044
* [Sol1](project_euler/problem_044/sol1.py)
* Problem 045
* [Sol1](project_euler/problem_045/sol1.py)
* Problem 046
* [Sol1](project_euler/problem_046/sol1.py)
* Problem 047
* [Sol1](project_euler/problem_047/sol1.py)
* Problem 048
* [Sol1](project_euler/problem_048/sol1.py)
* Problem 049
* [Sol1](project_euler/problem_049/sol1.py)
* Problem 050
* [Sol1](project_euler/problem_050/sol1.py)
* Problem 051
* [Sol1](project_euler/problem_051/sol1.py)
* Problem 052
* [Sol1](project_euler/problem_052/sol1.py)
* Problem 053
* [Sol1](project_euler/problem_053/sol1.py)
* Problem 054
* [Sol1](project_euler/problem_054/sol1.py)
* [Test Poker Hand](project_euler/problem_054/test_poker_hand.py)
* Problem 055
* [Sol1](project_euler/problem_055/sol1.py)
* Problem 056
* [Sol1](project_euler/problem_056/sol1.py)
* Problem 057
* [Sol1](project_euler/problem_057/sol1.py)
* Problem 058
* [Sol1](project_euler/problem_058/sol1.py)
* Problem 059
* [Sol1](project_euler/problem_059/sol1.py)
* Problem 062
* [Sol1](project_euler/problem_062/sol1.py)
* Problem 063
* [Sol1](project_euler/problem_063/sol1.py)
* Problem 064
* [Sol1](project_euler/problem_064/sol1.py)
* Problem 065
* [Sol1](project_euler/problem_065/sol1.py)
* Problem 067
* [Sol1](project_euler/problem_067/sol1.py)
* [Sol2](project_euler/problem_067/sol2.py)
* Problem 068
* [Sol1](project_euler/problem_068/sol1.py)
* Problem 069
* [Sol1](project_euler/problem_069/sol1.py)
* Problem 070
* [Sol1](project_euler/problem_070/sol1.py)
* Problem 071
* [Sol1](project_euler/problem_071/sol1.py)
* Problem 072
* [Sol1](project_euler/problem_072/sol1.py)
* [Sol2](project_euler/problem_072/sol2.py)
* Problem 073
* [Sol1](project_euler/problem_073/sol1.py)
* Problem 074
* [Sol1](project_euler/problem_074/sol1.py)
* [Sol2](project_euler/problem_074/sol2.py)
* Problem 075
* [Sol1](project_euler/problem_075/sol1.py)
* Problem 076
* [Sol1](project_euler/problem_076/sol1.py)
* Problem 077
* [Sol1](project_euler/problem_077/sol1.py)
* Problem 078
* [Sol1](project_euler/problem_078/sol1.py)
* Problem 080
* [Sol1](project_euler/problem_080/sol1.py)
* Problem 081
* [Sol1](project_euler/problem_081/sol1.py)
* Problem 082
* [Sol1](project_euler/problem_082/sol1.py)
* Problem 085
* [Sol1](project_euler/problem_085/sol1.py)
* Problem 086
* [Sol1](project_euler/problem_086/sol1.py)
* Problem 087
* [Sol1](project_euler/problem_087/sol1.py)
* Problem 089
* [Sol1](project_euler/problem_089/sol1.py)
* Problem 091
* [Sol1](project_euler/problem_091/sol1.py)
* Problem 092
* [Sol1](project_euler/problem_092/sol1.py)
* Problem 097
* [Sol1](project_euler/problem_097/sol1.py)
* Problem 099
* [Sol1](project_euler/problem_099/sol1.py)
* Problem 100
* [Sol1](project_euler/problem_100/sol1.py)
* Problem 101
* [Sol1](project_euler/problem_101/sol1.py)
* Problem 102
* [Sol1](project_euler/problem_102/sol1.py)
* Problem 104
* [Sol1](project_euler/problem_104/sol1.py)
* Problem 107
* [Sol1](project_euler/problem_107/sol1.py)
* Problem 109
* [Sol1](project_euler/problem_109/sol1.py)
* Problem 112
* [Sol1](project_euler/problem_112/sol1.py)
* Problem 113
* [Sol1](project_euler/problem_113/sol1.py)
* Problem 114
* [Sol1](project_euler/problem_114/sol1.py)
* Problem 115
* [Sol1](project_euler/problem_115/sol1.py)
* Problem 116
* [Sol1](project_euler/problem_116/sol1.py)
* Problem 117
* [Sol1](project_euler/problem_117/sol1.py)
* Problem 119
* [Sol1](project_euler/problem_119/sol1.py)
* Problem 120
* [Sol1](project_euler/problem_120/sol1.py)
* Problem 121
* [Sol1](project_euler/problem_121/sol1.py)
* Problem 123
* [Sol1](project_euler/problem_123/sol1.py)
* Problem 125
* [Sol1](project_euler/problem_125/sol1.py)
* Problem 129
* [Sol1](project_euler/problem_129/sol1.py)
* Problem 131
* [Sol1](project_euler/problem_131/sol1.py)
* Problem 135
* [Sol1](project_euler/problem_135/sol1.py)
* Problem 144
* [Sol1](project_euler/problem_144/sol1.py)
* Problem 145
* [Sol1](project_euler/problem_145/sol1.py)
* Problem 173
* [Sol1](project_euler/problem_173/sol1.py)
* Problem 174
* [Sol1](project_euler/problem_174/sol1.py)
* Problem 180
* [Sol1](project_euler/problem_180/sol1.py)
* Problem 188
* [Sol1](project_euler/problem_188/sol1.py)
* Problem 191
* [Sol1](project_euler/problem_191/sol1.py)
* Problem 203
* [Sol1](project_euler/problem_203/sol1.py)
* Problem 205
* [Sol1](project_euler/problem_205/sol1.py)
* Problem 206
* [Sol1](project_euler/problem_206/sol1.py)
* Problem 207
* [Sol1](project_euler/problem_207/sol1.py)
* Problem 234
* [Sol1](project_euler/problem_234/sol1.py)
* Problem 301
* [Sol1](project_euler/problem_301/sol1.py)
* Problem 493
* [Sol1](project_euler/problem_493/sol1.py)
* Problem 551
* [Sol1](project_euler/problem_551/sol1.py)
* Problem 587
* [Sol1](project_euler/problem_587/sol1.py)
* Problem 686
* [Sol1](project_euler/problem_686/sol1.py)
## Quantum
* [Bb84](quantum/bb84.py)
* [Deutsch Jozsa](quantum/deutsch_jozsa.py)
* [Half Adder](quantum/half_adder.py)
* [Not Gate](quantum/not_gate.py)
* [Q Fourier Transform](quantum/q_fourier_transform.py)
* [Q Full Adder](quantum/q_full_adder.py)
* [Quantum Entanglement](quantum/quantum_entanglement.py)
* [Quantum Teleportation](quantum/quantum_teleportation.py)
* [Ripple Adder Classic](quantum/ripple_adder_classic.py)
* [Single Qubit Measure](quantum/single_qubit_measure.py)
* [Superdense Coding](quantum/superdense_coding.py)
## Scheduling
* [First Come First Served](scheduling/first_come_first_served.py)
* [Highest Response Ratio Next](scheduling/highest_response_ratio_next.py)
* [Job Sequencing With Deadline](scheduling/job_sequencing_with_deadline.py)
* [Multi Level Feedback Queue](scheduling/multi_level_feedback_queue.py)
* [Non Preemptive Shortest Job First](scheduling/non_preemptive_shortest_job_first.py)
* [Round Robin](scheduling/round_robin.py)
* [Shortest Job First](scheduling/shortest_job_first.py)
## Searches
* [Binary Search](searches/binary_search.py)
* [Binary Tree Traversal](searches/binary_tree_traversal.py)
* [Double Linear Search](searches/double_linear_search.py)
* [Double Linear Search Recursion](searches/double_linear_search_recursion.py)
* [Fibonacci Search](searches/fibonacci_search.py)
* [Hill Climbing](searches/hill_climbing.py)
* [Interpolation Search](searches/interpolation_search.py)
* [Jump Search](searches/jump_search.py)
* [Linear Search](searches/linear_search.py)
* [Quick Select](searches/quick_select.py)
* [Sentinel Linear Search](searches/sentinel_linear_search.py)
* [Simple Binary Search](searches/simple_binary_search.py)
* [Simulated Annealing](searches/simulated_annealing.py)
* [Tabu Search](searches/tabu_search.py)
* [Ternary Search](searches/ternary_search.py)
## Sorts
* [Bead Sort](sorts/bead_sort.py)
* [Bitonic Sort](sorts/bitonic_sort.py)
* [Bogo Sort](sorts/bogo_sort.py)
* [Bubble Sort](sorts/bubble_sort.py)
* [Bucket Sort](sorts/bucket_sort.py)
* [Circle Sort](sorts/circle_sort.py)
* [Cocktail Shaker Sort](sorts/cocktail_shaker_sort.py)
* [Comb Sort](sorts/comb_sort.py)
* [Counting Sort](sorts/counting_sort.py)
* [Cycle Sort](sorts/cycle_sort.py)
* [Double Sort](sorts/double_sort.py)
* [Dutch National Flag Sort](sorts/dutch_national_flag_sort.py)
* [Exchange Sort](sorts/exchange_sort.py)
* [External Sort](sorts/external_sort.py)
* [Gnome Sort](sorts/gnome_sort.py)
* [Heap Sort](sorts/heap_sort.py)
* [Insertion Sort](sorts/insertion_sort.py)
* [Intro Sort](sorts/intro_sort.py)
* [Iterative Merge Sort](sorts/iterative_merge_sort.py)
* [Merge Insertion Sort](sorts/merge_insertion_sort.py)
* [Merge Sort](sorts/merge_sort.py)
* [Msd Radix Sort](sorts/msd_radix_sort.py)
* [Natural Sort](sorts/natural_sort.py)
* [Odd Even Sort](sorts/odd_even_sort.py)
* [Odd Even Transposition Parallel](sorts/odd_even_transposition_parallel.py)
* [Odd Even Transposition Single Threaded](sorts/odd_even_transposition_single_threaded.py)
* [Pancake Sort](sorts/pancake_sort.py)
* [Patience Sort](sorts/patience_sort.py)
* [Pigeon Sort](sorts/pigeon_sort.py)
* [Pigeonhole Sort](sorts/pigeonhole_sort.py)
* [Quick Sort](sorts/quick_sort.py)
* [Quick Sort 3 Partition](sorts/quick_sort_3_partition.py)
* [Radix Sort](sorts/radix_sort.py)
* [Random Normal Distribution Quicksort](sorts/random_normal_distribution_quicksort.py)
* [Random Pivot Quick Sort](sorts/random_pivot_quick_sort.py)
* [Recursive Bubble Sort](sorts/recursive_bubble_sort.py)
* [Recursive Insertion Sort](sorts/recursive_insertion_sort.py)
* [Recursive Mergesort Array](sorts/recursive_mergesort_array.py)
* [Recursive Quick Sort](sorts/recursive_quick_sort.py)
* [Selection Sort](sorts/selection_sort.py)
* [Shell Sort](sorts/shell_sort.py)
* [Shrink Shell Sort](sorts/shrink_shell_sort.py)
* [Slowsort](sorts/slowsort.py)
* [Stooge Sort](sorts/stooge_sort.py)
* [Strand Sort](sorts/strand_sort.py)
* [Tim Sort](sorts/tim_sort.py)
* [Topological Sort](sorts/topological_sort.py)
* [Tree Sort](sorts/tree_sort.py)
* [Unknown Sort](sorts/unknown_sort.py)
* [Wiggle Sort](sorts/wiggle_sort.py)
## Strings
* [Aho Corasick](strings/aho_corasick.py)
* [Alternative String Arrange](strings/alternative_string_arrange.py)
* [Anagrams](strings/anagrams.py)
* [Autocomplete Using Trie](strings/autocomplete_using_trie.py)
* [Barcode Validator](strings/barcode_validator.py)
* [Boyer Moore Search](strings/boyer_moore_search.py)
* [Can String Be Rearranged As Palindrome](strings/can_string_be_rearranged_as_palindrome.py)
* [Capitalize](strings/capitalize.py)
* [Check Anagrams](strings/check_anagrams.py)
* [Credit Card Validator](strings/credit_card_validator.py)
* [Detecting English Programmatically](strings/detecting_english_programmatically.py)
* [Dna](strings/dna.py)
* [Frequency Finder](strings/frequency_finder.py)
* [Hamming Distance](strings/hamming_distance.py)
* [Indian Phone Validator](strings/indian_phone_validator.py)
* [Is Contains Unique Chars](strings/is_contains_unique_chars.py)
* [Is Isogram](strings/is_isogram.py)
* [Is Palindrome](strings/is_palindrome.py)
* [Is Pangram](strings/is_pangram.py)
* [Is Spain National Id](strings/is_spain_national_id.py)
* [Is Srilankan Phone Number](strings/is_srilankan_phone_number.py)
* [Jaro Winkler](strings/jaro_winkler.py)
* [Join](strings/join.py)
* [Knuth Morris Pratt](strings/knuth_morris_pratt.py)
* [Levenshtein Distance](strings/levenshtein_distance.py)
* [Lower](strings/lower.py)
* [Manacher](strings/manacher.py)
* [Min Cost String Conversion](strings/min_cost_string_conversion.py)
* [Naive String Search](strings/naive_string_search.py)
* [Ngram](strings/ngram.py)
* [Palindrome](strings/palindrome.py)
* [Prefix Function](strings/prefix_function.py)
* [Rabin Karp](strings/rabin_karp.py)
* [Remove Duplicate](strings/remove_duplicate.py)
* [Reverse Letters](strings/reverse_letters.py)
* [Reverse Long Words](strings/reverse_long_words.py)
* [Reverse Words](strings/reverse_words.py)
* [Snake Case To Camel Pascal Case](strings/snake_case_to_camel_pascal_case.py)
* [Split](strings/split.py)
* [Text Justification](strings/text_justification.py)
* [Upper](strings/upper.py)
* [Wave](strings/wave.py)
* [Wildcard Pattern Matching](strings/wildcard_pattern_matching.py)
* [Word Occurrence](strings/word_occurrence.py)
* [Word Patterns](strings/word_patterns.py)
* [Z Function](strings/z_function.py)
## Web Programming
* [Co2 Emission](web_programming/co2_emission.py)
* [Convert Number To Words](web_programming/convert_number_to_words.py)
* [Covid Stats Via Xpath](web_programming/covid_stats_via_xpath.py)
* [Crawl Google Results](web_programming/crawl_google_results.py)
* [Crawl Google Scholar Citation](web_programming/crawl_google_scholar_citation.py)
* [Currency Converter](web_programming/currency_converter.py)
* [Current Stock Price](web_programming/current_stock_price.py)
* [Current Weather](web_programming/current_weather.py)
* [Daily Horoscope](web_programming/daily_horoscope.py)
* [Download Images From Google Query](web_programming/download_images_from_google_query.py)
* [Emails From Url](web_programming/emails_from_url.py)
* [Fetch Anime And Play](web_programming/fetch_anime_and_play.py)
* [Fetch Bbc News](web_programming/fetch_bbc_news.py)
* [Fetch Github Info](web_programming/fetch_github_info.py)
* [Fetch Jobs](web_programming/fetch_jobs.py)
* [Fetch Quotes](web_programming/fetch_quotes.py)
* [Fetch Well Rx Price](web_programming/fetch_well_rx_price.py)
* [Get Amazon Product Data](web_programming/get_amazon_product_data.py)
* [Get Imdb Top 250 Movies Csv](web_programming/get_imdb_top_250_movies_csv.py)
* [Get Imdbtop](web_programming/get_imdbtop.py)
* [Get Top Hn Posts](web_programming/get_top_hn_posts.py)
* [Get User Tweets](web_programming/get_user_tweets.py)
* [Giphy](web_programming/giphy.py)
* [Instagram Crawler](web_programming/instagram_crawler.py)
* [Instagram Pic](web_programming/instagram_pic.py)
* [Instagram Video](web_programming/instagram_video.py)
* [Nasa Data](web_programming/nasa_data.py)
* [Open Google Results](web_programming/open_google_results.py)
* [Random Anime Character](web_programming/random_anime_character.py)
* [Recaptcha Verification](web_programming/recaptcha_verification.py)
* [Reddit](web_programming/reddit.py)
* [Search Books By Isbn](web_programming/search_books_by_isbn.py)
* [Slack Message](web_programming/slack_message.py)
* [Test Fetch Github Info](web_programming/test_fetch_github_info.py)
* [World Covid19 Stats](web_programming/world_covid19_stats.py)
|
## Arithmetic Analysis
* [Bisection](arithmetic_analysis/bisection.py)
* [Gaussian Elimination](arithmetic_analysis/gaussian_elimination.py)
* [In Static Equilibrium](arithmetic_analysis/in_static_equilibrium.py)
* [Intersection](arithmetic_analysis/intersection.py)
* [Jacobi Iteration Method](arithmetic_analysis/jacobi_iteration_method.py)
* [Lu Decomposition](arithmetic_analysis/lu_decomposition.py)
* [Newton Forward Interpolation](arithmetic_analysis/newton_forward_interpolation.py)
* [Newton Method](arithmetic_analysis/newton_method.py)
* [Newton Raphson](arithmetic_analysis/newton_raphson.py)
* [Newton Raphson New](arithmetic_analysis/newton_raphson_new.py)
* [Secant Method](arithmetic_analysis/secant_method.py)
## Audio Filters
* [Butterworth Filter](audio_filters/butterworth_filter.py)
* [Iir Filter](audio_filters/iir_filter.py)
* [Show Response](audio_filters/show_response.py)
## Backtracking
* [All Combinations](backtracking/all_combinations.py)
* [All Permutations](backtracking/all_permutations.py)
* [All Subsequences](backtracking/all_subsequences.py)
* [Coloring](backtracking/coloring.py)
* [Combination Sum](backtracking/combination_sum.py)
* [Hamiltonian Cycle](backtracking/hamiltonian_cycle.py)
* [Knight Tour](backtracking/knight_tour.py)
* [Minimax](backtracking/minimax.py)
* [Minmax](backtracking/minmax.py)
* [N Queens](backtracking/n_queens.py)
* [N Queens Math](backtracking/n_queens_math.py)
* [Rat In Maze](backtracking/rat_in_maze.py)
* [Sudoku](backtracking/sudoku.py)
* [Sum Of Subsets](backtracking/sum_of_subsets.py)
* [Word Search](backtracking/word_search.py)
## Bit Manipulation
* [Binary And Operator](bit_manipulation/binary_and_operator.py)
* [Binary Count Setbits](bit_manipulation/binary_count_setbits.py)
* [Binary Count Trailing Zeros](bit_manipulation/binary_count_trailing_zeros.py)
* [Binary Or Operator](bit_manipulation/binary_or_operator.py)
* [Binary Shifts](bit_manipulation/binary_shifts.py)
* [Binary Twos Complement](bit_manipulation/binary_twos_complement.py)
* [Binary Xor Operator](bit_manipulation/binary_xor_operator.py)
* [Count 1S Brian Kernighan Method](bit_manipulation/count_1s_brian_kernighan_method.py)
* [Count Number Of One Bits](bit_manipulation/count_number_of_one_bits.py)
* [Gray Code Sequence](bit_manipulation/gray_code_sequence.py)
* [Highest Set Bit](bit_manipulation/highest_set_bit.py)
* [Index Of Rightmost Set Bit](bit_manipulation/index_of_rightmost_set_bit.py)
* [Is Even](bit_manipulation/is_even.py)
* [Is Power Of Two](bit_manipulation/is_power_of_two.py)
* [Numbers Different Signs](bit_manipulation/numbers_different_signs.py)
* [Reverse Bits](bit_manipulation/reverse_bits.py)
* [Single Bit Manipulation Operations](bit_manipulation/single_bit_manipulation_operations.py)
## Blockchain
* [Chinese Remainder Theorem](blockchain/chinese_remainder_theorem.py)
* [Diophantine Equation](blockchain/diophantine_equation.py)
* [Modular Division](blockchain/modular_division.py)
## Boolean Algebra
* [And Gate](boolean_algebra/and_gate.py)
* [Nand Gate](boolean_algebra/nand_gate.py)
* [Norgate](boolean_algebra/norgate.py)
* [Not Gate](boolean_algebra/not_gate.py)
* [Or Gate](boolean_algebra/or_gate.py)
* [Quine Mc Cluskey](boolean_algebra/quine_mc_cluskey.py)
* [Xnor Gate](boolean_algebra/xnor_gate.py)
* [Xor Gate](boolean_algebra/xor_gate.py)
## Cellular Automata
* [Conways Game Of Life](cellular_automata/conways_game_of_life.py)
* [Game Of Life](cellular_automata/game_of_life.py)
* [Nagel Schrekenberg](cellular_automata/nagel_schrekenberg.py)
* [One Dimensional](cellular_automata/one_dimensional.py)
## Ciphers
* [A1Z26](ciphers/a1z26.py)
* [Affine Cipher](ciphers/affine_cipher.py)
* [Atbash](ciphers/atbash.py)
* [Autokey](ciphers/autokey.py)
* [Baconian Cipher](ciphers/baconian_cipher.py)
* [Base16](ciphers/base16.py)
* [Base32](ciphers/base32.py)
* [Base64](ciphers/base64.py)
* [Base85](ciphers/base85.py)
* [Beaufort Cipher](ciphers/beaufort_cipher.py)
* [Bifid](ciphers/bifid.py)
* [Brute Force Caesar Cipher](ciphers/brute_force_caesar_cipher.py)
* [Caesar Cipher](ciphers/caesar_cipher.py)
* [Cryptomath Module](ciphers/cryptomath_module.py)
* [Decrypt Caesar With Chi Squared](ciphers/decrypt_caesar_with_chi_squared.py)
* [Deterministic Miller Rabin](ciphers/deterministic_miller_rabin.py)
* [Diffie](ciphers/diffie.py)
* [Diffie Hellman](ciphers/diffie_hellman.py)
* [Elgamal Key Generator](ciphers/elgamal_key_generator.py)
* [Enigma Machine2](ciphers/enigma_machine2.py)
* [Hill Cipher](ciphers/hill_cipher.py)
* [Mixed Keyword Cypher](ciphers/mixed_keyword_cypher.py)
* [Mono Alphabetic Ciphers](ciphers/mono_alphabetic_ciphers.py)
* [Morse Code](ciphers/morse_code.py)
* [Onepad Cipher](ciphers/onepad_cipher.py)
* [Playfair Cipher](ciphers/playfair_cipher.py)
* [Polybius](ciphers/polybius.py)
* [Porta Cipher](ciphers/porta_cipher.py)
* [Rabin Miller](ciphers/rabin_miller.py)
* [Rail Fence Cipher](ciphers/rail_fence_cipher.py)
* [Rot13](ciphers/rot13.py)
* [Rsa Cipher](ciphers/rsa_cipher.py)
* [Rsa Factorization](ciphers/rsa_factorization.py)
* [Rsa Key Generator](ciphers/rsa_key_generator.py)
* [Shuffled Shift Cipher](ciphers/shuffled_shift_cipher.py)
* [Simple Keyword Cypher](ciphers/simple_keyword_cypher.py)
* [Simple Substitution Cipher](ciphers/simple_substitution_cipher.py)
* [Trafid Cipher](ciphers/trafid_cipher.py)
* [Transposition Cipher](ciphers/transposition_cipher.py)
* [Transposition Cipher Encrypt Decrypt File](ciphers/transposition_cipher_encrypt_decrypt_file.py)
* [Vigenere Cipher](ciphers/vigenere_cipher.py)
* [Xor Cipher](ciphers/xor_cipher.py)
## Compression
* [Burrows Wheeler](compression/burrows_wheeler.py)
* [Huffman](compression/huffman.py)
* [Lempel Ziv](compression/lempel_ziv.py)
* [Lempel Ziv Decompress](compression/lempel_ziv_decompress.py)
* [Lz77](compression/lz77.py)
* [Peak Signal To Noise Ratio](compression/peak_signal_to_noise_ratio.py)
* [Run Length Encoding](compression/run_length_encoding.py)
## Computer Vision
* [Cnn Classification](computer_vision/cnn_classification.py)
* [Flip Augmentation](computer_vision/flip_augmentation.py)
* [Harris Corner](computer_vision/harris_corner.py)
* [Horn Schunck](computer_vision/horn_schunck.py)
* [Mean Threshold](computer_vision/mean_threshold.py)
* [Mosaic Augmentation](computer_vision/mosaic_augmentation.py)
* [Pooling Functions](computer_vision/pooling_functions.py)
## Conversions
* [Astronomical Length Scale Conversion](conversions/astronomical_length_scale_conversion.py)
* [Binary To Decimal](conversions/binary_to_decimal.py)
* [Binary To Hexadecimal](conversions/binary_to_hexadecimal.py)
* [Binary To Octal](conversions/binary_to_octal.py)
* [Decimal To Any](conversions/decimal_to_any.py)
* [Decimal To Binary](conversions/decimal_to_binary.py)
* [Decimal To Binary Recursion](conversions/decimal_to_binary_recursion.py)
* [Decimal To Hexadecimal](conversions/decimal_to_hexadecimal.py)
* [Decimal To Octal](conversions/decimal_to_octal.py)
* [Excel Title To Column](conversions/excel_title_to_column.py)
* [Hex To Bin](conversions/hex_to_bin.py)
* [Hexadecimal To Decimal](conversions/hexadecimal_to_decimal.py)
* [Length Conversion](conversions/length_conversion.py)
* [Molecular Chemistry](conversions/molecular_chemistry.py)
* [Octal To Decimal](conversions/octal_to_decimal.py)
* [Prefix Conversions](conversions/prefix_conversions.py)
* [Prefix Conversions String](conversions/prefix_conversions_string.py)
* [Pressure Conversions](conversions/pressure_conversions.py)
* [Rgb Hsv Conversion](conversions/rgb_hsv_conversion.py)
* [Roman Numerals](conversions/roman_numerals.py)
* [Speed Conversions](conversions/speed_conversions.py)
* [Temperature Conversions](conversions/temperature_conversions.py)
* [Volume Conversions](conversions/volume_conversions.py)
* [Weight Conversion](conversions/weight_conversion.py)
## Data Structures
* Arrays
* [Permutations](data_structures/arrays/permutations.py)
* [Prefix Sum](data_structures/arrays/prefix_sum.py)
* Binary Tree
* [Avl Tree](data_structures/binary_tree/avl_tree.py)
* [Basic Binary Tree](data_structures/binary_tree/basic_binary_tree.py)
* [Binary Search Tree](data_structures/binary_tree/binary_search_tree.py)
* [Binary Search Tree Recursive](data_structures/binary_tree/binary_search_tree_recursive.py)
* [Binary Tree Mirror](data_structures/binary_tree/binary_tree_mirror.py)
* [Binary Tree Node Sum](data_structures/binary_tree/binary_tree_node_sum.py)
* [Binary Tree Path Sum](data_structures/binary_tree/binary_tree_path_sum.py)
* [Binary Tree Traversals](data_structures/binary_tree/binary_tree_traversals.py)
* [Diff Views Of Binary Tree](data_structures/binary_tree/diff_views_of_binary_tree.py)
* [Distribute Coins](data_structures/binary_tree/distribute_coins.py)
* [Fenwick Tree](data_structures/binary_tree/fenwick_tree.py)
* [Inorder Tree Traversal 2022](data_structures/binary_tree/inorder_tree_traversal_2022.py)
* [Is Bst](data_structures/binary_tree/is_bst.py)
* [Lazy Segment Tree](data_structures/binary_tree/lazy_segment_tree.py)
* [Lowest Common Ancestor](data_structures/binary_tree/lowest_common_ancestor.py)
* [Maximum Fenwick Tree](data_structures/binary_tree/maximum_fenwick_tree.py)
* [Merge Two Binary Trees](data_structures/binary_tree/merge_two_binary_trees.py)
* [Non Recursive Segment Tree](data_structures/binary_tree/non_recursive_segment_tree.py)
* [Number Of Possible Binary Trees](data_structures/binary_tree/number_of_possible_binary_trees.py)
* [Red Black Tree](data_structures/binary_tree/red_black_tree.py)
* [Segment Tree](data_structures/binary_tree/segment_tree.py)
* [Segment Tree Other](data_structures/binary_tree/segment_tree_other.py)
* [Treap](data_structures/binary_tree/treap.py)
* [Wavelet Tree](data_structures/binary_tree/wavelet_tree.py)
* Disjoint Set
* [Alternate Disjoint Set](data_structures/disjoint_set/alternate_disjoint_set.py)
* [Disjoint Set](data_structures/disjoint_set/disjoint_set.py)
* Hashing
* [Double Hash](data_structures/hashing/double_hash.py)
* [Hash Map](data_structures/hashing/hash_map.py)
* [Hash Table](data_structures/hashing/hash_table.py)
* [Hash Table With Linked List](data_structures/hashing/hash_table_with_linked_list.py)
* Number Theory
* [Prime Numbers](data_structures/hashing/number_theory/prime_numbers.py)
* [Quadratic Probing](data_structures/hashing/quadratic_probing.py)
* Tests
* [Test Hash Map](data_structures/hashing/tests/test_hash_map.py)
* Heap
* [Binomial Heap](data_structures/heap/binomial_heap.py)
* [Heap](data_structures/heap/heap.py)
* [Heap Generic](data_structures/heap/heap_generic.py)
* [Max Heap](data_structures/heap/max_heap.py)
* [Min Heap](data_structures/heap/min_heap.py)
* [Randomized Heap](data_structures/heap/randomized_heap.py)
* [Skew Heap](data_structures/heap/skew_heap.py)
* Linked List
* [Circular Linked List](data_structures/linked_list/circular_linked_list.py)
* [Deque Doubly](data_structures/linked_list/deque_doubly.py)
* [Doubly Linked List](data_structures/linked_list/doubly_linked_list.py)
* [Doubly Linked List Two](data_structures/linked_list/doubly_linked_list_two.py)
* [From Sequence](data_structures/linked_list/from_sequence.py)
* [Has Loop](data_structures/linked_list/has_loop.py)
* [Is Palindrome](data_structures/linked_list/is_palindrome.py)
* [Merge Two Lists](data_structures/linked_list/merge_two_lists.py)
* [Middle Element Of Linked List](data_structures/linked_list/middle_element_of_linked_list.py)
* [Print Reverse](data_structures/linked_list/print_reverse.py)
* [Singly Linked List](data_structures/linked_list/singly_linked_list.py)
* [Skip List](data_structures/linked_list/skip_list.py)
* [Swap Nodes](data_structures/linked_list/swap_nodes.py)
* Queue
* [Circular Queue](data_structures/queue/circular_queue.py)
* [Circular Queue Linked List](data_structures/queue/circular_queue_linked_list.py)
* [Double Ended Queue](data_structures/queue/double_ended_queue.py)
* [Linked Queue](data_structures/queue/linked_queue.py)
* [Priority Queue Using List](data_structures/queue/priority_queue_using_list.py)
* [Queue On List](data_structures/queue/queue_on_list.py)
* [Queue On Pseudo Stack](data_structures/queue/queue_on_pseudo_stack.py)
* Stacks
* [Balanced Parentheses](data_structures/stacks/balanced_parentheses.py)
* [Dijkstras Two Stack Algorithm](data_structures/stacks/dijkstras_two_stack_algorithm.py)
* [Evaluate Postfix Notations](data_structures/stacks/evaluate_postfix_notations.py)
* [Infix To Postfix Conversion](data_structures/stacks/infix_to_postfix_conversion.py)
* [Infix To Prefix Conversion](data_structures/stacks/infix_to_prefix_conversion.py)
* [Next Greater Element](data_structures/stacks/next_greater_element.py)
* [Postfix Evaluation](data_structures/stacks/postfix_evaluation.py)
* [Prefix Evaluation](data_structures/stacks/prefix_evaluation.py)
* [Stack](data_structures/stacks/stack.py)
* [Stack With Doubly Linked List](data_structures/stacks/stack_with_doubly_linked_list.py)
* [Stack With Singly Linked List](data_structures/stacks/stack_with_singly_linked_list.py)
* [Stock Span Problem](data_structures/stacks/stock_span_problem.py)
* Trie
* [Radix Tree](data_structures/trie/radix_tree.py)
* [Trie](data_structures/trie/trie.py)
## Digital Image Processing
* [Change Brightness](digital_image_processing/change_brightness.py)
* [Change Contrast](digital_image_processing/change_contrast.py)
* [Convert To Negative](digital_image_processing/convert_to_negative.py)
* Dithering
* [Burkes](digital_image_processing/dithering/burkes.py)
* Edge Detection
* [Canny](digital_image_processing/edge_detection/canny.py)
* Filters
* [Bilateral Filter](digital_image_processing/filters/bilateral_filter.py)
* [Convolve](digital_image_processing/filters/convolve.py)
* [Gabor Filter](digital_image_processing/filters/gabor_filter.py)
* [Gaussian Filter](digital_image_processing/filters/gaussian_filter.py)
* [Local Binary Pattern](digital_image_processing/filters/local_binary_pattern.py)
* [Median Filter](digital_image_processing/filters/median_filter.py)
* [Sobel Filter](digital_image_processing/filters/sobel_filter.py)
* Histogram Equalization
* [Histogram Stretch](digital_image_processing/histogram_equalization/histogram_stretch.py)
* [Index Calculation](digital_image_processing/index_calculation.py)
* Morphological Operations
* [Dilation Operation](digital_image_processing/morphological_operations/dilation_operation.py)
* [Erosion Operation](digital_image_processing/morphological_operations/erosion_operation.py)
* Resize
* [Resize](digital_image_processing/resize/resize.py)
* Rotation
* [Rotation](digital_image_processing/rotation/rotation.py)
* [Sepia](digital_image_processing/sepia.py)
* [Test Digital Image Processing](digital_image_processing/test_digital_image_processing.py)
## Divide And Conquer
* [Closest Pair Of Points](divide_and_conquer/closest_pair_of_points.py)
* [Convex Hull](divide_and_conquer/convex_hull.py)
* [Heaps Algorithm](divide_and_conquer/heaps_algorithm.py)
* [Heaps Algorithm Iterative](divide_and_conquer/heaps_algorithm_iterative.py)
* [Inversions](divide_and_conquer/inversions.py)
* [Kth Order Statistic](divide_and_conquer/kth_order_statistic.py)
* [Max Difference Pair](divide_and_conquer/max_difference_pair.py)
* [Max Subarray Sum](divide_and_conquer/max_subarray_sum.py)
* [Mergesort](divide_and_conquer/mergesort.py)
* [Peak](divide_and_conquer/peak.py)
* [Power](divide_and_conquer/power.py)
* [Strassen Matrix Multiplication](divide_and_conquer/strassen_matrix_multiplication.py)
## Dynamic Programming
* [Abbreviation](dynamic_programming/abbreviation.py)
* [All Construct](dynamic_programming/all_construct.py)
* [Bitmask](dynamic_programming/bitmask.py)
* [Catalan Numbers](dynamic_programming/catalan_numbers.py)
* [Climbing Stairs](dynamic_programming/climbing_stairs.py)
* [Combination Sum Iv](dynamic_programming/combination_sum_iv.py)
* [Edit Distance](dynamic_programming/edit_distance.py)
* [Factorial](dynamic_programming/factorial.py)
* [Fast Fibonacci](dynamic_programming/fast_fibonacci.py)
* [Fibonacci](dynamic_programming/fibonacci.py)
* [Fizz Buzz](dynamic_programming/fizz_buzz.py)
* [Floyd Warshall](dynamic_programming/floyd_warshall.py)
* [Integer Partition](dynamic_programming/integer_partition.py)
* [Iterating Through Submasks](dynamic_programming/iterating_through_submasks.py)
* [Knapsack](dynamic_programming/knapsack.py)
* [Longest Common Subsequence](dynamic_programming/longest_common_subsequence.py)
* [Longest Common Substring](dynamic_programming/longest_common_substring.py)
* [Longest Increasing Subsequence](dynamic_programming/longest_increasing_subsequence.py)
* [Longest Increasing Subsequence O(Nlogn)](dynamic_programming/longest_increasing_subsequence_o(nlogn).py)
* [Longest Sub Array](dynamic_programming/longest_sub_array.py)
* [Matrix Chain Order](dynamic_programming/matrix_chain_order.py)
* [Max Non Adjacent Sum](dynamic_programming/max_non_adjacent_sum.py)
* [Max Sub Array](dynamic_programming/max_sub_array.py)
* [Max Sum Contiguous Subsequence](dynamic_programming/max_sum_contiguous_subsequence.py)
* [Min Distance Up Bottom](dynamic_programming/min_distance_up_bottom.py)
* [Minimum Coin Change](dynamic_programming/minimum_coin_change.py)
* [Minimum Cost Path](dynamic_programming/minimum_cost_path.py)
* [Minimum Partition](dynamic_programming/minimum_partition.py)
* [Minimum Squares To Represent A Number](dynamic_programming/minimum_squares_to_represent_a_number.py)
* [Minimum Steps To One](dynamic_programming/minimum_steps_to_one.py)
* [Minimum Tickets Cost](dynamic_programming/minimum_tickets_cost.py)
* [Optimal Binary Search Tree](dynamic_programming/optimal_binary_search_tree.py)
* [Palindrome Partitioning](dynamic_programming/palindrome_partitioning.py)
* [Rod Cutting](dynamic_programming/rod_cutting.py)
* [Subset Generation](dynamic_programming/subset_generation.py)
* [Sum Of Subset](dynamic_programming/sum_of_subset.py)
* [Viterbi](dynamic_programming/viterbi.py)
* [Word Break](dynamic_programming/word_break.py)
## Electronics
* [Builtin Voltage](electronics/builtin_voltage.py)
* [Carrier Concentration](electronics/carrier_concentration.py)
* [Circular Convolution](electronics/circular_convolution.py)
* [Coulombs Law](electronics/coulombs_law.py)
* [Electric Conductivity](electronics/electric_conductivity.py)
* [Electric Power](electronics/electric_power.py)
* [Electrical Impedance](electronics/electrical_impedance.py)
* [Ind Reactance](electronics/ind_reactance.py)
* [Ohms Law](electronics/ohms_law.py)
* [Resistor Equivalence](electronics/resistor_equivalence.py)
* [Resonant Frequency](electronics/resonant_frequency.py)
## File Transfer
* [Receive File](file_transfer/receive_file.py)
* [Send File](file_transfer/send_file.py)
* Tests
* [Test Send File](file_transfer/tests/test_send_file.py)
## Financial
* [Equated Monthly Installments](financial/equated_monthly_installments.py)
* [Interest](financial/interest.py)
* [Price Plus Tax](financial/price_plus_tax.py)
## Fractals
* [Julia Sets](fractals/julia_sets.py)
* [Koch Snowflake](fractals/koch_snowflake.py)
* [Mandelbrot](fractals/mandelbrot.py)
* [Sierpinski Triangle](fractals/sierpinski_triangle.py)
## Fuzzy Logic
* [Fuzzy Operations](fuzzy_logic/fuzzy_operations.py)
## Genetic Algorithm
* [Basic String](genetic_algorithm/basic_string.py)
## Geodesy
* [Haversine Distance](geodesy/haversine_distance.py)
* [Lamberts Ellipsoidal Distance](geodesy/lamberts_ellipsoidal_distance.py)
## Graphics
* [Bezier Curve](graphics/bezier_curve.py)
* [Vector3 For 2D Rendering](graphics/vector3_for_2d_rendering.py)
## Graphs
* [A Star](graphs/a_star.py)
* [Articulation Points](graphs/articulation_points.py)
* [Basic Graphs](graphs/basic_graphs.py)
* [Bellman Ford](graphs/bellman_ford.py)
* [Bi Directional Dijkstra](graphs/bi_directional_dijkstra.py)
* [Bidirectional A Star](graphs/bidirectional_a_star.py)
* [Bidirectional Breadth First Search](graphs/bidirectional_breadth_first_search.py)
* [Boruvka](graphs/boruvka.py)
* [Breadth First Search](graphs/breadth_first_search.py)
* [Breadth First Search 2](graphs/breadth_first_search_2.py)
* [Breadth First Search Shortest Path](graphs/breadth_first_search_shortest_path.py)
* [Breadth First Search Shortest Path 2](graphs/breadth_first_search_shortest_path_2.py)
* [Breadth First Search Zero One Shortest Path](graphs/breadth_first_search_zero_one_shortest_path.py)
* [Check Bipartite Graph Bfs](graphs/check_bipartite_graph_bfs.py)
* [Check Bipartite Graph Dfs](graphs/check_bipartite_graph_dfs.py)
* [Check Cycle](graphs/check_cycle.py)
* [Connected Components](graphs/connected_components.py)
* [Depth First Search](graphs/depth_first_search.py)
* [Depth First Search 2](graphs/depth_first_search_2.py)
* [Dijkstra](graphs/dijkstra.py)
* [Dijkstra 2](graphs/dijkstra_2.py)
* [Dijkstra Algorithm](graphs/dijkstra_algorithm.py)
* [Dijkstra Alternate](graphs/dijkstra_alternate.py)
* [Dinic](graphs/dinic.py)
* [Directed And Undirected (Weighted) Graph](graphs/directed_and_undirected_(weighted)_graph.py)
* [Edmonds Karp Multiple Source And Sink](graphs/edmonds_karp_multiple_source_and_sink.py)
* [Eulerian Path And Circuit For Undirected Graph](graphs/eulerian_path_and_circuit_for_undirected_graph.py)
* [Even Tree](graphs/even_tree.py)
* [Finding Bridges](graphs/finding_bridges.py)
* [Frequent Pattern Graph Miner](graphs/frequent_pattern_graph_miner.py)
* [G Topological Sort](graphs/g_topological_sort.py)
* [Gale Shapley Bigraph](graphs/gale_shapley_bigraph.py)
* [Graph List](graphs/graph_list.py)
* [Graph Matrix](graphs/graph_matrix.py)
* [Graphs Floyd Warshall](graphs/graphs_floyd_warshall.py)
* [Greedy Best First](graphs/greedy_best_first.py)
* [Greedy Min Vertex Cover](graphs/greedy_min_vertex_cover.py)
* [Kahns Algorithm Long](graphs/kahns_algorithm_long.py)
* [Kahns Algorithm Topo](graphs/kahns_algorithm_topo.py)
* [Karger](graphs/karger.py)
* [Markov Chain](graphs/markov_chain.py)
* [Matching Min Vertex Cover](graphs/matching_min_vertex_cover.py)
* [Minimum Path Sum](graphs/minimum_path_sum.py)
* [Minimum Spanning Tree Boruvka](graphs/minimum_spanning_tree_boruvka.py)
* [Minimum Spanning Tree Kruskal](graphs/minimum_spanning_tree_kruskal.py)
* [Minimum Spanning Tree Kruskal2](graphs/minimum_spanning_tree_kruskal2.py)
* [Minimum Spanning Tree Prims](graphs/minimum_spanning_tree_prims.py)
* [Minimum Spanning Tree Prims2](graphs/minimum_spanning_tree_prims2.py)
* [Multi Heuristic Astar](graphs/multi_heuristic_astar.py)
* [Page Rank](graphs/page_rank.py)
* [Prim](graphs/prim.py)
* [Random Graph Generator](graphs/random_graph_generator.py)
* [Scc Kosaraju](graphs/scc_kosaraju.py)
* [Strongly Connected Components](graphs/strongly_connected_components.py)
* [Tarjans Scc](graphs/tarjans_scc.py)
* Tests
* [Test Min Spanning Tree Kruskal](graphs/tests/test_min_spanning_tree_kruskal.py)
* [Test Min Spanning Tree Prim](graphs/tests/test_min_spanning_tree_prim.py)
## Greedy Methods
* [Fractional Knapsack](greedy_methods/fractional_knapsack.py)
* [Fractional Knapsack 2](greedy_methods/fractional_knapsack_2.py)
* [Optimal Merge Pattern](greedy_methods/optimal_merge_pattern.py)
## Hashes
* [Adler32](hashes/adler32.py)
* [Chaos Machine](hashes/chaos_machine.py)
* [Djb2](hashes/djb2.py)
* [Elf](hashes/elf.py)
* [Enigma Machine](hashes/enigma_machine.py)
* [Hamming Code](hashes/hamming_code.py)
* [Luhn](hashes/luhn.py)
* [Md5](hashes/md5.py)
* [Sdbm](hashes/sdbm.py)
* [Sha1](hashes/sha1.py)
* [Sha256](hashes/sha256.py)
## Knapsack
* [Greedy Knapsack](knapsack/greedy_knapsack.py)
* [Knapsack](knapsack/knapsack.py)
* [Recursive Approach Knapsack](knapsack/recursive_approach_knapsack.py)
* Tests
* [Test Greedy Knapsack](knapsack/tests/test_greedy_knapsack.py)
* [Test Knapsack](knapsack/tests/test_knapsack.py)
## Linear Algebra
* Src
* [Conjugate Gradient](linear_algebra/src/conjugate_gradient.py)
* [Lib](linear_algebra/src/lib.py)
* [Polynom For Points](linear_algebra/src/polynom_for_points.py)
* [Power Iteration](linear_algebra/src/power_iteration.py)
* [Rayleigh Quotient](linear_algebra/src/rayleigh_quotient.py)
* [Schur Complement](linear_algebra/src/schur_complement.py)
* [Test Linear Algebra](linear_algebra/src/test_linear_algebra.py)
* [Transformations 2D](linear_algebra/src/transformations_2d.py)
## Machine Learning
* [Astar](machine_learning/astar.py)
* [Data Transformations](machine_learning/data_transformations.py)
* [Decision Tree](machine_learning/decision_tree.py)
* Forecasting
* [Run](machine_learning/forecasting/run.py)
* [Gradient Descent](machine_learning/gradient_descent.py)
* [K Means Clust](machine_learning/k_means_clust.py)
* [K Nearest Neighbours](machine_learning/k_nearest_neighbours.py)
* [Knn Sklearn](machine_learning/knn_sklearn.py)
* [Linear Discriminant Analysis](machine_learning/linear_discriminant_analysis.py)
* [Linear Regression](machine_learning/linear_regression.py)
* Local Weighted Learning
* [Local Weighted Learning](machine_learning/local_weighted_learning/local_weighted_learning.py)
* [Logistic Regression](machine_learning/logistic_regression.py)
* Lstm
* [Lstm Prediction](machine_learning/lstm/lstm_prediction.py)
* [Multilayer Perceptron Classifier](machine_learning/multilayer_perceptron_classifier.py)
* [Polymonial Regression](machine_learning/polymonial_regression.py)
* [Scoring Functions](machine_learning/scoring_functions.py)
* [Self Organizing Map](machine_learning/self_organizing_map.py)
* [Sequential Minimum Optimization](machine_learning/sequential_minimum_optimization.py)
* [Similarity Search](machine_learning/similarity_search.py)
* [Support Vector Machines](machine_learning/support_vector_machines.py)
* [Word Frequency Functions](machine_learning/word_frequency_functions.py)
* [Xgboost Classifier](machine_learning/xgboost_classifier.py)
* [Xgboost Regressor](machine_learning/xgboost_regressor.py)
## Maths
* [3N Plus 1](maths/3n_plus_1.py)
* [Abs](maths/abs.py)
* [Add](maths/add.py)
* [Addition Without Arithmetic](maths/addition_without_arithmetic.py)
* [Aliquot Sum](maths/aliquot_sum.py)
* [Allocation Number](maths/allocation_number.py)
* [Arc Length](maths/arc_length.py)
* [Area](maths/area.py)
* [Area Under Curve](maths/area_under_curve.py)
* [Armstrong Numbers](maths/armstrong_numbers.py)
* [Automorphic Number](maths/automorphic_number.py)
* [Average Absolute Deviation](maths/average_absolute_deviation.py)
* [Average Mean](maths/average_mean.py)
* [Average Median](maths/average_median.py)
* [Average Mode](maths/average_mode.py)
* [Bailey Borwein Plouffe](maths/bailey_borwein_plouffe.py)
* [Basic Maths](maths/basic_maths.py)
* [Binary Exp Mod](maths/binary_exp_mod.py)
* [Binary Exponentiation](maths/binary_exponentiation.py)
* [Binary Exponentiation 2](maths/binary_exponentiation_2.py)
* [Binary Exponentiation 3](maths/binary_exponentiation_3.py)
* [Binomial Coefficient](maths/binomial_coefficient.py)
* [Binomial Distribution](maths/binomial_distribution.py)
* [Bisection](maths/bisection.py)
* [Carmichael Number](maths/carmichael_number.py)
* [Catalan Number](maths/catalan_number.py)
* [Ceil](maths/ceil.py)
* [Check Polygon](maths/check_polygon.py)
* [Chudnovsky Algorithm](maths/chudnovsky_algorithm.py)
* [Collatz Sequence](maths/collatz_sequence.py)
* [Combinations](maths/combinations.py)
* [Decimal Isolate](maths/decimal_isolate.py)
* [Decimal To Fraction](maths/decimal_to_fraction.py)
* [Dodecahedron](maths/dodecahedron.py)
* [Double Factorial Iterative](maths/double_factorial_iterative.py)
* [Double Factorial Recursive](maths/double_factorial_recursive.py)
* [Entropy](maths/entropy.py)
* [Euclidean Distance](maths/euclidean_distance.py)
* [Euclidean Gcd](maths/euclidean_gcd.py)
* [Euler Method](maths/euler_method.py)
* [Euler Modified](maths/euler_modified.py)
* [Eulers Totient](maths/eulers_totient.py)
* [Extended Euclidean Algorithm](maths/extended_euclidean_algorithm.py)
* [Factorial](maths/factorial.py)
* [Factors](maths/factors.py)
* [Fermat Little Theorem](maths/fermat_little_theorem.py)
* [Fibonacci](maths/fibonacci.py)
* [Find Max](maths/find_max.py)
* [Find Max Recursion](maths/find_max_recursion.py)
* [Find Min](maths/find_min.py)
* [Find Min Recursion](maths/find_min_recursion.py)
* [Floor](maths/floor.py)
* [Gamma](maths/gamma.py)
* [Gamma Recursive](maths/gamma_recursive.py)
* [Gaussian](maths/gaussian.py)
* [Gaussian Error Linear Unit](maths/gaussian_error_linear_unit.py)
* [Gcd Of N Numbers](maths/gcd_of_n_numbers.py)
* [Greatest Common Divisor](maths/greatest_common_divisor.py)
* [Greedy Coin Change](maths/greedy_coin_change.py)
* [Hamming Numbers](maths/hamming_numbers.py)
* [Hardy Ramanujanalgo](maths/hardy_ramanujanalgo.py)
* [Hexagonal Number](maths/hexagonal_number.py)
* [Integration By Simpson Approx](maths/integration_by_simpson_approx.py)
* [Is Ip V4 Address Valid](maths/is_ip_v4_address_valid.py)
* [Is Square Free](maths/is_square_free.py)
* [Jaccard Similarity](maths/jaccard_similarity.py)
* [Juggler Sequence](maths/juggler_sequence.py)
* [Kadanes](maths/kadanes.py)
* [Karatsuba](maths/karatsuba.py)
* [Krishnamurthy Number](maths/krishnamurthy_number.py)
* [Kth Lexicographic Permutation](maths/kth_lexicographic_permutation.py)
* [Largest Of Very Large Numbers](maths/largest_of_very_large_numbers.py)
* [Largest Subarray Sum](maths/largest_subarray_sum.py)
* [Least Common Multiple](maths/least_common_multiple.py)
* [Line Length](maths/line_length.py)
* [Liouville Lambda](maths/liouville_lambda.py)
* [Lucas Lehmer Primality Test](maths/lucas_lehmer_primality_test.py)
* [Lucas Series](maths/lucas_series.py)
* [Maclaurin Series](maths/maclaurin_series.py)
* [Manhattan Distance](maths/manhattan_distance.py)
* [Matrix Exponentiation](maths/matrix_exponentiation.py)
* [Max Sum Sliding Window](maths/max_sum_sliding_window.py)
* [Median Of Two Arrays](maths/median_of_two_arrays.py)
* [Miller Rabin](maths/miller_rabin.py)
* [Mobius Function](maths/mobius_function.py)
* [Modular Exponential](maths/modular_exponential.py)
* [Monte Carlo](maths/monte_carlo.py)
* [Monte Carlo Dice](maths/monte_carlo_dice.py)
* [Nevilles Method](maths/nevilles_method.py)
* [Newton Raphson](maths/newton_raphson.py)
* [Number Of Digits](maths/number_of_digits.py)
* [Numerical Integration](maths/numerical_integration.py)
* [Perfect Cube](maths/perfect_cube.py)
* [Perfect Number](maths/perfect_number.py)
* [Perfect Square](maths/perfect_square.py)
* [Persistence](maths/persistence.py)
* [Pi Monte Carlo Estimation](maths/pi_monte_carlo_estimation.py)
* [Points Are Collinear 3D](maths/points_are_collinear_3d.py)
* [Pollard Rho](maths/pollard_rho.py)
* [Polynomial Evaluation](maths/polynomial_evaluation.py)
* Polynomials
* [Single Indeterminate Operations](maths/polynomials/single_indeterminate_operations.py)
* [Power Using Recursion](maths/power_using_recursion.py)
* [Prime Check](maths/prime_check.py)
* [Prime Factors](maths/prime_factors.py)
* [Prime Numbers](maths/prime_numbers.py)
* [Prime Sieve Eratosthenes](maths/prime_sieve_eratosthenes.py)
* [Primelib](maths/primelib.py)
* [Print Multiplication Table](maths/print_multiplication_table.py)
* [Pronic Number](maths/pronic_number.py)
* [Proth Number](maths/proth_number.py)
* [Pythagoras](maths/pythagoras.py)
* [Qr Decomposition](maths/qr_decomposition.py)
* [Quadratic Equations Complex Numbers](maths/quadratic_equations_complex_numbers.py)
* [Radians](maths/radians.py)
* [Radix2 Fft](maths/radix2_fft.py)
* [Relu](maths/relu.py)
* [Runge Kutta](maths/runge_kutta.py)
* [Segmented Sieve](maths/segmented_sieve.py)
* Series
* [Arithmetic](maths/series/arithmetic.py)
* [Geometric](maths/series/geometric.py)
* [Geometric Series](maths/series/geometric_series.py)
* [Harmonic](maths/series/harmonic.py)
* [Harmonic Series](maths/series/harmonic_series.py)
* [Hexagonal Numbers](maths/series/hexagonal_numbers.py)
* [P Series](maths/series/p_series.py)
* [Sieve Of Eratosthenes](maths/sieve_of_eratosthenes.py)
* [Sigmoid](maths/sigmoid.py)
* [Sigmoid Linear Unit](maths/sigmoid_linear_unit.py)
* [Signum](maths/signum.py)
* [Simpson Rule](maths/simpson_rule.py)
* [Sin](maths/sin.py)
* [Sock Merchant](maths/sock_merchant.py)
* [Softmax](maths/softmax.py)
* [Square Root](maths/square_root.py)
* [Sum Of Arithmetic Series](maths/sum_of_arithmetic_series.py)
* [Sum Of Digits](maths/sum_of_digits.py)
* [Sum Of Geometric Progression](maths/sum_of_geometric_progression.py)
* [Sum Of Harmonic Series](maths/sum_of_harmonic_series.py)
* [Sumset](maths/sumset.py)
* [Sylvester Sequence](maths/sylvester_sequence.py)
* [Test Prime Check](maths/test_prime_check.py)
* [Trapezoidal Rule](maths/trapezoidal_rule.py)
* [Triplet Sum](maths/triplet_sum.py)
* [Twin Prime](maths/twin_prime.py)
* [Two Pointer](maths/two_pointer.py)
* [Two Sum](maths/two_sum.py)
* [Ugly Numbers](maths/ugly_numbers.py)
* [Volume](maths/volume.py)
* [Weird Number](maths/weird_number.py)
* [Zellers Congruence](maths/zellers_congruence.py)
## Matrix
* [Binary Search Matrix](matrix/binary_search_matrix.py)
* [Count Islands In Matrix](matrix/count_islands_in_matrix.py)
* [Count Paths](matrix/count_paths.py)
* [Cramers Rule 2X2](matrix/cramers_rule_2x2.py)
* [Inverse Of Matrix](matrix/inverse_of_matrix.py)
* [Largest Square Area In Matrix](matrix/largest_square_area_in_matrix.py)
* [Matrix Class](matrix/matrix_class.py)
* [Matrix Operation](matrix/matrix_operation.py)
* [Max Area Of Island](matrix/max_area_of_island.py)
* [Nth Fibonacci Using Matrix Exponentiation](matrix/nth_fibonacci_using_matrix_exponentiation.py)
* [Pascal Triangle](matrix/pascal_triangle.py)
* [Rotate Matrix](matrix/rotate_matrix.py)
* [Searching In Sorted Matrix](matrix/searching_in_sorted_matrix.py)
* [Sherman Morrison](matrix/sherman_morrison.py)
* [Spiral Print](matrix/spiral_print.py)
* Tests
* [Test Matrix Operation](matrix/tests/test_matrix_operation.py)
## Networking Flow
* [Ford Fulkerson](networking_flow/ford_fulkerson.py)
* [Minimum Cut](networking_flow/minimum_cut.py)
## Neural Network
* [2 Hidden Layers Neural Network](neural_network/2_hidden_layers_neural_network.py)
* [Back Propagation Neural Network](neural_network/back_propagation_neural_network.py)
* [Convolution Neural Network](neural_network/convolution_neural_network.py)
* [Perceptron](neural_network/perceptron.py)
* [Simple Neural Network](neural_network/simple_neural_network.py)
## Other
* [Activity Selection](other/activity_selection.py)
* [Alternative List Arrange](other/alternative_list_arrange.py)
* [Davisb Putnamb Logemannb Loveland](other/davisb_putnamb_logemannb_loveland.py)
* [Dijkstra Bankers Algorithm](other/dijkstra_bankers_algorithm.py)
* [Doomsday](other/doomsday.py)
* [Fischer Yates Shuffle](other/fischer_yates_shuffle.py)
* [Gauss Easter](other/gauss_easter.py)
* [Graham Scan](other/graham_scan.py)
* [Greedy](other/greedy.py)
* [Least Recently Used](other/least_recently_used.py)
* [Lfu Cache](other/lfu_cache.py)
* [Linear Congruential Generator](other/linear_congruential_generator.py)
* [Lru Cache](other/lru_cache.py)
* [Magicdiamondpattern](other/magicdiamondpattern.py)
* [Maximum Subarray](other/maximum_subarray.py)
* [Nested Brackets](other/nested_brackets.py)
* [Password](other/password.py)
* [Quine](other/quine.py)
* [Scoring Algorithm](other/scoring_algorithm.py)
* [Sdes](other/sdes.py)
* [Tower Of Hanoi](other/tower_of_hanoi.py)
## Physics
* [Archimedes Principle](physics/archimedes_principle.py)
* [Casimir Effect](physics/casimir_effect.py)
* [Centripetal Force](physics/centripetal_force.py)
* [Horizontal Projectile Motion](physics/horizontal_projectile_motion.py)
* [Hubble Parameter](physics/hubble_parameter.py)
* [Ideal Gas Law](physics/ideal_gas_law.py)
* [Kinetic Energy](physics/kinetic_energy.py)
* [Lorentz Transformation Four Vector](physics/lorentz_transformation_four_vector.py)
* [Malus Law](physics/malus_law.py)
* [N Body Simulation](physics/n_body_simulation.py)
* [Newtons Law Of Gravitation](physics/newtons_law_of_gravitation.py)
* [Newtons Second Law Of Motion](physics/newtons_second_law_of_motion.py)
* [Potential Energy](physics/potential_energy.py)
* [Rms Speed Of Molecule](physics/rms_speed_of_molecule.py)
* [Shear Stress](physics/shear_stress.py)
## Project Euler
* Problem 001
* [Sol1](project_euler/problem_001/sol1.py)
* [Sol2](project_euler/problem_001/sol2.py)
* [Sol3](project_euler/problem_001/sol3.py)
* [Sol4](project_euler/problem_001/sol4.py)
* [Sol5](project_euler/problem_001/sol5.py)
* [Sol6](project_euler/problem_001/sol6.py)
* [Sol7](project_euler/problem_001/sol7.py)
* Problem 002
* [Sol1](project_euler/problem_002/sol1.py)
* [Sol2](project_euler/problem_002/sol2.py)
* [Sol3](project_euler/problem_002/sol3.py)
* [Sol4](project_euler/problem_002/sol4.py)
* [Sol5](project_euler/problem_002/sol5.py)
* Problem 003
* [Sol1](project_euler/problem_003/sol1.py)
* [Sol2](project_euler/problem_003/sol2.py)
* [Sol3](project_euler/problem_003/sol3.py)
* Problem 004
* [Sol1](project_euler/problem_004/sol1.py)
* [Sol2](project_euler/problem_004/sol2.py)
* Problem 005
* [Sol1](project_euler/problem_005/sol1.py)
* [Sol2](project_euler/problem_005/sol2.py)
* Problem 006
* [Sol1](project_euler/problem_006/sol1.py)
* [Sol2](project_euler/problem_006/sol2.py)
* [Sol3](project_euler/problem_006/sol3.py)
* [Sol4](project_euler/problem_006/sol4.py)
* Problem 007
* [Sol1](project_euler/problem_007/sol1.py)
* [Sol2](project_euler/problem_007/sol2.py)
* [Sol3](project_euler/problem_007/sol3.py)
* Problem 008
* [Sol1](project_euler/problem_008/sol1.py)
* [Sol2](project_euler/problem_008/sol2.py)
* [Sol3](project_euler/problem_008/sol3.py)
* Problem 009
* [Sol1](project_euler/problem_009/sol1.py)
* [Sol2](project_euler/problem_009/sol2.py)
* [Sol3](project_euler/problem_009/sol3.py)
* Problem 010
* [Sol1](project_euler/problem_010/sol1.py)
* [Sol2](project_euler/problem_010/sol2.py)
* [Sol3](project_euler/problem_010/sol3.py)
* Problem 011
* [Sol1](project_euler/problem_011/sol1.py)
* [Sol2](project_euler/problem_011/sol2.py)
* Problem 012
* [Sol1](project_euler/problem_012/sol1.py)
* [Sol2](project_euler/problem_012/sol2.py)
* Problem 013
* [Sol1](project_euler/problem_013/sol1.py)
* Problem 014
* [Sol1](project_euler/problem_014/sol1.py)
* [Sol2](project_euler/problem_014/sol2.py)
* Problem 015
* [Sol1](project_euler/problem_015/sol1.py)
* Problem 016
* [Sol1](project_euler/problem_016/sol1.py)
* [Sol2](project_euler/problem_016/sol2.py)
* Problem 017
* [Sol1](project_euler/problem_017/sol1.py)
* Problem 018
* [Solution](project_euler/problem_018/solution.py)
* Problem 019
* [Sol1](project_euler/problem_019/sol1.py)
* Problem 020
* [Sol1](project_euler/problem_020/sol1.py)
* [Sol2](project_euler/problem_020/sol2.py)
* [Sol3](project_euler/problem_020/sol3.py)
* [Sol4](project_euler/problem_020/sol4.py)
* Problem 021
* [Sol1](project_euler/problem_021/sol1.py)
* Problem 022
* [Sol1](project_euler/problem_022/sol1.py)
* [Sol2](project_euler/problem_022/sol2.py)
* Problem 023
* [Sol1](project_euler/problem_023/sol1.py)
* Problem 024
* [Sol1](project_euler/problem_024/sol1.py)
* Problem 025
* [Sol1](project_euler/problem_025/sol1.py)
* [Sol2](project_euler/problem_025/sol2.py)
* [Sol3](project_euler/problem_025/sol3.py)
* Problem 026
* [Sol1](project_euler/problem_026/sol1.py)
* Problem 027
* [Sol1](project_euler/problem_027/sol1.py)
* Problem 028
* [Sol1](project_euler/problem_028/sol1.py)
* Problem 029
* [Sol1](project_euler/problem_029/sol1.py)
* Problem 030
* [Sol1](project_euler/problem_030/sol1.py)
* Problem 031
* [Sol1](project_euler/problem_031/sol1.py)
* [Sol2](project_euler/problem_031/sol2.py)
* Problem 032
* [Sol32](project_euler/problem_032/sol32.py)
* Problem 033
* [Sol1](project_euler/problem_033/sol1.py)
* Problem 034
* [Sol1](project_euler/problem_034/sol1.py)
* Problem 035
* [Sol1](project_euler/problem_035/sol1.py)
* Problem 036
* [Sol1](project_euler/problem_036/sol1.py)
* Problem 037
* [Sol1](project_euler/problem_037/sol1.py)
* Problem 038
* [Sol1](project_euler/problem_038/sol1.py)
* Problem 039
* [Sol1](project_euler/problem_039/sol1.py)
* Problem 040
* [Sol1](project_euler/problem_040/sol1.py)
* Problem 041
* [Sol1](project_euler/problem_041/sol1.py)
* Problem 042
* [Solution42](project_euler/problem_042/solution42.py)
* Problem 043
* [Sol1](project_euler/problem_043/sol1.py)
* Problem 044
* [Sol1](project_euler/problem_044/sol1.py)
* Problem 045
* [Sol1](project_euler/problem_045/sol1.py)
* Problem 046
* [Sol1](project_euler/problem_046/sol1.py)
* Problem 047
* [Sol1](project_euler/problem_047/sol1.py)
* Problem 048
* [Sol1](project_euler/problem_048/sol1.py)
* Problem 049
* [Sol1](project_euler/problem_049/sol1.py)
* Problem 050
* [Sol1](project_euler/problem_050/sol1.py)
* Problem 051
* [Sol1](project_euler/problem_051/sol1.py)
* Problem 052
* [Sol1](project_euler/problem_052/sol1.py)
* Problem 053
* [Sol1](project_euler/problem_053/sol1.py)
* Problem 054
* [Sol1](project_euler/problem_054/sol1.py)
* [Test Poker Hand](project_euler/problem_054/test_poker_hand.py)
* Problem 055
* [Sol1](project_euler/problem_055/sol1.py)
* Problem 056
* [Sol1](project_euler/problem_056/sol1.py)
* Problem 057
* [Sol1](project_euler/problem_057/sol1.py)
* Problem 058
* [Sol1](project_euler/problem_058/sol1.py)
* Problem 059
* [Sol1](project_euler/problem_059/sol1.py)
* Problem 062
* [Sol1](project_euler/problem_062/sol1.py)
* Problem 063
* [Sol1](project_euler/problem_063/sol1.py)
* Problem 064
* [Sol1](project_euler/problem_064/sol1.py)
* Problem 065
* [Sol1](project_euler/problem_065/sol1.py)
* Problem 067
* [Sol1](project_euler/problem_067/sol1.py)
* [Sol2](project_euler/problem_067/sol2.py)
* Problem 068
* [Sol1](project_euler/problem_068/sol1.py)
* Problem 069
* [Sol1](project_euler/problem_069/sol1.py)
* Problem 070
* [Sol1](project_euler/problem_070/sol1.py)
* Problem 071
* [Sol1](project_euler/problem_071/sol1.py)
* Problem 072
* [Sol1](project_euler/problem_072/sol1.py)
* [Sol2](project_euler/problem_072/sol2.py)
* Problem 073
* [Sol1](project_euler/problem_073/sol1.py)
* Problem 074
* [Sol1](project_euler/problem_074/sol1.py)
* [Sol2](project_euler/problem_074/sol2.py)
* Problem 075
* [Sol1](project_euler/problem_075/sol1.py)
* Problem 076
* [Sol1](project_euler/problem_076/sol1.py)
* Problem 077
* [Sol1](project_euler/problem_077/sol1.py)
* Problem 078
* [Sol1](project_euler/problem_078/sol1.py)
* Problem 080
* [Sol1](project_euler/problem_080/sol1.py)
* Problem 081
* [Sol1](project_euler/problem_081/sol1.py)
* Problem 082
* [Sol1](project_euler/problem_082/sol1.py)
* Problem 085
* [Sol1](project_euler/problem_085/sol1.py)
* Problem 086
* [Sol1](project_euler/problem_086/sol1.py)
* Problem 087
* [Sol1](project_euler/problem_087/sol1.py)
* Problem 089
* [Sol1](project_euler/problem_089/sol1.py)
* Problem 091
* [Sol1](project_euler/problem_091/sol1.py)
* Problem 092
* [Sol1](project_euler/problem_092/sol1.py)
* Problem 097
* [Sol1](project_euler/problem_097/sol1.py)
* Problem 099
* [Sol1](project_euler/problem_099/sol1.py)
* Problem 100
* [Sol1](project_euler/problem_100/sol1.py)
* Problem 101
* [Sol1](project_euler/problem_101/sol1.py)
* Problem 102
* [Sol1](project_euler/problem_102/sol1.py)
* Problem 104
* [Sol1](project_euler/problem_104/sol1.py)
* Problem 107
* [Sol1](project_euler/problem_107/sol1.py)
* Problem 109
* [Sol1](project_euler/problem_109/sol1.py)
* Problem 112
* [Sol1](project_euler/problem_112/sol1.py)
* Problem 113
* [Sol1](project_euler/problem_113/sol1.py)
* Problem 114
* [Sol1](project_euler/problem_114/sol1.py)
* Problem 115
* [Sol1](project_euler/problem_115/sol1.py)
* Problem 116
* [Sol1](project_euler/problem_116/sol1.py)
* Problem 117
* [Sol1](project_euler/problem_117/sol1.py)
* Problem 119
* [Sol1](project_euler/problem_119/sol1.py)
* Problem 120
* [Sol1](project_euler/problem_120/sol1.py)
* Problem 121
* [Sol1](project_euler/problem_121/sol1.py)
* Problem 123
* [Sol1](project_euler/problem_123/sol1.py)
* Problem 125
* [Sol1](project_euler/problem_125/sol1.py)
* Problem 129
* [Sol1](project_euler/problem_129/sol1.py)
* Problem 131
* [Sol1](project_euler/problem_131/sol1.py)
* Problem 135
* [Sol1](project_euler/problem_135/sol1.py)
* Problem 144
* [Sol1](project_euler/problem_144/sol1.py)
* Problem 145
* [Sol1](project_euler/problem_145/sol1.py)
* Problem 173
* [Sol1](project_euler/problem_173/sol1.py)
* Problem 174
* [Sol1](project_euler/problem_174/sol1.py)
* Problem 180
* [Sol1](project_euler/problem_180/sol1.py)
* Problem 188
* [Sol1](project_euler/problem_188/sol1.py)
* Problem 191
* [Sol1](project_euler/problem_191/sol1.py)
* Problem 203
* [Sol1](project_euler/problem_203/sol1.py)
* Problem 205
* [Sol1](project_euler/problem_205/sol1.py)
* Problem 206
* [Sol1](project_euler/problem_206/sol1.py)
* Problem 207
* [Sol1](project_euler/problem_207/sol1.py)
* Problem 234
* [Sol1](project_euler/problem_234/sol1.py)
* Problem 301
* [Sol1](project_euler/problem_301/sol1.py)
* Problem 493
* [Sol1](project_euler/problem_493/sol1.py)
* Problem 551
* [Sol1](project_euler/problem_551/sol1.py)
* Problem 587
* [Sol1](project_euler/problem_587/sol1.py)
* Problem 686
* [Sol1](project_euler/problem_686/sol1.py)
## Quantum
* [Bb84](quantum/bb84.py)
* [Deutsch Jozsa](quantum/deutsch_jozsa.py)
* [Half Adder](quantum/half_adder.py)
* [Not Gate](quantum/not_gate.py)
* [Q Fourier Transform](quantum/q_fourier_transform.py)
* [Q Full Adder](quantum/q_full_adder.py)
* [Quantum Entanglement](quantum/quantum_entanglement.py)
* [Quantum Teleportation](quantum/quantum_teleportation.py)
* [Ripple Adder Classic](quantum/ripple_adder_classic.py)
* [Single Qubit Measure](quantum/single_qubit_measure.py)
* [Superdense Coding](quantum/superdense_coding.py)
## Scheduling
* [First Come First Served](scheduling/first_come_first_served.py)
* [Highest Response Ratio Next](scheduling/highest_response_ratio_next.py)
* [Job Sequencing With Deadline](scheduling/job_sequencing_with_deadline.py)
* [Multi Level Feedback Queue](scheduling/multi_level_feedback_queue.py)
* [Non Preemptive Shortest Job First](scheduling/non_preemptive_shortest_job_first.py)
* [Round Robin](scheduling/round_robin.py)
* [Shortest Job First](scheduling/shortest_job_first.py)
## Searches
* [Binary Search](searches/binary_search.py)
* [Binary Tree Traversal](searches/binary_tree_traversal.py)
* [Double Linear Search](searches/double_linear_search.py)
* [Double Linear Search Recursion](searches/double_linear_search_recursion.py)
* [Fibonacci Search](searches/fibonacci_search.py)
* [Hill Climbing](searches/hill_climbing.py)
* [Interpolation Search](searches/interpolation_search.py)
* [Jump Search](searches/jump_search.py)
* [Linear Search](searches/linear_search.py)
* [Quick Select](searches/quick_select.py)
* [Sentinel Linear Search](searches/sentinel_linear_search.py)
* [Simple Binary Search](searches/simple_binary_search.py)
* [Simulated Annealing](searches/simulated_annealing.py)
* [Tabu Search](searches/tabu_search.py)
* [Ternary Search](searches/ternary_search.py)
## Sorts
* [Bead Sort](sorts/bead_sort.py)
* [Bitonic Sort](sorts/bitonic_sort.py)
* [Bogo Sort](sorts/bogo_sort.py)
* [Bubble Sort](sorts/bubble_sort.py)
* [Bucket Sort](sorts/bucket_sort.py)
* [Circle Sort](sorts/circle_sort.py)
* [Cocktail Shaker Sort](sorts/cocktail_shaker_sort.py)
* [Comb Sort](sorts/comb_sort.py)
* [Counting Sort](sorts/counting_sort.py)
* [Cycle Sort](sorts/cycle_sort.py)
* [Double Sort](sorts/double_sort.py)
* [Dutch National Flag Sort](sorts/dutch_national_flag_sort.py)
* [Exchange Sort](sorts/exchange_sort.py)
* [External Sort](sorts/external_sort.py)
* [Gnome Sort](sorts/gnome_sort.py)
* [Heap Sort](sorts/heap_sort.py)
* [Insertion Sort](sorts/insertion_sort.py)
* [Intro Sort](sorts/intro_sort.py)
* [Iterative Merge Sort](sorts/iterative_merge_sort.py)
* [Merge Insertion Sort](sorts/merge_insertion_sort.py)
* [Merge Sort](sorts/merge_sort.py)
* [Msd Radix Sort](sorts/msd_radix_sort.py)
* [Natural Sort](sorts/natural_sort.py)
* [Odd Even Sort](sorts/odd_even_sort.py)
* [Odd Even Transposition Parallel](sorts/odd_even_transposition_parallel.py)
* [Odd Even Transposition Single Threaded](sorts/odd_even_transposition_single_threaded.py)
* [Pancake Sort](sorts/pancake_sort.py)
* [Patience Sort](sorts/patience_sort.py)
* [Pigeon Sort](sorts/pigeon_sort.py)
* [Pigeonhole Sort](sorts/pigeonhole_sort.py)
* [Quick Sort](sorts/quick_sort.py)
* [Quick Sort 3 Partition](sorts/quick_sort_3_partition.py)
* [Radix Sort](sorts/radix_sort.py)
* [Random Normal Distribution Quicksort](sorts/random_normal_distribution_quicksort.py)
* [Random Pivot Quick Sort](sorts/random_pivot_quick_sort.py)
* [Recursive Bubble Sort](sorts/recursive_bubble_sort.py)
* [Recursive Insertion Sort](sorts/recursive_insertion_sort.py)
* [Recursive Mergesort Array](sorts/recursive_mergesort_array.py)
* [Recursive Quick Sort](sorts/recursive_quick_sort.py)
* [Selection Sort](sorts/selection_sort.py)
* [Shell Sort](sorts/shell_sort.py)
* [Shrink Shell Sort](sorts/shrink_shell_sort.py)
* [Slowsort](sorts/slowsort.py)
* [Stooge Sort](sorts/stooge_sort.py)
* [Strand Sort](sorts/strand_sort.py)
* [Tim Sort](sorts/tim_sort.py)
* [Topological Sort](sorts/topological_sort.py)
* [Tree Sort](sorts/tree_sort.py)
* [Unknown Sort](sorts/unknown_sort.py)
* [Wiggle Sort](sorts/wiggle_sort.py)
## Strings
* [Aho Corasick](strings/aho_corasick.py)
* [Alternative String Arrange](strings/alternative_string_arrange.py)
* [Anagrams](strings/anagrams.py)
* [Autocomplete Using Trie](strings/autocomplete_using_trie.py)
* [Barcode Validator](strings/barcode_validator.py)
* [Boyer Moore Search](strings/boyer_moore_search.py)
* [Can String Be Rearranged As Palindrome](strings/can_string_be_rearranged_as_palindrome.py)
* [Capitalize](strings/capitalize.py)
* [Check Anagrams](strings/check_anagrams.py)
* [Credit Card Validator](strings/credit_card_validator.py)
* [Detecting English Programmatically](strings/detecting_english_programmatically.py)
* [Dna](strings/dna.py)
* [Frequency Finder](strings/frequency_finder.py)
* [Hamming Distance](strings/hamming_distance.py)
* [Indian Phone Validator](strings/indian_phone_validator.py)
* [Is Contains Unique Chars](strings/is_contains_unique_chars.py)
* [Is Isogram](strings/is_isogram.py)
* [Is Palindrome](strings/is_palindrome.py)
* [Is Pangram](strings/is_pangram.py)
* [Is Spain National Id](strings/is_spain_national_id.py)
* [Is Srilankan Phone Number](strings/is_srilankan_phone_number.py)
* [Jaro Winkler](strings/jaro_winkler.py)
* [Join](strings/join.py)
* [Knuth Morris Pratt](strings/knuth_morris_pratt.py)
* [Levenshtein Distance](strings/levenshtein_distance.py)
* [Lower](strings/lower.py)
* [Manacher](strings/manacher.py)
* [Min Cost String Conversion](strings/min_cost_string_conversion.py)
* [Naive String Search](strings/naive_string_search.py)
* [Ngram](strings/ngram.py)
* [Palindrome](strings/palindrome.py)
* [Prefix Function](strings/prefix_function.py)
* [Rabin Karp](strings/rabin_karp.py)
* [Remove Duplicate](strings/remove_duplicate.py)
* [Reverse Letters](strings/reverse_letters.py)
* [Reverse Long Words](strings/reverse_long_words.py)
* [Reverse Words](strings/reverse_words.py)
* [Snake Case To Camel Pascal Case](strings/snake_case_to_camel_pascal_case.py)
* [Split](strings/split.py)
* [Text Justification](strings/text_justification.py)
* [Upper](strings/upper.py)
* [Wave](strings/wave.py)
* [Wildcard Pattern Matching](strings/wildcard_pattern_matching.py)
* [Word Occurrence](strings/word_occurrence.py)
* [Word Patterns](strings/word_patterns.py)
* [Z Function](strings/z_function.py)
## Web Programming
* [Co2 Emission](web_programming/co2_emission.py)
* [Convert Number To Words](web_programming/convert_number_to_words.py)
* [Covid Stats Via Xpath](web_programming/covid_stats_via_xpath.py)
* [Crawl Google Results](web_programming/crawl_google_results.py)
* [Crawl Google Scholar Citation](web_programming/crawl_google_scholar_citation.py)
* [Currency Converter](web_programming/currency_converter.py)
* [Current Stock Price](web_programming/current_stock_price.py)
* [Current Weather](web_programming/current_weather.py)
* [Daily Horoscope](web_programming/daily_horoscope.py)
* [Download Images From Google Query](web_programming/download_images_from_google_query.py)
* [Emails From Url](web_programming/emails_from_url.py)
* [Fetch Anime And Play](web_programming/fetch_anime_and_play.py)
* [Fetch Bbc News](web_programming/fetch_bbc_news.py)
* [Fetch Github Info](web_programming/fetch_github_info.py)
* [Fetch Jobs](web_programming/fetch_jobs.py)
* [Fetch Quotes](web_programming/fetch_quotes.py)
* [Fetch Well Rx Price](web_programming/fetch_well_rx_price.py)
* [Get Amazon Product Data](web_programming/get_amazon_product_data.py)
* [Get Imdb Top 250 Movies Csv](web_programming/get_imdb_top_250_movies_csv.py)
* [Get Imdbtop](web_programming/get_imdbtop.py)
* [Get Top Hn Posts](web_programming/get_top_hn_posts.py)
* [Get User Tweets](web_programming/get_user_tweets.py)
* [Giphy](web_programming/giphy.py)
* [Instagram Crawler](web_programming/instagram_crawler.py)
* [Instagram Pic](web_programming/instagram_pic.py)
* [Instagram Video](web_programming/instagram_video.py)
* [Nasa Data](web_programming/nasa_data.py)
* [Open Google Results](web_programming/open_google_results.py)
* [Random Anime Character](web_programming/random_anime_character.py)
* [Recaptcha Verification](web_programming/recaptcha_verification.py)
* [Reddit](web_programming/reddit.py)
* [Search Books By Isbn](web_programming/search_books_by_isbn.py)
* [Slack Message](web_programming/slack_message.py)
* [Test Fetch Github Info](web_programming/test_fetch_github_info.py)
* [World Covid19 Stats](web_programming/world_covid19_stats.py)
| 1 |
TheAlgorithms/Python | 8,178 | Replace bandit, flake8, isort, and pyupgrade with ruff | ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-14T00:17:59Z" | "2023-03-15T12:58:26Z" | adc3ccdabede375df5cff62c3c8f06d8a191a803 | c96241b5a5052af466894ef90c7a7c749ba872eb | Replace bandit, flake8, isort, and pyupgrade with ruff. ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Implementing Newton Raphson method in Python
# Author: Syed Haseeb Shah (github.com/QuantumNovice)
# The Newton-Raphson method (also known as Newton's method) is a way to
# quickly find a good approximation for the root of a real-valued function
from __future__ import annotations
from decimal import Decimal
from math import * # noqa: F401, F403
from sympy import diff
def newton_raphson(
func: str, a: float | Decimal, precision: float = 10**-10
) -> float:
"""Finds root from the point 'a' onwards by Newton-Raphson method
>>> newton_raphson("sin(x)", 2)
3.1415926536808043
>>> newton_raphson("x**2 - 5*x +2", 0.4)
0.4384471871911695
>>> newton_raphson("x**2 - 5", 0.1)
2.23606797749979
>>> newton_raphson("log(x)- 1", 2)
2.718281828458938
"""
x = a
while True:
x = Decimal(x) - (Decimal(eval(func)) / Decimal(eval(str(diff(func)))))
# This number dictates the accuracy of the answer
if abs(eval(func)) < precision:
return float(x)
# Let's Execute
if __name__ == "__main__":
# Find root of trigonometric function
# Find value of pi
print(f"The root of sin(x) = 0 is {newton_raphson('sin(x)', 2)}")
# Find root of polynomial
print(f"The root of x**2 - 5*x + 2 = 0 is {newton_raphson('x**2 - 5*x + 2', 0.4)}")
# Find Square Root of 5
print(f"The root of log(x) - 1 = 0 is {newton_raphson('log(x) - 1', 2)}")
# Exponential Roots
print(f"The root of exp(x) - 1 = 0 is {newton_raphson('exp(x) - 1', 0)}")
| # Implementing Newton Raphson method in Python
# Author: Syed Haseeb Shah (github.com/QuantumNovice)
# The Newton-Raphson method (also known as Newton's method) is a way to
# quickly find a good approximation for the root of a real-valued function
from __future__ import annotations
from decimal import Decimal
from math import * # noqa: F403
from sympy import diff
def newton_raphson(
func: str, a: float | Decimal, precision: float = 10**-10
) -> float:
"""Finds root from the point 'a' onwards by Newton-Raphson method
>>> newton_raphson("sin(x)", 2)
3.1415926536808043
>>> newton_raphson("x**2 - 5*x +2", 0.4)
0.4384471871911695
>>> newton_raphson("x**2 - 5", 0.1)
2.23606797749979
>>> newton_raphson("log(x)- 1", 2)
2.718281828458938
"""
x = a
while True:
x = Decimal(x) - (Decimal(eval(func)) / Decimal(eval(str(diff(func)))))
# This number dictates the accuracy of the answer
if abs(eval(func)) < precision:
return float(x)
# Let's Execute
if __name__ == "__main__":
# Find root of trigonometric function
# Find value of pi
print(f"The root of sin(x) = 0 is {newton_raphson('sin(x)', 2)}")
# Find root of polynomial
print(f"The root of x**2 - 5*x + 2 = 0 is {newton_raphson('x**2 - 5*x + 2', 0.4)}")
# Find Square Root of 5
print(f"The root of log(x) - 1 = 0 is {newton_raphson('log(x) - 1', 2)}")
# Exponential Roots
print(f"The root of exp(x) - 1 = 0 is {newton_raphson('exp(x) - 1', 0)}")
| 1 |
TheAlgorithms/Python | 8,178 | Replace bandit, flake8, isort, and pyupgrade with ruff | ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-14T00:17:59Z" | "2023-03-15T12:58:26Z" | adc3ccdabede375df5cff62c3c8f06d8a191a803 | c96241b5a5052af466894ef90c7a7c749ba872eb | Replace bandit, flake8, isort, and pyupgrade with ruff. ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Implementing Newton Raphson method in Python
# Author: Saksham Gupta
#
# The Newton-Raphson method (also known as Newton's method) is a way to
# quickly find a good approximation for the root of a functreal-valued ion
# The method can also be extended to complex functions
#
# Newton's Method - https://en.wikipedia.org/wiki/Newton's_method
from sympy import diff, lambdify, symbols
from sympy.functions import * # noqa: F401, F403
def newton_raphson(
function: str,
starting_point: complex,
variable: str = "x",
precision: float = 10**-10,
multiplicity: int = 1,
) -> complex:
"""Finds root from the 'starting_point' onwards by Newton-Raphson method
Refer to https://docs.sympy.org/latest/modules/functions/index.html
for usable mathematical functions
>>> newton_raphson("sin(x)", 2)
3.141592653589793
>>> newton_raphson("x**4 -5", 0.4 + 5j)
(-7.52316384526264e-37+1.4953487812212207j)
>>> newton_raphson('log(y) - 1', 2, variable='y')
2.7182818284590455
>>> newton_raphson('exp(x) - 1', 10, precision=0.005)
1.2186556186174883e-10
>>> newton_raphson('cos(x)', 0)
Traceback (most recent call last):
...
ZeroDivisionError: Could not find root
"""
x = symbols(variable)
func = lambdify(x, function)
diff_function = lambdify(x, diff(function, x))
prev_guess = starting_point
while True:
if diff_function(prev_guess) != 0:
next_guess = prev_guess - multiplicity * func(prev_guess) / diff_function(
prev_guess
)
else:
raise ZeroDivisionError("Could not find root") from None
# Precision is checked by comparing the difference of consecutive guesses
if abs(next_guess - prev_guess) < precision:
return next_guess
prev_guess = next_guess
# Let's Execute
if __name__ == "__main__":
# Find root of trigonometric function
# Find value of pi
print(f"The root of sin(x) = 0 is {newton_raphson('sin(x)', 2)}")
# Find root of polynomial
# Find fourth Root of 5
print(f"The root of x**4 - 5 = 0 is {newton_raphson('x**4 -5', 0.4 +5j)}")
# Find value of e
print(
"The root of log(y) - 1 = 0 is ",
f"{newton_raphson('log(y) - 1', 2, variable='y')}",
)
# Exponential Roots
print(
"The root of exp(x) - 1 = 0 is",
f"{newton_raphson('exp(x) - 1', 10, precision=0.005)}",
)
# Find root of cos(x)
print(f"The root of cos(x) = 0 is {newton_raphson('cos(x)', 0)}")
| # Implementing Newton Raphson method in Python
# Author: Saksham Gupta
#
# The Newton-Raphson method (also known as Newton's method) is a way to
# quickly find a good approximation for the root of a functreal-valued ion
# The method can also be extended to complex functions
#
# Newton's Method - https://en.wikipedia.org/wiki/Newton's_method
from sympy import diff, lambdify, symbols
from sympy.functions import * # noqa: F403
def newton_raphson(
function: str,
starting_point: complex,
variable: str = "x",
precision: float = 10**-10,
multiplicity: int = 1,
) -> complex:
"""Finds root from the 'starting_point' onwards by Newton-Raphson method
Refer to https://docs.sympy.org/latest/modules/functions/index.html
for usable mathematical functions
>>> newton_raphson("sin(x)", 2)
3.141592653589793
>>> newton_raphson("x**4 -5", 0.4 + 5j)
(-7.52316384526264e-37+1.4953487812212207j)
>>> newton_raphson('log(y) - 1', 2, variable='y')
2.7182818284590455
>>> newton_raphson('exp(x) - 1', 10, precision=0.005)
1.2186556186174883e-10
>>> newton_raphson('cos(x)', 0)
Traceback (most recent call last):
...
ZeroDivisionError: Could not find root
"""
x = symbols(variable)
func = lambdify(x, function)
diff_function = lambdify(x, diff(function, x))
prev_guess = starting_point
while True:
if diff_function(prev_guess) != 0:
next_guess = prev_guess - multiplicity * func(prev_guess) / diff_function(
prev_guess
)
else:
raise ZeroDivisionError("Could not find root") from None
# Precision is checked by comparing the difference of consecutive guesses
if abs(next_guess - prev_guess) < precision:
return next_guess
prev_guess = next_guess
# Let's Execute
if __name__ == "__main__":
# Find root of trigonometric function
# Find value of pi
print(f"The root of sin(x) = 0 is {newton_raphson('sin(x)', 2)}")
# Find root of polynomial
# Find fourth Root of 5
print(f"The root of x**4 - 5 = 0 is {newton_raphson('x**4 -5', 0.4 +5j)}")
# Find value of e
print(
"The root of log(y) - 1 = 0 is ",
f"{newton_raphson('log(y) - 1', 2, variable='y')}",
)
# Exponential Roots
print(
"The root of exp(x) - 1 = 0 is",
f"{newton_raphson('exp(x) - 1', 10, precision=0.005)}",
)
# Find root of cos(x)
print(f"The root of cos(x) = 0 is {newton_raphson('cos(x)', 0)}")
| 1 |
TheAlgorithms/Python | 8,178 | Replace bandit, flake8, isort, and pyupgrade with ruff | ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-14T00:17:59Z" | "2023-03-15T12:58:26Z" | adc3ccdabede375df5cff62c3c8f06d8a191a803 | c96241b5a5052af466894ef90c7a7c749ba872eb | Replace bandit, flake8, isort, and pyupgrade with ruff. ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from collections.abc import Callable
class Heap:
"""
A generic Heap class, can be used as min or max by passing the key function
accordingly.
"""
def __init__(self, key: Callable | None = None) -> None:
# Stores actual heap items.
self.arr: list = []
# Stores indexes of each item for supporting updates and deletion.
self.pos_map: dict = {}
# Stores current size of heap.
self.size = 0
# Stores function used to evaluate the score of an item on which basis ordering
# will be done.
self.key = key or (lambda x: x)
def _parent(self, i: int) -> int | None:
"""Returns parent index of given index if exists else None"""
return int((i - 1) / 2) if i > 0 else None
def _left(self, i: int) -> int | None:
"""Returns left-child-index of given index if exists else None"""
left = int(2 * i + 1)
return left if 0 < left < self.size else None
def _right(self, i: int) -> int | None:
"""Returns right-child-index of given index if exists else None"""
right = int(2 * i + 2)
return right if 0 < right < self.size else None
def _swap(self, i: int, j: int) -> None:
"""Performs changes required for swapping two elements in the heap"""
# First update the indexes of the items in index map.
self.pos_map[self.arr[i][0]], self.pos_map[self.arr[j][0]] = (
self.pos_map[self.arr[j][0]],
self.pos_map[self.arr[i][0]],
)
# Then swap the items in the list.
self.arr[i], self.arr[j] = self.arr[j], self.arr[i]
def _cmp(self, i: int, j: int) -> bool:
"""Compares the two items using default comparison"""
return self.arr[i][1] < self.arr[j][1]
def _get_valid_parent(self, i: int) -> int:
"""
Returns index of valid parent as per desired ordering among given index and
both it's children
"""
left = self._left(i)
right = self._right(i)
valid_parent = i
if left is not None and not self._cmp(left, valid_parent):
valid_parent = left
if right is not None and not self._cmp(right, valid_parent):
valid_parent = right
return valid_parent
def _heapify_up(self, index: int) -> None:
"""Fixes the heap in upward direction of given index"""
parent = self._parent(index)
while parent is not None and not self._cmp(index, parent):
self._swap(index, parent)
index, parent = parent, self._parent(parent)
def _heapify_down(self, index: int) -> None:
"""Fixes the heap in downward direction of given index"""
valid_parent = self._get_valid_parent(index)
while valid_parent != index:
self._swap(index, valid_parent)
index, valid_parent = valid_parent, self._get_valid_parent(valid_parent)
def update_item(self, item: int, item_value: int) -> None:
"""Updates given item value in heap if present"""
if item not in self.pos_map:
return
index = self.pos_map[item]
self.arr[index] = [item, self.key(item_value)]
# Make sure heap is right in both up and down direction.
# Ideally only one of them will make any change.
self._heapify_up(index)
self._heapify_down(index)
def delete_item(self, item: int) -> None:
"""Deletes given item from heap if present"""
if item not in self.pos_map:
return
index = self.pos_map[item]
del self.pos_map[item]
self.arr[index] = self.arr[self.size - 1]
self.pos_map[self.arr[self.size - 1][0]] = index
self.size -= 1
# Make sure heap is right in both up and down direction. Ideally only one
# of them will make any change- so no performance loss in calling both.
if self.size > index:
self._heapify_up(index)
self._heapify_down(index)
def insert_item(self, item: int, item_value: int) -> None:
"""Inserts given item with given value in heap"""
arr_len = len(self.arr)
if arr_len == self.size:
self.arr.append([item, self.key(item_value)])
else:
self.arr[self.size] = [item, self.key(item_value)]
self.pos_map[item] = self.size
self.size += 1
self._heapify_up(self.size - 1)
def get_top(self) -> tuple | None:
"""Returns top item tuple (Calculated value, item) from heap if present"""
return self.arr[0] if self.size else None
def extract_top(self) -> tuple | None:
"""
Return top item tuple (Calculated value, item) from heap and removes it as well
if present
"""
top_item_tuple = self.get_top()
if top_item_tuple:
self.delete_item(top_item_tuple[0])
return top_item_tuple
def test_heap() -> None:
"""
>>> h = Heap() # Max-heap
>>> h.insert_item(5, 34)
>>> h.insert_item(6, 31)
>>> h.insert_item(7, 37)
>>> h.get_top()
[7, 37]
>>> h.extract_top()
[7, 37]
>>> h.extract_top()
[5, 34]
>>> h.extract_top()
[6, 31]
>>> h = Heap(key=lambda x: -x) # Min heap
>>> h.insert_item(5, 34)
>>> h.insert_item(6, 31)
>>> h.insert_item(7, 37)
>>> h.get_top()
[6, -31]
>>> h.extract_top()
[6, -31]
>>> h.extract_top()
[5, -34]
>>> h.extract_top()
[7, -37]
>>> h.insert_item(8, 45)
>>> h.insert_item(9, 40)
>>> h.insert_item(10, 50)
>>> h.get_top()
[9, -40]
>>> h.update_item(10, 30)
>>> h.get_top()
[10, -30]
>>> h.delete_item(10)
>>> h.get_top()
[9, -40]
"""
pass
if __name__ == "__main__":
import doctest
doctest.testmod()
| from collections.abc import Callable
class Heap:
"""
A generic Heap class, can be used as min or max by passing the key function
accordingly.
"""
def __init__(self, key: Callable | None = None) -> None:
# Stores actual heap items.
self.arr: list = []
# Stores indexes of each item for supporting updates and deletion.
self.pos_map: dict = {}
# Stores current size of heap.
self.size = 0
# Stores function used to evaluate the score of an item on which basis ordering
# will be done.
self.key = key or (lambda x: x)
def _parent(self, i: int) -> int | None:
"""Returns parent index of given index if exists else None"""
return int((i - 1) / 2) if i > 0 else None
def _left(self, i: int) -> int | None:
"""Returns left-child-index of given index if exists else None"""
left = int(2 * i + 1)
return left if 0 < left < self.size else None
def _right(self, i: int) -> int | None:
"""Returns right-child-index of given index if exists else None"""
right = int(2 * i + 2)
return right if 0 < right < self.size else None
def _swap(self, i: int, j: int) -> None:
"""Performs changes required for swapping two elements in the heap"""
# First update the indexes of the items in index map.
self.pos_map[self.arr[i][0]], self.pos_map[self.arr[j][0]] = (
self.pos_map[self.arr[j][0]],
self.pos_map[self.arr[i][0]],
)
# Then swap the items in the list.
self.arr[i], self.arr[j] = self.arr[j], self.arr[i]
def _cmp(self, i: int, j: int) -> bool:
"""Compares the two items using default comparison"""
return self.arr[i][1] < self.arr[j][1]
def _get_valid_parent(self, i: int) -> int:
"""
Returns index of valid parent as per desired ordering among given index and
both it's children
"""
left = self._left(i)
right = self._right(i)
valid_parent = i
if left is not None and not self._cmp(left, valid_parent):
valid_parent = left
if right is not None and not self._cmp(right, valid_parent):
valid_parent = right
return valid_parent
def _heapify_up(self, index: int) -> None:
"""Fixes the heap in upward direction of given index"""
parent = self._parent(index)
while parent is not None and not self._cmp(index, parent):
self._swap(index, parent)
index, parent = parent, self._parent(parent)
def _heapify_down(self, index: int) -> None:
"""Fixes the heap in downward direction of given index"""
valid_parent = self._get_valid_parent(index)
while valid_parent != index:
self._swap(index, valid_parent)
index, valid_parent = valid_parent, self._get_valid_parent(valid_parent)
def update_item(self, item: int, item_value: int) -> None:
"""Updates given item value in heap if present"""
if item not in self.pos_map:
return
index = self.pos_map[item]
self.arr[index] = [item, self.key(item_value)]
# Make sure heap is right in both up and down direction.
# Ideally only one of them will make any change.
self._heapify_up(index)
self._heapify_down(index)
def delete_item(self, item: int) -> None:
"""Deletes given item from heap if present"""
if item not in self.pos_map:
return
index = self.pos_map[item]
del self.pos_map[item]
self.arr[index] = self.arr[self.size - 1]
self.pos_map[self.arr[self.size - 1][0]] = index
self.size -= 1
# Make sure heap is right in both up and down direction. Ideally only one
# of them will make any change- so no performance loss in calling both.
if self.size > index:
self._heapify_up(index)
self._heapify_down(index)
def insert_item(self, item: int, item_value: int) -> None:
"""Inserts given item with given value in heap"""
arr_len = len(self.arr)
if arr_len == self.size:
self.arr.append([item, self.key(item_value)])
else:
self.arr[self.size] = [item, self.key(item_value)]
self.pos_map[item] = self.size
self.size += 1
self._heapify_up(self.size - 1)
def get_top(self) -> tuple | None:
"""Returns top item tuple (Calculated value, item) from heap if present"""
return self.arr[0] if self.size else None
def extract_top(self) -> tuple | None:
"""
Return top item tuple (Calculated value, item) from heap and removes it as well
if present
"""
top_item_tuple = self.get_top()
if top_item_tuple:
self.delete_item(top_item_tuple[0])
return top_item_tuple
def test_heap() -> None:
"""
>>> h = Heap() # Max-heap
>>> h.insert_item(5, 34)
>>> h.insert_item(6, 31)
>>> h.insert_item(7, 37)
>>> h.get_top()
[7, 37]
>>> h.extract_top()
[7, 37]
>>> h.extract_top()
[5, 34]
>>> h.extract_top()
[6, 31]
>>> h = Heap(key=lambda x: -x) # Min heap
>>> h.insert_item(5, 34)
>>> h.insert_item(6, 31)
>>> h.insert_item(7, 37)
>>> h.get_top()
[6, -31]
>>> h.extract_top()
[6, -31]
>>> h.extract_top()
[5, -34]
>>> h.extract_top()
[7, -37]
>>> h.insert_item(8, 45)
>>> h.insert_item(9, 40)
>>> h.insert_item(10, 50)
>>> h.get_top()
[9, -40]
>>> h.update_item(10, 30)
>>> h.get_top()
[10, -30]
>>> h.delete_item(10)
>>> h.get_top()
[9, -40]
"""
if __name__ == "__main__":
import doctest
doctest.testmod()
| 1 |
TheAlgorithms/Python | 8,178 | Replace bandit, flake8, isort, and pyupgrade with ruff | ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-14T00:17:59Z" | "2023-03-15T12:58:26Z" | adc3ccdabede375df5cff62c3c8f06d8a191a803 | c96241b5a5052af466894ef90c7a7c749ba872eb | Replace bandit, flake8, isort, and pyupgrade with ruff. ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Author : Alexander Pantyukhin
Date : October 14, 2022
This is implementation Dynamic Programming up bottom approach
to find edit distance.
The aim is to demonstate up bottom approach for solving the task.
The implementation was tested on the
leetcode: https://leetcode.com/problems/edit-distance/
"""
"""
Levinstein distance
Dynamic Programming: up -> down.
"""
def min_distance_up_bottom(word1: str, word2: str) -> int:
"""
>>> min_distance_up_bottom("intention", "execution")
5
>>> min_distance_up_bottom("intention", "")
9
>>> min_distance_up_bottom("", "")
0
>>> min_distance_up_bottom("zooicoarchaeologist", "zoologist")
10
"""
from functools import lru_cache
len_word1 = len(word1)
len_word2 = len(word2)
@lru_cache(maxsize=None)
def min_distance(index1: int, index2: int) -> int:
# if first word index is overflow - delete all from the second word
if index1 >= len_word1:
return len_word2 - index2
# if second word index is overflow - delete all from the first word
if index2 >= len_word2:
return len_word1 - index1
diff = int(word1[index1] != word2[index2]) # current letters not identical
return min(
1 + min_distance(index1 + 1, index2),
1 + min_distance(index1, index2 + 1),
diff + min_distance(index1 + 1, index2 + 1),
)
return min_distance(0, 0)
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Author : Alexander Pantyukhin
Date : October 14, 2022
This is implementation Dynamic Programming up bottom approach
to find edit distance.
The aim is to demonstate up bottom approach for solving the task.
The implementation was tested on the
leetcode: https://leetcode.com/problems/edit-distance/
Levinstein distance
Dynamic Programming: up -> down.
"""
import functools
def min_distance_up_bottom(word1: str, word2: str) -> int:
"""
>>> min_distance_up_bottom("intention", "execution")
5
>>> min_distance_up_bottom("intention", "")
9
>>> min_distance_up_bottom("", "")
0
>>> min_distance_up_bottom("zooicoarchaeologist", "zoologist")
10
"""
len_word1 = len(word1)
len_word2 = len(word2)
@functools.cache
def min_distance(index1: int, index2: int) -> int:
# if first word index is overflow - delete all from the second word
if index1 >= len_word1:
return len_word2 - index2
# if second word index is overflow - delete all from the first word
if index2 >= len_word2:
return len_word1 - index1
diff = int(word1[index1] != word2[index2]) # current letters not identical
return min(
1 + min_distance(index1 + 1, index2),
1 + min_distance(index1, index2 + 1),
diff + min_distance(index1 + 1, index2 + 1),
)
return min_distance(0, 0)
if __name__ == "__main__":
import doctest
doctest.testmod()
| 1 |
TheAlgorithms/Python | 8,178 | Replace bandit, flake8, isort, and pyupgrade with ruff | ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-14T00:17:59Z" | "2023-03-15T12:58:26Z" | adc3ccdabede375df5cff62c3c8f06d8a191a803 | c96241b5a5052af466894ef90c7a7c749ba872eb | Replace bandit, flake8, isort, and pyupgrade with ruff. ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Author : Alexander Pantyukhin
Date : November 1, 2022
Task:
Given a list of days when you need to travel. Each day is integer from 1 to 365.
You are able to use tickets for 1 day, 7 days and 30 days.
Each ticket has a cost.
Find the minimum cost you need to travel every day in the given list of days.
Implementation notes:
implementation Dynamic Programming up bottom approach.
Runtime complexity: O(n)
The implementation was tested on the
leetcode: https://leetcode.com/problems/minimum-cost-for-tickets/
Minimum Cost For Tickets
Dynamic Programming: up -> down.
"""
from functools import lru_cache
def mincost_tickets(days: list[int], costs: list[int]) -> int:
"""
>>> mincost_tickets([1, 4, 6, 7, 8, 20], [2, 7, 15])
11
>>> mincost_tickets([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 30, 31], [2, 7, 15])
17
>>> mincost_tickets([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 30, 31], [2, 90, 150])
24
>>> mincost_tickets([2], [2, 90, 150])
2
>>> mincost_tickets([], [2, 90, 150])
0
>>> mincost_tickets('hello', [2, 90, 150])
Traceback (most recent call last):
...
ValueError: The parameter days should be a list of integers
>>> mincost_tickets([], 'world')
Traceback (most recent call last):
...
ValueError: The parameter costs should be a list of three integers
>>> mincost_tickets([0.25, 2, 3, 4, 5, 6, 7, 8, 9, 10, 30, 31], [2, 90, 150])
Traceback (most recent call last):
...
ValueError: The parameter days should be a list of integers
>>> mincost_tickets([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 30, 31], [2, 0.9, 150])
Traceback (most recent call last):
...
ValueError: The parameter costs should be a list of three integers
>>> mincost_tickets([-1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 30, 31], [2, 90, 150])
Traceback (most recent call last):
...
ValueError: All days elements should be greater than 0
>>> mincost_tickets([2, 367], [2, 90, 150])
Traceback (most recent call last):
...
ValueError: All days elements should be less than 366
>>> mincost_tickets([2, 3, 4, 5, 6, 7, 8, 9, 10, 30, 31], [])
Traceback (most recent call last):
...
ValueError: The parameter costs should be a list of three integers
>>> mincost_tickets([], [])
Traceback (most recent call last):
...
ValueError: The parameter costs should be a list of three integers
>>> mincost_tickets([2, 3, 4, 5, 6, 7, 8, 9, 10, 30, 31], [1, 2, 3, 4])
Traceback (most recent call last):
...
ValueError: The parameter costs should be a list of three integers
"""
# Validation
if not isinstance(days, list) or not all(isinstance(day, int) for day in days):
raise ValueError("The parameter days should be a list of integers")
if len(costs) != 3 or not all(isinstance(cost, int) for cost in costs):
raise ValueError("The parameter costs should be a list of three integers")
if len(days) == 0:
return 0
if min(days) <= 0:
raise ValueError("All days elements should be greater than 0")
if max(days) >= 366:
raise ValueError("All days elements should be less than 366")
days_set = set(days)
@lru_cache(maxsize=None)
def dynamic_programming(index: int) -> int:
if index > 365:
return 0
if index not in days_set:
return dynamic_programming(index + 1)
return min(
costs[0] + dynamic_programming(index + 1),
costs[1] + dynamic_programming(index + 7),
costs[2] + dynamic_programming(index + 30),
)
return dynamic_programming(1)
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Author : Alexander Pantyukhin
Date : November 1, 2022
Task:
Given a list of days when you need to travel. Each day is integer from 1 to 365.
You are able to use tickets for 1 day, 7 days and 30 days.
Each ticket has a cost.
Find the minimum cost you need to travel every day in the given list of days.
Implementation notes:
implementation Dynamic Programming up bottom approach.
Runtime complexity: O(n)
The implementation was tested on the
leetcode: https://leetcode.com/problems/minimum-cost-for-tickets/
Minimum Cost For Tickets
Dynamic Programming: up -> down.
"""
import functools
def mincost_tickets(days: list[int], costs: list[int]) -> int:
"""
>>> mincost_tickets([1, 4, 6, 7, 8, 20], [2, 7, 15])
11
>>> mincost_tickets([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 30, 31], [2, 7, 15])
17
>>> mincost_tickets([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 30, 31], [2, 90, 150])
24
>>> mincost_tickets([2], [2, 90, 150])
2
>>> mincost_tickets([], [2, 90, 150])
0
>>> mincost_tickets('hello', [2, 90, 150])
Traceback (most recent call last):
...
ValueError: The parameter days should be a list of integers
>>> mincost_tickets([], 'world')
Traceback (most recent call last):
...
ValueError: The parameter costs should be a list of three integers
>>> mincost_tickets([0.25, 2, 3, 4, 5, 6, 7, 8, 9, 10, 30, 31], [2, 90, 150])
Traceback (most recent call last):
...
ValueError: The parameter days should be a list of integers
>>> mincost_tickets([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 30, 31], [2, 0.9, 150])
Traceback (most recent call last):
...
ValueError: The parameter costs should be a list of three integers
>>> mincost_tickets([-1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 30, 31], [2, 90, 150])
Traceback (most recent call last):
...
ValueError: All days elements should be greater than 0
>>> mincost_tickets([2, 367], [2, 90, 150])
Traceback (most recent call last):
...
ValueError: All days elements should be less than 366
>>> mincost_tickets([2, 3, 4, 5, 6, 7, 8, 9, 10, 30, 31], [])
Traceback (most recent call last):
...
ValueError: The parameter costs should be a list of three integers
>>> mincost_tickets([], [])
Traceback (most recent call last):
...
ValueError: The parameter costs should be a list of three integers
>>> mincost_tickets([2, 3, 4, 5, 6, 7, 8, 9, 10, 30, 31], [1, 2, 3, 4])
Traceback (most recent call last):
...
ValueError: The parameter costs should be a list of three integers
"""
# Validation
if not isinstance(days, list) or not all(isinstance(day, int) for day in days):
raise ValueError("The parameter days should be a list of integers")
if len(costs) != 3 or not all(isinstance(cost, int) for cost in costs):
raise ValueError("The parameter costs should be a list of three integers")
if len(days) == 0:
return 0
if min(days) <= 0:
raise ValueError("All days elements should be greater than 0")
if max(days) >= 366:
raise ValueError("All days elements should be less than 366")
days_set = set(days)
@functools.cache
def dynamic_programming(index: int) -> int:
if index > 365:
return 0
if index not in days_set:
return dynamic_programming(index + 1)
return min(
costs[0] + dynamic_programming(index + 1),
costs[1] + dynamic_programming(index + 7),
costs[2] + dynamic_programming(index + 30),
)
return dynamic_programming(1)
if __name__ == "__main__":
import doctest
doctest.testmod()
| 1 |
TheAlgorithms/Python | 8,178 | Replace bandit, flake8, isort, and pyupgrade with ruff | ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-14T00:17:59Z" | "2023-03-15T12:58:26Z" | adc3ccdabede375df5cff62c3c8f06d8a191a803 | c96241b5a5052af466894ef90c7a7c749ba872eb | Replace bandit, flake8, isort, and pyupgrade with ruff. ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Author : Alexander Pantyukhin
Date : December 12, 2022
Task:
Given a string and a list of words, return true if the string can be
segmented into a space-separated sequence of one or more words.
Note that the same word may be reused
multiple times in the segmentation.
Implementation notes: Trie + Dynamic programming up -> down.
The Trie will be used to store the words. It will be useful for scanning
available words for the current position in the string.
Leetcode:
https://leetcode.com/problems/word-break/description/
Runtime: O(n * n)
Space: O(n)
"""
from functools import lru_cache
from typing import Any
def word_break(string: str, words: list[str]) -> bool:
"""
Return True if numbers have opposite signs False otherwise.
>>> word_break("applepenapple", ["apple","pen"])
True
>>> word_break("catsandog", ["cats","dog","sand","and","cat"])
False
>>> word_break("cars", ["car","ca","rs"])
True
>>> word_break('abc', [])
False
>>> word_break(123, ['a'])
Traceback (most recent call last):
...
ValueError: the string should be not empty string
>>> word_break('', ['a'])
Traceback (most recent call last):
...
ValueError: the string should be not empty string
>>> word_break('abc', [123])
Traceback (most recent call last):
...
ValueError: the words should be a list of non-empty strings
>>> word_break('abc', [''])
Traceback (most recent call last):
...
ValueError: the words should be a list of non-empty strings
"""
# Validation
if not isinstance(string, str) or len(string) == 0:
raise ValueError("the string should be not empty string")
if not isinstance(words, list) or not all(
isinstance(item, str) and len(item) > 0 for item in words
):
raise ValueError("the words should be a list of non-empty strings")
# Build trie
trie: dict[str, Any] = {}
word_keeper_key = "WORD_KEEPER"
for word in words:
trie_node = trie
for c in word:
if c not in trie_node:
trie_node[c] = {}
trie_node = trie_node[c]
trie_node[word_keeper_key] = True
len_string = len(string)
# Dynamic programming method
@lru_cache(maxsize=None)
def is_breakable(index: int) -> bool:
"""
>>> string = 'a'
>>> is_breakable(1)
True
"""
if index == len_string:
return True
trie_node = trie
for i in range(index, len_string):
trie_node = trie_node.get(string[i], None)
if trie_node is None:
return False
if trie_node.get(word_keeper_key, False) and is_breakable(i + 1):
return True
return False
return is_breakable(0)
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Author : Alexander Pantyukhin
Date : December 12, 2022
Task:
Given a string and a list of words, return true if the string can be
segmented into a space-separated sequence of one or more words.
Note that the same word may be reused
multiple times in the segmentation.
Implementation notes: Trie + Dynamic programming up -> down.
The Trie will be used to store the words. It will be useful for scanning
available words for the current position in the string.
Leetcode:
https://leetcode.com/problems/word-break/description/
Runtime: O(n * n)
Space: O(n)
"""
import functools
from typing import Any
def word_break(string: str, words: list[str]) -> bool:
"""
Return True if numbers have opposite signs False otherwise.
>>> word_break("applepenapple", ["apple","pen"])
True
>>> word_break("catsandog", ["cats","dog","sand","and","cat"])
False
>>> word_break("cars", ["car","ca","rs"])
True
>>> word_break('abc', [])
False
>>> word_break(123, ['a'])
Traceback (most recent call last):
...
ValueError: the string should be not empty string
>>> word_break('', ['a'])
Traceback (most recent call last):
...
ValueError: the string should be not empty string
>>> word_break('abc', [123])
Traceback (most recent call last):
...
ValueError: the words should be a list of non-empty strings
>>> word_break('abc', [''])
Traceback (most recent call last):
...
ValueError: the words should be a list of non-empty strings
"""
# Validation
if not isinstance(string, str) or len(string) == 0:
raise ValueError("the string should be not empty string")
if not isinstance(words, list) or not all(
isinstance(item, str) and len(item) > 0 for item in words
):
raise ValueError("the words should be a list of non-empty strings")
# Build trie
trie: dict[str, Any] = {}
word_keeper_key = "WORD_KEEPER"
for word in words:
trie_node = trie
for c in word:
if c not in trie_node:
trie_node[c] = {}
trie_node = trie_node[c]
trie_node[word_keeper_key] = True
len_string = len(string)
# Dynamic programming method
@functools.cache
def is_breakable(index: int) -> bool:
"""
>>> string = 'a'
>>> is_breakable(1)
True
"""
if index == len_string:
return True
trie_node = trie
for i in range(index, len_string):
trie_node = trie_node.get(string[i], None)
if trie_node is None:
return False
if trie_node.get(word_keeper_key, False) and is_breakable(i + 1):
return True
return False
return is_breakable(0)
if __name__ == "__main__":
import doctest
doctest.testmod()
| 1 |
TheAlgorithms/Python | 8,178 | Replace bandit, flake8, isort, and pyupgrade with ruff | ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-14T00:17:59Z" | "2023-03-15T12:58:26Z" | adc3ccdabede375df5cff62c3c8f06d8a191a803 | c96241b5a5052af466894ef90c7a7c749ba872eb | Replace bandit, flake8, isort, and pyupgrade with ruff. ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Demonstrates implementation of SHA1 Hash function in a Python class and gives utilities
to find hash of string or hash of text from a file.
Usage: python sha1.py --string "Hello World!!"
python sha1.py --file "hello_world.txt"
When run without any arguments, it prints the hash of the string "Hello World!!
Welcome to Cryptography"
Also contains a Test class to verify that the generated Hash is same as that
returned by the hashlib library
SHA1 hash or SHA1 sum of a string is a cryptographic function which means it is easy
to calculate forwards but extremely difficult to calculate backwards. What this means
is, you can easily calculate the hash of a string, but it is extremely difficult to
know the original string if you have its hash. This property is useful to communicate
securely, send encrypted messages and is very useful in payment systems, blockchain
and cryptocurrency etc.
The Algorithm as described in the reference:
First we start with a message. The message is padded and the length of the message
is added to the end. It is then split into blocks of 512 bits or 64 bytes. The blocks
are then processed one at a time. Each block must be expanded and compressed.
The value after each compression is added to a 160bit buffer called the current hash
state. After the last block is processed the current hash state is returned as
the final hash.
Reference: https://deadhacker.com/2006/02/21/sha-1-illustrated/
"""
import argparse
import hashlib # hashlib is only used inside the Test class
import struct
import unittest
class SHA1Hash:
"""
Class to contain the entire pipeline for SHA1 Hashing Algorithm
>>> SHA1Hash(bytes('Allan', 'utf-8')).final_hash()
'872af2d8ac3d8695387e7c804bf0e02c18df9e6e'
"""
def __init__(self, data):
"""
Inititates the variables data and h. h is a list of 5 8-digit Hexadecimal
numbers corresponding to
(1732584193, 4023233417, 2562383102, 271733878, 3285377520)
respectively. We will start with this as a message digest. 0x is how you write
Hexadecimal numbers in Python
"""
self.data = data
self.h = [0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0]
@staticmethod
def rotate(n, b):
"""
Static method to be used inside other methods. Left rotates n by b.
>>> SHA1Hash('').rotate(12,2)
48
"""
return ((n << b) | (n >> (32 - b))) & 0xFFFFFFFF
def padding(self):
"""
Pads the input message with zeros so that padded_data has 64 bytes or 512 bits
"""
padding = b"\x80" + b"\x00" * (63 - (len(self.data) + 8) % 64)
padded_data = self.data + padding + struct.pack(">Q", 8 * len(self.data))
return padded_data
def split_blocks(self):
"""
Returns a list of bytestrings each of length 64
"""
return [
self.padded_data[i : i + 64] for i in range(0, len(self.padded_data), 64)
]
# @staticmethod
def expand_block(self, block):
"""
Takes a bytestring-block of length 64, unpacks it to a list of integers and
returns a list of 80 integers after some bit operations
"""
w = list(struct.unpack(">16L", block)) + [0] * 64
for i in range(16, 80):
w[i] = self.rotate((w[i - 3] ^ w[i - 8] ^ w[i - 14] ^ w[i - 16]), 1)
return w
def final_hash(self):
"""
Calls all the other methods to process the input. Pads the data, then splits
into blocks and then does a series of operations for each block (including
expansion).
For each block, the variable h that was initialized is copied to a,b,c,d,e
and these 5 variables a,b,c,d,e undergo several changes. After all the blocks
are processed, these 5 variables are pairwise added to h ie a to h[0], b to h[1]
and so on. This h becomes our final hash which is returned.
"""
self.padded_data = self.padding()
self.blocks = self.split_blocks()
for block in self.blocks:
expanded_block = self.expand_block(block)
a, b, c, d, e = self.h
for i in range(0, 80):
if 0 <= i < 20:
f = (b & c) | ((~b) & d)
k = 0x5A827999
elif 20 <= i < 40:
f = b ^ c ^ d
k = 0x6ED9EBA1
elif 40 <= i < 60:
f = (b & c) | (b & d) | (c & d)
k = 0x8F1BBCDC
elif 60 <= i < 80:
f = b ^ c ^ d
k = 0xCA62C1D6
a, b, c, d, e = (
self.rotate(a, 5) + f + e + k + expanded_block[i] & 0xFFFFFFFF,
a,
self.rotate(b, 30),
c,
d,
)
self.h = (
self.h[0] + a & 0xFFFFFFFF,
self.h[1] + b & 0xFFFFFFFF,
self.h[2] + c & 0xFFFFFFFF,
self.h[3] + d & 0xFFFFFFFF,
self.h[4] + e & 0xFFFFFFFF,
)
return "%08x%08x%08x%08x%08x" % tuple(self.h)
class SHA1HashTest(unittest.TestCase):
"""
Test class for the SHA1Hash class. Inherits the TestCase class from unittest
"""
def testMatchHashes(self): # noqa: N802
msg = bytes("Test String", "utf-8")
self.assertEqual(SHA1Hash(msg).final_hash(), hashlib.sha1(msg).hexdigest())
def main():
"""
Provides option 'string' or 'file' to take input and prints the calculated SHA1
hash. unittest.main() has been commented because we probably don't want to run
the test each time.
"""
# unittest.main()
parser = argparse.ArgumentParser(description="Process some strings or files")
parser.add_argument(
"--string",
dest="input_string",
default="Hello World!! Welcome to Cryptography",
help="Hash the string",
)
parser.add_argument("--file", dest="input_file", help="Hash contents of a file")
args = parser.parse_args()
input_string = args.input_string
# In any case hash input should be a bytestring
if args.input_file:
with open(args.input_file, "rb") as f:
hash_input = f.read()
else:
hash_input = bytes(input_string, "utf-8")
print(SHA1Hash(hash_input).final_hash())
if __name__ == "__main__":
main()
import doctest
doctest.testmod()
| """
Demonstrates implementation of SHA1 Hash function in a Python class and gives utilities
to find hash of string or hash of text from a file.
Usage: python sha1.py --string "Hello World!!"
python sha1.py --file "hello_world.txt"
When run without any arguments, it prints the hash of the string "Hello World!!
Welcome to Cryptography"
Also contains a Test class to verify that the generated Hash is same as that
returned by the hashlib library
SHA1 hash or SHA1 sum of a string is a cryptographic function which means it is easy
to calculate forwards but extremely difficult to calculate backwards. What this means
is, you can easily calculate the hash of a string, but it is extremely difficult to
know the original string if you have its hash. This property is useful to communicate
securely, send encrypted messages and is very useful in payment systems, blockchain
and cryptocurrency etc.
The Algorithm as described in the reference:
First we start with a message. The message is padded and the length of the message
is added to the end. It is then split into blocks of 512 bits or 64 bytes. The blocks
are then processed one at a time. Each block must be expanded and compressed.
The value after each compression is added to a 160bit buffer called the current hash
state. After the last block is processed the current hash state is returned as
the final hash.
Reference: https://deadhacker.com/2006/02/21/sha-1-illustrated/
"""
import argparse
import hashlib # hashlib is only used inside the Test class
import struct
class SHA1Hash:
"""
Class to contain the entire pipeline for SHA1 Hashing Algorithm
>>> SHA1Hash(bytes('Allan', 'utf-8')).final_hash()
'872af2d8ac3d8695387e7c804bf0e02c18df9e6e'
"""
def __init__(self, data):
"""
Inititates the variables data and h. h is a list of 5 8-digit Hexadecimal
numbers corresponding to
(1732584193, 4023233417, 2562383102, 271733878, 3285377520)
respectively. We will start with this as a message digest. 0x is how you write
Hexadecimal numbers in Python
"""
self.data = data
self.h = [0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0]
@staticmethod
def rotate(n, b):
"""
Static method to be used inside other methods. Left rotates n by b.
>>> SHA1Hash('').rotate(12,2)
48
"""
return ((n << b) | (n >> (32 - b))) & 0xFFFFFFFF
def padding(self):
"""
Pads the input message with zeros so that padded_data has 64 bytes or 512 bits
"""
padding = b"\x80" + b"\x00" * (63 - (len(self.data) + 8) % 64)
padded_data = self.data + padding + struct.pack(">Q", 8 * len(self.data))
return padded_data
def split_blocks(self):
"""
Returns a list of bytestrings each of length 64
"""
return [
self.padded_data[i : i + 64] for i in range(0, len(self.padded_data), 64)
]
# @staticmethod
def expand_block(self, block):
"""
Takes a bytestring-block of length 64, unpacks it to a list of integers and
returns a list of 80 integers after some bit operations
"""
w = list(struct.unpack(">16L", block)) + [0] * 64
for i in range(16, 80):
w[i] = self.rotate((w[i - 3] ^ w[i - 8] ^ w[i - 14] ^ w[i - 16]), 1)
return w
def final_hash(self):
"""
Calls all the other methods to process the input. Pads the data, then splits
into blocks and then does a series of operations for each block (including
expansion).
For each block, the variable h that was initialized is copied to a,b,c,d,e
and these 5 variables a,b,c,d,e undergo several changes. After all the blocks
are processed, these 5 variables are pairwise added to h ie a to h[0], b to h[1]
and so on. This h becomes our final hash which is returned.
"""
self.padded_data = self.padding()
self.blocks = self.split_blocks()
for block in self.blocks:
expanded_block = self.expand_block(block)
a, b, c, d, e = self.h
for i in range(0, 80):
if 0 <= i < 20:
f = (b & c) | ((~b) & d)
k = 0x5A827999
elif 20 <= i < 40:
f = b ^ c ^ d
k = 0x6ED9EBA1
elif 40 <= i < 60:
f = (b & c) | (b & d) | (c & d)
k = 0x8F1BBCDC
elif 60 <= i < 80:
f = b ^ c ^ d
k = 0xCA62C1D6
a, b, c, d, e = (
self.rotate(a, 5) + f + e + k + expanded_block[i] & 0xFFFFFFFF,
a,
self.rotate(b, 30),
c,
d,
)
self.h = (
self.h[0] + a & 0xFFFFFFFF,
self.h[1] + b & 0xFFFFFFFF,
self.h[2] + c & 0xFFFFFFFF,
self.h[3] + d & 0xFFFFFFFF,
self.h[4] + e & 0xFFFFFFFF,
)
return "%08x%08x%08x%08x%08x" % tuple(self.h)
def test_sha1_hash():
msg = b"Test String"
assert SHA1Hash(msg).final_hash() == hashlib.sha1(msg).hexdigest() # noqa: S324
def main():
"""
Provides option 'string' or 'file' to take input and prints the calculated SHA1
hash. unittest.main() has been commented because we probably don't want to run
the test each time.
"""
# unittest.main()
parser = argparse.ArgumentParser(description="Process some strings or files")
parser.add_argument(
"--string",
dest="input_string",
default="Hello World!! Welcome to Cryptography",
help="Hash the string",
)
parser.add_argument("--file", dest="input_file", help="Hash contents of a file")
args = parser.parse_args()
input_string = args.input_string
# In any case hash input should be a bytestring
if args.input_file:
with open(args.input_file, "rb") as f:
hash_input = f.read()
else:
hash_input = bytes(input_string, "utf-8")
print(SHA1Hash(hash_input).final_hash())
if __name__ == "__main__":
main()
import doctest
doctest.testmod()
| 1 |
TheAlgorithms/Python | 8,178 | Replace bandit, flake8, isort, and pyupgrade with ruff | ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-14T00:17:59Z" | "2023-03-15T12:58:26Z" | adc3ccdabede375df5cff62c3c8f06d8a191a803 | c96241b5a5052af466894ef90c7a7c749ba872eb | Replace bandit, flake8, isort, and pyupgrade with ruff. ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| import numpy as np
from numpy import ndarray
from scipy.optimize import Bounds, LinearConstraint, minimize
def norm_squared(vector: ndarray) -> float:
"""
Return the squared second norm of vector
norm_squared(v) = sum(x * x for x in v)
Args:
vector (ndarray): input vector
Returns:
float: squared second norm of vector
>>> norm_squared([1, 2])
5
>>> norm_squared(np.asarray([1, 2]))
5
>>> norm_squared([0, 0])
0
"""
return np.dot(vector, vector)
class SVC:
"""
Support Vector Classifier
Args:
kernel (str): kernel to use. Default: linear
Possible choices:
- linear
regularization: constraint for soft margin (data not linearly separable)
Default: unbound
>>> SVC(kernel="asdf")
Traceback (most recent call last):
...
ValueError: Unknown kernel: asdf
>>> SVC(kernel="rbf")
Traceback (most recent call last):
...
ValueError: rbf kernel requires gamma
>>> SVC(kernel="rbf", gamma=-1)
Traceback (most recent call last):
...
ValueError: gamma must be > 0
"""
def __init__(
self,
*,
regularization: float = np.inf,
kernel: str = "linear",
gamma: float = 0,
) -> None:
self.regularization = regularization
self.gamma = gamma
if kernel == "linear":
self.kernel = self.__linear
elif kernel == "rbf":
if self.gamma == 0:
raise ValueError("rbf kernel requires gamma")
if not (isinstance(self.gamma, float) or isinstance(self.gamma, int)):
raise ValueError("gamma must be float or int")
if not self.gamma > 0:
raise ValueError("gamma must be > 0")
self.kernel = self.__rbf
# in the future, there could be a default value like in sklearn
# sklear: def_gamma = 1/(n_features * X.var()) (wiki)
# previously it was 1/(n_features)
else:
raise ValueError(f"Unknown kernel: {kernel}")
# kernels
def __linear(self, vector1: ndarray, vector2: ndarray) -> float:
"""Linear kernel (as if no kernel used at all)"""
return np.dot(vector1, vector2)
def __rbf(self, vector1: ndarray, vector2: ndarray) -> float:
"""
RBF: Radial Basis Function Kernel
Note: for more information see:
https://en.wikipedia.org/wiki/Radial_basis_function_kernel
Args:
vector1 (ndarray): first vector
vector2 (ndarray): second vector)
Returns:
float: exp(-(gamma * norm_squared(vector1 - vector2)))
"""
return np.exp(-(self.gamma * norm_squared(vector1 - vector2)))
def fit(self, observations: list[ndarray], classes: ndarray) -> None:
"""
Fits the SVC with a set of observations.
Args:
observations (list[ndarray]): list of observations
classes (ndarray): classification of each observation (in {1, -1})
"""
self.observations = observations
self.classes = classes
# using Wolfe's Dual to calculate w.
# Primal problem: minimize 1/2*norm_squared(w)
# constraint: yn(w . xn + b) >= 1
#
# With l a vector
# Dual problem: maximize sum_n(ln) -
# 1/2 * sum_n(sum_m(ln*lm*yn*ym*xn . xm))
# constraint: self.C >= ln >= 0
# and sum_n(ln*yn) = 0
# Then we get w using w = sum_n(ln*yn*xn)
# At the end we can get b ~= mean(yn - w . xn)
#
# Since we use kernels, we only need l_star to calculate b
# and to classify observations
(n,) = np.shape(classes)
def to_minimize(candidate: ndarray) -> float:
"""
Opposite of the function to maximize
Args:
candidate (ndarray): candidate array to test
Return:
float: Wolfe's Dual result to minimize
"""
s = 0
(n,) = np.shape(candidate)
for i in range(n):
for j in range(n):
s += (
candidate[i]
* candidate[j]
* classes[i]
* classes[j]
* self.kernel(observations[i], observations[j])
)
return 1 / 2 * s - sum(candidate)
ly_contraint = LinearConstraint(classes, 0, 0)
l_bounds = Bounds(0, self.regularization)
l_star = minimize(
to_minimize, np.ones(n), bounds=l_bounds, constraints=[ly_contraint]
).x
self.optimum = l_star
# calculating mean offset of separation plane to points
s = 0
for i in range(n):
for j in range(n):
s += classes[i] - classes[i] * self.optimum[i] * self.kernel(
observations[i], observations[j]
)
self.offset = s / n
def predict(self, observation: ndarray) -> int:
"""
Get the expected class of an observation
Args:
observation (Vector): observation
Returns:
int {1, -1}: expected class
>>> xs = [
... np.asarray([0, 1]), np.asarray([0, 2]),
... np.asarray([1, 1]), np.asarray([1, 2])
... ]
>>> y = np.asarray([1, 1, -1, -1])
>>> s = SVC()
>>> s.fit(xs, y)
>>> s.predict(np.asarray([0, 1]))
1
>>> s.predict(np.asarray([1, 1]))
-1
>>> s.predict(np.asarray([2, 2]))
-1
"""
s = sum(
self.optimum[n]
* self.classes[n]
* self.kernel(self.observations[n], observation)
for n in range(len(self.classes))
)
return 1 if s + self.offset >= 0 else -1
if __name__ == "__main__":
import doctest
doctest.testmod()
| import numpy as np
from numpy import ndarray
from scipy.optimize import Bounds, LinearConstraint, minimize
def norm_squared(vector: ndarray) -> float:
"""
Return the squared second norm of vector
norm_squared(v) = sum(x * x for x in v)
Args:
vector (ndarray): input vector
Returns:
float: squared second norm of vector
>>> norm_squared([1, 2])
5
>>> norm_squared(np.asarray([1, 2]))
5
>>> norm_squared([0, 0])
0
"""
return np.dot(vector, vector)
class SVC:
"""
Support Vector Classifier
Args:
kernel (str): kernel to use. Default: linear
Possible choices:
- linear
regularization: constraint for soft margin (data not linearly separable)
Default: unbound
>>> SVC(kernel="asdf")
Traceback (most recent call last):
...
ValueError: Unknown kernel: asdf
>>> SVC(kernel="rbf")
Traceback (most recent call last):
...
ValueError: rbf kernel requires gamma
>>> SVC(kernel="rbf", gamma=-1)
Traceback (most recent call last):
...
ValueError: gamma must be > 0
"""
def __init__(
self,
*,
regularization: float = np.inf,
kernel: str = "linear",
gamma: float = 0.0,
) -> None:
self.regularization = regularization
self.gamma = gamma
if kernel == "linear":
self.kernel = self.__linear
elif kernel == "rbf":
if self.gamma == 0:
raise ValueError("rbf kernel requires gamma")
if not isinstance(self.gamma, (float, int)):
raise ValueError("gamma must be float or int")
if not self.gamma > 0:
raise ValueError("gamma must be > 0")
self.kernel = self.__rbf
# in the future, there could be a default value like in sklearn
# sklear: def_gamma = 1/(n_features * X.var()) (wiki)
# previously it was 1/(n_features)
else:
raise ValueError(f"Unknown kernel: {kernel}")
# kernels
def __linear(self, vector1: ndarray, vector2: ndarray) -> float:
"""Linear kernel (as if no kernel used at all)"""
return np.dot(vector1, vector2)
def __rbf(self, vector1: ndarray, vector2: ndarray) -> float:
"""
RBF: Radial Basis Function Kernel
Note: for more information see:
https://en.wikipedia.org/wiki/Radial_basis_function_kernel
Args:
vector1 (ndarray): first vector
vector2 (ndarray): second vector)
Returns:
float: exp(-(gamma * norm_squared(vector1 - vector2)))
"""
return np.exp(-(self.gamma * norm_squared(vector1 - vector2)))
def fit(self, observations: list[ndarray], classes: ndarray) -> None:
"""
Fits the SVC with a set of observations.
Args:
observations (list[ndarray]): list of observations
classes (ndarray): classification of each observation (in {1, -1})
"""
self.observations = observations
self.classes = classes
# using Wolfe's Dual to calculate w.
# Primal problem: minimize 1/2*norm_squared(w)
# constraint: yn(w . xn + b) >= 1
#
# With l a vector
# Dual problem: maximize sum_n(ln) -
# 1/2 * sum_n(sum_m(ln*lm*yn*ym*xn . xm))
# constraint: self.C >= ln >= 0
# and sum_n(ln*yn) = 0
# Then we get w using w = sum_n(ln*yn*xn)
# At the end we can get b ~= mean(yn - w . xn)
#
# Since we use kernels, we only need l_star to calculate b
# and to classify observations
(n,) = np.shape(classes)
def to_minimize(candidate: ndarray) -> float:
"""
Opposite of the function to maximize
Args:
candidate (ndarray): candidate array to test
Return:
float: Wolfe's Dual result to minimize
"""
s = 0
(n,) = np.shape(candidate)
for i in range(n):
for j in range(n):
s += (
candidate[i]
* candidate[j]
* classes[i]
* classes[j]
* self.kernel(observations[i], observations[j])
)
return 1 / 2 * s - sum(candidate)
ly_contraint = LinearConstraint(classes, 0, 0)
l_bounds = Bounds(0, self.regularization)
l_star = minimize(
to_minimize, np.ones(n), bounds=l_bounds, constraints=[ly_contraint]
).x
self.optimum = l_star
# calculating mean offset of separation plane to points
s = 0
for i in range(n):
for j in range(n):
s += classes[i] - classes[i] * self.optimum[i] * self.kernel(
observations[i], observations[j]
)
self.offset = s / n
def predict(self, observation: ndarray) -> int:
"""
Get the expected class of an observation
Args:
observation (Vector): observation
Returns:
int {1, -1}: expected class
>>> xs = [
... np.asarray([0, 1]), np.asarray([0, 2]),
... np.asarray([1, 1]), np.asarray([1, 2])
... ]
>>> y = np.asarray([1, 1, -1, -1])
>>> s = SVC()
>>> s.fit(xs, y)
>>> s.predict(np.asarray([0, 1]))
1
>>> s.predict(np.asarray([1, 1]))
-1
>>> s.predict(np.asarray([2, 2]))
-1
"""
s = sum(
self.optimum[n]
* self.classes[n]
* self.kernel(self.observations[n], observation)
for n in range(len(self.classes))
)
return 1 if s + self.offset >= 0 else -1
if __name__ == "__main__":
import doctest
doctest.testmod()
| 1 |
TheAlgorithms/Python | 8,178 | Replace bandit, flake8, isort, and pyupgrade with ruff | ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-14T00:17:59Z" | "2023-03-15T12:58:26Z" | adc3ccdabede375df5cff62c3c8f06d8a191a803 | c96241b5a5052af466894ef90c7a7c749ba872eb | Replace bandit, flake8, isort, and pyupgrade with ruff. ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Eulers Totient function finds the number of relative primes of a number n from 1 to n
def totient(n: int) -> list:
is_prime = [True for i in range(n + 1)]
totients = [i - 1 for i in range(n + 1)]
primes = []
for i in range(2, n + 1):
if is_prime[i]:
primes.append(i)
for j in range(0, len(primes)):
if i * primes[j] >= n:
break
is_prime[i * primes[j]] = False
if i % primes[j] == 0:
totients[i * primes[j]] = totients[i] * primes[j]
break
totients[i * primes[j]] = totients[i] * (primes[j] - 1)
return totients
def test_totient() -> None:
"""
>>> n = 10
>>> totient_calculation = totient(n)
>>> for i in range(1, n):
... print(f"{i} has {totient_calculation[i]} relative primes.")
1 has 0 relative primes.
2 has 1 relative primes.
3 has 2 relative primes.
4 has 2 relative primes.
5 has 4 relative primes.
6 has 2 relative primes.
7 has 6 relative primes.
8 has 4 relative primes.
9 has 6 relative primes.
"""
pass
if __name__ == "__main__":
import doctest
doctest.testmod()
| # Eulers Totient function finds the number of relative primes of a number n from 1 to n
def totient(n: int) -> list:
"""
>>> n = 10
>>> totient_calculation = totient(n)
>>> for i in range(1, n):
... print(f"{i} has {totient_calculation[i]} relative primes.")
1 has 0 relative primes.
2 has 1 relative primes.
3 has 2 relative primes.
4 has 2 relative primes.
5 has 4 relative primes.
6 has 2 relative primes.
7 has 6 relative primes.
8 has 4 relative primes.
9 has 6 relative primes.
"""
is_prime = [True for i in range(n + 1)]
totients = [i - 1 for i in range(n + 1)]
primes = []
for i in range(2, n + 1):
if is_prime[i]:
primes.append(i)
for j in range(0, len(primes)):
if i * primes[j] >= n:
break
is_prime[i * primes[j]] = False
if i % primes[j] == 0:
totients[i * primes[j]] = totients[i] * primes[j]
break
totients[i * primes[j]] = totients[i] * (primes[j] - 1)
return totients
if __name__ == "__main__":
import doctest
doctest.testmod()
| 1 |
TheAlgorithms/Python | 8,178 | Replace bandit, flake8, isort, and pyupgrade with ruff | ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-14T00:17:59Z" | "2023-03-15T12:58:26Z" | adc3ccdabede375df5cff62c3c8f06d8a191a803 | c96241b5a5052af466894ef90c7a7c749ba872eb | Replace bandit, flake8, isort, and pyupgrade with ruff. ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # fibonacci.py
"""
Calculates the Fibonacci sequence using iteration, recursion, memoization,
and a simplified form of Binet's formula
NOTE 1: the iterative, recursive, memoization functions are more accurate than
the Binet's formula function because the Binet formula function uses floats
NOTE 2: the Binet's formula function is much more limited in the size of inputs
that it can handle due to the size limitations of Python floats
RESULTS: (n = 20)
fib_iterative runtime: 0.0055 ms
fib_recursive runtime: 6.5627 ms
fib_memoization runtime: 0.0107 ms
fib_binet runtime: 0.0174 ms
"""
from functools import lru_cache
from math import sqrt
from time import time
def time_func(func, *args, **kwargs):
"""
Times the execution of a function with parameters
"""
start = time()
output = func(*args, **kwargs)
end = time()
if int(end - start) > 0:
print(f"{func.__name__} runtime: {(end - start):0.4f} s")
else:
print(f"{func.__name__} runtime: {(end - start) * 1000:0.4f} ms")
return output
def fib_iterative(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using iteration
>>> fib_iterative(0)
[0]
>>> fib_iterative(1)
[0, 1]
>>> fib_iterative(5)
[0, 1, 1, 2, 3, 5]
>>> fib_iterative(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_iterative(-1)
Traceback (most recent call last):
...
Exception: n is negative
"""
if n < 0:
raise Exception("n is negative")
if n == 0:
return [0]
fib = [0, 1]
for _ in range(n - 1):
fib.append(fib[-1] + fib[-2])
return fib
def fib_recursive(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using recursion
>>> fib_iterative(0)
[0]
>>> fib_iterative(1)
[0, 1]
>>> fib_iterative(5)
[0, 1, 1, 2, 3, 5]
>>> fib_iterative(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_iterative(-1)
Traceback (most recent call last):
...
Exception: n is negative
"""
def fib_recursive_term(i: int) -> int:
"""
Calculates the i-th (0-indexed) Fibonacci number using recursion
"""
if i < 0:
raise Exception("n is negative")
if i < 2:
return i
return fib_recursive_term(i - 1) + fib_recursive_term(i - 2)
if n < 0:
raise Exception("n is negative")
return [fib_recursive_term(i) for i in range(n + 1)]
def fib_recursive_cached(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using recursion
>>> fib_iterative(0)
[0]
>>> fib_iterative(1)
[0, 1]
>>> fib_iterative(5)
[0, 1, 1, 2, 3, 5]
>>> fib_iterative(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_iterative(-1)
Traceback (most recent call last):
...
Exception: n is negative
"""
@lru_cache(maxsize=None)
def fib_recursive_term(i: int) -> int:
"""
Calculates the i-th (0-indexed) Fibonacci number using recursion
"""
if i < 0:
raise Exception("n is negative")
if i < 2:
return i
return fib_recursive_term(i - 1) + fib_recursive_term(i - 2)
if n < 0:
raise Exception("n is negative")
return [fib_recursive_term(i) for i in range(n + 1)]
def fib_memoization(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using memoization
>>> fib_memoization(0)
[0]
>>> fib_memoization(1)
[0, 1]
>>> fib_memoization(5)
[0, 1, 1, 2, 3, 5]
>>> fib_memoization(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_iterative(-1)
Traceback (most recent call last):
...
Exception: n is negative
"""
if n < 0:
raise Exception("n is negative")
# Cache must be outside recursuive function
# other it will reset every time it calls itself.
cache: dict[int, int] = {0: 0, 1: 1, 2: 1} # Prefilled cache
def rec_fn_memoized(num: int) -> int:
if num in cache:
return cache[num]
value = rec_fn_memoized(num - 1) + rec_fn_memoized(num - 2)
cache[num] = value
return value
return [rec_fn_memoized(i) for i in range(n + 1)]
def fib_binet(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using a simplified form
of Binet's formula:
https://en.m.wikipedia.org/wiki/Fibonacci_number#Computation_by_rounding
NOTE 1: this function diverges from fib_iterative at around n = 71, likely
due to compounding floating-point arithmetic errors
NOTE 2: this function doesn't accept n >= 1475 because it overflows
thereafter due to the size limitations of Python floats
>>> fib_binet(0)
[0]
>>> fib_binet(1)
[0, 1]
>>> fib_binet(5)
[0, 1, 1, 2, 3, 5]
>>> fib_binet(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_binet(-1)
Traceback (most recent call last):
...
Exception: n is negative
>>> fib_binet(1475)
Traceback (most recent call last):
...
Exception: n is too large
"""
if n < 0:
raise Exception("n is negative")
if n >= 1475:
raise Exception("n is too large")
sqrt_5 = sqrt(5)
phi = (1 + sqrt_5) / 2
return [round(phi**i / sqrt_5) for i in range(n + 1)]
if __name__ == "__main__":
num = 30
time_func(fib_iterative, num)
time_func(fib_recursive, num) # Around 3s runtime
time_func(fib_recursive_cached, num) # Around 0ms runtime
time_func(fib_memoization, num)
time_func(fib_binet, num)
| # fibonacci.py
"""
Calculates the Fibonacci sequence using iteration, recursion, memoization,
and a simplified form of Binet's formula
NOTE 1: the iterative, recursive, memoization functions are more accurate than
the Binet's formula function because the Binet formula function uses floats
NOTE 2: the Binet's formula function is much more limited in the size of inputs
that it can handle due to the size limitations of Python floats
RESULTS: (n = 20)
fib_iterative runtime: 0.0055 ms
fib_recursive runtime: 6.5627 ms
fib_memoization runtime: 0.0107 ms
fib_binet runtime: 0.0174 ms
"""
import functools
from math import sqrt
from time import time
def time_func(func, *args, **kwargs):
"""
Times the execution of a function with parameters
"""
start = time()
output = func(*args, **kwargs)
end = time()
if int(end - start) > 0:
print(f"{func.__name__} runtime: {(end - start):0.4f} s")
else:
print(f"{func.__name__} runtime: {(end - start) * 1000:0.4f} ms")
return output
def fib_iterative(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using iteration
>>> fib_iterative(0)
[0]
>>> fib_iterative(1)
[0, 1]
>>> fib_iterative(5)
[0, 1, 1, 2, 3, 5]
>>> fib_iterative(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_iterative(-1)
Traceback (most recent call last):
...
Exception: n is negative
"""
if n < 0:
raise Exception("n is negative")
if n == 0:
return [0]
fib = [0, 1]
for _ in range(n - 1):
fib.append(fib[-1] + fib[-2])
return fib
def fib_recursive(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using recursion
>>> fib_iterative(0)
[0]
>>> fib_iterative(1)
[0, 1]
>>> fib_iterative(5)
[0, 1, 1, 2, 3, 5]
>>> fib_iterative(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_iterative(-1)
Traceback (most recent call last):
...
Exception: n is negative
"""
def fib_recursive_term(i: int) -> int:
"""
Calculates the i-th (0-indexed) Fibonacci number using recursion
"""
if i < 0:
raise Exception("n is negative")
if i < 2:
return i
return fib_recursive_term(i - 1) + fib_recursive_term(i - 2)
if n < 0:
raise Exception("n is negative")
return [fib_recursive_term(i) for i in range(n + 1)]
def fib_recursive_cached(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using recursion
>>> fib_iterative(0)
[0]
>>> fib_iterative(1)
[0, 1]
>>> fib_iterative(5)
[0, 1, 1, 2, 3, 5]
>>> fib_iterative(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_iterative(-1)
Traceback (most recent call last):
...
Exception: n is negative
"""
@functools.cache
def fib_recursive_term(i: int) -> int:
"""
Calculates the i-th (0-indexed) Fibonacci number using recursion
"""
if i < 0:
raise Exception("n is negative")
if i < 2:
return i
return fib_recursive_term(i - 1) + fib_recursive_term(i - 2)
if n < 0:
raise Exception("n is negative")
return [fib_recursive_term(i) for i in range(n + 1)]
def fib_memoization(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using memoization
>>> fib_memoization(0)
[0]
>>> fib_memoization(1)
[0, 1]
>>> fib_memoization(5)
[0, 1, 1, 2, 3, 5]
>>> fib_memoization(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_iterative(-1)
Traceback (most recent call last):
...
Exception: n is negative
"""
if n < 0:
raise Exception("n is negative")
# Cache must be outside recursuive function
# other it will reset every time it calls itself.
cache: dict[int, int] = {0: 0, 1: 1, 2: 1} # Prefilled cache
def rec_fn_memoized(num: int) -> int:
if num in cache:
return cache[num]
value = rec_fn_memoized(num - 1) + rec_fn_memoized(num - 2)
cache[num] = value
return value
return [rec_fn_memoized(i) for i in range(n + 1)]
def fib_binet(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using a simplified form
of Binet's formula:
https://en.m.wikipedia.org/wiki/Fibonacci_number#Computation_by_rounding
NOTE 1: this function diverges from fib_iterative at around n = 71, likely
due to compounding floating-point arithmetic errors
NOTE 2: this function doesn't accept n >= 1475 because it overflows
thereafter due to the size limitations of Python floats
>>> fib_binet(0)
[0]
>>> fib_binet(1)
[0, 1]
>>> fib_binet(5)
[0, 1, 1, 2, 3, 5]
>>> fib_binet(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_binet(-1)
Traceback (most recent call last):
...
Exception: n is negative
>>> fib_binet(1475)
Traceback (most recent call last):
...
Exception: n is too large
"""
if n < 0:
raise Exception("n is negative")
if n >= 1475:
raise Exception("n is too large")
sqrt_5 = sqrt(5)
phi = (1 + sqrt_5) / 2
return [round(phi**i / sqrt_5) for i in range(n + 1)]
if __name__ == "__main__":
num = 30
time_func(fib_iterative, num)
time_func(fib_recursive, num) # Around 3s runtime
time_func(fib_recursive_cached, num) # Around 0ms runtime
time_func(fib_memoization, num)
time_func(fib_binet, num)
| 1 |
TheAlgorithms/Python | 8,178 | Replace bandit, flake8, isort, and pyupgrade with ruff | ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-14T00:17:59Z" | "2023-03-15T12:58:26Z" | adc3ccdabede375df5cff62c3c8f06d8a191a803 | c96241b5a5052af466894ef90c7a7c749ba872eb | Replace bandit, flake8, isort, and pyupgrade with ruff. ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """Uses Pythagoras theorem to calculate the distance between two points in space."""
import math
class Point:
def __init__(self, x, y, z):
self.x = x
self.y = y
self.z = z
def __repr__(self) -> str:
return f"Point({self.x}, {self.y}, {self.z})"
def distance(a: Point, b: Point) -> float:
return math.sqrt(abs((b.x - a.x) ** 2 + (b.y - a.y) ** 2 + (b.z - a.z) ** 2))
def test_distance() -> None:
"""
>>> point1 = Point(2, -1, 7)
>>> point2 = Point(1, -3, 5)
>>> print(f"Distance from {point1} to {point2} is {distance(point1, point2)}")
Distance from Point(2, -1, 7) to Point(1, -3, 5) is 3.0
"""
pass
if __name__ == "__main__":
import doctest
doctest.testmod()
| """Uses Pythagoras theorem to calculate the distance between two points in space."""
import math
class Point:
def __init__(self, x, y, z):
self.x = x
self.y = y
self.z = z
def __repr__(self) -> str:
return f"Point({self.x}, {self.y}, {self.z})"
def distance(a: Point, b: Point) -> float:
"""
>>> point1 = Point(2, -1, 7)
>>> point2 = Point(1, -3, 5)
>>> print(f"Distance from {point1} to {point2} is {distance(point1, point2)}")
Distance from Point(2, -1, 7) to Point(1, -3, 5) is 3.0
"""
return math.sqrt(abs((b.x - a.x) ** 2 + (b.y - a.y) ** 2 + (b.z - a.z) ** 2))
if __name__ == "__main__":
import doctest
doctest.testmod()
| 1 |
TheAlgorithms/Python | 8,178 | Replace bandit, flake8, isort, and pyupgrade with ruff | ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-14T00:17:59Z" | "2023-03-15T12:58:26Z" | adc3ccdabede375df5cff62c3c8f06d8a191a803 | c96241b5a5052af466894ef90c7a7c749ba872eb | Replace bandit, flake8, isort, and pyupgrade with ruff. ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| #!/bin/python3
"""
Quine:
A quine is a computer program which takes no input and produces a copy of its
own source code as its only output (disregarding this docstring and the shebang).
More info on: https://en.wikipedia.org/wiki/Quine_(computing)
"""
print((lambda quine: quine % quine)("print((lambda quine: quine %% quine)(%r))"))
| #!/bin/python3
# ruff: noqa
"""
Quine:
A quine is a computer program which takes no input and produces a copy of its
own source code as its only output (disregarding this docstring and the shebang).
More info on: https://en.wikipedia.org/wiki/Quine_(computing)
"""
print((lambda quine: quine % quine)("print((lambda quine: quine %% quine)(%r))"))
| 1 |
TheAlgorithms/Python | 8,178 | Replace bandit, flake8, isort, and pyupgrade with ruff | ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-14T00:17:59Z" | "2023-03-15T12:58:26Z" | adc3ccdabede375df5cff62c3c8f06d8a191a803 | c96241b5a5052af466894ef90c7a7c749ba872eb | Replace bandit, flake8, isort, and pyupgrade with ruff. ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 75: https://projecteuler.net/problem=75
It turns out that 12 cm is the smallest length of wire that can be bent to form an
integer sided right angle triangle in exactly one way, but there are many more examples.
12 cm: (3,4,5)
24 cm: (6,8,10)
30 cm: (5,12,13)
36 cm: (9,12,15)
40 cm: (8,15,17)
48 cm: (12,16,20)
In contrast, some lengths of wire, like 20 cm, cannot be bent to form an integer sided
right angle triangle, and other lengths allow more than one solution to be found; for
example, using 120 cm it is possible to form exactly three different integer sided
right angle triangles.
120 cm: (30,40,50), (20,48,52), (24,45,51)
Given that L is the length of the wire, for how many values of L ≤ 1,500,000 can
exactly one integer sided right angle triangle be formed?
Solution: we generate all pythagorean triples using Euclid's formula and
keep track of the frequencies of the perimeters.
Reference: https://en.wikipedia.org/wiki/Pythagorean_triple#Generating_a_triple
"""
from collections import defaultdict
from math import gcd
from typing import DefaultDict
def solution(limit: int = 1500000) -> int:
"""
Return the number of values of L <= limit such that a wire of length L can be
formmed into an integer sided right angle triangle in exactly one way.
>>> solution(50)
6
>>> solution(1000)
112
>>> solution(50000)
5502
"""
frequencies: DefaultDict = defaultdict(int)
euclid_m = 2
while 2 * euclid_m * (euclid_m + 1) <= limit:
for euclid_n in range((euclid_m % 2) + 1, euclid_m, 2):
if gcd(euclid_m, euclid_n) > 1:
continue
primitive_perimeter = 2 * euclid_m * (euclid_m + euclid_n)
for perimeter in range(primitive_perimeter, limit + 1, primitive_perimeter):
frequencies[perimeter] += 1
euclid_m += 1
return sum(1 for frequency in frequencies.values() if frequency == 1)
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 75: https://projecteuler.net/problem=75
It turns out that 12 cm is the smallest length of wire that can be bent to form an
integer sided right angle triangle in exactly one way, but there are many more examples.
12 cm: (3,4,5)
24 cm: (6,8,10)
30 cm: (5,12,13)
36 cm: (9,12,15)
40 cm: (8,15,17)
48 cm: (12,16,20)
In contrast, some lengths of wire, like 20 cm, cannot be bent to form an integer sided
right angle triangle, and other lengths allow more than one solution to be found; for
example, using 120 cm it is possible to form exactly three different integer sided
right angle triangles.
120 cm: (30,40,50), (20,48,52), (24,45,51)
Given that L is the length of the wire, for how many values of L ≤ 1,500,000 can
exactly one integer sided right angle triangle be formed?
Solution: we generate all pythagorean triples using Euclid's formula and
keep track of the frequencies of the perimeters.
Reference: https://en.wikipedia.org/wiki/Pythagorean_triple#Generating_a_triple
"""
from collections import defaultdict
from math import gcd
def solution(limit: int = 1500000) -> int:
"""
Return the number of values of L <= limit such that a wire of length L can be
formmed into an integer sided right angle triangle in exactly one way.
>>> solution(50)
6
>>> solution(1000)
112
>>> solution(50000)
5502
"""
frequencies: defaultdict = defaultdict(int)
euclid_m = 2
while 2 * euclid_m * (euclid_m + 1) <= limit:
for euclid_n in range((euclid_m % 2) + 1, euclid_m, 2):
if gcd(euclid_m, euclid_n) > 1:
continue
primitive_perimeter = 2 * euclid_m * (euclid_m + euclid_n)
for perimeter in range(primitive_perimeter, limit + 1, primitive_perimeter):
frequencies[perimeter] += 1
euclid_m += 1
return sum(1 for frequency in frequencies.values() if frequency == 1)
if __name__ == "__main__":
print(f"{solution() = }")
| 1 |
TheAlgorithms/Python | 8,178 | Replace bandit, flake8, isort, and pyupgrade with ruff | ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-14T00:17:59Z" | "2023-03-15T12:58:26Z" | adc3ccdabede375df5cff62c3c8f06d8a191a803 | c96241b5a5052af466894ef90c7a7c749ba872eb | Replace bandit, flake8, isort, and pyupgrade with ruff. ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| [tool.pytest.ini_options]
markers = [
"mat_ops: mark a test as utilizing matrix operations.",
]
addopts = [
"--durations=10",
"--doctest-modules",
"--showlocals",
]
[tool.coverage.report]
omit = [".env/*"]
sort = "Cover"
#[report]
#sort = Cover
#omit =
# .env/*
# backtracking/*
| [tool.pytest.ini_options]
markers = [
"mat_ops: mark a test as utilizing matrix operations.",
]
addopts = [
"--durations=10",
"--doctest-modules",
"--showlocals",
]
[tool.coverage.report]
omit = [".env/*"]
sort = "Cover"
[tool.codespell]
ignore-words-list = "3rt,ans,crate,damon,fo,followings,hist,iff,kwanza,mater,secant,som,sur,tim,zar"
skip = "./.*,*.json,ciphers/prehistoric_men.txt,project_euler/problem_022/p022_names.txt,pyproject.toml,strings/dictionary.txt,strings/words.txt"
[tool.ruff]
ignore = [ # `ruff rule S101` for a description of that rule
"B904", # B904: Within an `except` clause, raise exceptions with `raise ... from err`
"B905", # B905: `zip()` without an explicit `strict=` parameter
"E741", # E741: Ambiguous variable name 'l'
"G004", # G004 Logging statement uses f-string
"N999", # N999: Invalid module name
"PLC1901", # PLC1901: `{}` can be simplified to `{}` as an empty string is falsey
"PLR2004", # PLR2004: Magic value used in comparison
"PLR5501", # PLR5501: Consider using `elif` instead of `else`
"PLW0120", # PLW0120: `else` clause on loop without a `break` statement
"PLW060", # PLW060: Using global for `{name}` but no assignment is done -- DO NOT FIX
"PLW2901", # PLW2901: Redefined loop variable
"RUF00", # RUF00: Ambiguous unicode character -- DO NOT FIX
"RUF100", # RUF100: Unused `noqa` directive
"S101", # S101: Use of `assert` detected -- DO NOT FIX
"S105", # S105: Possible hardcoded password: 'password'
"S113", # S113: Probable use of requests call without timeout
"UP038", # UP038: Use `X | Y` in `{}` call instead of `(X, Y)` -- DO NOT FIX
]
select = [ # https://beta.ruff.rs/docs/rules
"A", # A: builtins
"B", # B: bugbear
"C40", # C40: comprehensions
"C90", # C90: mccabe code complexity
"E", # E: pycodestyle errors
"F", # F: pyflakes
"G", # G: logging format
"I", # I: isort
"N", # N: pep8 naming
"PL", # PL: pylint
"PIE", # PIE: pie
"PYI", # PYI: type hinting stub files
"RUF", # RUF: ruff
"S", # S: bandit
"TID", # TID: tidy imports
"UP", # UP: pyupgrade
"W", # W: pycodestyle warnings
"YTT", # YTT: year 2020
]
target-version = "py311"
[tool.ruff.mccabe] # DO NOT INCREASE THIS VALUE
max-complexity = 20 # default: 10
[tool.ruff.pylint] # DO NOT INCREASE THESE VALUES
max-args = 10 # default: 5
max-branches = 20 # default: 12
max-returns = 8 # default: 6
max-statements = 88 # default: 50
| 1 |
TheAlgorithms/Python | 8,178 | Replace bandit, flake8, isort, and pyupgrade with ruff | ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-14T00:17:59Z" | "2023-03-15T12:58:26Z" | adc3ccdabede375df5cff62c3c8f06d8a191a803 | c96241b5a5052af466894ef90c7a7c749ba872eb | Replace bandit, flake8, isort, and pyupgrade with ruff. ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| #!/usr/bin/env python
#
# Sort large text files in a minimum amount of memory
#
import argparse
import os
class FileSplitter:
BLOCK_FILENAME_FORMAT = "block_{0}.dat"
def __init__(self, filename):
self.filename = filename
self.block_filenames = []
def write_block(self, data, block_number):
filename = self.BLOCK_FILENAME_FORMAT.format(block_number)
with open(filename, "w") as file:
file.write(data)
self.block_filenames.append(filename)
def get_block_filenames(self):
return self.block_filenames
def split(self, block_size, sort_key=None):
i = 0
with open(self.filename) as file:
while True:
lines = file.readlines(block_size)
if lines == []:
break
if sort_key is None:
lines.sort()
else:
lines.sort(key=sort_key)
self.write_block("".join(lines), i)
i += 1
def cleanup(self):
map(os.remove, self.block_filenames)
class NWayMerge:
def select(self, choices):
min_index = -1
min_str = None
for i in range(len(choices)):
if min_str is None or choices[i] < min_str:
min_index = i
return min_index
class FilesArray:
def __init__(self, files):
self.files = files
self.empty = set()
self.num_buffers = len(files)
self.buffers = {i: None for i in range(self.num_buffers)}
def get_dict(self):
return {
i: self.buffers[i] for i in range(self.num_buffers) if i not in self.empty
}
def refresh(self):
for i in range(self.num_buffers):
if self.buffers[i] is None and i not in self.empty:
self.buffers[i] = self.files[i].readline()
if self.buffers[i] == "":
self.empty.add(i)
self.files[i].close()
if len(self.empty) == self.num_buffers:
return False
return True
def unshift(self, index):
value = self.buffers[index]
self.buffers[index] = None
return value
class FileMerger:
def __init__(self, merge_strategy):
self.merge_strategy = merge_strategy
def merge(self, filenames, outfilename, buffer_size):
buffers = FilesArray(self.get_file_handles(filenames, buffer_size))
with open(outfilename, "w", buffer_size) as outfile:
while buffers.refresh():
min_index = self.merge_strategy.select(buffers.get_dict())
outfile.write(buffers.unshift(min_index))
def get_file_handles(self, filenames, buffer_size):
files = {}
for i in range(len(filenames)):
files[i] = open(filenames[i], "r", buffer_size)
return files
class ExternalSort:
def __init__(self, block_size):
self.block_size = block_size
def sort(self, filename, sort_key=None):
num_blocks = self.get_number_blocks(filename, self.block_size)
splitter = FileSplitter(filename)
splitter.split(self.block_size, sort_key)
merger = FileMerger(NWayMerge())
buffer_size = self.block_size / (num_blocks + 1)
merger.merge(splitter.get_block_filenames(), filename + ".out", buffer_size)
splitter.cleanup()
def get_number_blocks(self, filename, block_size):
return (os.stat(filename).st_size / block_size) + 1
def parse_memory(string):
if string[-1].lower() == "k":
return int(string[:-1]) * 1024
elif string[-1].lower() == "m":
return int(string[:-1]) * 1024 * 1024
elif string[-1].lower() == "g":
return int(string[:-1]) * 1024 * 1024 * 1024
else:
return int(string)
def main():
parser = argparse.ArgumentParser()
parser.add_argument(
"-m", "--mem", help="amount of memory to use for sorting", default="100M"
)
parser.add_argument(
"filename", metavar="<filename>", nargs=1, help="name of file to sort"
)
args = parser.parse_args()
sorter = ExternalSort(parse_memory(args.mem))
sorter.sort(args.filename[0])
if __name__ == "__main__":
main()
| #!/usr/bin/env python
#
# Sort large text files in a minimum amount of memory
#
import argparse
import os
class FileSplitter:
BLOCK_FILENAME_FORMAT = "block_{0}.dat"
def __init__(self, filename):
self.filename = filename
self.block_filenames = []
def write_block(self, data, block_number):
filename = self.BLOCK_FILENAME_FORMAT.format(block_number)
with open(filename, "w") as file:
file.write(data)
self.block_filenames.append(filename)
def get_block_filenames(self):
return self.block_filenames
def split(self, block_size, sort_key=None):
i = 0
with open(self.filename) as file:
while True:
lines = file.readlines(block_size)
if lines == []:
break
if sort_key is None:
lines.sort()
else:
lines.sort(key=sort_key)
self.write_block("".join(lines), i)
i += 1
def cleanup(self):
map(os.remove, self.block_filenames)
class NWayMerge:
def select(self, choices):
min_index = -1
min_str = None
for i in range(len(choices)):
if min_str is None or choices[i] < min_str:
min_index = i
return min_index
class FilesArray:
def __init__(self, files):
self.files = files
self.empty = set()
self.num_buffers = len(files)
self.buffers = {i: None for i in range(self.num_buffers)}
def get_dict(self):
return {
i: self.buffers[i] for i in range(self.num_buffers) if i not in self.empty
}
def refresh(self):
for i in range(self.num_buffers):
if self.buffers[i] is None and i not in self.empty:
self.buffers[i] = self.files[i].readline()
if self.buffers[i] == "":
self.empty.add(i)
self.files[i].close()
if len(self.empty) == self.num_buffers:
return False
return True
def unshift(self, index):
value = self.buffers[index]
self.buffers[index] = None
return value
class FileMerger:
def __init__(self, merge_strategy):
self.merge_strategy = merge_strategy
def merge(self, filenames, outfilename, buffer_size):
buffers = FilesArray(self.get_file_handles(filenames, buffer_size))
with open(outfilename, "w", buffer_size) as outfile:
while buffers.refresh():
min_index = self.merge_strategy.select(buffers.get_dict())
outfile.write(buffers.unshift(min_index))
def get_file_handles(self, filenames, buffer_size):
files = {}
for i in range(len(filenames)):
files[i] = open(filenames[i], "r", buffer_size) # noqa: UP015
return files
class ExternalSort:
def __init__(self, block_size):
self.block_size = block_size
def sort(self, filename, sort_key=None):
num_blocks = self.get_number_blocks(filename, self.block_size)
splitter = FileSplitter(filename)
splitter.split(self.block_size, sort_key)
merger = FileMerger(NWayMerge())
buffer_size = self.block_size / (num_blocks + 1)
merger.merge(splitter.get_block_filenames(), filename + ".out", buffer_size)
splitter.cleanup()
def get_number_blocks(self, filename, block_size):
return (os.stat(filename).st_size / block_size) + 1
def parse_memory(string):
if string[-1].lower() == "k":
return int(string[:-1]) * 1024
elif string[-1].lower() == "m":
return int(string[:-1]) * 1024 * 1024
elif string[-1].lower() == "g":
return int(string[:-1]) * 1024 * 1024 * 1024
else:
return int(string)
def main():
parser = argparse.ArgumentParser()
parser.add_argument(
"-m", "--mem", help="amount of memory to use for sorting", default="100M"
)
parser.add_argument(
"filename", metavar="<filename>", nargs=1, help="name of file to sort"
)
args = parser.parse_args()
sorter = ExternalSort(parse_memory(args.mem))
sorter.sort(args.filename[0])
if __name__ == "__main__":
main()
| 1 |
TheAlgorithms/Python | 8,178 | Replace bandit, flake8, isort, and pyupgrade with ruff | ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-14T00:17:59Z" | "2023-03-15T12:58:26Z" | adc3ccdabede375df5cff62c3c8f06d8a191a803 | c96241b5a5052af466894ef90c7a7c749ba872eb | Replace bandit, flake8, isort, and pyupgrade with ruff. ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
wiki: https://en.wikipedia.org/wiki/Anagram
"""
from collections import defaultdict
from typing import DefaultDict
def check_anagrams(first_str: str, second_str: str) -> bool:
"""
Two strings are anagrams if they are made up of the same letters but are
arranged differently (ignoring the case).
>>> check_anagrams('Silent', 'Listen')
True
>>> check_anagrams('This is a string', 'Is this a string')
True
>>> check_anagrams('This is a string', 'Is this a string')
True
>>> check_anagrams('There', 'Their')
False
"""
first_str = first_str.lower().strip()
second_str = second_str.lower().strip()
# Remove whitespace
first_str = first_str.replace(" ", "")
second_str = second_str.replace(" ", "")
# Strings of different lengths are not anagrams
if len(first_str) != len(second_str):
return False
# Default values for count should be 0
count: DefaultDict[str, int] = defaultdict(int)
# For each character in input strings,
# increment count in the corresponding
for i in range(len(first_str)):
count[first_str[i]] += 1
count[second_str[i]] -= 1
return all(_count == 0 for _count in count.values())
if __name__ == "__main__":
from doctest import testmod
testmod()
input_a = input("Enter the first string ").strip()
input_b = input("Enter the second string ").strip()
status = check_anagrams(input_a, input_b)
print(f"{input_a} and {input_b} are {'' if status else 'not '}anagrams.")
| """
wiki: https://en.wikipedia.org/wiki/Anagram
"""
from collections import defaultdict
def check_anagrams(first_str: str, second_str: str) -> bool:
"""
Two strings are anagrams if they are made up of the same letters but are
arranged differently (ignoring the case).
>>> check_anagrams('Silent', 'Listen')
True
>>> check_anagrams('This is a string', 'Is this a string')
True
>>> check_anagrams('This is a string', 'Is this a string')
True
>>> check_anagrams('There', 'Their')
False
"""
first_str = first_str.lower().strip()
second_str = second_str.lower().strip()
# Remove whitespace
first_str = first_str.replace(" ", "")
second_str = second_str.replace(" ", "")
# Strings of different lengths are not anagrams
if len(first_str) != len(second_str):
return False
# Default values for count should be 0
count: defaultdict[str, int] = defaultdict(int)
# For each character in input strings,
# increment count in the corresponding
for i in range(len(first_str)):
count[first_str[i]] += 1
count[second_str[i]] -= 1
return all(_count == 0 for _count in count.values())
if __name__ == "__main__":
from doctest import testmod
testmod()
input_a = input("Enter the first string ").strip()
input_b = input("Enter the second string ").strip()
status = check_anagrams(input_a, input_b)
print(f"{input_a} and {input_b} are {'' if status else 'not '}anagrams.")
| 1 |
TheAlgorithms/Python | 8,178 | Replace bandit, flake8, isort, and pyupgrade with ruff | ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| cclauss | "2023-03-14T00:17:59Z" | "2023-03-15T12:58:26Z" | adc3ccdabede375df5cff62c3c8f06d8a191a803 | c96241b5a5052af466894ef90c7a7c749ba872eb | Replace bandit, flake8, isort, and pyupgrade with ruff. ### Describe your change:
[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed.
The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Created by sarathkaul on 17/11/19
# Modified by Arkadip Bhattacharya(@darkmatter18) on 20/04/2020
from collections import defaultdict
from typing import DefaultDict
def word_occurrence(sentence: str) -> dict:
"""
>>> from collections import Counter
>>> SENTENCE = "a b A b c b d b d e f e g e h e i e j e 0"
>>> occurence_dict = word_occurrence(SENTENCE)
>>> all(occurence_dict[word] == count for word, count
... in Counter(SENTENCE.split()).items())
True
>>> dict(word_occurrence("Two spaces"))
{'Two': 1, 'spaces': 1}
"""
occurrence: DefaultDict[str, int] = defaultdict(int)
# Creating a dictionary containing count of each word
for word in sentence.split():
occurrence[word] += 1
return occurrence
if __name__ == "__main__":
for word, count in word_occurrence("INPUT STRING").items():
print(f"{word}: {count}")
| # Created by sarathkaul on 17/11/19
# Modified by Arkadip Bhattacharya(@darkmatter18) on 20/04/2020
from collections import defaultdict
def word_occurrence(sentence: str) -> dict:
"""
>>> from collections import Counter
>>> SENTENCE = "a b A b c b d b d e f e g e h e i e j e 0"
>>> occurence_dict = word_occurrence(SENTENCE)
>>> all(occurence_dict[word] == count for word, count
... in Counter(SENTENCE.split()).items())
True
>>> dict(word_occurrence("Two spaces"))
{'Two': 1, 'spaces': 1}
"""
occurrence: defaultdict[str, int] = defaultdict(int)
# Creating a dictionary containing count of each word
for word in sentence.split():
occurrence[word] += 1
return occurrence
if __name__ == "__main__":
for word, count in word_occurrence("INPUT STRING").items():
print(f"{word}: {count}")
| 1 |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.