Dataset schema:

| field | type | observed values |
| --- | --- | --- |
| repo_name | string | 1 distinct value |
| pr_number | int64 | 4.12k to 11.2k |
| pr_title | string | 9 to 107 chars |
| pr_description | string | 107 to 5.48k chars |
| author | string | 4 to 18 chars |
| date_created | unknown | |
| date_merged | unknown | |
| previous_commit | string | 40 chars (git SHA) |
| pr_commit | string | 40 chars (git SHA) |
| query | string | 118 to 5.52k chars |
| before_content | string | 0 to 7.93M chars |
| after_content | string | 0 to 7.93M chars |
| label | int64 | -1 to 1 |
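For orientation, here is a minimal sketch of loading and scanning rows with this schema, assuming the dataset is published on the Hugging Face Hub; `user/pr-file-pairs` is a hypothetical placeholder id, and the label interpretation is inferred from the rows shown below (label 1 where before/after differ, -1 where they are identical).

```python
# A minimal sketch, assuming this dataset is on the Hugging Face Hub;
# "user/pr-file-pairs" is a hypothetical placeholder id, not the real one.
from datasets import load_dataset

ds = load_dataset("user/pr-file-pairs", split="train")
for row in ds:
    # Inferred from the rows below: label is 1 when the PR changed this file
    # (before_content != after_content) and -1 when it left the file untouched.
    changed = row["before_content"] != row["after_content"]
    print(row["pr_number"], row["pr_title"], row["label"], changed)
```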
repo_name: TheAlgorithms/Python
pr_number: 8178
pr_title: Replace bandit, flake8, isort, and pyupgrade with ruff
pr_description:

### Describe your change:

[Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed. The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors.

![image](https://user-images.githubusercontent.com/3709715/223758136-afc386d2-70aa-4eff-953a-2c2d82ceea23.png)

* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
* [x] Improve testing

### Checklist:

* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
author: cclauss
date_created: "2023-03-14T00:17:59Z"
date_merged: "2023-03-15T12:58:26Z"
previous_commit: adc3ccdabede375df5cff62c3c8f06d8a191a803
pr_commit: c96241b5a5052af466894ef90c7a7c749ba872eb
query:
Replace bandit, flake8, isort, and pyupgrade with ruff. ### Describe your change: [Ruff](https://beta.ruff.rs/) supports [over 500 lint rules](https://beta.ruff.rs/docs/rules) including bandit, isort, pylint, pyupgrade, and flake8 plus its plugins, and is written in Rust for speed. The `ruff` Action uses minimal steps to run in ~10 seconds, rapidly providing intuitive GitHub Annotations to contributors. ![image](https://user-images.githubusercontent.com/3709715/223758136-afc386d2-70aa-4eff-953a-2c2d82ceea23.png) * [ ] Add an algorithm? * [ ] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? * [x] Improve testing ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [x] All new Python files are placed inside an existing directory. * [x] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
"""Factorial of a positive integer -- https://en.wikipedia.org/wiki/Factorial """ def factorial(number: int) -> int: """ Calculate the factorial of specified number (n!). >>> import math >>> all(factorial(i) == math.factorial(i) for i in range(20)) True >>> factorial(0.1) Traceback (most recent call last): ... ValueError: factorial() only accepts integral values >>> factorial(-1) Traceback (most recent call last): ... ValueError: factorial() not defined for negative values >>> factorial(1) 1 >>> factorial(6) 720 >>> factorial(0) 1 """ if number != int(number): raise ValueError("factorial() only accepts integral values") if number < 0: raise ValueError("factorial() not defined for negative values") value = 1 for i in range(1, number + 1): value *= i return value def factorial_recursive(n: int) -> int: """ Calculate the factorial of a positive integer https://en.wikipedia.org/wiki/Factorial >>> import math >>> all(factorial(i) == math.factorial(i) for i in range(20)) True >>> factorial(0.1) Traceback (most recent call last): ... ValueError: factorial() only accepts integral values >>> factorial(-1) Traceback (most recent call last): ... ValueError: factorial() not defined for negative values """ if not isinstance(n, int): raise ValueError("factorial() only accepts integral values") if n < 0: raise ValueError("factorial() not defined for negative values") return 1 if n == 0 or n == 1 else n * factorial(n - 1) if __name__ == "__main__": import doctest doctest.testmod() n = int(input("Enter a positive integer: ").strip() or 0) print(f"factorial{n} is {factorial(n)}")
"""Factorial of a positive integer -- https://en.wikipedia.org/wiki/Factorial """ def factorial(number: int) -> int: """ Calculate the factorial of specified number (n!). >>> import math >>> all(factorial(i) == math.factorial(i) for i in range(20)) True >>> factorial(0.1) Traceback (most recent call last): ... ValueError: factorial() only accepts integral values >>> factorial(-1) Traceback (most recent call last): ... ValueError: factorial() not defined for negative values >>> factorial(1) 1 >>> factorial(6) 720 >>> factorial(0) 1 """ if number != int(number): raise ValueError("factorial() only accepts integral values") if number < 0: raise ValueError("factorial() not defined for negative values") value = 1 for i in range(1, number + 1): value *= i return value def factorial_recursive(n: int) -> int: """ Calculate the factorial of a positive integer https://en.wikipedia.org/wiki/Factorial >>> import math >>> all(factorial(i) == math.factorial(i) for i in range(20)) True >>> factorial(0.1) Traceback (most recent call last): ... ValueError: factorial() only accepts integral values >>> factorial(-1) Traceback (most recent call last): ... ValueError: factorial() not defined for negative values """ if not isinstance(n, int): raise ValueError("factorial() only accepts integral values") if n < 0: raise ValueError("factorial() not defined for negative values") return 1 if n == 0 or n == 1 else n * factorial(n - 1) if __name__ == "__main__": import doctest doctest.testmod() n = int(input("Enter a positive integer: ").strip() or 0) print(f"factorial{n} is {factorial(n)}")
label: -1
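The PR description above centers on a single fast lint pass replacing four separate tools. As a rough local equivalent of what the `ruff` Action checks, one can run ruff directly; a minimal sketch, assuming a recent ruff is installed (`pip install ruff`) where `ruff check .` is the lint entry point. This is not the workflow file the PR itself adds.

```python
# Minimal sketch of the kind of check the PR's ruff Action performs, run
# locally; assumes ruff is installed and is not the Action definition itself.
import subprocess
import sys

result = subprocess.run(["ruff", "check", "."], text=True)
sys.exit(result.returncode)  # non-zero exit fails CI when lint rules are violated
```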
repo_name: TheAlgorithms/Python
pr_number: 8177
pr_title: [pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
author: pre-commit-ci[bot]
date_created: "2023-03-13T21:31:29Z"
date_merged: "2023-03-13T22:18:36Z"
previous_commit: f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
pr_commit: 8959211100ba7a612d42a6e7db4755303b78c5a7
query: [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 → v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 → v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 → v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
before_content:
```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0
    hooks:
      - id: check-executables-have-shebangs
      - id: check-yaml
      - id: end-of-file-fixer
        types: [python]
      - id: trailing-whitespace
      - id: requirements-txt-fixer

  - repo: https://github.com/MarcoGorelli/auto-walrus
    rev: v0.2.2
    hooks:
      - id: auto-walrus

  - repo: https://github.com/psf/black
    rev: 23.1.0
    hooks:
      - id: black

  - repo: https://github.com/PyCQA/isort
    rev: 5.12.0
    hooks:
      - id: isort
        args:
          - --profile=black

  - repo: https://github.com/tox-dev/pyproject-fmt
    rev: "0.9.2"
    hooks:
      - id: pyproject-fmt

  - repo: https://github.com/abravalheri/validate-pyproject
    rev: v0.12.1
    hooks:
      - id: validate-pyproject

  - repo: https://github.com/asottile/pyupgrade
    rev: v3.3.1
    hooks:
      - id: pyupgrade
        args:
          - --py311-plus

  - repo: https://github.com/charliermarsh/ruff-pre-commit
    rev: v0.0.254
    hooks:
      - id: ruff
        args:
          - --ignore=E741

  - repo: https://github.com/PyCQA/flake8
    rev: 6.0.0
    hooks:
      - id: flake8  # See .flake8 for args
        additional_dependencies: &flake8-plugins
          - flake8-bugbear
          - flake8-builtins
          # - flake8-broken-line
          - flake8-comprehensions
          - pep8-naming

  - repo: https://github.com/asottile/yesqa
    rev: v1.4.0
    hooks:
      - id: yesqa
        additional_dependencies: *flake8-plugins

  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.0.1
    hooks:
      - id: mypy
        args:
          - --ignore-missing-imports
          - --install-types  # See mirrors-mypy README.md
          - --non-interactive
        additional_dependencies: [types-requests]

  - repo: https://github.com/codespell-project/codespell
    rev: v2.2.2
    hooks:
      - id: codespell
        args:
          - --ignore-words-list=ans,crate,damon,fo,followings,hist,iff,mater,secant,som,sur,tim,zar
        exclude: |
          (?x)^(
              ciphers/prehistoric_men.txt |
              strings/dictionary.txt |
              strings/words.txt |
              project_euler/problem_022/p022_names.txt
          )$

  - repo: local
    hooks:
      - id: validate-filenames
        name: Validate filenames
        entry: ./scripts/validate_filenames.py
        language: script
        pass_filenames: false
```
after_content:
```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0
    hooks:
      - id: check-executables-have-shebangs
      - id: check-yaml
      - id: end-of-file-fixer
        types: [python]
      - id: trailing-whitespace
      - id: requirements-txt-fixer

  - repo: https://github.com/MarcoGorelli/auto-walrus
    rev: v0.2.2
    hooks:
      - id: auto-walrus

  - repo: https://github.com/psf/black
    rev: 23.1.0
    hooks:
      - id: black

  - repo: https://github.com/PyCQA/isort
    rev: 5.12.0
    hooks:
      - id: isort
        args:
          - --profile=black

  - repo: https://github.com/tox-dev/pyproject-fmt
    rev: "0.9.2"
    hooks:
      - id: pyproject-fmt

  - repo: https://github.com/abravalheri/validate-pyproject
    rev: v0.12.1
    hooks:
      - id: validate-pyproject

  - repo: https://github.com/asottile/pyupgrade
    rev: v3.3.1
    hooks:
      - id: pyupgrade
        args:
          - --py311-plus

  - repo: https://github.com/charliermarsh/ruff-pre-commit
    rev: v0.0.255
    hooks:
      - id: ruff
        args:
          - --ignore=E741

  - repo: https://github.com/PyCQA/flake8
    rev: 6.0.0
    hooks:
      - id: flake8  # See .flake8 for args
        additional_dependencies: &flake8-plugins
          - flake8-bugbear
          - flake8-builtins
          # - flake8-broken-line
          - flake8-comprehensions
          - pep8-naming

  - repo: https://github.com/asottile/yesqa
    rev: v1.4.0
    hooks:
      - id: yesqa
        additional_dependencies: *flake8-plugins

  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.1.1
    hooks:
      - id: mypy
        args:
          - --ignore-missing-imports
          - --install-types  # See mirrors-mypy README.md
          - --non-interactive
        additional_dependencies: [types-requests]

  - repo: https://github.com/codespell-project/codespell
    rev: v2.2.4
    hooks:
      - id: codespell
        args:
          - --ignore-words-list=3rt,ans,crate,damon,fo,followings,hist,iff,kwanza,mater,secant,som,sur,tim,zar
        exclude: |
          (?x)^(
              ciphers/prehistoric_men.txt |
              strings/dictionary.txt |
              strings/words.txt |
              project_euler/problem_022/p022_names.txt
          )$

  - repo: local
    hooks:
      - id: validate-filenames
        name: Validate filenames
        entry: ./scripts/validate_filenames.py
        language: script
        pass_filenames: false
```
label: 1
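To see exactly what such an autoupdate row changes, one can diff before_content against after_content. A minimal sketch using the standard-library difflib; the two strings here are abbreviated stand-ins for the full YAML fields above, keeping only the `rev:` lines that differ.

```python
# Minimal sketch: diff a row's before/after fields with the standard library.
# The strings below are abbreviated stand-ins for the full YAML shown above.
import difflib

before = "rev: v0.0.254\nrev: v1.0.1\nrev: v2.2.2\n"
after = "rev: v0.0.255\nrev: v1.1.1\nrev: v2.2.4\n"

print("\n".join(difflib.unified_diff(
    before.splitlines(), after.splitlines(),
    fromfile="before/.pre-commit-config.yaml",
    tofile="after/.pre-commit-config.yaml",
    lineterm="",
)))
```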
repo_name: TheAlgorithms/Python
pr_number: 8177
pr_title: [pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
author: pre-commit-ci[bot]
date_created: "2023-03-13T21:31:29Z"
date_merged: "2023-03-13T22:18:36Z"
previous_commit: f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
pr_commit: 8959211100ba7a612d42a6e7db4755303b78c5a7
query: [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 → v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 → v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 → v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
before_content:
## Arithmetic Analysis * [Bisection](arithmetic_analysis/bisection.py) * [Gaussian Elimination](arithmetic_analysis/gaussian_elimination.py) * [In Static Equilibrium](arithmetic_analysis/in_static_equilibrium.py) * [Intersection](arithmetic_analysis/intersection.py) * [Jacobi Iteration Method](arithmetic_analysis/jacobi_iteration_method.py) * [Lu Decomposition](arithmetic_analysis/lu_decomposition.py) * [Newton Forward Interpolation](arithmetic_analysis/newton_forward_interpolation.py) * [Newton Method](arithmetic_analysis/newton_method.py) * [Newton Raphson](arithmetic_analysis/newton_raphson.py) * [Newton Raphson New](arithmetic_analysis/newton_raphson_new.py) * [Secant Method](arithmetic_analysis/secant_method.py) ## Audio Filters * [Butterworth Filter](audio_filters/butterworth_filter.py) * [Iir Filter](audio_filters/iir_filter.py) * [Show Response](audio_filters/show_response.py) ## Backtracking * [All Combinations](backtracking/all_combinations.py) * [All Permutations](backtracking/all_permutations.py) * [All Subsequences](backtracking/all_subsequences.py) * [Coloring](backtracking/coloring.py) * [Combination Sum](backtracking/combination_sum.py) * [Hamiltonian Cycle](backtracking/hamiltonian_cycle.py) * [Knight Tour](backtracking/knight_tour.py) * [Minimax](backtracking/minimax.py) * [Minmax](backtracking/minmax.py) * [N Queens](backtracking/n_queens.py) * [N Queens Math](backtracking/n_queens_math.py) * [Rat In Maze](backtracking/rat_in_maze.py) * [Sudoku](backtracking/sudoku.py) * [Sum Of Subsets](backtracking/sum_of_subsets.py) * [Word Search](backtracking/word_search.py) ## Bit Manipulation * [Binary And Operator](bit_manipulation/binary_and_operator.py) * [Binary Count Setbits](bit_manipulation/binary_count_setbits.py) * [Binary Count Trailing Zeros](bit_manipulation/binary_count_trailing_zeros.py) * [Binary Or Operator](bit_manipulation/binary_or_operator.py) * [Binary Shifts](bit_manipulation/binary_shifts.py) * [Binary Twos Complement](bit_manipulation/binary_twos_complement.py) * [Binary Xor Operator](bit_manipulation/binary_xor_operator.py) * [Count 1S Brian Kernighan Method](bit_manipulation/count_1s_brian_kernighan_method.py) * [Count Number Of One Bits](bit_manipulation/count_number_of_one_bits.py) * [Gray Code Sequence](bit_manipulation/gray_code_sequence.py) * [Highest Set Bit](bit_manipulation/highest_set_bit.py) * [Index Of Rightmost Set Bit](bit_manipulation/index_of_rightmost_set_bit.py) * [Is Even](bit_manipulation/is_even.py) * [Is Power Of Two](bit_manipulation/is_power_of_two.py) * [Numbers Different Signs](bit_manipulation/numbers_different_signs.py) * [Reverse Bits](bit_manipulation/reverse_bits.py) * [Single Bit Manipulation Operations](bit_manipulation/single_bit_manipulation_operations.py) ## Blockchain * [Chinese Remainder Theorem](blockchain/chinese_remainder_theorem.py) * [Diophantine Equation](blockchain/diophantine_equation.py) * [Modular Division](blockchain/modular_division.py) ## Boolean Algebra * [And Gate](boolean_algebra/and_gate.py) * [Nand Gate](boolean_algebra/nand_gate.py) * [Norgate](boolean_algebra/norgate.py) * [Not Gate](boolean_algebra/not_gate.py) * [Or Gate](boolean_algebra/or_gate.py) * [Quine Mc Cluskey](boolean_algebra/quine_mc_cluskey.py) * [Xnor Gate](boolean_algebra/xnor_gate.py) * [Xor Gate](boolean_algebra/xor_gate.py) ## Cellular Automata * [Conways Game Of Life](cellular_automata/conways_game_of_life.py) * [Game Of Life](cellular_automata/game_of_life.py) * [Nagel Schrekenberg](cellular_automata/nagel_schrekenberg.py) * 
[One Dimensional](cellular_automata/one_dimensional.py) ## Ciphers * [A1Z26](ciphers/a1z26.py) * [Affine Cipher](ciphers/affine_cipher.py) * [Atbash](ciphers/atbash.py) * [Autokey](ciphers/autokey.py) * [Baconian Cipher](ciphers/baconian_cipher.py) * [Base16](ciphers/base16.py) * [Base32](ciphers/base32.py) * [Base64](ciphers/base64.py) * [Base85](ciphers/base85.py) * [Beaufort Cipher](ciphers/beaufort_cipher.py) * [Bifid](ciphers/bifid.py) * [Brute Force Caesar Cipher](ciphers/brute_force_caesar_cipher.py) * [Caesar Cipher](ciphers/caesar_cipher.py) * [Cryptomath Module](ciphers/cryptomath_module.py) * [Decrypt Caesar With Chi Squared](ciphers/decrypt_caesar_with_chi_squared.py) * [Deterministic Miller Rabin](ciphers/deterministic_miller_rabin.py) * [Diffie](ciphers/diffie.py) * [Diffie Hellman](ciphers/diffie_hellman.py) * [Elgamal Key Generator](ciphers/elgamal_key_generator.py) * [Enigma Machine2](ciphers/enigma_machine2.py) * [Hill Cipher](ciphers/hill_cipher.py) * [Mixed Keyword Cypher](ciphers/mixed_keyword_cypher.py) * [Mono Alphabetic Ciphers](ciphers/mono_alphabetic_ciphers.py) * [Morse Code](ciphers/morse_code.py) * [Onepad Cipher](ciphers/onepad_cipher.py) * [Playfair Cipher](ciphers/playfair_cipher.py) * [Polybius](ciphers/polybius.py) * [Porta Cipher](ciphers/porta_cipher.py) * [Rabin Miller](ciphers/rabin_miller.py) * [Rail Fence Cipher](ciphers/rail_fence_cipher.py) * [Rot13](ciphers/rot13.py) * [Rsa Cipher](ciphers/rsa_cipher.py) * [Rsa Factorization](ciphers/rsa_factorization.py) * [Rsa Key Generator](ciphers/rsa_key_generator.py) * [Shuffled Shift Cipher](ciphers/shuffled_shift_cipher.py) * [Simple Keyword Cypher](ciphers/simple_keyword_cypher.py) * [Simple Substitution Cipher](ciphers/simple_substitution_cipher.py) * [Trafid Cipher](ciphers/trafid_cipher.py) * [Transposition Cipher](ciphers/transposition_cipher.py) * [Transposition Cipher Encrypt Decrypt File](ciphers/transposition_cipher_encrypt_decrypt_file.py) * [Vigenere Cipher](ciphers/vigenere_cipher.py) * [Xor Cipher](ciphers/xor_cipher.py) ## Compression * [Burrows Wheeler](compression/burrows_wheeler.py) * [Huffman](compression/huffman.py) * [Lempel Ziv](compression/lempel_ziv.py) * [Lempel Ziv Decompress](compression/lempel_ziv_decompress.py) * [Lz77](compression/lz77.py) * [Peak Signal To Noise Ratio](compression/peak_signal_to_noise_ratio.py) * [Run Length Encoding](compression/run_length_encoding.py) ## Computer Vision * [Cnn Classification](computer_vision/cnn_classification.py) * [Flip Augmentation](computer_vision/flip_augmentation.py) * [Harris Corner](computer_vision/harris_corner.py) * [Horn Schunck](computer_vision/horn_schunck.py) * [Mean Threshold](computer_vision/mean_threshold.py) * [Mosaic Augmentation](computer_vision/mosaic_augmentation.py) * [Pooling Functions](computer_vision/pooling_functions.py) ## Conversions * [Astronomical Length Scale Conversion](conversions/astronomical_length_scale_conversion.py) * [Binary To Decimal](conversions/binary_to_decimal.py) * [Binary To Hexadecimal](conversions/binary_to_hexadecimal.py) * [Binary To Octal](conversions/binary_to_octal.py) * [Decimal To Any](conversions/decimal_to_any.py) * [Decimal To Binary](conversions/decimal_to_binary.py) * [Decimal To Binary Recursion](conversions/decimal_to_binary_recursion.py) * [Decimal To Hexadecimal](conversions/decimal_to_hexadecimal.py) * [Decimal To Octal](conversions/decimal_to_octal.py) * [Excel Title To Column](conversions/excel_title_to_column.py) * [Hex To Bin](conversions/hex_to_bin.py) * [Hexadecimal To 
Decimal](conversions/hexadecimal_to_decimal.py) * [Length Conversion](conversions/length_conversion.py) * [Molecular Chemistry](conversions/molecular_chemistry.py) * [Octal To Decimal](conversions/octal_to_decimal.py) * [Prefix Conversions](conversions/prefix_conversions.py) * [Prefix Conversions String](conversions/prefix_conversions_string.py) * [Pressure Conversions](conversions/pressure_conversions.py) * [Rgb Hsv Conversion](conversions/rgb_hsv_conversion.py) * [Roman Numerals](conversions/roman_numerals.py) * [Speed Conversions](conversions/speed_conversions.py) * [Temperature Conversions](conversions/temperature_conversions.py) * [Volume Conversions](conversions/volume_conversions.py) * [Weight Conversion](conversions/weight_conversion.py) ## Data Structures * Arrays * [Permutations](data_structures/arrays/permutations.py) * [Prefix Sum](data_structures/arrays/prefix_sum.py) * Binary Tree * [Avl Tree](data_structures/binary_tree/avl_tree.py) * [Basic Binary Tree](data_structures/binary_tree/basic_binary_tree.py) * [Binary Search Tree](data_structures/binary_tree/binary_search_tree.py) * [Binary Search Tree Recursive](data_structures/binary_tree/binary_search_tree_recursive.py) * [Binary Tree Mirror](data_structures/binary_tree/binary_tree_mirror.py) * [Binary Tree Node Sum](data_structures/binary_tree/binary_tree_node_sum.py) * [Binary Tree Path Sum](data_structures/binary_tree/binary_tree_path_sum.py) * [Binary Tree Traversals](data_structures/binary_tree/binary_tree_traversals.py) * [Diff Views Of Binary Tree](data_structures/binary_tree/diff_views_of_binary_tree.py) * [Distribute Coins](data_structures/binary_tree/distribute_coins.py) * [Fenwick Tree](data_structures/binary_tree/fenwick_tree.py) * [Inorder Tree Traversal 2022](data_structures/binary_tree/inorder_tree_traversal_2022.py) * [Is Bst](data_structures/binary_tree/is_bst.py) * [Lazy Segment Tree](data_structures/binary_tree/lazy_segment_tree.py) * [Lowest Common Ancestor](data_structures/binary_tree/lowest_common_ancestor.py) * [Maximum Fenwick Tree](data_structures/binary_tree/maximum_fenwick_tree.py) * [Merge Two Binary Trees](data_structures/binary_tree/merge_two_binary_trees.py) * [Non Recursive Segment Tree](data_structures/binary_tree/non_recursive_segment_tree.py) * [Number Of Possible Binary Trees](data_structures/binary_tree/number_of_possible_binary_trees.py) * [Red Black Tree](data_structures/binary_tree/red_black_tree.py) * [Segment Tree](data_structures/binary_tree/segment_tree.py) * [Segment Tree Other](data_structures/binary_tree/segment_tree_other.py) * [Treap](data_structures/binary_tree/treap.py) * [Wavelet Tree](data_structures/binary_tree/wavelet_tree.py) * Disjoint Set * [Alternate Disjoint Set](data_structures/disjoint_set/alternate_disjoint_set.py) * [Disjoint Set](data_structures/disjoint_set/disjoint_set.py) * Hashing * [Double Hash](data_structures/hashing/double_hash.py) * [Hash Table](data_structures/hashing/hash_table.py) * [Hash Table With Linked List](data_structures/hashing/hash_table_with_linked_list.py) * Number Theory * [Prime Numbers](data_structures/hashing/number_theory/prime_numbers.py) * [Quadratic Probing](data_structures/hashing/quadratic_probing.py) * Heap * [Binomial Heap](data_structures/heap/binomial_heap.py) * [Heap](data_structures/heap/heap.py) * [Heap Generic](data_structures/heap/heap_generic.py) * [Max Heap](data_structures/heap/max_heap.py) * [Min Heap](data_structures/heap/min_heap.py) * [Randomized Heap](data_structures/heap/randomized_heap.py) * [Skew 
Heap](data_structures/heap/skew_heap.py) * Linked List * [Circular Linked List](data_structures/linked_list/circular_linked_list.py) * [Deque Doubly](data_structures/linked_list/deque_doubly.py) * [Doubly Linked List](data_structures/linked_list/doubly_linked_list.py) * [Doubly Linked List Two](data_structures/linked_list/doubly_linked_list_two.py) * [From Sequence](data_structures/linked_list/from_sequence.py) * [Has Loop](data_structures/linked_list/has_loop.py) * [Is Palindrome](data_structures/linked_list/is_palindrome.py) * [Merge Two Lists](data_structures/linked_list/merge_two_lists.py) * [Middle Element Of Linked List](data_structures/linked_list/middle_element_of_linked_list.py) * [Print Reverse](data_structures/linked_list/print_reverse.py) * [Singly Linked List](data_structures/linked_list/singly_linked_list.py) * [Skip List](data_structures/linked_list/skip_list.py) * [Swap Nodes](data_structures/linked_list/swap_nodes.py) * Queue * [Circular Queue](data_structures/queue/circular_queue.py) * [Circular Queue Linked List](data_structures/queue/circular_queue_linked_list.py) * [Double Ended Queue](data_structures/queue/double_ended_queue.py) * [Linked Queue](data_structures/queue/linked_queue.py) * [Priority Queue Using List](data_structures/queue/priority_queue_using_list.py) * [Queue On List](data_structures/queue/queue_on_list.py) * [Queue On Pseudo Stack](data_structures/queue/queue_on_pseudo_stack.py) * Stacks * [Balanced Parentheses](data_structures/stacks/balanced_parentheses.py) * [Dijkstras Two Stack Algorithm](data_structures/stacks/dijkstras_two_stack_algorithm.py) * [Evaluate Postfix Notations](data_structures/stacks/evaluate_postfix_notations.py) * [Infix To Postfix Conversion](data_structures/stacks/infix_to_postfix_conversion.py) * [Infix To Prefix Conversion](data_structures/stacks/infix_to_prefix_conversion.py) * [Next Greater Element](data_structures/stacks/next_greater_element.py) * [Postfix Evaluation](data_structures/stacks/postfix_evaluation.py) * [Prefix Evaluation](data_structures/stacks/prefix_evaluation.py) * [Stack](data_structures/stacks/stack.py) * [Stack With Doubly Linked List](data_structures/stacks/stack_with_doubly_linked_list.py) * [Stack With Singly Linked List](data_structures/stacks/stack_with_singly_linked_list.py) * [Stock Span Problem](data_structures/stacks/stock_span_problem.py) * Trie * [Radix Tree](data_structures/trie/radix_tree.py) * [Trie](data_structures/trie/trie.py) ## Digital Image Processing * [Change Brightness](digital_image_processing/change_brightness.py) * [Change Contrast](digital_image_processing/change_contrast.py) * [Convert To Negative](digital_image_processing/convert_to_negative.py) * Dithering * [Burkes](digital_image_processing/dithering/burkes.py) * Edge Detection * [Canny](digital_image_processing/edge_detection/canny.py) * Filters * [Bilateral Filter](digital_image_processing/filters/bilateral_filter.py) * [Convolve](digital_image_processing/filters/convolve.py) * [Gabor Filter](digital_image_processing/filters/gabor_filter.py) * [Gaussian Filter](digital_image_processing/filters/gaussian_filter.py) * [Local Binary Pattern](digital_image_processing/filters/local_binary_pattern.py) * [Median Filter](digital_image_processing/filters/median_filter.py) * [Sobel Filter](digital_image_processing/filters/sobel_filter.py) * Histogram Equalization * [Histogram Stretch](digital_image_processing/histogram_equalization/histogram_stretch.py) * [Index Calculation](digital_image_processing/index_calculation.py) * Morphological 
Operations * [Dilation Operation](digital_image_processing/morphological_operations/dilation_operation.py) * [Erosion Operation](digital_image_processing/morphological_operations/erosion_operation.py) * Resize * [Resize](digital_image_processing/resize/resize.py) * Rotation * [Rotation](digital_image_processing/rotation/rotation.py) * [Sepia](digital_image_processing/sepia.py) * [Test Digital Image Processing](digital_image_processing/test_digital_image_processing.py) ## Divide And Conquer * [Closest Pair Of Points](divide_and_conquer/closest_pair_of_points.py) * [Convex Hull](divide_and_conquer/convex_hull.py) * [Heaps Algorithm](divide_and_conquer/heaps_algorithm.py) * [Heaps Algorithm Iterative](divide_and_conquer/heaps_algorithm_iterative.py) * [Inversions](divide_and_conquer/inversions.py) * [Kth Order Statistic](divide_and_conquer/kth_order_statistic.py) * [Max Difference Pair](divide_and_conquer/max_difference_pair.py) * [Max Subarray Sum](divide_and_conquer/max_subarray_sum.py) * [Mergesort](divide_and_conquer/mergesort.py) * [Peak](divide_and_conquer/peak.py) * [Power](divide_and_conquer/power.py) * [Strassen Matrix Multiplication](divide_and_conquer/strassen_matrix_multiplication.py) ## Dynamic Programming * [Abbreviation](dynamic_programming/abbreviation.py) * [All Construct](dynamic_programming/all_construct.py) * [Bitmask](dynamic_programming/bitmask.py) * [Catalan Numbers](dynamic_programming/catalan_numbers.py) * [Climbing Stairs](dynamic_programming/climbing_stairs.py) * [Combination Sum Iv](dynamic_programming/combination_sum_iv.py) * [Edit Distance](dynamic_programming/edit_distance.py) * [Factorial](dynamic_programming/factorial.py) * [Fast Fibonacci](dynamic_programming/fast_fibonacci.py) * [Fibonacci](dynamic_programming/fibonacci.py) * [Fizz Buzz](dynamic_programming/fizz_buzz.py) * [Floyd Warshall](dynamic_programming/floyd_warshall.py) * [Integer Partition](dynamic_programming/integer_partition.py) * [Iterating Through Submasks](dynamic_programming/iterating_through_submasks.py) * [Knapsack](dynamic_programming/knapsack.py) * [Longest Common Subsequence](dynamic_programming/longest_common_subsequence.py) * [Longest Common Substring](dynamic_programming/longest_common_substring.py) * [Longest Increasing Subsequence](dynamic_programming/longest_increasing_subsequence.py) * [Longest Increasing Subsequence O(Nlogn)](dynamic_programming/longest_increasing_subsequence_o(nlogn).py) * [Longest Sub Array](dynamic_programming/longest_sub_array.py) * [Matrix Chain Order](dynamic_programming/matrix_chain_order.py) * [Max Non Adjacent Sum](dynamic_programming/max_non_adjacent_sum.py) * [Max Sub Array](dynamic_programming/max_sub_array.py) * [Max Sum Contiguous Subsequence](dynamic_programming/max_sum_contiguous_subsequence.py) * [Min Distance Up Bottom](dynamic_programming/min_distance_up_bottom.py) * [Minimum Coin Change](dynamic_programming/minimum_coin_change.py) * [Minimum Cost Path](dynamic_programming/minimum_cost_path.py) * [Minimum Partition](dynamic_programming/minimum_partition.py) * [Minimum Squares To Represent A Number](dynamic_programming/minimum_squares_to_represent_a_number.py) * [Minimum Steps To One](dynamic_programming/minimum_steps_to_one.py) * [Minimum Tickets Cost](dynamic_programming/minimum_tickets_cost.py) * [Optimal Binary Search Tree](dynamic_programming/optimal_binary_search_tree.py) * [Palindrome Partitioning](dynamic_programming/palindrome_partitioning.py) * [Rod Cutting](dynamic_programming/rod_cutting.py) * [Subset 
Generation](dynamic_programming/subset_generation.py) * [Sum Of Subset](dynamic_programming/sum_of_subset.py) * [Viterbi](dynamic_programming/viterbi.py) * [Word Break](dynamic_programming/word_break.py) ## Electronics * [Builtin Voltage](electronics/builtin_voltage.py) * [Carrier Concentration](electronics/carrier_concentration.py) * [Coulombs Law](electronics/coulombs_law.py) * [Electric Conductivity](electronics/electric_conductivity.py) * [Electric Power](electronics/electric_power.py) * [Electrical Impedance](electronics/electrical_impedance.py) * [Ind Reactance](electronics/ind_reactance.py) * [Ohms Law](electronics/ohms_law.py) * [Resistor Equivalence](electronics/resistor_equivalence.py) * [Resonant Frequency](electronics/resonant_frequency.py) ## File Transfer * [Receive File](file_transfer/receive_file.py) * [Send File](file_transfer/send_file.py) * Tests * [Test Send File](file_transfer/tests/test_send_file.py) ## Financial * [Equated Monthly Installments](financial/equated_monthly_installments.py) * [Interest](financial/interest.py) * [Price Plus Tax](financial/price_plus_tax.py) ## Fractals * [Julia Sets](fractals/julia_sets.py) * [Koch Snowflake](fractals/koch_snowflake.py) * [Mandelbrot](fractals/mandelbrot.py) * [Sierpinski Triangle](fractals/sierpinski_triangle.py) ## Fuzzy Logic * [Fuzzy Operations](fuzzy_logic/fuzzy_operations.py) ## Genetic Algorithm * [Basic String](genetic_algorithm/basic_string.py) ## Geodesy * [Haversine Distance](geodesy/haversine_distance.py) * [Lamberts Ellipsoidal Distance](geodesy/lamberts_ellipsoidal_distance.py) ## Graphics * [Bezier Curve](graphics/bezier_curve.py) * [Vector3 For 2D Rendering](graphics/vector3_for_2d_rendering.py) ## Graphs * [A Star](graphs/a_star.py) * [Articulation Points](graphs/articulation_points.py) * [Basic Graphs](graphs/basic_graphs.py) * [Bellman Ford](graphs/bellman_ford.py) * [Bi Directional Dijkstra](graphs/bi_directional_dijkstra.py) * [Bidirectional A Star](graphs/bidirectional_a_star.py) * [Bidirectional Breadth First Search](graphs/bidirectional_breadth_first_search.py) * [Boruvka](graphs/boruvka.py) * [Breadth First Search](graphs/breadth_first_search.py) * [Breadth First Search 2](graphs/breadth_first_search_2.py) * [Breadth First Search Shortest Path](graphs/breadth_first_search_shortest_path.py) * [Breadth First Search Shortest Path 2](graphs/breadth_first_search_shortest_path_2.py) * [Breadth First Search Zero One Shortest Path](graphs/breadth_first_search_zero_one_shortest_path.py) * [Check Bipartite Graph Bfs](graphs/check_bipartite_graph_bfs.py) * [Check Bipartite Graph Dfs](graphs/check_bipartite_graph_dfs.py) * [Check Cycle](graphs/check_cycle.py) * [Connected Components](graphs/connected_components.py) * [Depth First Search](graphs/depth_first_search.py) * [Depth First Search 2](graphs/depth_first_search_2.py) * [Dijkstra](graphs/dijkstra.py) * [Dijkstra 2](graphs/dijkstra_2.py) * [Dijkstra Algorithm](graphs/dijkstra_algorithm.py) * [Dijkstra Alternate](graphs/dijkstra_alternate.py) * [Dinic](graphs/dinic.py) * [Directed And Undirected (Weighted) Graph](graphs/directed_and_undirected_(weighted)_graph.py) * [Edmonds Karp Multiple Source And Sink](graphs/edmonds_karp_multiple_source_and_sink.py) * [Eulerian Path And Circuit For Undirected Graph](graphs/eulerian_path_and_circuit_for_undirected_graph.py) * [Even Tree](graphs/even_tree.py) * [Finding Bridges](graphs/finding_bridges.py) * [Frequent Pattern Graph Miner](graphs/frequent_pattern_graph_miner.py) * [G Topological 
Sort](graphs/g_topological_sort.py) * [Gale Shapley Bigraph](graphs/gale_shapley_bigraph.py) * [Graph List](graphs/graph_list.py) * [Graph Matrix](graphs/graph_matrix.py) * [Graphs Floyd Warshall](graphs/graphs_floyd_warshall.py) * [Greedy Best First](graphs/greedy_best_first.py) * [Greedy Min Vertex Cover](graphs/greedy_min_vertex_cover.py) * [Kahns Algorithm Long](graphs/kahns_algorithm_long.py) * [Kahns Algorithm Topo](graphs/kahns_algorithm_topo.py) * [Karger](graphs/karger.py) * [Markov Chain](graphs/markov_chain.py) * [Matching Min Vertex Cover](graphs/matching_min_vertex_cover.py) * [Minimum Path Sum](graphs/minimum_path_sum.py) * [Minimum Spanning Tree Boruvka](graphs/minimum_spanning_tree_boruvka.py) * [Minimum Spanning Tree Kruskal](graphs/minimum_spanning_tree_kruskal.py) * [Minimum Spanning Tree Kruskal2](graphs/minimum_spanning_tree_kruskal2.py) * [Minimum Spanning Tree Prims](graphs/minimum_spanning_tree_prims.py) * [Minimum Spanning Tree Prims2](graphs/minimum_spanning_tree_prims2.py) * [Multi Heuristic Astar](graphs/multi_heuristic_astar.py) * [Page Rank](graphs/page_rank.py) * [Prim](graphs/prim.py) * [Random Graph Generator](graphs/random_graph_generator.py) * [Scc Kosaraju](graphs/scc_kosaraju.py) * [Strongly Connected Components](graphs/strongly_connected_components.py) * [Tarjans Scc](graphs/tarjans_scc.py) * Tests * [Test Min Spanning Tree Kruskal](graphs/tests/test_min_spanning_tree_kruskal.py) * [Test Min Spanning Tree Prim](graphs/tests/test_min_spanning_tree_prim.py) ## Greedy Methods * [Fractional Knapsack](greedy_methods/fractional_knapsack.py) * [Fractional Knapsack 2](greedy_methods/fractional_knapsack_2.py) * [Optimal Merge Pattern](greedy_methods/optimal_merge_pattern.py) ## Hashes * [Adler32](hashes/adler32.py) * [Chaos Machine](hashes/chaos_machine.py) * [Djb2](hashes/djb2.py) * [Elf](hashes/elf.py) * [Enigma Machine](hashes/enigma_machine.py) * [Hamming Code](hashes/hamming_code.py) * [Luhn](hashes/luhn.py) * [Md5](hashes/md5.py) * [Sdbm](hashes/sdbm.py) * [Sha1](hashes/sha1.py) * [Sha256](hashes/sha256.py) ## Knapsack * [Greedy Knapsack](knapsack/greedy_knapsack.py) * [Knapsack](knapsack/knapsack.py) * [Recursive Approach Knapsack](knapsack/recursive_approach_knapsack.py) * Tests * [Test Greedy Knapsack](knapsack/tests/test_greedy_knapsack.py) * [Test Knapsack](knapsack/tests/test_knapsack.py) ## Linear Algebra * Src * [Conjugate Gradient](linear_algebra/src/conjugate_gradient.py) * [Lib](linear_algebra/src/lib.py) * [Polynom For Points](linear_algebra/src/polynom_for_points.py) * [Power Iteration](linear_algebra/src/power_iteration.py) * [Rayleigh Quotient](linear_algebra/src/rayleigh_quotient.py) * [Schur Complement](linear_algebra/src/schur_complement.py) * [Test Linear Algebra](linear_algebra/src/test_linear_algebra.py) * [Transformations 2D](linear_algebra/src/transformations_2d.py) ## Machine Learning * [Astar](machine_learning/astar.py) * [Data Transformations](machine_learning/data_transformations.py) * [Decision Tree](machine_learning/decision_tree.py) * Forecasting * [Run](machine_learning/forecasting/run.py) * [Gradient Descent](machine_learning/gradient_descent.py) * [K Means Clust](machine_learning/k_means_clust.py) * [K Nearest Neighbours](machine_learning/k_nearest_neighbours.py) * [Knn Sklearn](machine_learning/knn_sklearn.py) * [Linear Discriminant Analysis](machine_learning/linear_discriminant_analysis.py) * [Linear Regression](machine_learning/linear_regression.py) * Local Weighted Learning * [Local Weighted 
Learning](machine_learning/local_weighted_learning/local_weighted_learning.py) * [Logistic Regression](machine_learning/logistic_regression.py) * Lstm * [Lstm Prediction](machine_learning/lstm/lstm_prediction.py) * [Multilayer Perceptron Classifier](machine_learning/multilayer_perceptron_classifier.py) * [Polymonial Regression](machine_learning/polymonial_regression.py) * [Scoring Functions](machine_learning/scoring_functions.py) * [Self Organizing Map](machine_learning/self_organizing_map.py) * [Sequential Minimum Optimization](machine_learning/sequential_minimum_optimization.py) * [Similarity Search](machine_learning/similarity_search.py) * [Support Vector Machines](machine_learning/support_vector_machines.py) * [Word Frequency Functions](machine_learning/word_frequency_functions.py) * [Xgboost Classifier](machine_learning/xgboost_classifier.py) * [Xgboost Regressor](machine_learning/xgboost_regressor.py) ## Maths * [3N Plus 1](maths/3n_plus_1.py) * [Abs](maths/abs.py) * [Add](maths/add.py) * [Addition Without Arithmetic](maths/addition_without_arithmetic.py) * [Aliquot Sum](maths/aliquot_sum.py) * [Allocation Number](maths/allocation_number.py) * [Arc Length](maths/arc_length.py) * [Area](maths/area.py) * [Area Under Curve](maths/area_under_curve.py) * [Armstrong Numbers](maths/armstrong_numbers.py) * [Automorphic Number](maths/automorphic_number.py) * [Average Absolute Deviation](maths/average_absolute_deviation.py) * [Average Mean](maths/average_mean.py) * [Average Median](maths/average_median.py) * [Average Mode](maths/average_mode.py) * [Bailey Borwein Plouffe](maths/bailey_borwein_plouffe.py) * [Basic Maths](maths/basic_maths.py) * [Binary Exp Mod](maths/binary_exp_mod.py) * [Binary Exponentiation](maths/binary_exponentiation.py) * [Binary Exponentiation 2](maths/binary_exponentiation_2.py) * [Binary Exponentiation 3](maths/binary_exponentiation_3.py) * [Binomial Coefficient](maths/binomial_coefficient.py) * [Binomial Distribution](maths/binomial_distribution.py) * [Bisection](maths/bisection.py) * [Carmichael Number](maths/carmichael_number.py) * [Catalan Number](maths/catalan_number.py) * [Ceil](maths/ceil.py) * [Check Polygon](maths/check_polygon.py) * [Chudnovsky Algorithm](maths/chudnovsky_algorithm.py) * [Collatz Sequence](maths/collatz_sequence.py) * [Combinations](maths/combinations.py) * [Decimal Isolate](maths/decimal_isolate.py) * [Decimal To Fraction](maths/decimal_to_fraction.py) * [Dodecahedron](maths/dodecahedron.py) * [Double Factorial Iterative](maths/double_factorial_iterative.py) * [Double Factorial Recursive](maths/double_factorial_recursive.py) * [Entropy](maths/entropy.py) * [Euclidean Distance](maths/euclidean_distance.py) * [Euclidean Gcd](maths/euclidean_gcd.py) * [Euler Method](maths/euler_method.py) * [Euler Modified](maths/euler_modified.py) * [Eulers Totient](maths/eulers_totient.py) * [Extended Euclidean Algorithm](maths/extended_euclidean_algorithm.py) * [Factorial](maths/factorial.py) * [Factors](maths/factors.py) * [Fermat Little Theorem](maths/fermat_little_theorem.py) * [Fibonacci](maths/fibonacci.py) * [Find Max](maths/find_max.py) * [Find Max Recursion](maths/find_max_recursion.py) * [Find Min](maths/find_min.py) * [Find Min Recursion](maths/find_min_recursion.py) * [Floor](maths/floor.py) * [Gamma](maths/gamma.py) * [Gamma Recursive](maths/gamma_recursive.py) * [Gaussian](maths/gaussian.py) * [Gaussian Error Linear Unit](maths/gaussian_error_linear_unit.py) * [Gcd Of N Numbers](maths/gcd_of_n_numbers.py) * [Greatest Common 
Divisor](maths/greatest_common_divisor.py) * [Greedy Coin Change](maths/greedy_coin_change.py) * [Hamming Numbers](maths/hamming_numbers.py) * [Hardy Ramanujanalgo](maths/hardy_ramanujanalgo.py) * [Hexagonal Number](maths/hexagonal_number.py) * [Integration By Simpson Approx](maths/integration_by_simpson_approx.py) * [Is Ip V4 Address Valid](maths/is_ip_v4_address_valid.py) * [Is Square Free](maths/is_square_free.py) * [Jaccard Similarity](maths/jaccard_similarity.py) * [Juggler Sequence](maths/juggler_sequence.py) * [Kadanes](maths/kadanes.py) * [Karatsuba](maths/karatsuba.py) * [Krishnamurthy Number](maths/krishnamurthy_number.py) * [Kth Lexicographic Permutation](maths/kth_lexicographic_permutation.py) * [Largest Of Very Large Numbers](maths/largest_of_very_large_numbers.py) * [Largest Subarray Sum](maths/largest_subarray_sum.py) * [Least Common Multiple](maths/least_common_multiple.py) * [Line Length](maths/line_length.py) * [Liouville Lambda](maths/liouville_lambda.py) * [Lucas Lehmer Primality Test](maths/lucas_lehmer_primality_test.py) * [Lucas Series](maths/lucas_series.py) * [Maclaurin Series](maths/maclaurin_series.py) * [Manhattan Distance](maths/manhattan_distance.py) * [Matrix Exponentiation](maths/matrix_exponentiation.py) * [Max Sum Sliding Window](maths/max_sum_sliding_window.py) * [Median Of Two Arrays](maths/median_of_two_arrays.py) * [Miller Rabin](maths/miller_rabin.py) * [Mobius Function](maths/mobius_function.py) * [Modular Exponential](maths/modular_exponential.py) * [Monte Carlo](maths/monte_carlo.py) * [Monte Carlo Dice](maths/monte_carlo_dice.py) * [Nevilles Method](maths/nevilles_method.py) * [Newton Raphson](maths/newton_raphson.py) * [Number Of Digits](maths/number_of_digits.py) * [Numerical Integration](maths/numerical_integration.py) * [Perfect Cube](maths/perfect_cube.py) * [Perfect Number](maths/perfect_number.py) * [Perfect Square](maths/perfect_square.py) * [Persistence](maths/persistence.py) * [Pi Monte Carlo Estimation](maths/pi_monte_carlo_estimation.py) * [Points Are Collinear 3D](maths/points_are_collinear_3d.py) * [Pollard Rho](maths/pollard_rho.py) * [Polynomial Evaluation](maths/polynomial_evaluation.py) * Polynomials * [Single Indeterminate Operations](maths/polynomials/single_indeterminate_operations.py) * [Power Using Recursion](maths/power_using_recursion.py) * [Prime Check](maths/prime_check.py) * [Prime Factors](maths/prime_factors.py) * [Prime Numbers](maths/prime_numbers.py) * [Prime Sieve Eratosthenes](maths/prime_sieve_eratosthenes.py) * [Primelib](maths/primelib.py) * [Print Multiplication Table](maths/print_multiplication_table.py) * [Pronic Number](maths/pronic_number.py) * [Proth Number](maths/proth_number.py) * [Pythagoras](maths/pythagoras.py) * [Qr Decomposition](maths/qr_decomposition.py) * [Quadratic Equations Complex Numbers](maths/quadratic_equations_complex_numbers.py) * [Radians](maths/radians.py) * [Radix2 Fft](maths/radix2_fft.py) * [Relu](maths/relu.py) * [Runge Kutta](maths/runge_kutta.py) * [Segmented Sieve](maths/segmented_sieve.py) * Series * [Arithmetic](maths/series/arithmetic.py) * [Geometric](maths/series/geometric.py) * [Geometric Series](maths/series/geometric_series.py) * [Harmonic](maths/series/harmonic.py) * [Harmonic Series](maths/series/harmonic_series.py) * [Hexagonal Numbers](maths/series/hexagonal_numbers.py) * [P Series](maths/series/p_series.py) * [Sieve Of Eratosthenes](maths/sieve_of_eratosthenes.py) * [Sigmoid](maths/sigmoid.py) * [Sigmoid Linear Unit](maths/sigmoid_linear_unit.py) * 
[Signum](maths/signum.py) * [Simpson Rule](maths/simpson_rule.py) * [Sin](maths/sin.py) * [Sock Merchant](maths/sock_merchant.py) * [Softmax](maths/softmax.py) * [Square Root](maths/square_root.py) * [Sum Of Arithmetic Series](maths/sum_of_arithmetic_series.py) * [Sum Of Digits](maths/sum_of_digits.py) * [Sum Of Geometric Progression](maths/sum_of_geometric_progression.py) * [Sum Of Harmonic Series](maths/sum_of_harmonic_series.py) * [Sumset](maths/sumset.py) * [Sylvester Sequence](maths/sylvester_sequence.py) * [Test Prime Check](maths/test_prime_check.py) * [Trapezoidal Rule](maths/trapezoidal_rule.py) * [Triplet Sum](maths/triplet_sum.py) * [Twin Prime](maths/twin_prime.py) * [Two Pointer](maths/two_pointer.py) * [Two Sum](maths/two_sum.py) * [Ugly Numbers](maths/ugly_numbers.py) * [Volume](maths/volume.py) * [Weird Number](maths/weird_number.py) * [Zellers Congruence](maths/zellers_congruence.py) ## Matrix * [Binary Search Matrix](matrix/binary_search_matrix.py) * [Count Islands In Matrix](matrix/count_islands_in_matrix.py) * [Count Paths](matrix/count_paths.py) * [Cramers Rule 2X2](matrix/cramers_rule_2x2.py) * [Inverse Of Matrix](matrix/inverse_of_matrix.py) * [Largest Square Area In Matrix](matrix/largest_square_area_in_matrix.py) * [Matrix Class](matrix/matrix_class.py) * [Matrix Operation](matrix/matrix_operation.py) * [Max Area Of Island](matrix/max_area_of_island.py) * [Nth Fibonacci Using Matrix Exponentiation](matrix/nth_fibonacci_using_matrix_exponentiation.py) * [Pascal Triangle](matrix/pascal_triangle.py) * [Rotate Matrix](matrix/rotate_matrix.py) * [Searching In Sorted Matrix](matrix/searching_in_sorted_matrix.py) * [Sherman Morrison](matrix/sherman_morrison.py) * [Spiral Print](matrix/spiral_print.py) * Tests * [Test Matrix Operation](matrix/tests/test_matrix_operation.py) ## Networking Flow * [Ford Fulkerson](networking_flow/ford_fulkerson.py) * [Minimum Cut](networking_flow/minimum_cut.py) ## Neural Network * [2 Hidden Layers Neural Network](neural_network/2_hidden_layers_neural_network.py) * [Back Propagation Neural Network](neural_network/back_propagation_neural_network.py) * [Convolution Neural Network](neural_network/convolution_neural_network.py) * [Perceptron](neural_network/perceptron.py) * [Simple Neural Network](neural_network/simple_neural_network.py) ## Other * [Activity Selection](other/activity_selection.py) * [Alternative List Arrange](other/alternative_list_arrange.py) * [Davisb Putnamb Logemannb Loveland](other/davisb_putnamb_logemannb_loveland.py) * [Dijkstra Bankers Algorithm](other/dijkstra_bankers_algorithm.py) * [Doomsday](other/doomsday.py) * [Fischer Yates Shuffle](other/fischer_yates_shuffle.py) * [Gauss Easter](other/gauss_easter.py) * [Graham Scan](other/graham_scan.py) * [Greedy](other/greedy.py) * [Least Recently Used](other/least_recently_used.py) * [Lfu Cache](other/lfu_cache.py) * [Linear Congruential Generator](other/linear_congruential_generator.py) * [Lru Cache](other/lru_cache.py) * [Magicdiamondpattern](other/magicdiamondpattern.py) * [Maximum Subarray](other/maximum_subarray.py) * [Nested Brackets](other/nested_brackets.py) * [Password](other/password.py) * [Quine](other/quine.py) * [Scoring Algorithm](other/scoring_algorithm.py) * [Sdes](other/sdes.py) * [Tower Of Hanoi](other/tower_of_hanoi.py) ## Physics * [Archimedes Principle](physics/archimedes_principle.py) * [Casimir Effect](physics/casimir_effect.py) * [Centripetal Force](physics/centripetal_force.py) * [Horizontal Projectile Motion](physics/horizontal_projectile_motion.py) 
* [Hubble Parameter](physics/hubble_parameter.py) * [Ideal Gas Law](physics/ideal_gas_law.py) * [Kinetic Energy](physics/kinetic_energy.py) * [Lorentz Transformation Four Vector](physics/lorentz_transformation_four_vector.py) * [Malus Law](physics/malus_law.py) * [N Body Simulation](physics/n_body_simulation.py) * [Newtons Law Of Gravitation](physics/newtons_law_of_gravitation.py) * [Newtons Second Law Of Motion](physics/newtons_second_law_of_motion.py) * [Potential Energy](physics/potential_energy.py) * [Rms Speed Of Molecule](physics/rms_speed_of_molecule.py) * [Shear Stress](physics/shear_stress.py) ## Project Euler * Problem 001 * [Sol1](project_euler/problem_001/sol1.py) * [Sol2](project_euler/problem_001/sol2.py) * [Sol3](project_euler/problem_001/sol3.py) * [Sol4](project_euler/problem_001/sol4.py) * [Sol5](project_euler/problem_001/sol5.py) * [Sol6](project_euler/problem_001/sol6.py) * [Sol7](project_euler/problem_001/sol7.py) * Problem 002 * [Sol1](project_euler/problem_002/sol1.py) * [Sol2](project_euler/problem_002/sol2.py) * [Sol3](project_euler/problem_002/sol3.py) * [Sol4](project_euler/problem_002/sol4.py) * [Sol5](project_euler/problem_002/sol5.py) * Problem 003 * [Sol1](project_euler/problem_003/sol1.py) * [Sol2](project_euler/problem_003/sol2.py) * [Sol3](project_euler/problem_003/sol3.py) * Problem 004 * [Sol1](project_euler/problem_004/sol1.py) * [Sol2](project_euler/problem_004/sol2.py) * Problem 005 * [Sol1](project_euler/problem_005/sol1.py) * [Sol2](project_euler/problem_005/sol2.py) * Problem 006 * [Sol1](project_euler/problem_006/sol1.py) * [Sol2](project_euler/problem_006/sol2.py) * [Sol3](project_euler/problem_006/sol3.py) * [Sol4](project_euler/problem_006/sol4.py) * Problem 007 * [Sol1](project_euler/problem_007/sol1.py) * [Sol2](project_euler/problem_007/sol2.py) * [Sol3](project_euler/problem_007/sol3.py) * Problem 008 * [Sol1](project_euler/problem_008/sol1.py) * [Sol2](project_euler/problem_008/sol2.py) * [Sol3](project_euler/problem_008/sol3.py) * Problem 009 * [Sol1](project_euler/problem_009/sol1.py) * [Sol2](project_euler/problem_009/sol2.py) * [Sol3](project_euler/problem_009/sol3.py) * Problem 010 * [Sol1](project_euler/problem_010/sol1.py) * [Sol2](project_euler/problem_010/sol2.py) * [Sol3](project_euler/problem_010/sol3.py) * Problem 011 * [Sol1](project_euler/problem_011/sol1.py) * [Sol2](project_euler/problem_011/sol2.py) * Problem 012 * [Sol1](project_euler/problem_012/sol1.py) * [Sol2](project_euler/problem_012/sol2.py) * Problem 013 * [Sol1](project_euler/problem_013/sol1.py) * Problem 014 * [Sol1](project_euler/problem_014/sol1.py) * [Sol2](project_euler/problem_014/sol2.py) * Problem 015 * [Sol1](project_euler/problem_015/sol1.py) * Problem 016 * [Sol1](project_euler/problem_016/sol1.py) * [Sol2](project_euler/problem_016/sol2.py) * Problem 017 * [Sol1](project_euler/problem_017/sol1.py) * Problem 018 * [Solution](project_euler/problem_018/solution.py) * Problem 019 * [Sol1](project_euler/problem_019/sol1.py) * Problem 020 * [Sol1](project_euler/problem_020/sol1.py) * [Sol2](project_euler/problem_020/sol2.py) * [Sol3](project_euler/problem_020/sol3.py) * [Sol4](project_euler/problem_020/sol4.py) * Problem 021 * [Sol1](project_euler/problem_021/sol1.py) * Problem 022 * [Sol1](project_euler/problem_022/sol1.py) * [Sol2](project_euler/problem_022/sol2.py) * Problem 023 * [Sol1](project_euler/problem_023/sol1.py) * Problem 024 * [Sol1](project_euler/problem_024/sol1.py) * Problem 025 * [Sol1](project_euler/problem_025/sol1.py) * 
[Sol2](project_euler/problem_025/sol2.py) * [Sol3](project_euler/problem_025/sol3.py) * Problem 026 * [Sol1](project_euler/problem_026/sol1.py) * Problem 027 * [Sol1](project_euler/problem_027/sol1.py) * Problem 028 * [Sol1](project_euler/problem_028/sol1.py) * Problem 029 * [Sol1](project_euler/problem_029/sol1.py) * Problem 030 * [Sol1](project_euler/problem_030/sol1.py) * Problem 031 * [Sol1](project_euler/problem_031/sol1.py) * [Sol2](project_euler/problem_031/sol2.py) * Problem 032 * [Sol32](project_euler/problem_032/sol32.py) * Problem 033 * [Sol1](project_euler/problem_033/sol1.py) * Problem 034 * [Sol1](project_euler/problem_034/sol1.py) * Problem 035 * [Sol1](project_euler/problem_035/sol1.py) * Problem 036 * [Sol1](project_euler/problem_036/sol1.py) * Problem 037 * [Sol1](project_euler/problem_037/sol1.py) * Problem 038 * [Sol1](project_euler/problem_038/sol1.py) * Problem 039 * [Sol1](project_euler/problem_039/sol1.py) * Problem 040 * [Sol1](project_euler/problem_040/sol1.py) * Problem 041 * [Sol1](project_euler/problem_041/sol1.py) * Problem 042 * [Solution42](project_euler/problem_042/solution42.py) * Problem 043 * [Sol1](project_euler/problem_043/sol1.py) * Problem 044 * [Sol1](project_euler/problem_044/sol1.py) * Problem 045 * [Sol1](project_euler/problem_045/sol1.py) * Problem 046 * [Sol1](project_euler/problem_046/sol1.py) * Problem 047 * [Sol1](project_euler/problem_047/sol1.py) * Problem 048 * [Sol1](project_euler/problem_048/sol1.py) * Problem 049 * [Sol1](project_euler/problem_049/sol1.py) * Problem 050 * [Sol1](project_euler/problem_050/sol1.py) * Problem 051 * [Sol1](project_euler/problem_051/sol1.py) * Problem 052 * [Sol1](project_euler/problem_052/sol1.py) * Problem 053 * [Sol1](project_euler/problem_053/sol1.py) * Problem 054 * [Sol1](project_euler/problem_054/sol1.py) * [Test Poker Hand](project_euler/problem_054/test_poker_hand.py) * Problem 055 * [Sol1](project_euler/problem_055/sol1.py) * Problem 056 * [Sol1](project_euler/problem_056/sol1.py) * Problem 057 * [Sol1](project_euler/problem_057/sol1.py) * Problem 058 * [Sol1](project_euler/problem_058/sol1.py) * Problem 059 * [Sol1](project_euler/problem_059/sol1.py) * Problem 062 * [Sol1](project_euler/problem_062/sol1.py) * Problem 063 * [Sol1](project_euler/problem_063/sol1.py) * Problem 064 * [Sol1](project_euler/problem_064/sol1.py) * Problem 065 * [Sol1](project_euler/problem_065/sol1.py) * Problem 067 * [Sol1](project_euler/problem_067/sol1.py) * [Sol2](project_euler/problem_067/sol2.py) * Problem 068 * [Sol1](project_euler/problem_068/sol1.py) * Problem 069 * [Sol1](project_euler/problem_069/sol1.py) * Problem 070 * [Sol1](project_euler/problem_070/sol1.py) * Problem 071 * [Sol1](project_euler/problem_071/sol1.py) * Problem 072 * [Sol1](project_euler/problem_072/sol1.py) * [Sol2](project_euler/problem_072/sol2.py) * Problem 073 * [Sol1](project_euler/problem_073/sol1.py) * Problem 074 * [Sol1](project_euler/problem_074/sol1.py) * [Sol2](project_euler/problem_074/sol2.py) * Problem 075 * [Sol1](project_euler/problem_075/sol1.py) * Problem 076 * [Sol1](project_euler/problem_076/sol1.py) * Problem 077 * [Sol1](project_euler/problem_077/sol1.py) * Problem 078 * [Sol1](project_euler/problem_078/sol1.py) * Problem 080 * [Sol1](project_euler/problem_080/sol1.py) * Problem 081 * [Sol1](project_euler/problem_081/sol1.py) * Problem 082 * [Sol1](project_euler/problem_082/sol1.py) * Problem 085 * [Sol1](project_euler/problem_085/sol1.py) * Problem 086 * [Sol1](project_euler/problem_086/sol1.py) * Problem 087 * 
[Sol1](project_euler/problem_087/sol1.py) * Problem 089 * [Sol1](project_euler/problem_089/sol1.py) * Problem 091 * [Sol1](project_euler/problem_091/sol1.py) * Problem 092 * [Sol1](project_euler/problem_092/sol1.py) * Problem 097 * [Sol1](project_euler/problem_097/sol1.py) * Problem 099 * [Sol1](project_euler/problem_099/sol1.py) * Problem 101 * [Sol1](project_euler/problem_101/sol1.py) * Problem 102 * [Sol1](project_euler/problem_102/sol1.py) * Problem 104 * [Sol1](project_euler/problem_104/sol1.py) * Problem 107 * [Sol1](project_euler/problem_107/sol1.py) * Problem 109 * [Sol1](project_euler/problem_109/sol1.py) * Problem 112 * [Sol1](project_euler/problem_112/sol1.py) * Problem 113 * [Sol1](project_euler/problem_113/sol1.py) * Problem 114 * [Sol1](project_euler/problem_114/sol1.py) * Problem 115 * [Sol1](project_euler/problem_115/sol1.py) * Problem 116 * [Sol1](project_euler/problem_116/sol1.py) * Problem 117 * [Sol1](project_euler/problem_117/sol1.py) * Problem 119 * [Sol1](project_euler/problem_119/sol1.py) * Problem 120 * [Sol1](project_euler/problem_120/sol1.py) * Problem 121 * [Sol1](project_euler/problem_121/sol1.py) * Problem 123 * [Sol1](project_euler/problem_123/sol1.py) * Problem 125 * [Sol1](project_euler/problem_125/sol1.py) * Problem 129 * [Sol1](project_euler/problem_129/sol1.py) * Problem 135 * [Sol1](project_euler/problem_135/sol1.py) * Problem 144 * [Sol1](project_euler/problem_144/sol1.py) * Problem 145 * [Sol1](project_euler/problem_145/sol1.py) * Problem 173 * [Sol1](project_euler/problem_173/sol1.py) * Problem 174 * [Sol1](project_euler/problem_174/sol1.py) * Problem 180 * [Sol1](project_euler/problem_180/sol1.py) * Problem 188 * [Sol1](project_euler/problem_188/sol1.py) * Problem 191 * [Sol1](project_euler/problem_191/sol1.py) * Problem 203 * [Sol1](project_euler/problem_203/sol1.py) * Problem 205 * [Sol1](project_euler/problem_205/sol1.py) * Problem 206 * [Sol1](project_euler/problem_206/sol1.py) * Problem 207 * [Sol1](project_euler/problem_207/sol1.py) * Problem 234 * [Sol1](project_euler/problem_234/sol1.py) * Problem 301 * [Sol1](project_euler/problem_301/sol1.py) * Problem 493 * [Sol1](project_euler/problem_493/sol1.py) * Problem 551 * [Sol1](project_euler/problem_551/sol1.py) * Problem 587 * [Sol1](project_euler/problem_587/sol1.py) * Problem 686 * [Sol1](project_euler/problem_686/sol1.py) ## Quantum * [Bb84](quantum/bb84.py) * [Deutsch Jozsa](quantum/deutsch_jozsa.py) * [Half Adder](quantum/half_adder.py) * [Not Gate](quantum/not_gate.py) * [Q Fourier Transform](quantum/q_fourier_transform.py) * [Q Full Adder](quantum/q_full_adder.py) * [Quantum Entanglement](quantum/quantum_entanglement.py) * [Quantum Teleportation](quantum/quantum_teleportation.py) * [Ripple Adder Classic](quantum/ripple_adder_classic.py) * [Single Qubit Measure](quantum/single_qubit_measure.py) * [Superdense Coding](quantum/superdense_coding.py) ## Scheduling * [First Come First Served](scheduling/first_come_first_served.py) * [Highest Response Ratio Next](scheduling/highest_response_ratio_next.py) * [Job Sequencing With Deadline](scheduling/job_sequencing_with_deadline.py) * [Multi Level Feedback Queue](scheduling/multi_level_feedback_queue.py) * [Non Preemptive Shortest Job First](scheduling/non_preemptive_shortest_job_first.py) * [Round Robin](scheduling/round_robin.py) * [Shortest Job First](scheduling/shortest_job_first.py) ## Searches * [Binary Search](searches/binary_search.py) * [Binary Tree Traversal](searches/binary_tree_traversal.py) * [Double Linear 
Search](searches/double_linear_search.py) * [Double Linear Search Recursion](searches/double_linear_search_recursion.py) * [Fibonacci Search](searches/fibonacci_search.py) * [Hill Climbing](searches/hill_climbing.py) * [Interpolation Search](searches/interpolation_search.py) * [Jump Search](searches/jump_search.py) * [Linear Search](searches/linear_search.py) * [Quick Select](searches/quick_select.py) * [Sentinel Linear Search](searches/sentinel_linear_search.py) * [Simple Binary Search](searches/simple_binary_search.py) * [Simulated Annealing](searches/simulated_annealing.py) * [Tabu Search](searches/tabu_search.py) * [Ternary Search](searches/ternary_search.py) ## Sorts * [Bead Sort](sorts/bead_sort.py) * [Bitonic Sort](sorts/bitonic_sort.py) * [Bogo Sort](sorts/bogo_sort.py) * [Bubble Sort](sorts/bubble_sort.py) * [Bucket Sort](sorts/bucket_sort.py) * [Circle Sort](sorts/circle_sort.py) * [Cocktail Shaker Sort](sorts/cocktail_shaker_sort.py) * [Comb Sort](sorts/comb_sort.py) * [Counting Sort](sorts/counting_sort.py) * [Cycle Sort](sorts/cycle_sort.py) * [Double Sort](sorts/double_sort.py) * [Dutch National Flag Sort](sorts/dutch_national_flag_sort.py) * [Exchange Sort](sorts/exchange_sort.py) * [External Sort](sorts/external_sort.py) * [Gnome Sort](sorts/gnome_sort.py) * [Heap Sort](sorts/heap_sort.py) * [Insertion Sort](sorts/insertion_sort.py) * [Intro Sort](sorts/intro_sort.py) * [Iterative Merge Sort](sorts/iterative_merge_sort.py) * [Merge Insertion Sort](sorts/merge_insertion_sort.py) * [Merge Sort](sorts/merge_sort.py) * [Msd Radix Sort](sorts/msd_radix_sort.py) * [Natural Sort](sorts/natural_sort.py) * [Odd Even Sort](sorts/odd_even_sort.py) * [Odd Even Transposition Parallel](sorts/odd_even_transposition_parallel.py) * [Odd Even Transposition Single Threaded](sorts/odd_even_transposition_single_threaded.py) * [Pancake Sort](sorts/pancake_sort.py) * [Patience Sort](sorts/patience_sort.py) * [Pigeon Sort](sorts/pigeon_sort.py) * [Pigeonhole Sort](sorts/pigeonhole_sort.py) * [Quick Sort](sorts/quick_sort.py) * [Quick Sort 3 Partition](sorts/quick_sort_3_partition.py) * [Radix Sort](sorts/radix_sort.py) * [Random Normal Distribution Quicksort](sorts/random_normal_distribution_quicksort.py) * [Random Pivot Quick Sort](sorts/random_pivot_quick_sort.py) * [Recursive Bubble Sort](sorts/recursive_bubble_sort.py) * [Recursive Insertion Sort](sorts/recursive_insertion_sort.py) * [Recursive Mergesort Array](sorts/recursive_mergesort_array.py) * [Recursive Quick Sort](sorts/recursive_quick_sort.py) * [Selection Sort](sorts/selection_sort.py) * [Shell Sort](sorts/shell_sort.py) * [Shrink Shell Sort](sorts/shrink_shell_sort.py) * [Slowsort](sorts/slowsort.py) * [Stooge Sort](sorts/stooge_sort.py) * [Strand Sort](sorts/strand_sort.py) * [Tim Sort](sorts/tim_sort.py) * [Topological Sort](sorts/topological_sort.py) * [Tree Sort](sorts/tree_sort.py) * [Unknown Sort](sorts/unknown_sort.py) * [Wiggle Sort](sorts/wiggle_sort.py) ## Strings * [Aho Corasick](strings/aho_corasick.py) * [Alternative String Arrange](strings/alternative_string_arrange.py) * [Anagrams](strings/anagrams.py) * [Autocomplete Using Trie](strings/autocomplete_using_trie.py) * [Barcode Validator](strings/barcode_validator.py) * [Boyer Moore Search](strings/boyer_moore_search.py) * [Can String Be Rearranged As Palindrome](strings/can_string_be_rearranged_as_palindrome.py) * [Capitalize](strings/capitalize.py) * [Check Anagrams](strings/check_anagrams.py) * [Credit Card Validator](strings/credit_card_validator.py) * [Detecting 
English Programmatically](strings/detecting_english_programmatically.py) * [Dna](strings/dna.py) * [Frequency Finder](strings/frequency_finder.py) * [Hamming Distance](strings/hamming_distance.py) * [Indian Phone Validator](strings/indian_phone_validator.py) * [Is Contains Unique Chars](strings/is_contains_unique_chars.py) * [Is Isogram](strings/is_isogram.py) * [Is Palindrome](strings/is_palindrome.py) * [Is Pangram](strings/is_pangram.py) * [Is Spain National Id](strings/is_spain_national_id.py) * [Is Srilankan Phone Number](strings/is_srilankan_phone_number.py) * [Jaro Winkler](strings/jaro_winkler.py) * [Join](strings/join.py) * [Knuth Morris Pratt](strings/knuth_morris_pratt.py) * [Levenshtein Distance](strings/levenshtein_distance.py) * [Lower](strings/lower.py) * [Manacher](strings/manacher.py) * [Min Cost String Conversion](strings/min_cost_string_conversion.py) * [Naive String Search](strings/naive_string_search.py) * [Ngram](strings/ngram.py) * [Palindrome](strings/palindrome.py) * [Prefix Function](strings/prefix_function.py) * [Rabin Karp](strings/rabin_karp.py) * [Remove Duplicate](strings/remove_duplicate.py) * [Reverse Letters](strings/reverse_letters.py) * [Reverse Long Words](strings/reverse_long_words.py) * [Reverse Words](strings/reverse_words.py) * [Snake Case To Camel Pascal Case](strings/snake_case_to_camel_pascal_case.py) * [Split](strings/split.py) * [Text Justification](strings/text_justification.py) * [Upper](strings/upper.py) * [Wave](strings/wave.py) * [Wildcard Pattern Matching](strings/wildcard_pattern_matching.py) * [Word Occurrence](strings/word_occurrence.py) * [Word Patterns](strings/word_patterns.py) * [Z Function](strings/z_function.py) ## Web Programming * [Co2 Emission](web_programming/co2_emission.py) * [Convert Number To Words](web_programming/convert_number_to_words.py) * [Covid Stats Via Xpath](web_programming/covid_stats_via_xpath.py) * [Crawl Google Results](web_programming/crawl_google_results.py) * [Crawl Google Scholar Citation](web_programming/crawl_google_scholar_citation.py) * [Currency Converter](web_programming/currency_converter.py) * [Current Stock Price](web_programming/current_stock_price.py) * [Current Weather](web_programming/current_weather.py) * [Daily Horoscope](web_programming/daily_horoscope.py) * [Download Images From Google Query](web_programming/download_images_from_google_query.py) * [Emails From Url](web_programming/emails_from_url.py) * [Fetch Anime And Play](web_programming/fetch_anime_and_play.py) * [Fetch Bbc News](web_programming/fetch_bbc_news.py) * [Fetch Github Info](web_programming/fetch_github_info.py) * [Fetch Jobs](web_programming/fetch_jobs.py) * [Fetch Quotes](web_programming/fetch_quotes.py) * [Fetch Well Rx Price](web_programming/fetch_well_rx_price.py) * [Get Amazon Product Data](web_programming/get_amazon_product_data.py) * [Get Imdb Top 250 Movies Csv](web_programming/get_imdb_top_250_movies_csv.py) * [Get Imdbtop](web_programming/get_imdbtop.py) * [Get Top Hn Posts](web_programming/get_top_hn_posts.py) * [Get User Tweets](web_programming/get_user_tweets.py) * [Giphy](web_programming/giphy.py) * [Instagram Crawler](web_programming/instagram_crawler.py) * [Instagram Pic](web_programming/instagram_pic.py) * [Instagram Video](web_programming/instagram_video.py) * [Nasa Data](web_programming/nasa_data.py) * [Open Google Results](web_programming/open_google_results.py) * [Random Anime Character](web_programming/random_anime_character.py) * [Recaptcha Verification](web_programming/recaptcha_verification.py) * 
[Reddit](web_programming/reddit.py) * [Search Books By Isbn](web_programming/search_books_by_isbn.py) * [Slack Message](web_programming/slack_message.py) * [Test Fetch Github Info](web_programming/test_fetch_github_info.py) * [World Covid19 Stats](web_programming/world_covid19_stats.py)
1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
""" Implementation of sequential minimal optimization (SMO) for support vector machines (SVM). Sequential minimal optimization (SMO) is an algorithm for solving the quadratic programming (QP) problem that arises during the training of support vector machines. It was invented by John Platt in 1998. Input: 0: type: numpy.ndarray. 1: first column of ndarray must be tags of samples, must be 1 or -1. 2: rows of ndarray represent samples. Usage: Command: python3 sequential_minimum_optimization.py Code: from sequential_minimum_optimization import SmoSVM, Kernel kernel = Kernel(kernel='poly', degree=3., coef0=1., gamma=0.5) init_alphas = np.zeros(train.shape[0]) SVM = SmoSVM(train=train, alpha_list=init_alphas, kernel_func=kernel, cost=0.4, b=0.0, tolerance=0.001) SVM.fit() predict = SVM.predict(test_samples) Reference: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/smo-book.pdf https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-98-14.pdf """ import os import sys import urllib.request import numpy as np import pandas as pd from matplotlib import pyplot as plt from sklearn.datasets import make_blobs, make_circles from sklearn.preprocessing import StandardScaler CANCER_DATASET_URL = ( "https://archive.ics.uci.edu/ml/machine-learning-databases/" "breast-cancer-wisconsin/wdbc.data" ) class SmoSVM: def __init__( self, train, kernel_func, alpha_list=None, cost=0.4, b=0.0, tolerance=0.001, auto_norm=True, ): self._init = True self._auto_norm = auto_norm self._c = np.float64(cost) self._b = np.float64(b) self._tol = np.float64(tolerance) if tolerance > 0.0001 else np.float64(0.001) self.tags = train[:, 0] self.samples = self._norm(train[:, 1:]) if self._auto_norm else train[:, 1:] self.alphas = alpha_list if alpha_list is not None else np.zeros(train.shape[0]) self.Kernel = kernel_func self._eps = 0.001 self._all_samples = list(range(self.length)) self._K_matrix = self._calculate_k_matrix() self._error = np.zeros(self.length) self._unbound = [] self.choose_alpha = self._choose_alphas() # Calculate alphas using SMO algorithm def fit(self): k = self._k state = None while True: # 1: Find alpha1, alpha2 try: i1, i2 = self.choose_alpha.send(state) state = None except StopIteration: print("Optimization done!\nEvery sample satisfy the KKT condition!") break # 2: calculate new alpha2 and new alpha1 y1, y2 = self.tags[i1], self.tags[i2] a1, a2 = self.alphas[i1].copy(), self.alphas[i2].copy() e1, e2 = self._e(i1), self._e(i2) args = (i1, i2, a1, a2, e1, e2, y1, y2) a1_new, a2_new = self._get_new_alpha(*args) if not a1_new and not a2_new: state = False continue self.alphas[i1], self.alphas[i2] = a1_new, a2_new # 3: update threshold(b) b1_new = np.float64( -e1 - y1 * k(i1, i1) * (a1_new - a1) - y2 * k(i2, i1) * (a2_new - a2) + self._b ) b2_new = np.float64( -e2 - y2 * k(i2, i2) * (a2_new - a2) - y1 * k(i1, i2) * (a1_new - a1) + self._b ) if 0.0 < a1_new < self._c: b = b1_new if 0.0 < a2_new < self._c: b = b2_new if not (np.float64(0) < a2_new < self._c) and not ( np.float64(0) < a1_new < self._c ): b = (b1_new + b2_new) / 2.0 b_old = self._b self._b = b # 4: update error value,here we only calculate those non-bound samples' # error self._unbound = [i for i in self._all_samples if self._is_unbound(i)] for s in self.unbound: if s in (i1, i2): continue self._error[s] += ( y1 * (a1_new - a1) * k(i1, s) + y2 * (a2_new - a2) * k(i2, s) + (self._b - b_old) ) # if i1 or i2 is non-bound,update there error value to zero if self._is_unbound(i1): self._error[i1] = 0 if 
self._is_unbound(i2): self._error[i2] = 0 # Predict test samples def predict(self, test_samples, classify=True): if test_samples.shape[1] > self.samples.shape[1]: raise ValueError( "Test samples' feature length does not equal to that of train samples" ) if self._auto_norm: test_samples = self._norm(test_samples) results = [] for test_sample in test_samples: result = self._predict(test_sample) if classify: results.append(1 if result > 0 else -1) else: results.append(result) return np.array(results) # Check if alpha violate KKT condition def _check_obey_kkt(self, index): alphas = self.alphas tol = self._tol r = self._e(index) * self.tags[index] c = self._c return (r < -tol and alphas[index] < c) or (r > tol and alphas[index] > 0.0) # Get value calculated from kernel function def _k(self, i1, i2): # for test samples,use Kernel function if isinstance(i2, np.ndarray): return self.Kernel(self.samples[i1], i2) # for train samples,Kernel values have been saved in matrix else: return self._K_matrix[i1, i2] # Get sample's error def _e(self, index): """ Two cases: 1:Sample[index] is non-bound,Fetch error from list: _error 2:sample[index] is bound,Use predicted value deduct true value: g(xi) - yi """ # get from error data if self._is_unbound(index): return self._error[index] # get by g(xi) - yi else: gx = np.dot(self.alphas * self.tags, self._K_matrix[:, index]) + self._b yi = self.tags[index] return gx - yi # Calculate Kernel matrix of all possible i1,i2 ,saving time def _calculate_k_matrix(self): k_matrix = np.zeros([self.length, self.length]) for i in self._all_samples: for j in self._all_samples: k_matrix[i, j] = np.float64( self.Kernel(self.samples[i, :], self.samples[j, :]) ) return k_matrix # Predict test sample's tag def _predict(self, sample): k = self._k predicted_value = ( np.sum( [ self.alphas[i1] * self.tags[i1] * k(i1, sample) for i1 in self._all_samples ] ) + self._b ) return predicted_value # Choose alpha1 and alpha2 def _choose_alphas(self): locis = yield from self._choose_a1() if not locis: return None return locis def _choose_a1(self): """ Choose first alpha ;steps: 1:First loop over all sample 2:Second loop over all non-bound samples till all non-bound samples does not voilate kkt condition. 3:Repeat this two process endlessly,till all samples does not voilate kkt condition samples after first loop. """ while True: all_not_obey = True # all sample print("scanning all sample!") for i1 in [i for i in self._all_samples if self._check_obey_kkt(i)]: all_not_obey = False yield from self._choose_a2(i1) # non-bound sample print("scanning non-bound sample!") while True: not_obey = True for i1 in [ i for i in self._all_samples if self._check_obey_kkt(i) and self._is_unbound(i) ]: not_obey = False yield from self._choose_a2(i1) if not_obey: print("all non-bound samples fit the KKT condition!") break if all_not_obey: print("all samples fit the KKT condition! Optimization done!") break return False def _choose_a2(self, i1): """ Choose the second alpha by using heuristic algorithm ;steps: 1: Choose alpha2 which gets the maximum step size (|E1 - E2|). 2: Start in a random point,loop over all non-bound samples till alpha1 and alpha2 are optimized. 3: Start in a random point,loop over all samples till alpha1 and alpha2 are optimized. 
""" self._unbound = [i for i in self._all_samples if self._is_unbound(i)] if len(self.unbound) > 0: tmp_error = self._error.copy().tolist() tmp_error_dict = { index: value for index, value in enumerate(tmp_error) if self._is_unbound(index) } if self._e(i1) >= 0: i2 = min(tmp_error_dict, key=lambda index: tmp_error_dict[index]) else: i2 = max(tmp_error_dict, key=lambda index: tmp_error_dict[index]) cmd = yield i1, i2 if cmd is None: return for i2 in np.roll(self.unbound, np.random.choice(self.length)): cmd = yield i1, i2 if cmd is None: return for i2 in np.roll(self._all_samples, np.random.choice(self.length)): cmd = yield i1, i2 if cmd is None: return # Get the new alpha2 and new alpha1 def _get_new_alpha(self, i1, i2, a1, a2, e1, e2, y1, y2): k = self._k if i1 == i2: return None, None # calculate L and H which bound the new alpha2 s = y1 * y2 if s == -1: l, h = max(0.0, a2 - a1), min(self._c, self._c + a2 - a1) else: l, h = max(0.0, a2 + a1 - self._c), min(self._c, a2 + a1) if l == h: return None, None # calculate eta k11 = k(i1, i1) k22 = k(i2, i2) k12 = k(i1, i2) # select the new alpha2 which could get the minimal objectives if (eta := k11 + k22 - 2.0 * k12) > 0.0: a2_new_unc = a2 + (y2 * (e1 - e2)) / eta # a2_new has a boundary if a2_new_unc >= h: a2_new = h elif a2_new_unc <= l: a2_new = l else: a2_new = a2_new_unc else: b = self._b l1 = a1 + s * (a2 - l) h1 = a1 + s * (a2 - h) # way 1 f1 = y1 * (e1 + b) - a1 * k(i1, i1) - s * a2 * k(i1, i2) f2 = y2 * (e2 + b) - a2 * k(i2, i2) - s * a1 * k(i1, i2) ol = ( l1 * f1 + l * f2 + 1 / 2 * l1**2 * k(i1, i1) + 1 / 2 * l**2 * k(i2, i2) + s * l * l1 * k(i1, i2) ) oh = ( h1 * f1 + h * f2 + 1 / 2 * h1**2 * k(i1, i1) + 1 / 2 * h**2 * k(i2, i2) + s * h * h1 * k(i1, i2) ) """ # way 2 Use objective function check which alpha2 new could get the minimal objectives """ if ol < (oh - self._eps): a2_new = l elif ol > oh + self._eps: a2_new = h else: a2_new = a2 # a1_new has a boundary too a1_new = a1 + s * (a2 - a2_new) if a1_new < 0: a2_new += s * a1_new a1_new = 0 if a1_new > self._c: a2_new += s * (a1_new - self._c) a1_new = self._c return a1_new, a2_new # Normalise data using min_max way def _norm(self, data): if self._init: self._min = np.min(data, axis=0) self._max = np.max(data, axis=0) self._init = False return (data - self._min) / (self._max - self._min) else: return (data - self._min) / (self._max - self._min) def _is_unbound(self, index): return bool(0.0 < self.alphas[index] < self._c) def _is_support(self, index): return bool(self.alphas[index] > 0) @property def unbound(self): return self._unbound @property def support(self): return [i for i in range(self.length) if self._is_support(i)] @property def length(self): return self.samples.shape[0] class Kernel: def __init__(self, kernel, degree=1.0, coef0=0.0, gamma=1.0): self.degree = np.float64(degree) self.coef0 = np.float64(coef0) self.gamma = np.float64(gamma) self._kernel_name = kernel self._kernel = self._get_kernel(kernel_name=kernel) self._check() def _polynomial(self, v1, v2): return (self.gamma * np.inner(v1, v2) + self.coef0) ** self.degree def _linear(self, v1, v2): return np.inner(v1, v2) + self.coef0 def _rbf(self, v1, v2): return np.exp(-1 * (self.gamma * np.linalg.norm(v1 - v2) ** 2)) def _check(self): if self._kernel == self._rbf and self.gamma < 0: raise ValueError("gamma value must greater than 0") def _get_kernel(self, kernel_name): maps = {"linear": self._linear, "poly": self._polynomial, "rbf": self._rbf} return maps[kernel_name] def __call__(self, v1, v2): return 
self._kernel(v1, v2) def __repr__(self): return self._kernel_name def count_time(func): def call_func(*args, **kwargs): import time start_time = time.time() func(*args, **kwargs) end_time = time.time() print(f"smo algorithm cost {end_time - start_time} seconds") return call_func @count_time def test_cancel_data(): print("Hello!\nStart test svm by smo algorithm!") # 0: download dataset and load into pandas' dataframe if not os.path.exists(r"cancel_data.csv"): request = urllib.request.Request( CANCER_DATASET_URL, headers={"User-Agent": "Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)"}, ) response = urllib.request.urlopen(request) content = response.read().decode("utf-8") with open(r"cancel_data.csv", "w") as f: f.write(content) data = pd.read_csv(r"cancel_data.csv", header=None) # 1: pre-processing data del data[data.columns.tolist()[0]] data = data.dropna(axis=0) data = data.replace({"M": np.float64(1), "B": np.float64(-1)}) samples = np.array(data)[:, :] # 2: dividing data into train_data data and test_data data train_data, test_data = samples[:328, :], samples[328:, :] test_tags, test_samples = test_data[:, 0], test_data[:, 1:] # 3: choose kernel function,and set initial alphas to zero(optional) mykernel = Kernel(kernel="rbf", degree=5, coef0=1, gamma=0.5) al = np.zeros(train_data.shape[0]) # 4: calculating best alphas using SMO algorithm and predict test_data samples mysvm = SmoSVM( train=train_data, kernel_func=mykernel, alpha_list=al, cost=0.4, b=0.0, tolerance=0.001, ) mysvm.fit() predict = mysvm.predict(test_samples) # 5: check accuracy score = 0 test_num = test_tags.shape[0] for i in range(test_tags.shape[0]): if test_tags[i] == predict[i]: score += 1 print(f"\nall: {test_num}\nright: {score}\nfalse: {test_num - score}") print(f"Rough Accuracy: {score / test_tags.shape[0]}") def test_demonstration(): # change stdout print("\nStart plot,please wait!!!") sys.stdout = open(os.devnull, "w") ax1 = plt.subplot2grid((2, 2), (0, 0)) ax2 = plt.subplot2grid((2, 2), (0, 1)) ax3 = plt.subplot2grid((2, 2), (1, 0)) ax4 = plt.subplot2grid((2, 2), (1, 1)) ax1.set_title("linear svm,cost:0.1") test_linear_kernel(ax1, cost=0.1) ax2.set_title("linear svm,cost:500") test_linear_kernel(ax2, cost=500) ax3.set_title("rbf kernel svm,cost:0.1") test_rbf_kernel(ax3, cost=0.1) ax4.set_title("rbf kernel svm,cost:500") test_rbf_kernel(ax4, cost=500) sys.stdout = sys.__stdout__ print("Plot done!!!") def test_linear_kernel(ax, cost): train_x, train_y = make_blobs( n_samples=500, centers=2, n_features=2, random_state=1 ) train_y[train_y == 0] = -1 scaler = StandardScaler() train_x_scaled = scaler.fit_transform(train_x, train_y) train_data = np.hstack((train_y.reshape(500, 1), train_x_scaled)) mykernel = Kernel(kernel="linear", degree=5, coef0=1, gamma=0.5) mysvm = SmoSVM( train=train_data, kernel_func=mykernel, cost=cost, tolerance=0.001, auto_norm=False, ) mysvm.fit() plot_partition_boundary(mysvm, train_data, ax=ax) def test_rbf_kernel(ax, cost): train_x, train_y = make_circles( n_samples=500, noise=0.1, factor=0.1, random_state=1 ) train_y[train_y == 0] = -1 scaler = StandardScaler() train_x_scaled = scaler.fit_transform(train_x, train_y) train_data = np.hstack((train_y.reshape(500, 1), train_x_scaled)) mykernel = Kernel(kernel="rbf", degree=5, coef0=1, gamma=0.5) mysvm = SmoSVM( train=train_data, kernel_func=mykernel, cost=cost, tolerance=0.001, auto_norm=False, ) mysvm.fit() plot_partition_boundary(mysvm, train_data, ax=ax) def plot_partition_boundary( model, train_data, ax, resolution=100, colors=("b", "k", 
"r") ): """ We can not get the optimum w of our kernel svm model which is different from linear svm. For this reason, we generate randomly distributed points with high desity and prediced values of these points are calculated by using our tained model. Then we could use this prediced values to draw contour map. And this contour map can represent svm's partition boundary. """ train_data_x = train_data[:, 1] train_data_y = train_data[:, 2] train_data_tags = train_data[:, 0] xrange = np.linspace(train_data_x.min(), train_data_x.max(), resolution) yrange = np.linspace(train_data_y.min(), train_data_y.max(), resolution) test_samples = np.array([(x, y) for x in xrange for y in yrange]).reshape( resolution * resolution, 2 ) test_tags = model.predict(test_samples, classify=False) grid = test_tags.reshape((len(xrange), len(yrange))) # Plot contour map which represents the partition boundary ax.contour( xrange, yrange, np.mat(grid).T, levels=(-1, 0, 1), linestyles=("--", "-", "--"), linewidths=(1, 1, 1), colors=colors, ) # Plot all train samples ax.scatter( train_data_x, train_data_y, c=train_data_tags, cmap=plt.cm.Dark2, lw=0, alpha=0.5, ) # Plot support vectors support = model.support ax.scatter( train_data_x[support], train_data_y[support], c=train_data_tags[support], cmap=plt.cm.Dark2, ) if __name__ == "__main__": test_cancel_data() test_demonstration() plt.show()
""" Implementation of sequential minimal optimization (SMO) for support vector machines (SVM). Sequential minimal optimization (SMO) is an algorithm for solving the quadratic programming (QP) problem that arises during the training of support vector machines. It was invented by John Platt in 1998. Input: 0: type: numpy.ndarray. 1: first column of ndarray must be tags of samples, must be 1 or -1. 2: rows of ndarray represent samples. Usage: Command: python3 sequential_minimum_optimization.py Code: from sequential_minimum_optimization import SmoSVM, Kernel kernel = Kernel(kernel='poly', degree=3., coef0=1., gamma=0.5) init_alphas = np.zeros(train.shape[0]) SVM = SmoSVM(train=train, alpha_list=init_alphas, kernel_func=kernel, cost=0.4, b=0.0, tolerance=0.001) SVM.fit() predict = SVM.predict(test_samples) Reference: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/smo-book.pdf https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-98-14.pdf """ import os import sys import urllib.request import numpy as np import pandas as pd from matplotlib import pyplot as plt from sklearn.datasets import make_blobs, make_circles from sklearn.preprocessing import StandardScaler CANCER_DATASET_URL = ( "https://archive.ics.uci.edu/ml/machine-learning-databases/" "breast-cancer-wisconsin/wdbc.data" ) class SmoSVM: def __init__( self, train, kernel_func, alpha_list=None, cost=0.4, b=0.0, tolerance=0.001, auto_norm=True, ): self._init = True self._auto_norm = auto_norm self._c = np.float64(cost) self._b = np.float64(b) self._tol = np.float64(tolerance) if tolerance > 0.0001 else np.float64(0.001) self.tags = train[:, 0] self.samples = self._norm(train[:, 1:]) if self._auto_norm else train[:, 1:] self.alphas = alpha_list if alpha_list is not None else np.zeros(train.shape[0]) self.Kernel = kernel_func self._eps = 0.001 self._all_samples = list(range(self.length)) self._K_matrix = self._calculate_k_matrix() self._error = np.zeros(self.length) self._unbound = [] self.choose_alpha = self._choose_alphas() # Calculate alphas using SMO algorithm def fit(self): k = self._k state = None while True: # 1: Find alpha1, alpha2 try: i1, i2 = self.choose_alpha.send(state) state = None except StopIteration: print("Optimization done!\nEvery sample satisfy the KKT condition!") break # 2: calculate new alpha2 and new alpha1 y1, y2 = self.tags[i1], self.tags[i2] a1, a2 = self.alphas[i1].copy(), self.alphas[i2].copy() e1, e2 = self._e(i1), self._e(i2) args = (i1, i2, a1, a2, e1, e2, y1, y2) a1_new, a2_new = self._get_new_alpha(*args) if not a1_new and not a2_new: state = False continue self.alphas[i1], self.alphas[i2] = a1_new, a2_new # 3: update threshold(b) b1_new = np.float64( -e1 - y1 * k(i1, i1) * (a1_new - a1) - y2 * k(i2, i1) * (a2_new - a2) + self._b ) b2_new = np.float64( -e2 - y2 * k(i2, i2) * (a2_new - a2) - y1 * k(i1, i2) * (a1_new - a1) + self._b ) if 0.0 < a1_new < self._c: b = b1_new if 0.0 < a2_new < self._c: b = b2_new if not (np.float64(0) < a2_new < self._c) and not ( np.float64(0) < a1_new < self._c ): b = (b1_new + b2_new) / 2.0 b_old = self._b self._b = b # 4: update error value,here we only calculate those non-bound samples' # error self._unbound = [i for i in self._all_samples if self._is_unbound(i)] for s in self.unbound: if s in (i1, i2): continue self._error[s] += ( y1 * (a1_new - a1) * k(i1, s) + y2 * (a2_new - a2) * k(i2, s) + (self._b - b_old) ) # if i1 or i2 is non-bound,update there error value to zero if self._is_unbound(i1): self._error[i1] = 0 if 
self._is_unbound(i2): self._error[i2] = 0 # Predict test samples def predict(self, test_samples, classify=True): if test_samples.shape[1] > self.samples.shape[1]: raise ValueError( "Test samples' feature length does not equal to that of train samples" ) if self._auto_norm: test_samples = self._norm(test_samples) results = [] for test_sample in test_samples: result = self._predict(test_sample) if classify: results.append(1 if result > 0 else -1) else: results.append(result) return np.array(results) # Check if alpha violate KKT condition def _check_obey_kkt(self, index): alphas = self.alphas tol = self._tol r = self._e(index) * self.tags[index] c = self._c return (r < -tol and alphas[index] < c) or (r > tol and alphas[index] > 0.0) # Get value calculated from kernel function def _k(self, i1, i2): # for test samples,use Kernel function if isinstance(i2, np.ndarray): return self.Kernel(self.samples[i1], i2) # for train samples,Kernel values have been saved in matrix else: return self._K_matrix[i1, i2] # Get sample's error def _e(self, index): """ Two cases: 1:Sample[index] is non-bound,Fetch error from list: _error 2:sample[index] is bound,Use predicted value deduct true value: g(xi) - yi """ # get from error data if self._is_unbound(index): return self._error[index] # get by g(xi) - yi else: gx = np.dot(self.alphas * self.tags, self._K_matrix[:, index]) + self._b yi = self.tags[index] return gx - yi # Calculate Kernel matrix of all possible i1,i2 ,saving time def _calculate_k_matrix(self): k_matrix = np.zeros([self.length, self.length]) for i in self._all_samples: for j in self._all_samples: k_matrix[i, j] = np.float64( self.Kernel(self.samples[i, :], self.samples[j, :]) ) return k_matrix # Predict test sample's tag def _predict(self, sample): k = self._k predicted_value = ( np.sum( [ self.alphas[i1] * self.tags[i1] * k(i1, sample) for i1 in self._all_samples ] ) + self._b ) return predicted_value # Choose alpha1 and alpha2 def _choose_alphas(self): locis = yield from self._choose_a1() if not locis: return None return locis def _choose_a1(self): """ Choose first alpha ;steps: 1:First loop over all sample 2:Second loop over all non-bound samples till all non-bound samples does not voilate kkt condition. 3:Repeat this two process endlessly,till all samples does not voilate kkt condition samples after first loop. """ while True: all_not_obey = True # all sample print("scanning all sample!") for i1 in [i for i in self._all_samples if self._check_obey_kkt(i)]: all_not_obey = False yield from self._choose_a2(i1) # non-bound sample print("scanning non-bound sample!") while True: not_obey = True for i1 in [ i for i in self._all_samples if self._check_obey_kkt(i) and self._is_unbound(i) ]: not_obey = False yield from self._choose_a2(i1) if not_obey: print("all non-bound samples fit the KKT condition!") break if all_not_obey: print("all samples fit the KKT condition! Optimization done!") break return False def _choose_a2(self, i1): """ Choose the second alpha by using heuristic algorithm ;steps: 1: Choose alpha2 which gets the maximum step size (|E1 - E2|). 2: Start in a random point,loop over all non-bound samples till alpha1 and alpha2 are optimized. 3: Start in a random point,loop over all samples till alpha1 and alpha2 are optimized. 
""" self._unbound = [i for i in self._all_samples if self._is_unbound(i)] if len(self.unbound) > 0: tmp_error = self._error.copy().tolist() tmp_error_dict = { index: value for index, value in enumerate(tmp_error) if self._is_unbound(index) } if self._e(i1) >= 0: i2 = min(tmp_error_dict, key=lambda index: tmp_error_dict[index]) else: i2 = max(tmp_error_dict, key=lambda index: tmp_error_dict[index]) cmd = yield i1, i2 if cmd is None: return for i2 in np.roll(self.unbound, np.random.choice(self.length)): cmd = yield i1, i2 if cmd is None: return for i2 in np.roll(self._all_samples, np.random.choice(self.length)): cmd = yield i1, i2 if cmd is None: return # Get the new alpha2 and new alpha1 def _get_new_alpha(self, i1, i2, a1, a2, e1, e2, y1, y2): k = self._k if i1 == i2: return None, None # calculate L and H which bound the new alpha2 s = y1 * y2 if s == -1: l, h = max(0.0, a2 - a1), min(self._c, self._c + a2 - a1) else: l, h = max(0.0, a2 + a1 - self._c), min(self._c, a2 + a1) if l == h: return None, None # calculate eta k11 = k(i1, i1) k22 = k(i2, i2) k12 = k(i1, i2) # select the new alpha2 which could get the minimal objectives if (eta := k11 + k22 - 2.0 * k12) > 0.0: a2_new_unc = a2 + (y2 * (e1 - e2)) / eta # a2_new has a boundary if a2_new_unc >= h: a2_new = h elif a2_new_unc <= l: a2_new = l else: a2_new = a2_new_unc else: b = self._b l1 = a1 + s * (a2 - l) h1 = a1 + s * (a2 - h) # way 1 f1 = y1 * (e1 + b) - a1 * k(i1, i1) - s * a2 * k(i1, i2) f2 = y2 * (e2 + b) - a2 * k(i2, i2) - s * a1 * k(i1, i2) ol = ( l1 * f1 + l * f2 + 1 / 2 * l1**2 * k(i1, i1) + 1 / 2 * l**2 * k(i2, i2) + s * l * l1 * k(i1, i2) ) oh = ( h1 * f1 + h * f2 + 1 / 2 * h1**2 * k(i1, i1) + 1 / 2 * h**2 * k(i2, i2) + s * h * h1 * k(i1, i2) ) """ # way 2 Use objective function check which alpha2 new could get the minimal objectives """ if ol < (oh - self._eps): a2_new = l elif ol > oh + self._eps: a2_new = h else: a2_new = a2 # a1_new has a boundary too a1_new = a1 + s * (a2 - a2_new) if a1_new < 0: a2_new += s * a1_new a1_new = 0 if a1_new > self._c: a2_new += s * (a1_new - self._c) a1_new = self._c return a1_new, a2_new # Normalise data using min_max way def _norm(self, data): if self._init: self._min = np.min(data, axis=0) self._max = np.max(data, axis=0) self._init = False return (data - self._min) / (self._max - self._min) else: return (data - self._min) / (self._max - self._min) def _is_unbound(self, index): return bool(0.0 < self.alphas[index] < self._c) def _is_support(self, index): return bool(self.alphas[index] > 0) @property def unbound(self): return self._unbound @property def support(self): return [i for i in range(self.length) if self._is_support(i)] @property def length(self): return self.samples.shape[0] class Kernel: def __init__(self, kernel, degree=1.0, coef0=0.0, gamma=1.0): self.degree = np.float64(degree) self.coef0 = np.float64(coef0) self.gamma = np.float64(gamma) self._kernel_name = kernel self._kernel = self._get_kernel(kernel_name=kernel) self._check() def _polynomial(self, v1, v2): return (self.gamma * np.inner(v1, v2) + self.coef0) ** self.degree def _linear(self, v1, v2): return np.inner(v1, v2) + self.coef0 def _rbf(self, v1, v2): return np.exp(-1 * (self.gamma * np.linalg.norm(v1 - v2) ** 2)) def _check(self): if self._kernel == self._rbf and self.gamma < 0: raise ValueError("gamma value must greater than 0") def _get_kernel(self, kernel_name): maps = {"linear": self._linear, "poly": self._polynomial, "rbf": self._rbf} return maps[kernel_name] def __call__(self, v1, v2): return 
self._kernel(v1, v2) def __repr__(self): return self._kernel_name def count_time(func): def call_func(*args, **kwargs): import time start_time = time.time() func(*args, **kwargs) end_time = time.time() print(f"smo algorithm cost {end_time - start_time} seconds") return call_func @count_time def test_cancel_data(): print("Hello!\nStart test svm by smo algorithm!") # 0: download dataset and load into pandas' dataframe if not os.path.exists(r"cancel_data.csv"): request = urllib.request.Request( CANCER_DATASET_URL, headers={"User-Agent": "Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)"}, ) response = urllib.request.urlopen(request) content = response.read().decode("utf-8") with open(r"cancel_data.csv", "w") as f: f.write(content) data = pd.read_csv(r"cancel_data.csv", header=None) # 1: pre-processing data del data[data.columns.tolist()[0]] data = data.dropna(axis=0) data = data.replace({"M": np.float64(1), "B": np.float64(-1)}) samples = np.array(data)[:, :] # 2: dividing data into train_data data and test_data data train_data, test_data = samples[:328, :], samples[328:, :] test_tags, test_samples = test_data[:, 0], test_data[:, 1:] # 3: choose kernel function,and set initial alphas to zero(optional) mykernel = Kernel(kernel="rbf", degree=5, coef0=1, gamma=0.5) al = np.zeros(train_data.shape[0]) # 4: calculating best alphas using SMO algorithm and predict test_data samples mysvm = SmoSVM( train=train_data, kernel_func=mykernel, alpha_list=al, cost=0.4, b=0.0, tolerance=0.001, ) mysvm.fit() predict = mysvm.predict(test_samples) # 5: check accuracy score = 0 test_num = test_tags.shape[0] for i in range(test_tags.shape[0]): if test_tags[i] == predict[i]: score += 1 print(f"\nall: {test_num}\nright: {score}\nfalse: {test_num - score}") print(f"Rough Accuracy: {score / test_tags.shape[0]}") def test_demonstration(): # change stdout print("\nStart plot,please wait!!!") sys.stdout = open(os.devnull, "w") ax1 = plt.subplot2grid((2, 2), (0, 0)) ax2 = plt.subplot2grid((2, 2), (0, 1)) ax3 = plt.subplot2grid((2, 2), (1, 0)) ax4 = plt.subplot2grid((2, 2), (1, 1)) ax1.set_title("linear svm,cost:0.1") test_linear_kernel(ax1, cost=0.1) ax2.set_title("linear svm,cost:500") test_linear_kernel(ax2, cost=500) ax3.set_title("rbf kernel svm,cost:0.1") test_rbf_kernel(ax3, cost=0.1) ax4.set_title("rbf kernel svm,cost:500") test_rbf_kernel(ax4, cost=500) sys.stdout = sys.__stdout__ print("Plot done!!!") def test_linear_kernel(ax, cost): train_x, train_y = make_blobs( n_samples=500, centers=2, n_features=2, random_state=1 ) train_y[train_y == 0] = -1 scaler = StandardScaler() train_x_scaled = scaler.fit_transform(train_x, train_y) train_data = np.hstack((train_y.reshape(500, 1), train_x_scaled)) mykernel = Kernel(kernel="linear", degree=5, coef0=1, gamma=0.5) mysvm = SmoSVM( train=train_data, kernel_func=mykernel, cost=cost, tolerance=0.001, auto_norm=False, ) mysvm.fit() plot_partition_boundary(mysvm, train_data, ax=ax) def test_rbf_kernel(ax, cost): train_x, train_y = make_circles( n_samples=500, noise=0.1, factor=0.1, random_state=1 ) train_y[train_y == 0] = -1 scaler = StandardScaler() train_x_scaled = scaler.fit_transform(train_x, train_y) train_data = np.hstack((train_y.reshape(500, 1), train_x_scaled)) mykernel = Kernel(kernel="rbf", degree=5, coef0=1, gamma=0.5) mysvm = SmoSVM( train=train_data, kernel_func=mykernel, cost=cost, tolerance=0.001, auto_norm=False, ) mysvm.fit() plot_partition_boundary(mysvm, train_data, ax=ax) def plot_partition_boundary( model, train_data, ax, resolution=100, colors=("b", "k", 
"r") ): """ We can not get the optimum w of our kernel svm model which is different from linear svm. For this reason, we generate randomly distributed points with high desity and prediced values of these points are calculated by using our trained model. Then we could use this prediced values to draw contour map. And this contour map can represent svm's partition boundary. """ train_data_x = train_data[:, 1] train_data_y = train_data[:, 2] train_data_tags = train_data[:, 0] xrange = np.linspace(train_data_x.min(), train_data_x.max(), resolution) yrange = np.linspace(train_data_y.min(), train_data_y.max(), resolution) test_samples = np.array([(x, y) for x in xrange for y in yrange]).reshape( resolution * resolution, 2 ) test_tags = model.predict(test_samples, classify=False) grid = test_tags.reshape((len(xrange), len(yrange))) # Plot contour map which represents the partition boundary ax.contour( xrange, yrange, np.mat(grid).T, levels=(-1, 0, 1), linestyles=("--", "-", "--"), linewidths=(1, 1, 1), colors=colors, ) # Plot all train samples ax.scatter( train_data_x, train_data_y, c=train_data_tags, cmap=plt.cm.Dark2, lw=0, alpha=0.5, ) # Plot support vectors support = model.support ax.scatter( train_data_x[support], train_data_y[support], c=train_data_tags[support], cmap=plt.cm.Dark2, ) if __name__ == "__main__": test_cancel_data() test_demonstration() plt.show()
1
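The SmoSVM/Kernel pair in the record above is easiest to sanity-check with a small end-to-end run. The sketch below is illustrative only and not part of the PR: it assumes the code is saved as `sequential_minimum_optimization.py` (as the file's own docstring suggests), and the blob parameters and cost value are arbitrary choices.

```python
# Minimal smoke test for the SmoSVM class shown above. The import path is an
# assumption based on the file's docstring; adjust it to your local layout.
import numpy as np
from sklearn.datasets import make_blobs

from sequential_minimum_optimization import Kernel, SmoSVM

# Build a small two-class problem; the first column must hold the +/-1 tags.
features, labels = make_blobs(n_samples=100, centers=2, n_features=2, random_state=0)
labels[labels == 0] = -1
train = np.hstack((labels.reshape(-1, 1), features))

# A linear kernel and a small cost keep the optimization fast on toy data.
svm = SmoSVM(train=train, kernel_func=Kernel(kernel="linear"), cost=0.4)
svm.fit()
print(svm.predict(features[:5]))  # +/-1 tags for the first five samples
```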
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
""" Lorentz transformations describe the transition between two inertial reference frames F and F', each of which is moving in some direction with respect to the other. This code only calculates Lorentz transformations for movement in the x direction with no spacial rotation (i.e., a Lorentz boost in the x direction). The Lorentz transformations are calculated here as linear transformations of four-vectors [ct, x, y, z] described by Minkowski space. Note that t (time) is multiplied by c (the speed of light) in the first entry of each four-vector. Thus, if X = [ct; x; y; z] and X' = [ct'; x'; y'; z'] are the four-vectors for two inertial reference frames and X' moves in the x direction with velocity v with respect to X, then the Lorentz transformation from X to X' is X' = BX, where | γ -γβ 0 0| B = |-γβ γ 0 0| | 0 0 1 0| | 0 0 0 1| is the matrix describing the Lorentz boost between X and X', γ = 1 / √(1 - v²/c²) is the Lorentz factor, and β = v/c is the velocity as a fraction of c. Reference: https://en.wikipedia.org/wiki/Lorentz_transformation """ from math import sqrt import numpy as np from sympy import symbols # Coefficient # Speed of light (m/s) c = 299792458 # Symbols ct, x, y, z = symbols("ct x y z") # Vehicle's speed divided by speed of light (no units) def beta(velocity: float) -> float: """ Calculates β = v/c, the given velocity as a fraction of c >>> beta(c) 1.0 >>> beta(199792458) 0.666435904801848 >>> beta(1e5) 0.00033356409519815205 >>> beta(0.2) Traceback (most recent call last): ... ValueError: Speed must be greater than or equal to 1! """ if velocity > c: raise ValueError("Speed must not exceed light speed 299,792,458 [m/s]!") elif velocity < 1: # Usually the speed should be much higher than 1 (c order of magnitude) raise ValueError("Speed must be greater than or equal to 1!") return velocity / c def gamma(velocity: float) -> float: """ Calculate the Lorentz factor γ = 1 / √(1 - v²/c²) for a given velocity >>> gamma(4) 1.0000000000000002 >>> gamma(1e5) 1.0000000556325075 >>> gamma(3e7) 1.005044845777813 >>> gamma(2.8e8) 2.7985595722318277 >>> gamma(299792451) 4627.49902669495 >>> gamma(0.3) Traceback (most recent call last): ... ValueError: Speed must be greater than or equal to 1! >>> gamma(2 * c) Traceback (most recent call last): ... ValueError: Speed must not exceed light speed 299,792,458 [m/s]! """ return 1 / sqrt(1 - beta(velocity) ** 2) def transformation_matrix(velocity: float) -> np.ndarray: """ Calculate the Lorentz transformation matrix for movement in the x direction: | γ -γβ 0 0| |-γβ γ 0 0| | 0 0 1 0| | 0 0 0 1| where γ is the Lorentz factor and β is the velocity as a fraction of c >>> transformation_matrix(29979245) array([[ 1.00503781, -0.10050378, 0. , 0. ], [-0.10050378, 1.00503781, 0. , 0. ], [ 0. , 0. , 1. , 0. ], [ 0. , 0. , 0. , 1. ]]) >>> transformation_matrix(19979245.2) array([[ 1.00222811, -0.06679208, 0. , 0. ], [-0.06679208, 1.00222811, 0. , 0. ], [ 0. , 0. , 1. , 0. ], [ 0. , 0. , 0. , 1. ]]) >>> transformation_matrix(1) array([[ 1.00000000e+00, -3.33564095e-09, 0.00000000e+00, 0.00000000e+00], [-3.33564095e-09, 1.00000000e+00, 0.00000000e+00, 0.00000000e+00], [ 0.00000000e+00, 0.00000000e+00, 1.00000000e+00, 0.00000000e+00], [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 1.00000000e+00]]) >>> transformation_matrix(0) Traceback (most recent call last): ... ValueError: Speed must be greater than or equal to 1! >>> transformation_matrix(c * 1.5) Traceback (most recent call last): ... 
ValueError: Speed must not exceed light speed 299,792,458 [m/s]! """ return np.array( [ [gamma(velocity), -gamma(velocity) * beta(velocity), 0, 0], [-gamma(velocity) * beta(velocity), gamma(velocity), 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], ] ) def transform(velocity: float, event: np.ndarray | None = None) -> np.ndarray: """ Calculate a Lorentz transformation for movement in the x direction given a velocity and a four-vector for an inertial reference frame If no four-vector is given, then calculate the transformation symbolically with variables >>> transform(29979245, np.array([1, 2, 3, 4])) array([ 3.01302757e+08, -3.01302729e+07, 3.00000000e+00, 4.00000000e+00]) >>> transform(29979245) array([1.00503781498831*ct - 0.100503778816875*x, -0.100503778816875*ct + 1.00503781498831*x, 1.0*y, 1.0*z], dtype=object) >>> transform(19879210.2) array([1.0022057787097*ct - 0.066456172618675*x, -0.066456172618675*ct + 1.0022057787097*x, 1.0*y, 1.0*z], dtype=object) >>> transform(299792459, np.array([1, 1, 1, 1])) Traceback (most recent call last): ... ValueError: Speed must not exceed light speed 299,792,458 [m/s]! >>> transform(-1, np.array([1, 1, 1, 1])) Traceback (most recent call last): ... ValueError: Speed must be greater than or equal to 1! """ # Ensure event is not empty if event is None: event = np.array([ct, x, y, z]) # Symbolic four vector else: event[0] *= c # x0 is ct (speed of light * time) return transformation_matrix(velocity) @ event if __name__ == "__main__": import doctest doctest.testmod() # Example of symbolic vector: four_vector = transform(29979245) print("Example of four vector: ") print(f"ct' = {four_vector[0]}") print(f"x' = {four_vector[1]}") print(f"y' = {four_vector[2]}") print(f"z' = {four_vector[3]}") # Substitute symbols with numerical values sub_dict = {ct: c, x: 1, y: 1, z: 1} numerical_vector = [four_vector[i].subs(sub_dict) for i in range(4)] print(f"\n{numerical_vector}")
""" Lorentz transformations describe the transition between two inertial reference frames F and F', each of which is moving in some direction with respect to the other. This code only calculates Lorentz transformations for movement in the x direction with no spatial rotation (i.e., a Lorentz boost in the x direction). The Lorentz transformations are calculated here as linear transformations of four-vectors [ct, x, y, z] described by Minkowski space. Note that t (time) is multiplied by c (the speed of light) in the first entry of each four-vector. Thus, if X = [ct; x; y; z] and X' = [ct'; x'; y'; z'] are the four-vectors for two inertial reference frames and X' moves in the x direction with velocity v with respect to X, then the Lorentz transformation from X to X' is X' = BX, where | γ -γβ 0 0| B = |-γβ γ 0 0| | 0 0 1 0| | 0 0 0 1| is the matrix describing the Lorentz boost between X and X', γ = 1 / √(1 - v²/c²) is the Lorentz factor, and β = v/c is the velocity as a fraction of c. Reference: https://en.wikipedia.org/wiki/Lorentz_transformation """ from math import sqrt import numpy as np from sympy import symbols # Coefficient # Speed of light (m/s) c = 299792458 # Symbols ct, x, y, z = symbols("ct x y z") # Vehicle's speed divided by speed of light (no units) def beta(velocity: float) -> float: """ Calculates β = v/c, the given velocity as a fraction of c >>> beta(c) 1.0 >>> beta(199792458) 0.666435904801848 >>> beta(1e5) 0.00033356409519815205 >>> beta(0.2) Traceback (most recent call last): ... ValueError: Speed must be greater than or equal to 1! """ if velocity > c: raise ValueError("Speed must not exceed light speed 299,792,458 [m/s]!") elif velocity < 1: # Usually the speed should be much higher than 1 (c order of magnitude) raise ValueError("Speed must be greater than or equal to 1!") return velocity / c def gamma(velocity: float) -> float: """ Calculate the Lorentz factor γ = 1 / √(1 - v²/c²) for a given velocity >>> gamma(4) 1.0000000000000002 >>> gamma(1e5) 1.0000000556325075 >>> gamma(3e7) 1.005044845777813 >>> gamma(2.8e8) 2.7985595722318277 >>> gamma(299792451) 4627.49902669495 >>> gamma(0.3) Traceback (most recent call last): ... ValueError: Speed must be greater than or equal to 1! >>> gamma(2 * c) Traceback (most recent call last): ... ValueError: Speed must not exceed light speed 299,792,458 [m/s]! """ return 1 / sqrt(1 - beta(velocity) ** 2) def transformation_matrix(velocity: float) -> np.ndarray: """ Calculate the Lorentz transformation matrix for movement in the x direction: | γ -γβ 0 0| |-γβ γ 0 0| | 0 0 1 0| | 0 0 0 1| where γ is the Lorentz factor and β is the velocity as a fraction of c >>> transformation_matrix(29979245) array([[ 1.00503781, -0.10050378, 0. , 0. ], [-0.10050378, 1.00503781, 0. , 0. ], [ 0. , 0. , 1. , 0. ], [ 0. , 0. , 0. , 1. ]]) >>> transformation_matrix(19979245.2) array([[ 1.00222811, -0.06679208, 0. , 0. ], [-0.06679208, 1.00222811, 0. , 0. ], [ 0. , 0. , 1. , 0. ], [ 0. , 0. , 0. , 1. ]]) >>> transformation_matrix(1) array([[ 1.00000000e+00, -3.33564095e-09, 0.00000000e+00, 0.00000000e+00], [-3.33564095e-09, 1.00000000e+00, 0.00000000e+00, 0.00000000e+00], [ 0.00000000e+00, 0.00000000e+00, 1.00000000e+00, 0.00000000e+00], [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 1.00000000e+00]]) >>> transformation_matrix(0) Traceback (most recent call last): ... ValueError: Speed must be greater than or equal to 1! >>> transformation_matrix(c * 1.5) Traceback (most recent call last): ... 
ValueError: Speed must not exceed light speed 299,792,458 [m/s]! """ return np.array( [ [gamma(velocity), -gamma(velocity) * beta(velocity), 0, 0], [-gamma(velocity) * beta(velocity), gamma(velocity), 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], ] ) def transform(velocity: float, event: np.ndarray | None = None) -> np.ndarray: """ Calculate a Lorentz transformation for movement in the x direction given a velocity and a four-vector for an inertial reference frame If no four-vector is given, then calculate the transformation symbolically with variables >>> transform(29979245, np.array([1, 2, 3, 4])) array([ 3.01302757e+08, -3.01302729e+07, 3.00000000e+00, 4.00000000e+00]) >>> transform(29979245) array([1.00503781498831*ct - 0.100503778816875*x, -0.100503778816875*ct + 1.00503781498831*x, 1.0*y, 1.0*z], dtype=object) >>> transform(19879210.2) array([1.0022057787097*ct - 0.066456172618675*x, -0.066456172618675*ct + 1.0022057787097*x, 1.0*y, 1.0*z], dtype=object) >>> transform(299792459, np.array([1, 1, 1, 1])) Traceback (most recent call last): ... ValueError: Speed must not exceed light speed 299,792,458 [m/s]! >>> transform(-1, np.array([1, 1, 1, 1])) Traceback (most recent call last): ... ValueError: Speed must be greater than or equal to 1! """ # Ensure event is not empty if event is None: event = np.array([ct, x, y, z]) # Symbolic four vector else: event[0] *= c # x0 is ct (speed of light * time) return transformation_matrix(velocity) @ event if __name__ == "__main__": import doctest doctest.testmod() # Example of symbolic vector: four_vector = transform(29979245) print("Example of four vector: ") print(f"ct' = {four_vector[0]}") print(f"x' = {four_vector[1]}") print(f"y' = {four_vector[2]}") print(f"z' = {four_vector[3]}") # Substitute symbols with numerical values sub_dict = {ct: c, x: 1, y: 1, z: 1} numerical_vector = [four_vector[i].subs(sub_dict) for i in range(4)] print(f"\n{numerical_vector}")
1
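One quick way to validate the boost matrix in the record above is to check the invariant every Lorentz transformation must preserve: the Minkowski interval (ct)² − x² − y² − z². The snippet below is an illustrative sketch, not part of the PR; the module name `lorentz_transformation_four_vector` and the sample velocity and event are assumptions.

```python
# Check that the boost preserves the Minkowski interval (ct)^2 - x^2 - y^2 - z^2.
# The import path is an assumption; point it at wherever the code above lives.
import numpy as np

from lorentz_transformation_four_vector import c, transform


def interval(four_vector: np.ndarray) -> float:
    """Squared Minkowski interval of a four-vector [ct, x, y, z]."""
    return four_vector[0] ** 2 - np.sum(four_vector[1:] ** 2)


event = np.array([2.0, 1.0, 0.5, 0.0])  # [t, x, y, z]; transform() scales t by c
boosted = transform(2.5e7, event.copy())  # copy: transform mutates event[0] in place
event[0] *= c  # mirror the ct scaling so both intervals use [ct, x, y, z]
print(np.isclose(interval(event), interval(boosted)))  # expected: True
```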
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
#
#
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
import unittest import numpy as np def schur_complement( mat_a: np.ndarray, mat_b: np.ndarray, mat_c: np.ndarray, pseudo_inv: np.ndarray | None = None, ) -> np.ndarray: """ Schur complement of a symmetric matrix X given as a 2x2 block matrix consisting of matrices A, B and C. Matrix A must be quadratic and non-singular. In case A is singular, a pseudo-inverse may be provided using the pseudo_inv argument. Link to Wiki: https://en.wikipedia.org/wiki/Schur_complement See also Convex Optimization – Boyd and Vandenberghe, A.5.5 >>> import numpy as np >>> a = np.array([[1, 2], [2, 1]]) >>> b = np.array([[0, 3], [3, 0]]) >>> c = np.array([[2, 1], [6, 3]]) >>> schur_complement(a, b, c) array([[ 5., -5.], [ 0., 6.]]) """ shape_a = np.shape(mat_a) shape_b = np.shape(mat_b) shape_c = np.shape(mat_c) if shape_a[0] != shape_b[0]: raise ValueError( f"Expected the same number of rows for A and B. \ Instead found A of size {shape_a} and B of size {shape_b}" ) if shape_b[1] != shape_c[1]: raise ValueError( f"Expected the same number of columns for B and C. \ Instead found B of size {shape_b} and C of size {shape_c}" ) a_inv = pseudo_inv if a_inv is None: try: a_inv = np.linalg.inv(mat_a) except np.linalg.LinAlgError: raise ValueError( "Input matrix A is not invertible. Cannot compute Schur complement." ) return mat_c - mat_b.T @ a_inv @ mat_b class TestSchurComplement(unittest.TestCase): def test_schur_complement(self) -> None: a = np.array([[1, 2, 1], [2, 1, 2], [3, 2, 4]]) b = np.array([[0, 3], [3, 0], [2, 3]]) c = np.array([[2, 1], [6, 3]]) s = schur_complement(a, b, c) input_matrix = np.block([[a, b], [b.T, c]]) det_x = np.linalg.det(input_matrix) det_a = np.linalg.det(a) det_s = np.linalg.det(s) self.assertAlmostEqual(det_x, det_a * det_s) def test_improper_a_b_dimensions(self) -> None: a = np.array([[1, 2, 1], [2, 1, 2], [3, 2, 4]]) b = np.array([[0, 3], [3, 0], [2, 3]]) c = np.array([[2, 1], [6, 3]]) with self.assertRaises(ValueError): schur_complement(a, b, c) def test_improper_b_c_dimensions(self) -> None: a = np.array([[1, 2, 1], [2, 1, 2], [3, 2, 4]]) b = np.array([[0, 3], [3, 0], [2, 3]]) c = np.array([[2, 1, 3], [6, 3, 5]]) with self.assertRaises(ValueError): schur_complement(a, b, c) if __name__ == "__main__": import doctest doctest.testmod() unittest.main()
import unittest import numpy as np def schur_complement( mat_a: np.ndarray, mat_b: np.ndarray, mat_c: np.ndarray, pseudo_inv: np.ndarray | None = None, ) -> np.ndarray: """ Schur complement of a symmetric matrix X given as a 2x2 block matrix consisting of matrices A, B and C. Matrix A must be quadratic and non-singular. In case A is singular, a pseudo-inverse may be provided using the pseudo_inv argument. Link to Wiki: https://en.wikipedia.org/wiki/Schur_complement See also Convex Optimization – Boyd and Vandenberghe, A.5.5 >>> import numpy as np >>> a = np.array([[1, 2], [2, 1]]) >>> b = np.array([[0, 3], [3, 0]]) >>> c = np.array([[2, 1], [6, 3]]) >>> schur_complement(a, b, c) array([[ 5., -5.], [ 0., 6.]]) """ shape_a = np.shape(mat_a) shape_b = np.shape(mat_b) shape_c = np.shape(mat_c) if shape_a[0] != shape_b[0]: raise ValueError( f"Expected the same number of rows for A and B. \ Instead found A of size {shape_a} and B of size {shape_b}" ) if shape_b[1] != shape_c[1]: raise ValueError( f"Expected the same number of columns for B and C. \ Instead found B of size {shape_b} and C of size {shape_c}" ) a_inv = pseudo_inv if a_inv is None: try: a_inv = np.linalg.inv(mat_a) except np.linalg.LinAlgError: raise ValueError( "Input matrix A is not invertible. Cannot compute Schur complement." ) return mat_c - mat_b.T @ a_inv @ mat_b class TestSchurComplement(unittest.TestCase): def test_schur_complement(self) -> None: a = np.array([[1, 2, 1], [2, 1, 2], [3, 2, 4]]) b = np.array([[0, 3], [3, 0], [2, 3]]) c = np.array([[2, 1], [6, 3]]) s = schur_complement(a, b, c) input_matrix = np.block([[a, b], [b.T, c]]) det_x = np.linalg.det(input_matrix) det_a = np.linalg.det(a) det_s = np.linalg.det(s) self.assertAlmostEqual(det_x, det_a * det_s) def test_improper_a_b_dimensions(self) -> None: a = np.array([[1, 2, 1], [2, 1, 2], [3, 2, 4]]) b = np.array([[0, 3], [3, 0], [2, 3]]) c = np.array([[2, 1], [6, 3]]) with self.assertRaises(ValueError): schur_complement(a, b, c) def test_improper_b_c_dimensions(self) -> None: a = np.array([[1, 2, 1], [2, 1, 2], [3, 2, 4]]) b = np.array([[0, 3], [3, 0], [2, 3]]) c = np.array([[2, 1, 3], [6, 3, 5]]) with self.assertRaises(ValueError): schur_complement(a, b, c) if __name__ == "__main__": import doctest doctest.testmod() unittest.main()
-1
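The unit test in the record above leans on the identity det([[A, B], [Bᵀ, C]]) = det(A) · det(S), where S is the Schur complement of A; seeing it on concrete numbers helps. The sketch below is illustrative only: the matrices are arbitrary, and the import assumes the code is saved as `schur_complement.py`.

```python
# Numerically confirm det(X) = det(A) * det(S) for X = [[A, B], [B.T, C]].
# The import path is an assumption; adapt it to where the code above lives.
import numpy as np

from schur_complement import schur_complement

a = np.array([[4.0, 1.0], [1.0, 3.0]])  # symmetric and invertible
b = np.array([[0.0, 2.0], [1.0, 0.0]])
c = np.array([[5.0, 0.0], [0.0, 2.0]])

s = schur_complement(a, b, c)  # S = C - B.T @ inv(A) @ B
x = np.block([[a, b], [b.T, c]])
print(np.isclose(np.linalg.det(x), np.linalg.det(a) * np.linalg.det(s)))  # True
```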
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
from __future__ import annotations def stable_matching( donor_pref: list[list[int]], recipient_pref: list[list[int]] ) -> list[int]: """ Finds the stable match in any bipartite graph, i.e. a pairing where no two objects prefer each other over their partner. The function accepts the preferences of organ donors and recipients (where both are assigned numbers from 0 to n-1) and returns a list where the index position corresponds to the donor and value at the index is the organ recipient. To better understand the algorithm, see also: https://github.com/akashvshroff/Gale_Shapley_Stable_Matching (README). https://www.youtube.com/watch?v=Qcv1IqHWAzg&t=13s (Numberphile YouTube). >>> donor_pref = [[0, 1, 3, 2], [0, 2, 3, 1], [1, 0, 2, 3], [0, 3, 1, 2]] >>> recipient_pref = [[3, 1, 2, 0], [3, 1, 0, 2], [0, 3, 1, 2], [1, 0, 3, 2]] >>> stable_matching(donor_pref, recipient_pref) [1, 2, 3, 0] """ assert len(donor_pref) == len(recipient_pref) n = len(donor_pref) unmatched_donors = list(range(n)) donor_record = [-1] * n # who the donor has donated to rec_record = [-1] * n # who the recipient has received from num_donations = [0] * n while unmatched_donors: donor = unmatched_donors[0] donor_preference = donor_pref[donor] recipient = donor_preference[num_donations[donor]] num_donations[donor] += 1 rec_preference = recipient_pref[recipient] prev_donor = rec_record[recipient] if prev_donor != -1: if rec_preference.index(prev_donor) > rec_preference.index(donor): rec_record[recipient] = donor donor_record[donor] = recipient unmatched_donors.append(prev_donor) unmatched_donors.remove(donor) else: rec_record[recipient] = donor donor_record[donor] = recipient unmatched_donors.remove(donor) return donor_record
from __future__ import annotations def stable_matching( donor_pref: list[list[int]], recipient_pref: list[list[int]] ) -> list[int]: """ Finds the stable match in any bipartite graph, i.e. a pairing where no two objects prefer each other over their partner. The function accepts the preferences of organ donors and recipients (where both are assigned numbers from 0 to n-1) and returns a list where the index position corresponds to the donor and value at the index is the organ recipient. To better understand the algorithm, see also: https://github.com/akashvshroff/Gale_Shapley_Stable_Matching (README). https://www.youtube.com/watch?v=Qcv1IqHWAzg&t=13s (Numberphile YouTube). >>> donor_pref = [[0, 1, 3, 2], [0, 2, 3, 1], [1, 0, 2, 3], [0, 3, 1, 2]] >>> recipient_pref = [[3, 1, 2, 0], [3, 1, 0, 2], [0, 3, 1, 2], [1, 0, 3, 2]] >>> stable_matching(donor_pref, recipient_pref) [1, 2, 3, 0] """ assert len(donor_pref) == len(recipient_pref) n = len(donor_pref) unmatched_donors = list(range(n)) donor_record = [-1] * n # who the donor has donated to rec_record = [-1] * n # who the recipient has received from num_donations = [0] * n while unmatched_donors: donor = unmatched_donors[0] donor_preference = donor_pref[donor] recipient = donor_preference[num_donations[donor]] num_donations[donor] += 1 rec_preference = recipient_pref[recipient] prev_donor = rec_record[recipient] if prev_donor != -1: if rec_preference.index(prev_donor) > rec_preference.index(donor): rec_record[recipient] = donor donor_record[donor] = recipient unmatched_donors.append(prev_donor) unmatched_donors.remove(donor) else: rec_record[recipient] = donor donor_record[donor] = recipient unmatched_donors.remove(donor) return donor_record
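A hedged sanity check for the docstring example: a matching is stable when no donor/recipient pair both prefer each other over their assigned partners. The helper name is_blocking below is hypothetical, and stable_matching above is assumed to be in scope:

donor_pref = [[0, 1, 3, 2], [0, 2, 3, 1], [1, 0, 2, 3], [0, 3, 1, 2]]
recipient_pref = [[3, 1, 2, 0], [3, 1, 0, 2], [0, 3, 1, 2], [1, 0, 3, 2]]
rec_of = stable_matching(donor_pref, recipient_pref)  # [1, 2, 3, 0]
donor_of = [rec_of.index(r) for r in range(len(rec_of))]  # inverse mapping

def is_blocking(d: int, r: int) -> bool:
    # Lower index in a preference list means more preferred.
    d_prefers = donor_pref[d].index(r) < donor_pref[d].index(rec_of[d])
    r_prefers = recipient_pref[r].index(d) < recipient_pref[r].index(donor_of[r])
    return d_prefers and r_prefers

assert not any(
    is_blocking(d, r) for d in range(4) for r in range(4) if rec_of[d] != r
)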
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
# https://www.investopedia.com from __future__ import annotations def simple_interest( principal: float, daily_interest_rate: float, days_between_payments: int ) -> float: """ >>> simple_interest(18000.0, 0.06, 3) 3240.0 >>> simple_interest(0.5, 0.06, 3) 0.09 >>> simple_interest(18000.0, 0.01, 10) 1800.0 >>> simple_interest(18000.0, 0.0, 3) 0.0 >>> simple_interest(5500.0, 0.01, 100) 5500.0 >>> simple_interest(10000.0, -0.06, 3) Traceback (most recent call last): ... ValueError: daily_interest_rate must be >= 0 >>> simple_interest(-10000.0, 0.06, 3) Traceback (most recent call last): ... ValueError: principal must be > 0 >>> simple_interest(5500.0, 0.01, -5) Traceback (most recent call last): ... ValueError: days_between_payments must be > 0 """ if days_between_payments <= 0: raise ValueError("days_between_payments must be > 0") if daily_interest_rate < 0: raise ValueError("daily_interest_rate must be >= 0") if principal <= 0: raise ValueError("principal must be > 0") return principal * daily_interest_rate * days_between_payments def compound_interest( principal: float, nominal_annual_interest_rate_percentage: float, number_of_compounding_periods: int, ) -> float: """ >>> compound_interest(10000.0, 0.05, 3) 1576.2500000000014 >>> compound_interest(10000.0, 0.05, 1) 500.00000000000045 >>> compound_interest(0.5, 0.05, 3) 0.07881250000000006 >>> compound_interest(10000.0, 0.06, -4) Traceback (most recent call last): ... ValueError: number_of_compounding_periods must be > 0 >>> compound_interest(10000.0, -3.5, 3.0) Traceback (most recent call last): ... ValueError: nominal_annual_interest_rate_percentage must be >= 0 >>> compound_interest(-5500.0, 0.01, 5) Traceback (most recent call last): ... ValueError: principal must be > 0 """ if number_of_compounding_periods <= 0: raise ValueError("number_of_compounding_periods must be > 0") if nominal_annual_interest_rate_percentage < 0: raise ValueError("nominal_annual_interest_rate_percentage must be >= 0") if principal <= 0: raise ValueError("principal must be > 0") return principal * ( (1 + nominal_annual_interest_rate_percentage) ** number_of_compounding_periods - 1 ) if __name__ == "__main__": import doctest doctest.testmod()
# https://www.investopedia.com from __future__ import annotations def simple_interest( principal: float, daily_interest_rate: float, days_between_payments: int ) -> float: """ >>> simple_interest(18000.0, 0.06, 3) 3240.0 >>> simple_interest(0.5, 0.06, 3) 0.09 >>> simple_interest(18000.0, 0.01, 10) 1800.0 >>> simple_interest(18000.0, 0.0, 3) 0.0 >>> simple_interest(5500.0, 0.01, 100) 5500.0 >>> simple_interest(10000.0, -0.06, 3) Traceback (most recent call last): ... ValueError: daily_interest_rate must be >= 0 >>> simple_interest(-10000.0, 0.06, 3) Traceback (most recent call last): ... ValueError: principal must be > 0 >>> simple_interest(5500.0, 0.01, -5) Traceback (most recent call last): ... ValueError: days_between_payments must be > 0 """ if days_between_payments <= 0: raise ValueError("days_between_payments must be > 0") if daily_interest_rate < 0: raise ValueError("daily_interest_rate must be >= 0") if principal <= 0: raise ValueError("principal must be > 0") return principal * daily_interest_rate * days_between_payments def compound_interest( principal: float, nominal_annual_interest_rate_percentage: float, number_of_compounding_periods: int, ) -> float: """ >>> compound_interest(10000.0, 0.05, 3) 1576.2500000000014 >>> compound_interest(10000.0, 0.05, 1) 500.00000000000045 >>> compound_interest(0.5, 0.05, 3) 0.07881250000000006 >>> compound_interest(10000.0, 0.06, -4) Traceback (most recent call last): ... ValueError: number_of_compounding_periods must be > 0 >>> compound_interest(10000.0, -3.5, 3.0) Traceback (most recent call last): ... ValueError: nominal_annual_interest_rate_percentage must be >= 0 >>> compound_interest(-5500.0, 0.01, 5) Traceback (most recent call last): ... ValueError: principal must be > 0 """ if number_of_compounding_periods <= 0: raise ValueError("number_of_compounding_periods must be > 0") if nominal_annual_interest_rate_percentage < 0: raise ValueError("nominal_annual_interest_rate_percentage must be >= 0") if principal <= 0: raise ValueError("principal must be > 0") return principal * ( (1 + nominal_annual_interest_rate_percentage) ** number_of_compounding_periods - 1 ) if __name__ == "__main__": import doctest doctest.testmod()
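A quick worked example of the two formulas, assuming the functions above are in scope; the numbers mirror the doctests:

# Simple interest grows linearly: 18000 * 0.06 * 3 = 3240.0
print(simple_interest(18000.0, 0.06, 3))

# Compound interest: 10000 * ((1 + 0.05) ** 3 - 1)
#                  = 10000 * 0.157625 = 1576.25 (interest only, not balance)
interest = compound_interest(10000.0, 0.05, 3)
print(f"interest earned: {interest:.2f}, final balance: {10000.0 + interest:.2f}")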
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
""" Highest response ratio next (HRRN) scheduling is a non-preemptive discipline. It was developed as modification of shortest job next or shortest job first (SJN or SJF) to mitigate the problem of process starvation. https://en.wikipedia.org/wiki/Highest_response_ratio_next """ from statistics import mean import numpy as np def calculate_turn_around_time( process_name: list, arrival_time: list, burst_time: list, no_of_process: int ) -> list: """ Calculate the turn around time of each processes Return: The turn around time time for each process. >>> calculate_turn_around_time(["A", "B", "C"], [3, 5, 8], [2, 4, 6], 3) [2, 4, 7] >>> calculate_turn_around_time(["A", "B", "C"], [0, 2, 4], [3, 5, 7], 3) [3, 6, 11] """ current_time = 0 # Number of processes finished finished_process_count = 0 # Displays the finished process. # If it is 0, the performance is completed if it is 1, before the performance. finished_process = [0] * no_of_process # List to include calculation results turn_around_time = [0] * no_of_process # Sort by arrival time. burst_time = [burst_time[i] for i in np.argsort(arrival_time)] process_name = [process_name[i] for i in np.argsort(arrival_time)] arrival_time.sort() while no_of_process > finished_process_count: """ If the current time is less than the arrival time of the process that arrives first among the processes that have not been performed, change the current time. """ i = 0 while finished_process[i] == 1: i += 1 if current_time < arrival_time[i]: current_time = arrival_time[i] response_ratio = 0 # Index showing the location of the process being performed loc = 0 # Saves the current response ratio. temp = 0 for i in range(0, no_of_process): if finished_process[i] == 0 and arrival_time[i] <= current_time: temp = (burst_time[i] + (current_time - arrival_time[i])) / burst_time[ i ] if response_ratio < temp: response_ratio = temp loc = i # Calculate the turn around time turn_around_time[loc] = current_time + burst_time[loc] - arrival_time[loc] current_time += burst_time[loc] # Indicates that the process has been performed. finished_process[loc] = 1 # Increase finished_process_count by 1 finished_process_count += 1 return turn_around_time def calculate_waiting_time( process_name: list, turn_around_time: list, burst_time: list, no_of_process: int ) -> list: """ Calculate the waiting time of each processes. Return: The waiting time for each process. >>> calculate_waiting_time(["A", "B", "C"], [2, 4, 7], [2, 4, 6], 3) [0, 0, 1] >>> calculate_waiting_time(["A", "B", "C"], [3, 6, 11], [3, 5, 7], 3) [0, 1, 4] """ waiting_time = [0] * no_of_process for i in range(0, no_of_process): waiting_time[i] = turn_around_time[i] - burst_time[i] return waiting_time if __name__ == "__main__": no_of_process = 5 process_name = ["A", "B", "C", "D", "E"] arrival_time = [1, 2, 3, 4, 5] burst_time = [1, 2, 3, 4, 5] turn_around_time = calculate_turn_around_time( process_name, arrival_time, burst_time, no_of_process ) waiting_time = calculate_waiting_time( process_name, turn_around_time, burst_time, no_of_process ) print("Process name \tArrival time \tBurst time \tTurn around time \tWaiting time") for i in range(0, no_of_process): print( f"{process_name[i]}\t\t{arrival_time[i]}\t\t{burst_time[i]}\t\t" f"{turn_around_time[i]}\t\t\t{waiting_time[i]}" ) print(f"average waiting time : {mean(waiting_time):.5f}") print(f"average turn around time : {mean(turn_around_time):.5f}")
""" Highest response ratio next (HRRN) scheduling is a non-preemptive discipline. It was developed as modification of shortest job next or shortest job first (SJN or SJF) to mitigate the problem of process starvation. https://en.wikipedia.org/wiki/Highest_response_ratio_next """ from statistics import mean import numpy as np def calculate_turn_around_time( process_name: list, arrival_time: list, burst_time: list, no_of_process: int ) -> list: """ Calculate the turn around time of each processes Return: The turn around time time for each process. >>> calculate_turn_around_time(["A", "B", "C"], [3, 5, 8], [2, 4, 6], 3) [2, 4, 7] >>> calculate_turn_around_time(["A", "B", "C"], [0, 2, 4], [3, 5, 7], 3) [3, 6, 11] """ current_time = 0 # Number of processes finished finished_process_count = 0 # Displays the finished process. # If it is 0, the performance is completed if it is 1, before the performance. finished_process = [0] * no_of_process # List to include calculation results turn_around_time = [0] * no_of_process # Sort by arrival time. burst_time = [burst_time[i] for i in np.argsort(arrival_time)] process_name = [process_name[i] for i in np.argsort(arrival_time)] arrival_time.sort() while no_of_process > finished_process_count: """ If the current time is less than the arrival time of the process that arrives first among the processes that have not been performed, change the current time. """ i = 0 while finished_process[i] == 1: i += 1 if current_time < arrival_time[i]: current_time = arrival_time[i] response_ratio = 0 # Index showing the location of the process being performed loc = 0 # Saves the current response ratio. temp = 0 for i in range(0, no_of_process): if finished_process[i] == 0 and arrival_time[i] <= current_time: temp = (burst_time[i] + (current_time - arrival_time[i])) / burst_time[ i ] if response_ratio < temp: response_ratio = temp loc = i # Calculate the turn around time turn_around_time[loc] = current_time + burst_time[loc] - arrival_time[loc] current_time += burst_time[loc] # Indicates that the process has been performed. finished_process[loc] = 1 # Increase finished_process_count by 1 finished_process_count += 1 return turn_around_time def calculate_waiting_time( process_name: list, turn_around_time: list, burst_time: list, no_of_process: int ) -> list: """ Calculate the waiting time of each processes. Return: The waiting time for each process. >>> calculate_waiting_time(["A", "B", "C"], [2, 4, 7], [2, 4, 6], 3) [0, 0, 1] >>> calculate_waiting_time(["A", "B", "C"], [3, 6, 11], [3, 5, 7], 3) [0, 1, 4] """ waiting_time = [0] * no_of_process for i in range(0, no_of_process): waiting_time[i] = turn_around_time[i] - burst_time[i] return waiting_time if __name__ == "__main__": no_of_process = 5 process_name = ["A", "B", "C", "D", "E"] arrival_time = [1, 2, 3, 4, 5] burst_time = [1, 2, 3, 4, 5] turn_around_time = calculate_turn_around_time( process_name, arrival_time, burst_time, no_of_process ) waiting_time = calculate_waiting_time( process_name, turn_around_time, burst_time, no_of_process ) print("Process name \tArrival time \tBurst time \tTurn around time \tWaiting time") for i in range(0, no_of_process): print( f"{process_name[i]}\t\t{arrival_time[i]}\t\t{burst_time[i]}\t\t" f"{turn_around_time[i]}\t\t\t{waiting_time[i]}" ) print(f"average waiting time : {mean(waiting_time):.5f}") print(f"average turn around time : {mean(turn_around_time):.5f}")
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
# Print all combinations of r elements from a given set of n elements. def combination_util(arr, n, r, index, data, i): """ Current combination is ready to be printed, print it arr[] ---> Input Array data[] ---> Temporary array to store current combination n & i ---> Size of arr[] and current index in arr[] index ---> Current index in data[] r ---> Size of a combination to be printed """ if index == r: for j in range(r): print(data[j], end=" ") print(" ") return # When no more elements are there to put in data[] if i >= n: return # current is included, put next at next location data[index] = arr[i] combination_util(arr, n, r, index + 1, data, i + 1) # current is excluded, replace it with # next (Note that i+1 is passed, but # index is not changed) combination_util(arr, n, r, index, data, i + 1) # The main function that prints all combinations # of size r in arr[] of size n. This function # mainly uses combination_util() def print_combination(arr, n, r): # A temporary array to store one combination at a time data = [0] * r # Print all combinations using the temporary array 'data[]' combination_util(arr, n, r, 0, data, 0) if __name__ == "__main__": # Driver code to check the function above arr = [10, 20, 30, 40, 50] print_combination(arr, len(arr), 3) # This code is contributed by Ambuj sahu
# Print all combinations of r elements from a given set of n elements. def combination_util(arr, n, r, index, data, i): """ Current combination is ready to be printed, print it arr[] ---> Input Array data[] ---> Temporary array to store current combination n & i ---> Size of arr[] and current index in arr[] index ---> Current index in data[] r ---> Size of a combination to be printed """ if index == r: for j in range(r): print(data[j], end=" ") print(" ") return # When no more elements are there to put in data[] if i >= n: return # current is included, put next at next location data[index] = arr[i] combination_util(arr, n, r, index + 1, data, i + 1) # current is excluded, replace it with # next (Note that i+1 is passed, but # index is not changed) combination_util(arr, n, r, index, data, i + 1) # The main function that prints all combinations # of size r in arr[] of size n. This function # mainly uses combination_util() def print_combination(arr, n, r): # A temporary array to store one combination at a time data = [0] * r # Print all combinations using the temporary array 'data[]' combination_util(arr, n, r, 0, data, 0) if __name__ == "__main__": # Driver code to check the function above arr = [10, 20, 30, 40, 50] print_combination(arr, len(arr), 3) # This code is contributed by Ambuj sahu
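Because the recursion includes arr[i] before excluding it, the combinations come out in lexicographic index order, so the output should match the standard library; a cross-check sketch:

from itertools import combinations

arr = [10, 20, 30, 40, 50]
for combo in combinations(arr, 3):  # the same ten 3-element subsets
    print(*combo)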
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
#
#
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
""" Problem 16: https://projecteuler.net/problem=16 2^15 = 32768 and the sum of its digits is 3 + 2 + 7 + 6 + 8 = 26. What is the sum of the digits of the number 2^1000? """ def solution(power: int = 1000) -> int: """Returns the sum of the digits of the number 2^power. >>> solution(1000) 1366 >>> solution(50) 76 >>> solution(20) 31 >>> solution(15) 26 """ num = 2**power string_num = str(num) list_num = list(string_num) sum_of_num = 0 for i in list_num: sum_of_num += int(i) return sum_of_num if __name__ == "__main__": power = int(input("Enter the power of 2: ").strip()) print("2 ^ ", power, " = ", 2**power) result = solution(power) print("Sum of the digits is: ", result)
""" Problem 16: https://projecteuler.net/problem=16 2^15 = 32768 and the sum of its digits is 3 + 2 + 7 + 6 + 8 = 26. What is the sum of the digits of the number 2^1000? """ def solution(power: int = 1000) -> int: """Returns the sum of the digits of the number 2^power. >>> solution(1000) 1366 >>> solution(50) 76 >>> solution(20) 31 >>> solution(15) 26 """ num = 2**power string_num = str(num) list_num = list(string_num) sum_of_num = 0 for i in list_num: sum_of_num += int(i) return sum_of_num if __name__ == "__main__": power = int(input("Enter the power of 2: ").strip()) print("2 ^ ", power, " = ", 2**power) result = solution(power) print("Sum of the digits is: ", result)
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
def is_contains_unique_chars(input_str: str) -> bool: """ Check whether all characters in the string are unique. >>> is_contains_unique_chars("I_love.py") True >>> is_contains_unique_chars("I don't love Python") False Time complexity: O(n) Space complexity: O(1) - at most 19320 bytes, since Unicode defines 144697 characters """ # Each bit represents one Unicode character # For example, bit 65 represents 'A' # https://stackoverflow.com/a/12811293 bitmap = 0 for ch in input_str: ch_unicode = ord(ch) ch_bit_index_on = pow(2, ch_unicode) # If the bit for the current character's code point is already set if bitmap >> ch_unicode & 1 == 1: return False bitmap |= ch_bit_index_on return True if __name__ == "__main__": import doctest doctest.testmod()
def is_contains_unique_chars(input_str: str) -> bool: """ Check whether all characters in the string are unique. >>> is_contains_unique_chars("I_love.py") True >>> is_contains_unique_chars("I don't love Python") False Time complexity: O(n) Space complexity: O(1) - at most 19320 bytes, since Unicode defines 144697 characters """ # Each bit represents one Unicode character # For example, bit 65 represents 'A' # https://stackoverflow.com/a/12811293 bitmap = 0 for ch in input_str: ch_unicode = ord(ch) ch_bit_index_on = pow(2, ch_unicode) # If the bit for the current character's code point is already set if bitmap >> ch_unicode & 1 == 1: return False bitmap |= ch_bit_index_on return True if __name__ == "__main__": import doctest doctest.testmod()
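A short trace of the bitmap idea on a tiny input (illustrative): each character sets one bit of an arbitrary-precision Python int, and a repeated character finds its bit already set.

bitmap = 0
for ch in "abca":
    bit = 1 << ord(ch)  # equivalent to pow(2, ord(ch)) above
    if bitmap & bit:    # the bit is already on -> duplicate character
        print(f"duplicate: {ch!r}")
        break
    bitmap |= bit
# prints: duplicate: 'a'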
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
from __future__ import annotations def diophantine(a: int, b: int, c: int) -> tuple[float, float]: """ Diophantine Equation : Given integers a,b,c ( at least one of a and b != 0), the diophantine equation a*x + b*y = c has a solution (where x and y are integers) iff gcd(a,b) divides c. GCD ( Greatest Common Divisor ) or HCF ( Highest Common Factor ) >>> diophantine(10,6,14) (-7.0, 14.0) >>> diophantine(391,299,-69) (9.0, -12.0) But the above equation has one more solution, i.e., x = -4, y = 5. That's why we need the diophantine_all_soln function. """ assert ( c % greatest_common_divisor(a, b) == 0 ) # greatest_common_divisor(a,b) function implemented below (d, x, y) = extended_gcd(a, b) # extended_gcd(a,b) function implemented below r = c / d return (r * x, r * y) def diophantine_all_soln(a: int, b: int, c: int, n: int = 2) -> None: """ Lemma : if n|ab and gcd(a,n) = 1, then n|b. Finding All solutions of Diophantine Equations: Theorem : Let gcd(a,b) = d, a = d*p, b = d*q. If (x0,y0) is a solution of the Diophantine Equation a*x + b*y = c. a*x0 + b*y0 = c, then all the solutions have the form a(x0 + t*q) + b(y0 - t*p) = c, where t is an arbitrary integer. n is the number of solutions you want, n = 2 by default >>> diophantine_all_soln(10, 6, 14) -7.0 14.0 -4.0 9.0 >>> diophantine_all_soln(10, 6, 14, 4) -7.0 14.0 -4.0 9.0 -1.0 4.0 2.0 -1.0 >>> diophantine_all_soln(391, 299, -69, n = 4) 9.0 -12.0 22.0 -29.0 35.0 -46.0 48.0 -63.0 """ (x0, y0) = diophantine(a, b, c) # Initial value d = greatest_common_divisor(a, b) p = a // d q = b // d for i in range(n): x = x0 + i * q y = y0 - i * p print(x, y) def greatest_common_divisor(a: int, b: int) -> int: """ Euclid's Lemma : d divides a and b, if and only if d divides a-b and b Euclid's Algorithm >>> greatest_common_divisor(7,5) 1 Note : In number theory, two integers a and b are said to be relatively prime, mutually prime, or co-prime if the only positive integer (factor) that divides both of them is 1 i.e., gcd(a,b) = 1. >>> greatest_common_divisor(121, 11) 11 """ if a < b: a, b = b, a while a % b != 0: a, b = b, a % b return b def extended_gcd(a: int, b: int) -> tuple[int, int, int]: """ Extended Euclid's Algorithm : If d divides a and b and d = a*x + b*y for integers x and y, then d = gcd(a,b) >>> extended_gcd(10, 6) (2, -1, 2) >>> extended_gcd(7, 5) (1, -2, 3) """ assert a >= 0 and b >= 0 if b == 0: d, x, y = a, 1, 0 else: (d, p, q) = extended_gcd(b, a % b) x = q y = p - q * (a // b) assert a % d == 0 and b % d == 0 assert d == a * x + b * y return (d, x, y) if __name__ == "__main__": from doctest import testmod testmod(name="diophantine", verbose=True) testmod(name="diophantine_all_soln", verbose=True) testmod(name="extended_gcd", verbose=True) testmod(name="greatest_common_divisor", verbose=True)
from __future__ import annotations def diophantine(a: int, b: int, c: int) -> tuple[float, float]: """ Diophantine Equation : Given integers a,b,c ( at least one of a and b != 0), the diophantine equation a*x + b*y = c has a solution (where x and y are integers) iff gcd(a,b) divides c. GCD ( Greatest Common Divisor ) or HCF ( Highest Common Factor ) >>> diophantine(10,6,14) (-7.0, 14.0) >>> diophantine(391,299,-69) (9.0, -12.0) But the above equation has one more solution, i.e., x = -4, y = 5. That's why we need the diophantine_all_soln function. """ assert ( c % greatest_common_divisor(a, b) == 0 ) # greatest_common_divisor(a,b) function implemented below (d, x, y) = extended_gcd(a, b) # extended_gcd(a,b) function implemented below r = c / d return (r * x, r * y) def diophantine_all_soln(a: int, b: int, c: int, n: int = 2) -> None: """ Lemma : if n|ab and gcd(a,n) = 1, then n|b. Finding All solutions of Diophantine Equations: Theorem : Let gcd(a,b) = d, a = d*p, b = d*q. If (x0,y0) is a solution of the Diophantine Equation a*x + b*y = c. a*x0 + b*y0 = c, then all the solutions have the form a(x0 + t*q) + b(y0 - t*p) = c, where t is an arbitrary integer. n is the number of solutions you want, n = 2 by default >>> diophantine_all_soln(10, 6, 14) -7.0 14.0 -4.0 9.0 >>> diophantine_all_soln(10, 6, 14, 4) -7.0 14.0 -4.0 9.0 -1.0 4.0 2.0 -1.0 >>> diophantine_all_soln(391, 299, -69, n = 4) 9.0 -12.0 22.0 -29.0 35.0 -46.0 48.0 -63.0 """ (x0, y0) = diophantine(a, b, c) # Initial value d = greatest_common_divisor(a, b) p = a // d q = b // d for i in range(n): x = x0 + i * q y = y0 - i * p print(x, y) def greatest_common_divisor(a: int, b: int) -> int: """ Euclid's Lemma : d divides a and b, if and only if d divides a-b and b Euclid's Algorithm >>> greatest_common_divisor(7,5) 1 Note : In number theory, two integers a and b are said to be relatively prime, mutually prime, or co-prime if the only positive integer (factor) that divides both of them is 1 i.e., gcd(a,b) = 1. >>> greatest_common_divisor(121, 11) 11 """ if a < b: a, b = b, a while a % b != 0: a, b = b, a % b return b def extended_gcd(a: int, b: int) -> tuple[int, int, int]: """ Extended Euclid's Algorithm : If d divides a and b and d = a*x + b*y for integers x and y, then d = gcd(a,b) >>> extended_gcd(10, 6) (2, -1, 2) >>> extended_gcd(7, 5) (1, -2, 3) """ assert a >= 0 and b >= 0 if b == 0: d, x, y = a, 1, 0 else: (d, p, q) = extended_gcd(b, a % b) x = q y = p - q * (a // b) assert a % d == 0 and b % d == 0 assert d == a * x + b * y return (d, x, y) if __name__ == "__main__": from doctest import testmod testmod(name="diophantine", verbose=True) testmod(name="diophantine_all_soln", verbose=True) testmod(name="extended_gcd", verbose=True) testmod(name="greatest_common_divisor", verbose=True)
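A verification sketch of the parametric family from the theorem: every (x0 + t*q, y0 - t*p) must satisfy a*x + b*y = c, since a*q - b*p = ab/d - ba/d = 0. Assuming the functions above are in scope:

a, b, c = 10, 6, 14
x0, y0 = diophantine(a, b, c)      # (-7.0, 14.0)
d = greatest_common_divisor(a, b)  # 2
p, q = a // d, b // d              # 5, 3

for t in range(5):
    x, y = x0 + t * q, y0 - t * p
    assert a * x + b * y == c  # e.g. t=1 gives (-4.0, 9.0)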
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
# Frequency Finder import string # frequency taken from https://en.wikipedia.org/wiki/Letter_frequency english_letter_freq = { "E": 12.70, "T": 9.06, "A": 8.17, "O": 7.51, "I": 6.97, "N": 6.75, "S": 6.33, "H": 6.09, "R": 5.99, "D": 4.25, "L": 4.03, "C": 2.78, "U": 2.76, "M": 2.41, "W": 2.36, "F": 2.23, "G": 2.02, "Y": 1.97, "P": 1.93, "B": 1.29, "V": 0.98, "K": 0.77, "J": 0.15, "X": 0.15, "Q": 0.10, "Z": 0.07, } ETAOIN = "ETAOINSHRDLCUMWFGYPBVKJXQZ" LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" def get_letter_count(message: str) -> dict[str, int]: letter_count = {letter: 0 for letter in string.ascii_uppercase} for letter in message.upper(): if letter in LETTERS: letter_count[letter] += 1 return letter_count def get_item_at_index_zero(x: tuple) -> str: return x[0] def get_frequency_order(message: str) -> str: letter_to_freq = get_letter_count(message) freq_to_letter: dict[int, list[str]] = { freq: [] for letter, freq in letter_to_freq.items() } for letter in LETTERS: freq_to_letter[letter_to_freq[letter]].append(letter) freq_to_letter_str: dict[int, str] = {} for freq in freq_to_letter: freq_to_letter[freq].sort(key=ETAOIN.find, reverse=True) freq_to_letter_str[freq] = "".join(freq_to_letter[freq]) freq_pairs = list(freq_to_letter_str.items()) freq_pairs.sort(key=get_item_at_index_zero, reverse=True) freq_order: list[str] = [freq_pair[1] for freq_pair in freq_pairs] return "".join(freq_order) def english_freq_match_score(message: str) -> int: """ >>> english_freq_match_score('Hello World') 1 """ freq_order = get_frequency_order(message) match_score = 0 for common_letter in ETAOIN[:6]: if common_letter in freq_order[:6]: match_score += 1 for uncommon_letter in ETAOIN[-6:]: if uncommon_letter in freq_order[-6:]: match_score += 1 return match_score if __name__ == "__main__": import doctest doctest.testmod()
# Frequency Finder import string # frequency taken from https://en.wikipedia.org/wiki/Letter_frequency english_letter_freq = { "E": 12.70, "T": 9.06, "A": 8.17, "O": 7.51, "I": 6.97, "N": 6.75, "S": 6.33, "H": 6.09, "R": 5.99, "D": 4.25, "L": 4.03, "C": 2.78, "U": 2.76, "M": 2.41, "W": 2.36, "F": 2.23, "G": 2.02, "Y": 1.97, "P": 1.93, "B": 1.29, "V": 0.98, "K": 0.77, "J": 0.15, "X": 0.15, "Q": 0.10, "Z": 0.07, } ETAOIN = "ETAOINSHRDLCUMWFGYPBVKJXQZ" LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" def get_letter_count(message: str) -> dict[str, int]: letter_count = {letter: 0 for letter in string.ascii_uppercase} for letter in message.upper(): if letter in LETTERS: letter_count[letter] += 1 return letter_count def get_item_at_index_zero(x: tuple) -> str: return x[0] def get_frequency_order(message: str) -> str: letter_to_freq = get_letter_count(message) freq_to_letter: dict[int, list[str]] = { freq: [] for letter, freq in letter_to_freq.items() } for letter in LETTERS: freq_to_letter[letter_to_freq[letter]].append(letter) freq_to_letter_str: dict[int, str] = {} for freq in freq_to_letter: freq_to_letter[freq].sort(key=ETAOIN.find, reverse=True) freq_to_letter_str[freq] = "".join(freq_to_letter[freq]) freq_pairs = list(freq_to_letter_str.items()) freq_pairs.sort(key=get_item_at_index_zero, reverse=True) freq_order: list[str] = [freq_pair[1] for freq_pair in freq_pairs] return "".join(freq_order) def english_freq_match_score(message: str) -> int: """ >>> english_freq_match_score('Hello World') 1 """ freq_order = get_frequency_order(message) match_score = 0 for common_letter in ETAOIN[:6]: if common_letter in freq_order[:6]: match_score += 1 for uncommon_letter in ETAOIN[-6:]: if uncommon_letter in freq_order[-6:]: match_score += 1 return match_score if __name__ == "__main__": import doctest doctest.testmod()
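A usage sketch, assuming the functions above are in scope; no exact outputs are asserted, since they depend on tie-breaking by ETAOIN order:

text = "The quick brown fox jumps over the lazy dog"
print(get_frequency_order(text)[:6])   # the six most frequent letters in text
print(english_freq_match_score(text))  # 0..12; higher means more English-like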
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
#!/usr/bin/python # Logistic Regression from scratch # In[62]: # In[63]: # importing all the required libraries """ Implementing logistic regression for classification problem Helpful resources: Coursera ML course https://medium.com/@martinpella/logistic-regression-from-scratch-in-python-124c5636b8ac """ import numpy as np from matplotlib import pyplot as plt from sklearn import datasets # get_ipython().run_line_magic('matplotlib', 'inline') # In[67]: # sigmoid function or logistic function is used as a hypothesis function in # classification problems def sigmoid_function(z): return 1 / (1 + np.exp(-z)) def cost_function(h, y): return (-y * np.log(h) - (1 - y) * np.log(1 - h)).mean() def log_likelihood(x, y, weights): scores = np.dot(x, weights) return np.sum(y * scores - np.log(1 + np.exp(scores))) # here alpha is the learning rate, X is the feature matrix, y is the target matrix def logistic_reg(alpha, x, y, max_iterations=70000): theta = np.zeros(x.shape[1]) for iterations in range(max_iterations): z = np.dot(x, theta) h = sigmoid_function(z) gradient = np.dot(x.T, h - y) / y.size theta = theta - alpha * gradient # updating the weights z = np.dot(x, theta) h = sigmoid_function(z) j = cost_function(h, y) if iterations % 100 == 0: print(f"loss: {j} \t") # printing the loss after every 100 iterations return theta # In[68]: if __name__ == "__main__": iris = datasets.load_iris() x = iris.data[:, :2] y = (iris.target != 0) * 1 alpha = 0.1 theta = logistic_reg(alpha, x, y, max_iterations=70000) print("theta: ", theta) # printing the theta i.e. our weights vector def predict_prob(x): return sigmoid_function( np.dot(x, theta) ) # predicting the value of probability from the logistic regression algorithm plt.figure(figsize=(10, 6)) plt.scatter(x[y == 0][:, 0], x[y == 0][:, 1], color="b", label="0") plt.scatter(x[y == 1][:, 0], x[y == 1][:, 1], color="r", label="1") (x1_min, x1_max) = (x[:, 0].min(), x[:, 0].max()) (x2_min, x2_max) = (x[:, 1].min(), x[:, 1].max()) (xx1, xx2) = np.meshgrid(np.linspace(x1_min, x1_max), np.linspace(x2_min, x2_max)) grid = np.c_[xx1.ravel(), xx2.ravel()] probs = predict_prob(grid).reshape(xx1.shape) plt.contour(xx1, xx2, probs, [0.5], linewidths=1, colors="black") plt.legend() plt.show()
#!/usr/bin/python # Logistic Regression from scratch # In[62]: # In[63]: # importing all the required libraries """ Implementing logistic regression for classification problem Helpful resources: Coursera ML course https://medium.com/@martinpella/logistic-regression-from-scratch-in-python-124c5636b8ac """ import numpy as np from matplotlib import pyplot as plt from sklearn import datasets # get_ipython().run_line_magic('matplotlib', 'inline') # In[67]: # sigmoid function or logistic function is used as a hypothesis function in # classification problems def sigmoid_function(z): return 1 / (1 + np.exp(-z)) def cost_function(h, y): return (-y * np.log(h) - (1 - y) * np.log(1 - h)).mean() def log_likelihood(x, y, weights): scores = np.dot(x, weights) return np.sum(y * scores - np.log(1 + np.exp(scores))) # here alpha is the learning rate, X is the feature matrix, y is the target matrix def logistic_reg(alpha, x, y, max_iterations=70000): theta = np.zeros(x.shape[1]) for iterations in range(max_iterations): z = np.dot(x, theta) h = sigmoid_function(z) gradient = np.dot(x.T, h - y) / y.size theta = theta - alpha * gradient # updating the weights z = np.dot(x, theta) h = sigmoid_function(z) j = cost_function(h, y) if iterations % 100 == 0: print(f"loss: {j} \t") # printing the loss after every 100 iterations return theta # In[68]: if __name__ == "__main__": iris = datasets.load_iris() x = iris.data[:, :2] y = (iris.target != 0) * 1 alpha = 0.1 theta = logistic_reg(alpha, x, y, max_iterations=70000) print("theta: ", theta) # printing the theta i.e. our weights vector def predict_prob(x): return sigmoid_function( np.dot(x, theta) ) # predicting the value of probability from the logistic regression algorithm plt.figure(figsize=(10, 6)) plt.scatter(x[y == 0][:, 0], x[y == 0][:, 1], color="b", label="0") plt.scatter(x[y == 1][:, 0], x[y == 1][:, 1], color="r", label="1") (x1_min, x1_max) = (x[:, 0].min(), x[:, 0].max()) (x2_min, x2_max) = (x[:, 1].min(), x[:, 1].max()) (xx1, xx2) = np.meshgrid(np.linspace(x1_min, x1_max), np.linspace(x2_min, x2_max)) grid = np.c_[xx1.ravel(), xx2.ravel()] probs = predict_prob(grid).reshape(xx1.shape) plt.contour(xx1, xx2, probs, [0.5], linewidths=1, colors="black") plt.legend() plt.show()
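One gradient step written out on a tiny dataset to make the update theta -= alpha * x.T @ (h - y) / m concrete (the numbers are illustrative):

import numpy as np

x = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])  # 3 samples, 2 features
y = np.array([0.0, 1.0, 1.0])
theta = np.zeros(2)
alpha = 0.1

z = x @ theta                      # all zeros on the first step
h = 1 / (1 + np.exp(-z))           # sigmoid of 0 -> 0.5 everywhere
gradient = x.T @ (h - y) / y.size  # [-1/3, 0.0]
theta -= alpha * gradient
print(theta)                       # [0.03333333, 0.        ]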
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
from __future__ import annotations import requests valid_terms = set( """approved_at_utc approved_by author_flair_background_color author_flair_css_class author_flair_richtext author_flair_template_id author_fullname author_premium can_mod_post category clicked content_categories created_utc downs edited gilded gildings hidden hide_score is_created_from_ads_ui is_meta is_original_content is_reddit_media_domain is_video link_flair_css_class link_flair_richtext link_flair_text link_flair_text_color media_embed mod_reason_title name permalink pwls quarantine saved score secure_media secure_media_embed selftext subreddit subreddit_name_prefixed subreddit_type thumbnail title top_awarded_type total_awards_received ups upvote_ratio url user_reports""".split() ) def get_subreddit_data( subreddit: str, limit: int = 1, age: str = "new", wanted_data: list | None = None ) -> dict: """ subreddit : Subreddit to query limit : Number of posts to fetch age : ["new", "top", "hot"] wanted_data : Get only the required data in the list """ wanted_data = wanted_data or [] if invalid_search_terms := ", ".join(sorted(set(wanted_data) - valid_terms)): raise ValueError(f"Invalid search term: {invalid_search_terms}") response = requests.get( f"https://reddit.com/r/{subreddit}/{age}.json?limit={limit}", headers={"User-agent": "A random string"}, ) if response.status_code == 429: raise requests.HTTPError data = response.json() if not wanted_data: return {id_: data["data"]["children"][id_] for id_ in range(limit)} data_dict = {} for id_ in range(limit): data_dict[id_] = { item: data["data"]["children"][id_]["data"][item] for item in wanted_data } return data_dict if __name__ == "__main__": # If you get Error 429, that means you are rate limited. Try again after some time. print(get_subreddit_data("learnpython", wanted_data=["title", "url", "selftext"]))
from __future__ import annotations import requests valid_terms = set( """approved_at_utc approved_by author_flair_background_color author_flair_css_class author_flair_richtext author_flair_template_id author_fullname author_premium can_mod_post category clicked content_categories created_utc downs edited gilded gildings hidden hide_score is_created_from_ads_ui is_meta is_original_content is_reddit_media_domain is_video link_flair_css_class link_flair_richtext link_flair_text link_flair_text_color media_embed mod_reason_title name permalink pwls quarantine saved score secure_media secure_media_embed selftext subreddit subreddit_name_prefixed subreddit_type thumbnail title top_awarded_type total_awards_received ups upvote_ratio url user_reports""".split() ) def get_subreddit_data( subreddit: str, limit: int = 1, age: str = "new", wanted_data: list | None = None ) -> dict: """ subreddit : Subreddit to query limit : Number of posts to fetch age : ["new", "top", "hot"] wanted_data : Get only the required data in the list """ wanted_data = wanted_data or [] if invalid_search_terms := ", ".join(sorted(set(wanted_data) - valid_terms)): raise ValueError(f"Invalid search term: {invalid_search_terms}") response = requests.get( f"https://reddit.com/r/{subreddit}/{age}.json?limit={limit}", headers={"User-agent": "A random string"}, ) if response.status_code == 429: raise requests.HTTPError data = response.json() if not wanted_data: return {id_: data["data"]["children"][id_] for id_ in range(limit)} data_dict = {} for id_ in range(limit): data_dict[id_] = { item: data["data"]["children"][id_]["data"][item] for item in wanted_data } return data_dict if __name__ == "__main__": # If you get Error 429, that means you are rate limited. Try again after some time. print(get_subreddit_data("learnpython", wanted_data=["title", "url", "selftext"]))
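Since the function raises a bare requests.HTTPError on a 429, callers may want a retry with backoff; a hedged sketch (the retry policy and the get_with_retry name are assumptions, not part of the module, and get_subreddit_data plus requests from the module above are assumed in scope):

import time

def get_with_retry(subreddit: str, attempts: int = 3, delay: float = 5.0) -> dict:
    # Retry get_subreddit_data a few times, sleeping longer after each 429.
    for attempt in range(attempts):
        try:
            return get_subreddit_data(subreddit, wanted_data=["title", "url"])
        except requests.HTTPError:
            if attempt == attempts - 1:
                raise  # out of attempts: propagate the rate-limit error
            time.sleep(delay * (attempt + 1))  # linear backoff
    raise AssertionError("unreachable")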
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
alphabets = [chr(i) for i in range(32, 126)] gear_one = list(range(len(alphabets))) gear_two = list(range(len(alphabets))) gear_three = list(range(len(alphabets))) reflector = list(reversed(range(len(alphabets)))) code = [] gear_one_pos = gear_two_pos = gear_three_pos = 0 def rotator(): global gear_one_pos global gear_two_pos global gear_three_pos i = gear_one[0] gear_one.append(i) del gear_one[0] gear_one_pos += 1 if gear_one_pos % int(len(alphabets)) == 0: i = gear_two[0] gear_two.append(i) del gear_two[0] gear_two_pos += 1 if gear_two_pos % int(len(alphabets)) == 0: i = gear_three[0] gear_three.append(i) del gear_three[0] gear_three_pos += 1 def engine(input_character): target = alphabets.index(input_character) target = gear_one[target] target = gear_two[target] target = gear_three[target] target = reflector[target] target = gear_three.index(target) target = gear_two.index(target) target = gear_one.index(target) code.append(alphabets[target]) rotator() if __name__ == "__main__": decode = list(input("Type your message:\n")) while True: try: token = int(input("Please set token: (must be only digits)\n")) break except Exception as error: print(error) for _ in range(token): rotator() for j in decode: engine(j) print("\n" + "".join(code)) print( f"\nYour token is {token}, please write it down.\nIf you want to decode " "this message again you should input the same digits as the token!" )
alphabets = [chr(i) for i in range(32, 126)] gear_one = list(range(len(alphabets))) gear_two = list(range(len(alphabets))) gear_three = list(range(len(alphabets))) reflector = list(reversed(range(len(alphabets)))) code = [] gear_one_pos = gear_two_pos = gear_three_pos = 0 def rotator(): global gear_one_pos global gear_two_pos global gear_three_pos i = gear_one[0] gear_one.append(i) del gear_one[0] gear_one_pos += 1 if gear_one_pos % int(len(alphabets)) == 0: i = gear_two[0] gear_two.append(i) del gear_two[0] gear_two_pos += 1 if gear_two_pos % int(len(alphabets)) == 0: i = gear_three[0] gear_three.append(i) del gear_three[0] gear_three_pos += 1 def engine(input_character): target = alphabets.index(input_character) target = gear_one[target] target = gear_two[target] target = gear_three[target] target = reflector[target] target = gear_three.index(target) target = gear_two.index(target) target = gear_one.index(target) code.append(alphabets[target]) rotator() if __name__ == "__main__": decode = list(input("Type your message:\n")) while True: try: token = int(input("Please set token: (must be only digits)\n")) break except Exception as error: print(error) for _ in range(token): rotator() for j in decode: engine(j) print("\n" + "".join(code)) print( f"\nYour token is {token}, please write it down.\nIf you want to decode " "this message again you should input the same digits as the token!" )
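Because the reflector is an involution (reflector[reflector[t]] == t) and the gear state advances identically on both passes, the machine is its own inverse: encoding the ciphertext with the same starting token recovers the plaintext. A self-contained sketch of the same design, free of the module's globals (make_machine is a hypothetical helper):

def make_machine(token: int = 0):
    alphabet = [chr(i) for i in range(32, 126)]
    n = len(alphabet)
    gears = [list(range(n)) for _ in range(3)]
    reflector = list(reversed(range(n)))
    pos = [0, 0, 0]

    def rotate() -> None:
        # Gear 0 steps every character; a gear carries into the next
        # only when it completes a full revolution, as in rotator() above.
        for g in range(3):
            gears[g].append(gears[g].pop(0))
            pos[g] += 1
            if pos[g] % n != 0:
                break

    def encode(message: str) -> str:
        out = []
        for ch in message:
            t = alphabet.index(ch)
            for gear in gears:            # forward pass: gears 0, 1, 2
                t = gear[t]
            t = reflector[t]              # the self-inverse reflector
            for gear in reversed(gears):  # inverse pass: gears 2, 1, 0
                t = gear.index(t)
            out.append(alphabet[t])
            rotate()
        return "".join(out)

    for _ in range(token):
        rotate()
    return encode

ciphertext = make_machine(token=7)("Attack at dawn!")
assert make_machine(token=7)(ciphertext) == "Attack at dawn!"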
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
""" Check if three points are collinear in 3D. In short, the idea is that we are able to create a triangle using three points, and the area of that triangle can determine if the three points are collinear or not. First, we create two vectors with the same initial point from the three points, then we will calculate the cross-product of them. The length of the cross vector is numerically equal to the area of a parallelogram. Finally, the area of the triangle is equal to half of the area of the parallelogram. Since we are only differentiating between zero and anything else, we can get rid of the square root when calculating the length of the vector, and also the division by two at the end. From a second perspective, if the two vectors are parallel and overlapping, we can't get a nonzero perpendicular vector, since there will be an infinite number of orthogonal vectors. To simplify the solution we will not calculate the length, but we will decide directly from the vector whether it is equal to (0, 0, 0) or not. Read More: https://math.stackexchange.com/a/1951650 """ Vector3d = tuple[float, float, float] Point3d = tuple[float, float, float] def create_vector(end_point1: Point3d, end_point2: Point3d) -> Vector3d: """ Pass two points to get the vector from them in the form (x, y, z). >>> create_vector((0, 0, 0), (1, 1, 1)) (1, 1, 1) >>> create_vector((45, 70, 24), (47, 32, 1)) (2, -38, -23) >>> create_vector((-14, -1, -8), (-7, 6, 4)) (7, 7, 12) """ x = end_point2[0] - end_point1[0] y = end_point2[1] - end_point1[1] z = end_point2[2] - end_point1[2] return (x, y, z) def get_3d_vectors_cross(ab: Vector3d, ac: Vector3d) -> Vector3d: """ Get the cross of the two vectors AB and AC. I used determinant of 2x2 to get the determinant of the 3x3 matrix in the process. Read More: https://en.wikipedia.org/wiki/Cross_product https://en.wikipedia.org/wiki/Determinant >>> get_3d_vectors_cross((3, 4, 7), (4, 9, 2)) (-55, 22, 11) >>> get_3d_vectors_cross((1, 1, 1), (1, 1, 1)) (0, 0, 0) >>> get_3d_vectors_cross((-4, 3, 0), (3, -9, -12)) (-36, -48, 27) >>> get_3d_vectors_cross((17.67, 4.7, 6.78), (-9.5, 4.78, -19.33)) (-123.2594, 277.15110000000004, 129.11260000000001) """ x = ab[1] * ac[2] - ab[2] * ac[1] # *i y = (ab[0] * ac[2] - ab[2] * ac[0]) * -1 # *j z = ab[0] * ac[1] - ab[1] * ac[0] # *k return (x, y, z) def is_zero_vector(vector: Vector3d, accuracy: int) -> bool: """ Check if vector is equal to (0, 0, 0) of not. Sine the algorithm is very accurate, we will never get a zero vector, so we need to round the vector axis, because we want a result that is either True or False. In other applications, we can return a float that represents the collinearity ratio. >>> is_zero_vector((0, 0, 0), accuracy=10) True >>> is_zero_vector((15, 74, 32), accuracy=10) False >>> is_zero_vector((-15, -74, -32), accuracy=10) False """ return tuple(round(x, accuracy) for x in vector) == (0, 0, 0) def are_collinear(a: Point3d, b: Point3d, c: Point3d, accuracy: int = 10) -> bool: """ Check if three points are collinear or not. 1- Create tow vectors AB and AC. 2- Get the cross vector of the tow vectors. 3- Calcolate the length of the cross vector. 4- If the length is zero then the points are collinear, else they are not. The use of the accuracy parameter is explained in is_zero_vector docstring. >>> are_collinear((4.802293498137402, 3.536233125455244, 0), ... (-2.186788107953106, -9.24561398001649, 7.141509524846482), ... (1.530169574640268, -2.447927606600034, 3.343487096469054)) True >>> are_collinear((-6, -2, 6), ... 
(6.200213806439997, -4.930157614926678, -4.482371908289856), ... (-4.085171149525941, -2.459889509029438, 4.354787180795383)) True >>> are_collinear((2.399001826862445, -2.452009976680793, 4.464656666157666), ... (-3.682816335934376, 5.753788986533145, 9.490993909044244), ... (1.962903518985307, 3.741415730125627, 7)) False >>> are_collinear((1.875375340689544, -7.268426006071538, 7.358196269835993), ... (-3.546599383667157, -4.630005261513976, 3.208784032924246), ... (-2.564606140206386, 3.937845170672183, 7)) False """ ab = create_vector(a, b) ac = create_vector(a, c) return is_zero_vector(get_3d_vectors_cross(ab, ac), accuracy)
""" Check if three points are collinear in 3D. In short, the idea is that we are able to create a triangle using three points, and the area of that triangle can determine if the three points are collinear or not. First, we create two vectors with the same initial point from the three points, then we will calculate the cross-product of them. The length of the cross vector is numerically equal to the area of a parallelogram. Finally, the area of the triangle is equal to half of the area of the parallelogram. Since we are only differentiating between zero and anything else, we can get rid of the square root when calculating the length of the vector, and also the division by two at the end. From a second perspective, if the two vectors are parallel and overlapping, we can't get a nonzero perpendicular vector, since there will be an infinite number of orthogonal vectors. To simplify the solution we will not calculate the length, but we will decide directly from the vector whether it is equal to (0, 0, 0) or not. Read More: https://math.stackexchange.com/a/1951650 """ Vector3d = tuple[float, float, float] Point3d = tuple[float, float, float] def create_vector(end_point1: Point3d, end_point2: Point3d) -> Vector3d: """ Pass two points to get the vector from them in the form (x, y, z). >>> create_vector((0, 0, 0), (1, 1, 1)) (1, 1, 1) >>> create_vector((45, 70, 24), (47, 32, 1)) (2, -38, -23) >>> create_vector((-14, -1, -8), (-7, 6, 4)) (7, 7, 12) """ x = end_point2[0] - end_point1[0] y = end_point2[1] - end_point1[1] z = end_point2[2] - end_point1[2] return (x, y, z) def get_3d_vectors_cross(ab: Vector3d, ac: Vector3d) -> Vector3d: """ Get the cross of the two vectors AB and AC. I used determinant of 2x2 to get the determinant of the 3x3 matrix in the process. Read More: https://en.wikipedia.org/wiki/Cross_product https://en.wikipedia.org/wiki/Determinant >>> get_3d_vectors_cross((3, 4, 7), (4, 9, 2)) (-55, 22, 11) >>> get_3d_vectors_cross((1, 1, 1), (1, 1, 1)) (0, 0, 0) >>> get_3d_vectors_cross((-4, 3, 0), (3, -9, -12)) (-36, -48, 27) >>> get_3d_vectors_cross((17.67, 4.7, 6.78), (-9.5, 4.78, -19.33)) (-123.2594, 277.15110000000004, 129.11260000000001) """ x = ab[1] * ac[2] - ab[2] * ac[1] # *i y = (ab[0] * ac[2] - ab[2] * ac[0]) * -1 # *j z = ab[0] * ac[1] - ab[1] * ac[0] # *k return (x, y, z) def is_zero_vector(vector: Vector3d, accuracy: int) -> bool: """ Check if vector is equal to (0, 0, 0) of not. Sine the algorithm is very accurate, we will never get a zero vector, so we need to round the vector axis, because we want a result that is either True or False. In other applications, we can return a float that represents the collinearity ratio. >>> is_zero_vector((0, 0, 0), accuracy=10) True >>> is_zero_vector((15, 74, 32), accuracy=10) False >>> is_zero_vector((-15, -74, -32), accuracy=10) False """ return tuple(round(x, accuracy) for x in vector) == (0, 0, 0) def are_collinear(a: Point3d, b: Point3d, c: Point3d, accuracy: int = 10) -> bool: """ Check if three points are collinear or not. 1- Create tow vectors AB and AC. 2- Get the cross vector of the tow vectors. 3- Calcolate the length of the cross vector. 4- If the length is zero then the points are collinear, else they are not. The use of the accuracy parameter is explained in is_zero_vector docstring. >>> are_collinear((4.802293498137402, 3.536233125455244, 0), ... (-2.186788107953106, -9.24561398001649, 7.141509524846482), ... (1.530169574640268, -2.447927606600034, 3.343487096469054)) True >>> are_collinear((-6, -2, 6), ... 
(6.200213806439997, -4.930157614926678, -4.482371908289856), ... (-4.085171149525941, -2.459889509029438, 4.354787180795383)) True >>> are_collinear((2.399001826862445, -2.452009976680793, 4.464656666157666), ... (-3.682816335934376, 5.753788986533145, 9.490993909044244), ... (1.962903518985307, 3.741415730125627, 7)) False >>> are_collinear((1.875375340689544, -7.268426006071538, 7.358196269835993), ... (-3.546599383667157, -4.630005261513976, 3.208784032924246), ... (-2.564606140206386, 3.937845170672183, 7)) False """ ab = create_vector(a, b) ac = create_vector(a, c) return is_zero_vector(get_3d_vectors_cross(ab, ac), accuracy)
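A worked numeric sketch of the cross-product test described in the record above; the points are illustrative only, and the arithmetic is inlined so the snippet runs on its own without importing the record's helpers.

# Hypothetical worked example: c = 2*b, so a, b, c lie on one line.
a, b, c = (0, 0, 0), (1, 2, 3), (2, 4, 6)
ab = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
ac = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
cross = (
    ab[1] * ac[2] - ab[2] * ac[1],
    (ab[0] * ac[2] - ab[2] * ac[0]) * -1,
    ab[0] * ac[1] - ab[1] * ac[0],
)
print(cross == (0, 0, 0))  # True -> zero parallelogram area -> collinear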
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
""" Longest Common Substring Problem Statement: Given two sequences, find the longest common substring present in both of them. A substring is necessarily continuous. Example: "abcdef" and "xabded" have two longest common substrings, "ab" or "de". Therefore, algorithm should return any one of them. """ def longest_common_substring(text1: str, text2: str) -> str: """ Finds the longest common substring between two strings. >>> longest_common_substring("", "") '' >>> longest_common_substring("a","") '' >>> longest_common_substring("", "a") '' >>> longest_common_substring("a", "a") 'a' >>> longest_common_substring("abcdef", "bcd") 'bcd' >>> longest_common_substring("abcdef", "xabded") 'ab' >>> longest_common_substring("GeeksforGeeks", "GeeksQuiz") 'Geeks' >>> longest_common_substring("abcdxyz", "xyzabcd") 'abcd' >>> longest_common_substring("zxabcdezy", "yzabcdezx") 'abcdez' >>> longest_common_substring("OldSite:GeeksforGeeks.org", "NewSite:GeeksQuiz.com") 'Site:Geeks' >>> longest_common_substring(1, 1) Traceback (most recent call last): ... ValueError: longest_common_substring() takes two strings for inputs """ if not (isinstance(text1, str) and isinstance(text2, str)): raise ValueError("longest_common_substring() takes two strings for inputs") text1_length = len(text1) text2_length = len(text2) dp = [[0] * (text2_length + 1) for _ in range(text1_length + 1)] ans_index = 0 ans_length = 0 for i in range(1, text1_length + 1): for j in range(1, text2_length + 1): if text1[i - 1] == text2[j - 1]: dp[i][j] = 1 + dp[i - 1][j - 1] if dp[i][j] > ans_length: ans_index = i ans_length = dp[i][j] return text1[ans_index - ans_length : ans_index] if __name__ == "__main__": import doctest doctest.testmod()
""" Longest Common Substring Problem Statement: Given two sequences, find the longest common substring present in both of them. A substring is necessarily continuous. Example: "abcdef" and "xabded" have two longest common substrings, "ab" or "de". Therefore, algorithm should return any one of them. """ def longest_common_substring(text1: str, text2: str) -> str: """ Finds the longest common substring between two strings. >>> longest_common_substring("", "") '' >>> longest_common_substring("a","") '' >>> longest_common_substring("", "a") '' >>> longest_common_substring("a", "a") 'a' >>> longest_common_substring("abcdef", "bcd") 'bcd' >>> longest_common_substring("abcdef", "xabded") 'ab' >>> longest_common_substring("GeeksforGeeks", "GeeksQuiz") 'Geeks' >>> longest_common_substring("abcdxyz", "xyzabcd") 'abcd' >>> longest_common_substring("zxabcdezy", "yzabcdezx") 'abcdez' >>> longest_common_substring("OldSite:GeeksforGeeks.org", "NewSite:GeeksQuiz.com") 'Site:Geeks' >>> longest_common_substring(1, 1) Traceback (most recent call last): ... ValueError: longest_common_substring() takes two strings for inputs """ if not (isinstance(text1, str) and isinstance(text2, str)): raise ValueError("longest_common_substring() takes two strings for inputs") text1_length = len(text1) text2_length = len(text2) dp = [[0] * (text2_length + 1) for _ in range(text1_length + 1)] ans_index = 0 ans_length = 0 for i in range(1, text1_length + 1): for j in range(1, text2_length + 1): if text1[i - 1] == text2[j - 1]: dp[i][j] = 1 + dp[i - 1][j - 1] if dp[i][j] > ans_length: ans_index = i ans_length = dp[i][j] return text1[ans_index - ans_length : ans_index] if __name__ == "__main__": import doctest doctest.testmod()
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
from PIL import Image def change_brightness(img: Image, level: float) -> Image: """ Change the brightness of a PIL Image to a given level. """ def brightness(c: int) -> float: """ Fundamental Transformation/Operation that'll be performed on every pixel. """ return 128 + level + (c - 128) if not -255.0 <= level <= 255.0: raise ValueError("level must be between -255.0 (black) and 255.0 (white)") return img.point(brightness) if __name__ == "__main__": # Load image with Image.open("image_data/lena.jpg") as img: # Change brightness to 100 bright_img = change_brightness(img, 100) bright_img.save("image_data/lena_brightness.png", format="png")
from PIL import Image def change_brightness(img: Image, level: float) -> Image: """ Change the brightness of a PIL Image to a given level. """ def brightness(c: int) -> float: """ Fundamental Transformation/Operation that'll be performed on every pixel. """ return 128 + level + (c - 128) if not -255.0 <= level <= 255.0: raise ValueError("level must be between -255.0 (black) and 255.0 (white)") return img.point(brightness) if __name__ == "__main__": # Load image with Image.open("image_data/lena.jpg") as img: # Change brightness to 100 bright_img = change_brightness(img, 100) bright_img.save("image_data/lena_brightness.png", format="png")
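A minimal sketch of the per-channel transform above with PIL stripped away so the arithmetic is visible on its own; the sample channel values are illustrative only.

level = 100
for c in (0, 64, 128, 200):
    # 128 + level + (c - 128) algebraically reduces to c + level,
    # i.e. a uniform additive brightness shift on each channel value.
    print(c, "->", 128 + level + (c - 128))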
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
import numpy as np from PIL import Image def rgb2gray(rgb: np.array) -> np.array: """ Return gray image from rgb image >>> rgb2gray(np.array([[[127, 255, 0]]])) array([[187.6453]]) >>> rgb2gray(np.array([[[0, 0, 0]]])) array([[0.]]) >>> rgb2gray(np.array([[[2, 4, 1]]])) array([[3.0598]]) >>> rgb2gray(np.array([[[26, 255, 14], [5, 147, 20], [1, 200, 0]]])) array([[159.0524, 90.0635, 117.6989]]) """ r, g, b = rgb[:, :, 0], rgb[:, :, 1], rgb[:, :, 2] return 0.2989 * r + 0.5870 * g + 0.1140 * b def gray2binary(gray: np.array) -> np.array: """ Return binary image from gray image >>> gray2binary(np.array([[127, 255, 0]])) array([[False, True, False]]) >>> gray2binary(np.array([[0]])) array([[False]]) >>> gray2binary(np.array([[26.2409, 4.9315, 1.4729]])) array([[False, False, False]]) >>> gray2binary(np.array([[26, 255, 14], [5, 147, 20], [1, 200, 0]])) array([[False, True, False], [False, True, False], [False, True, False]]) """ return (gray > 127) & (gray <= 255) def dilation(image: np.array, kernel: np.array) -> np.array: """ Return dilated image >>> dilation(np.array([[True, False, True]]), np.array([[0, 1, 0]])) array([[False, False, False]]) >>> dilation(np.array([[False, False, True]]), np.array([[1, 0, 1]])) array([[False, False, False]]) """ output = np.zeros_like(image) image_padded = np.zeros( (image.shape[0] + kernel.shape[0] - 1, image.shape[1] + kernel.shape[1] - 1) ) # Copy image to padded image image_padded[kernel.shape[0] - 2 : -1 :, kernel.shape[1] - 2 : -1 :] = image # Iterate over image & apply kernel for x in range(image.shape[1]): for y in range(image.shape[0]): summation = ( kernel * image_padded[y : y + kernel.shape[0], x : x + kernel.shape[1]] ).sum() output[y, x] = int(summation > 0) return output # kernel to be applied structuring_element = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]]) if __name__ == "__main__": # read original image image = np.array(Image.open(r"..\image_data\lena.jpg")) output = dilation(gray2binary(rgb2gray(image)), structuring_element) # Save the output image pil_img = Image.fromarray(output).convert("RGB") pil_img.save("result_dilation.png")
import numpy as np from PIL import Image def rgb2gray(rgb: np.array) -> np.array: """ Return gray image from rgb image >>> rgb2gray(np.array([[[127, 255, 0]]])) array([[187.6453]]) >>> rgb2gray(np.array([[[0, 0, 0]]])) array([[0.]]) >>> rgb2gray(np.array([[[2, 4, 1]]])) array([[3.0598]]) >>> rgb2gray(np.array([[[26, 255, 14], [5, 147, 20], [1, 200, 0]]])) array([[159.0524, 90.0635, 117.6989]]) """ r, g, b = rgb[:, :, 0], rgb[:, :, 1], rgb[:, :, 2] return 0.2989 * r + 0.5870 * g + 0.1140 * b def gray2binary(gray: np.array) -> np.array: """ Return binary image from gray image >>> gray2binary(np.array([[127, 255, 0]])) array([[False, True, False]]) >>> gray2binary(np.array([[0]])) array([[False]]) >>> gray2binary(np.array([[26.2409, 4.9315, 1.4729]])) array([[False, False, False]]) >>> gray2binary(np.array([[26, 255, 14], [5, 147, 20], [1, 200, 0]])) array([[False, True, False], [False, True, False], [False, True, False]]) """ return (gray > 127) & (gray <= 255) def dilation(image: np.array, kernel: np.array) -> np.array: """ Return dilated image >>> dilation(np.array([[True, False, True]]), np.array([[0, 1, 0]])) array([[False, False, False]]) >>> dilation(np.array([[False, False, True]]), np.array([[1, 0, 1]])) array([[False, False, False]]) """ output = np.zeros_like(image) image_padded = np.zeros( (image.shape[0] + kernel.shape[0] - 1, image.shape[1] + kernel.shape[1] - 1) ) # Copy image to padded image image_padded[kernel.shape[0] - 2 : -1 :, kernel.shape[1] - 2 : -1 :] = image # Iterate over image & apply kernel for x in range(image.shape[1]): for y in range(image.shape[0]): summation = ( kernel * image_padded[y : y + kernel.shape[0], x : x + kernel.shape[1]] ).sum() output[y, x] = int(summation > 0) return output # kernel to be applied structuring_element = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]]) if __name__ == "__main__": # read original image image = np.array(Image.open(r"..\image_data\lena.jpg")) output = dilation(gray2binary(rgb2gray(image)), structuring_element) # Save the output image pil_img = Image.fromarray(output).convert("RGB") pil_img.save("result_dilation.png")
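A small sketch of the thresholding step used above (the gray2binary predicate) applied to synthetic pixel values rather than an image file; the array contents are illustrative only.

import numpy as np

gray = np.array([[0.0, 100.0, 127.0], [128.0, 200.0, 255.0]])
binary = (gray > 127) & (gray <= 255)  # same predicate as gray2binary
print(binary)  # first row stays False (at or below the 127 threshold), second row is True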
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
""" Champernowne's constant Problem 40 An irrational decimal fraction is created by concatenating the positive integers: 0.123456789101112131415161718192021... It can be seen that the 12th digit of the fractional part is 1. If dn represents the nth digit of the fractional part, find the value of the following expression. d1 Γ— d10 Γ— d100 Γ— d1000 Γ— d10000 Γ— d100000 Γ— d1000000 """ def solution(): """Returns >>> solution() 210 """ constant = [] i = 1 while len(constant) < 1e6: constant.append(str(i)) i += 1 constant = "".join(constant) return ( int(constant[0]) * int(constant[9]) * int(constant[99]) * int(constant[999]) * int(constant[9999]) * int(constant[99999]) * int(constant[999999]) ) if __name__ == "__main__": print(solution())
""" Champernowne's constant Problem 40 An irrational decimal fraction is created by concatenating the positive integers: 0.123456789101112131415161718192021... It can be seen that the 12th digit of the fractional part is 1. If dn represents the nth digit of the fractional part, find the value of the following expression. d1 Γ— d10 Γ— d100 Γ— d1000 Γ— d10000 Γ— d100000 Γ— d1000000 """ def solution(): """Returns >>> solution() 210 """ constant = [] i = 1 while len(constant) < 1e6: constant.append(str(i)) i += 1 constant = "".join(constant) return ( int(constant[0]) * int(constant[9]) * int(constant[99]) * int(constant[999]) * int(constant[9999]) * int(constant[99999]) * int(constant[999999]) ) if __name__ == "__main__": print(solution())
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
import sys from collections import defaultdict class Heap: def __init__(self): self.node_position = [] def get_position(self, vertex): return self.node_position[vertex] def set_position(self, vertex, pos): self.node_position[vertex] = pos def top_to_bottom(self, heap, start, size, positions): if start > size // 2 - 1: return else: if 2 * start + 2 >= size: smallest_child = 2 * start + 1 else: if heap[2 * start + 1] < heap[2 * start + 2]: smallest_child = 2 * start + 1 else: smallest_child = 2 * start + 2 if heap[smallest_child] < heap[start]: temp, temp1 = heap[smallest_child], positions[smallest_child] heap[smallest_child], positions[smallest_child] = ( heap[start], positions[start], ) heap[start], positions[start] = temp, temp1 temp = self.get_position(positions[smallest_child]) self.set_position( positions[smallest_child], self.get_position(positions[start]) ) self.set_position(positions[start], temp) self.top_to_bottom(heap, smallest_child, size, positions) # Update function if value of any node in min-heap decreases def bottom_to_top(self, val, index, heap, position): temp = position[index] while index != 0: parent = int((index - 2) / 2) if index % 2 == 0 else int((index - 1) / 2) if val < heap[parent]: heap[index] = heap[parent] position[index] = position[parent] self.set_position(position[parent], index) else: heap[index] = val position[index] = temp self.set_position(temp, index) break index = parent else: heap[0] = val position[0] = temp self.set_position(temp, 0) def heapify(self, heap, positions): start = len(heap) // 2 - 1 for i in range(start, -1, -1): self.top_to_bottom(heap, i, len(heap), positions) def delete_minimum(self, heap, positions): temp = positions[0] heap[0] = sys.maxsize self.top_to_bottom(heap, 0, len(heap), positions) return temp def prisms_algorithm(adjacency_list): """ >>> adjacency_list = {0: [[1, 1], [3, 3]], ... 1: [[0, 1], [2, 6], [3, 5], [4, 1]], ... 2: [[1, 6], [4, 5], [5, 2]], ... 3: [[0, 3], [1, 5], [4, 1]], ... 4: [[1, 1], [2, 5], [3, 1], [5, 4]], ... 
5: [[2, 2], [4, 4]]} >>> prisms_algorithm(adjacency_list) [(0, 1), (1, 4), (4, 3), (4, 5), (5, 2)] """ heap = Heap() visited = [0] * len(adjacency_list) nbr_tv = [-1] * len(adjacency_list) # Neighboring Tree Vertex of selected vertex # Minimum Distance of explored vertex with neighboring vertex of partial tree # formed in graph distance_tv = [] # Heap of Distance of vertices from their neighboring vertex positions = [] for vertex in range(len(adjacency_list)): distance_tv.append(sys.maxsize) positions.append(vertex) heap.node_position.append(vertex) tree_edges = [] visited[0] = 1 distance_tv[0] = sys.maxsize for neighbor, distance in adjacency_list[0]: nbr_tv[neighbor] = 0 distance_tv[neighbor] = distance heap.heapify(distance_tv, positions) for _ in range(1, len(adjacency_list)): vertex = heap.delete_minimum(distance_tv, positions) if visited[vertex] == 0: tree_edges.append((nbr_tv[vertex], vertex)) visited[vertex] = 1 for neighbor, distance in adjacency_list[vertex]: if ( visited[neighbor] == 0 and distance < distance_tv[heap.get_position(neighbor)] ): distance_tv[heap.get_position(neighbor)] = distance heap.bottom_to_top( distance, heap.get_position(neighbor), distance_tv, positions ) nbr_tv[neighbor] = vertex return tree_edges if __name__ == "__main__": # pragma: no cover # < --------- Prim's Algorithm --------- > edges_number = int(input("Enter number of edges: ").strip()) adjacency_list = defaultdict(list) for _ in range(edges_number): edge = [int(x) for x in input().strip().split()] adjacency_list[edge[0]].append([edge[1], edge[2]]) adjacency_list[edge[1]].append([edge[0], edge[2]]) print(prisms_algorithm(adjacency_list))
import sys from collections import defaultdict class Heap: def __init__(self): self.node_position = [] def get_position(self, vertex): return self.node_position[vertex] def set_position(self, vertex, pos): self.node_position[vertex] = pos def top_to_bottom(self, heap, start, size, positions): if start > size // 2 - 1: return else: if 2 * start + 2 >= size: smallest_child = 2 * start + 1 else: if heap[2 * start + 1] < heap[2 * start + 2]: smallest_child = 2 * start + 1 else: smallest_child = 2 * start + 2 if heap[smallest_child] < heap[start]: temp, temp1 = heap[smallest_child], positions[smallest_child] heap[smallest_child], positions[smallest_child] = ( heap[start], positions[start], ) heap[start], positions[start] = temp, temp1 temp = self.get_position(positions[smallest_child]) self.set_position( positions[smallest_child], self.get_position(positions[start]) ) self.set_position(positions[start], temp) self.top_to_bottom(heap, smallest_child, size, positions) # Update function if value of any node in min-heap decreases def bottom_to_top(self, val, index, heap, position): temp = position[index] while index != 0: parent = int((index - 2) / 2) if index % 2 == 0 else int((index - 1) / 2) if val < heap[parent]: heap[index] = heap[parent] position[index] = position[parent] self.set_position(position[parent], index) else: heap[index] = val position[index] = temp self.set_position(temp, index) break index = parent else: heap[0] = val position[0] = temp self.set_position(temp, 0) def heapify(self, heap, positions): start = len(heap) // 2 - 1 for i in range(start, -1, -1): self.top_to_bottom(heap, i, len(heap), positions) def delete_minimum(self, heap, positions): temp = positions[0] heap[0] = sys.maxsize self.top_to_bottom(heap, 0, len(heap), positions) return temp def prisms_algorithm(adjacency_list): """ >>> adjacency_list = {0: [[1, 1], [3, 3]], ... 1: [[0, 1], [2, 6], [3, 5], [4, 1]], ... 2: [[1, 6], [4, 5], [5, 2]], ... 3: [[0, 3], [1, 5], [4, 1]], ... 4: [[1, 1], [2, 5], [3, 1], [5, 4]], ... 
5: [[2, 2], [4, 4]]} >>> prisms_algorithm(adjacency_list) [(0, 1), (1, 4), (4, 3), (4, 5), (5, 2)] """ heap = Heap() visited = [0] * len(adjacency_list) nbr_tv = [-1] * len(adjacency_list) # Neighboring Tree Vertex of selected vertex # Minimum Distance of explored vertex with neighboring vertex of partial tree # formed in graph distance_tv = [] # Heap of Distance of vertices from their neighboring vertex positions = [] for vertex in range(len(adjacency_list)): distance_tv.append(sys.maxsize) positions.append(vertex) heap.node_position.append(vertex) tree_edges = [] visited[0] = 1 distance_tv[0] = sys.maxsize for neighbor, distance in adjacency_list[0]: nbr_tv[neighbor] = 0 distance_tv[neighbor] = distance heap.heapify(distance_tv, positions) for _ in range(1, len(adjacency_list)): vertex = heap.delete_minimum(distance_tv, positions) if visited[vertex] == 0: tree_edges.append((nbr_tv[vertex], vertex)) visited[vertex] = 1 for neighbor, distance in adjacency_list[vertex]: if ( visited[neighbor] == 0 and distance < distance_tv[heap.get_position(neighbor)] ): distance_tv[heap.get_position(neighbor)] = distance heap.bottom_to_top( distance, heap.get_position(neighbor), distance_tv, positions ) nbr_tv[neighbor] = vertex return tree_edges if __name__ == "__main__": # pragma: no cover # < --------- Prim's Algorithm --------- > edges_number = int(input("Enter number of edges: ").strip()) adjacency_list = defaultdict(list) for _ in range(edges_number): edge = [int(x) for x in input().strip().split()] adjacency_list[edge[0]].append([edge[1], edge[2]]) adjacency_list[edge[1]].append([edge[0], edge[2]]) print(prisms_algorithm(adjacency_list))
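For contrast, a compact sketch of the same minimum spanning tree computation using Python's heapq with lazy deletion (skipping stale heap entries) instead of the hand-rolled decrease-key heap above; the adjacency list reuses the doctest's [neighbor, weight] format, and this happens to reproduce the doctest's edge list, though edge order is not guaranteed in general.

import heapq

def prim_lazy(adj: dict) -> list[tuple[int, int]]:
    visited = {0}
    candidates = [(weight, 0, neighbor) for neighbor, weight in adj[0]]
    heapq.heapify(candidates)
    mst = []
    while candidates and len(visited) < len(adj):
        weight, u, v = heapq.heappop(candidates)
        if v in visited:
            continue  # stale entry: cheaper edge to v was already taken
        visited.add(v)
        mst.append((u, v))
        for neighbor, w in adj[v]:
            if neighbor not in visited:
                heapq.heappush(candidates, (w, v, neighbor))
    return mst

adj = {0: [[1, 1], [3, 3]], 1: [[0, 1], [2, 6], [3, 5], [4, 1]],
       2: [[1, 6], [4, 5], [5, 2]], 3: [[0, 3], [1, 5], [4, 1]],
       4: [[1, 1], [2, 5], [3, 1], [5, 4]], 5: [[2, 2], [4, 4]]}
print(prim_lazy(adj))  # [(0, 1), (1, 4), (4, 3), (4, 5), (5, 2)]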
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
""" Author : Syed Faizan (3rd Year Student IIIT Pune) github : faizan2700 You are given a bitmask m and you want to efficiently iterate through all of its submasks. The mask s is submask of m if only bits that were included in bitmask are set """ from __future__ import annotations def list_of_submasks(mask: int) -> list[int]: """ Args: mask : number which shows mask ( always integer > 0, zero does not have any submasks ) Returns: all_submasks : the list of submasks of mask (mask s is called submask of mask m if only bits that were included in original mask are set Raises: AssertionError: mask not positive integer >>> list_of_submasks(15) [15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1] >>> list_of_submasks(13) [13, 12, 9, 8, 5, 4, 1] >>> list_of_submasks(-7) # doctest: +ELLIPSIS Traceback (most recent call last): ... AssertionError: mask needs to be positive integer, your input -7 >>> list_of_submasks(0) # doctest: +ELLIPSIS Traceback (most recent call last): ... AssertionError: mask needs to be positive integer, your input 0 """ assert ( isinstance(mask, int) and mask > 0 ), f"mask needs to be positive integer, your input {mask}" """ first submask iterated will be mask itself then operation will be performed to get other submasks till we reach empty submask that is zero ( zero is not included in final submasks list ) """ all_submasks = [] submask = mask while submask: all_submasks.append(submask) submask = (submask - 1) & mask return all_submasks if __name__ == "__main__": import doctest doctest.testmod()
""" Author : Syed Faizan (3rd Year Student IIIT Pune) github : faizan2700 You are given a bitmask m and you want to efficiently iterate through all of its submasks. The mask s is submask of m if only bits that were included in bitmask are set """ from __future__ import annotations def list_of_submasks(mask: int) -> list[int]: """ Args: mask : number which shows mask ( always integer > 0, zero does not have any submasks ) Returns: all_submasks : the list of submasks of mask (mask s is called submask of mask m if only bits that were included in original mask are set Raises: AssertionError: mask not positive integer >>> list_of_submasks(15) [15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1] >>> list_of_submasks(13) [13, 12, 9, 8, 5, 4, 1] >>> list_of_submasks(-7) # doctest: +ELLIPSIS Traceback (most recent call last): ... AssertionError: mask needs to be positive integer, your input -7 >>> list_of_submasks(0) # doctest: +ELLIPSIS Traceback (most recent call last): ... AssertionError: mask needs to be positive integer, your input 0 """ assert ( isinstance(mask, int) and mask > 0 ), f"mask needs to be positive integer, your input {mask}" """ first submask iterated will be mask itself then operation will be performed to get other submasks till we reach empty submask that is zero ( zero is not included in final submasks list ) """ all_submasks = [] submask = mask while submask: all_submasks.append(submask) submask = (submask - 1) & mask return all_submasks if __name__ == "__main__": import doctest doctest.testmod()
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
""" Implementation of finding nth fibonacci number using matrix exponentiation. Time Complexity is about O(log(n)*8), where 8 is the complexity of matrix multiplication of size 2 by 2. And on the other hand complexity of bruteforce solution is O(n). As we know f[n] = f[n-1] + f[n-1] Converting to matrix, [f(n),f(n-1)] = [[1,1],[1,0]] * [f(n-1),f(n-2)] -> [f(n),f(n-1)] = [[1,1],[1,0]]^2 * [f(n-2),f(n-3)] ... ... -> [f(n),f(n-1)] = [[1,1],[1,0]]^(n-1) * [f(1),f(0)] So we just need the n times multiplication of the matrix [1,1],[1,0]]. We can decrease the n times multiplication by following the divide and conquer approach. """ def multiply(matrix_a: list[list[int]], matrix_b: list[list[int]]) -> list[list[int]]: matrix_c = [] n = len(matrix_a) for i in range(n): list_1 = [] for j in range(n): val = 0 for k in range(n): val = val + matrix_a[i][k] * matrix_b[k][j] list_1.append(val) matrix_c.append(list_1) return matrix_c def identity(n: int) -> list[list[int]]: return [[int(row == column) for column in range(n)] for row in range(n)] def nth_fibonacci_matrix(n: int) -> int: """ >>> nth_fibonacci_matrix(100) 354224848179261915075 >>> nth_fibonacci_matrix(-100) -100 """ if n <= 1: return n res_matrix = identity(2) fibonacci_matrix = [[1, 1], [1, 0]] n = n - 1 while n > 0: if n % 2 == 1: res_matrix = multiply(res_matrix, fibonacci_matrix) fibonacci_matrix = multiply(fibonacci_matrix, fibonacci_matrix) n = int(n / 2) return res_matrix[0][0] def nth_fibonacci_bruteforce(n: int) -> int: """ >>> nth_fibonacci_bruteforce(100) 354224848179261915075 >>> nth_fibonacci_bruteforce(-100) -100 """ if n <= 1: return n fib0 = 0 fib1 = 1 for _ in range(2, n + 1): fib0, fib1 = fib1, fib0 + fib1 return fib1 def main() -> None: for ordinal in "0th 1st 2nd 3rd 10th 100th 1000th".split(): n = int("".join(c for c in ordinal if c in "0123456789")) # 1000th --> 1000 print( f"{ordinal} fibonacci number using matrix exponentiation is " f"{nth_fibonacci_matrix(n)} and using bruteforce is " f"{nth_fibonacci_bruteforce(n)}\n" ) # from timeit import timeit # print(timeit("nth_fibonacci_matrix(1000000)", # "from main import nth_fibonacci_matrix", number=5)) # print(timeit("nth_fibonacci_bruteforce(1000000)", # "from main import nth_fibonacci_bruteforce", number=5)) # 2.3342058970001744 # 57.256506615000035 if __name__ == "__main__": import doctest doctest.testmod() main()
""" Implementation of finding nth fibonacci number using matrix exponentiation. Time Complexity is about O(log(n)*8), where 8 is the complexity of matrix multiplication of size 2 by 2. And on the other hand complexity of bruteforce solution is O(n). As we know f[n] = f[n-1] + f[n-1] Converting to matrix, [f(n),f(n-1)] = [[1,1],[1,0]] * [f(n-1),f(n-2)] -> [f(n),f(n-1)] = [[1,1],[1,0]]^2 * [f(n-2),f(n-3)] ... ... -> [f(n),f(n-1)] = [[1,1],[1,0]]^(n-1) * [f(1),f(0)] So we just need the n times multiplication of the matrix [1,1],[1,0]]. We can decrease the n times multiplication by following the divide and conquer approach. """ def multiply(matrix_a: list[list[int]], matrix_b: list[list[int]]) -> list[list[int]]: matrix_c = [] n = len(matrix_a) for i in range(n): list_1 = [] for j in range(n): val = 0 for k in range(n): val = val + matrix_a[i][k] * matrix_b[k][j] list_1.append(val) matrix_c.append(list_1) return matrix_c def identity(n: int) -> list[list[int]]: return [[int(row == column) for column in range(n)] for row in range(n)] def nth_fibonacci_matrix(n: int) -> int: """ >>> nth_fibonacci_matrix(100) 354224848179261915075 >>> nth_fibonacci_matrix(-100) -100 """ if n <= 1: return n res_matrix = identity(2) fibonacci_matrix = [[1, 1], [1, 0]] n = n - 1 while n > 0: if n % 2 == 1: res_matrix = multiply(res_matrix, fibonacci_matrix) fibonacci_matrix = multiply(fibonacci_matrix, fibonacci_matrix) n = int(n / 2) return res_matrix[0][0] def nth_fibonacci_bruteforce(n: int) -> int: """ >>> nth_fibonacci_bruteforce(100) 354224848179261915075 >>> nth_fibonacci_bruteforce(-100) -100 """ if n <= 1: return n fib0 = 0 fib1 = 1 for _ in range(2, n + 1): fib0, fib1 = fib1, fib0 + fib1 return fib1 def main() -> None: for ordinal in "0th 1st 2nd 3rd 10th 100th 1000th".split(): n = int("".join(c for c in ordinal if c in "0123456789")) # 1000th --> 1000 print( f"{ordinal} fibonacci number using matrix exponentiation is " f"{nth_fibonacci_matrix(n)} and using bruteforce is " f"{nth_fibonacci_bruteforce(n)}\n" ) # from timeit import timeit # print(timeit("nth_fibonacci_matrix(1000000)", # "from main import nth_fibonacci_matrix", number=5)) # print(timeit("nth_fibonacci_bruteforce(1000000)", # "from main import nth_fibonacci_bruteforce", number=5)) # 2.3342058970001744 # 57.256506615000035 if __name__ == "__main__": import doctest doctest.testmod() main()
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
MMMMDCLXXII MMDCCCLXXXIII MMMDLXVIIII MMMMDXCV DCCCLXXII MMCCCVI MMMCDLXXXVII MMMMCCXXI MMMCCXX MMMMDCCCLXXIII MMMCCXXXVII MMCCCLXXXXIX MDCCCXXIIII MMCXCVI CCXCVIII MMMCCCXXXII MDCCXXX MMMDCCCL MMMMCCLXXXVI MMDCCCXCVI MMMDCII MMMCCXII MMMMDCCCCI MMDCCCXCII MDCXX CMLXXXVII MMMXXI MMMMCCCXIV MLXXII MCCLXXVIIII MMMMCCXXXXI MMDCCCLXXII MMMMXXXI MMMDCCLXXX MMDCCCLXXIX MMMMLXXXV MCXXI MDCCCXXXVII MMCCCLXVII MCDXXXV CCXXXIII CMXX MMMCLXIV MCCCLXXXVI DCCCXCVIII MMMDCCCCXXXIV CDXVIIII MMCCXXXV MDCCCXXXII MMMMD MMDCCLXIX MMMMCCCLXXXXVI MMDCCXLII MMMDCCCVIIII DCCLXXXIIII MDCCCCXXXII MMCXXVII DCCCXXX CCLXIX MMMXI MMMMCMLXXXXVIII MMMMDLXXXVII MMMMDCCCLX MMCCLIV CMIX MMDCCCLXXXIIII CLXXXII MMCCCCXXXXV MMMMDLXXXVIIII MMMDCCCXXI MMDCCCCLXXVI MCCCCLXX MMCDLVIIII MMMDCCCLIX MMMMCCCCXIX MMMDCCCLXXV XXXI CDLXXXIII MMMCXV MMDCCLXIII MMDXXX MMMMCCCLVII MMMDCI MMMMCDLXXXIIII MMMMCCCXVI CCCLXXXVIII MMMMCML MMMMXXIV MMMCCCCXXX DCCX MMMCCLX MMDXXXIII CCCLXIII MMDCCXIII MMMCCCXLIV CLXXXXI CXVI MMMMCXXXIII CLXX DCCCXVIII MLXVII DLXXXX MMDXXI MMMMDLXXXXVIII MXXII LXI DCCCCXLIII MMMMDV MMMMXXXIV MDCCCLVIII MMMCCLXXII MMMMDCCXXXVI MMMMLXXXIX MDCCCLXXXI MMMMDCCCXV MMMMCCCCXI MMMMCCCLIII MDCCCLXXI MMCCCCXI MLXV MMCDLXII MMMMDXXXXII MMMMDCCCXL MMMMCMLVI CCLXXXIV MMMDCCLXXXVI MMCLII MMMCCCCXV MMLXXXIII MMMV MMMV DCCLXII MMDCCCCXVI MMDCXLVIII CCLIIII CCCXXV MMDCCLXXXVIIII MMMMDCLXXVIII MMMMDCCCXCI MMMMCCCXX MMCCXLV MMMDCCCLXIX MMCCLXIIII MMMDCCCXLIX MMMMCCCLXIX CMLXXXXI MCMLXXXIX MMCDLXI MMDCLXXVIII MMMMDCCLXI MCDXXV DL CCCLXXII MXVIIII MCCCCLXVIII CIII MMMDCCLXXIIII MMMDVIII MMMMCCCLXXXXVII MMDXXVII MMDCCLXXXXV MMMMCXLVI MMMDCCLXXXII MMMDXXXVI MCXXII CLI DCLXXXIX MMMCLI MDCLXIII MMMMDCCXCVII MMCCCLXXXV MMMDCXXVIII MMMCDLX MMMCMLII MMMIV MMMMDCCCLVIII MMMDLXXXVIII MCXXIV MMMMLXXVI CLXXIX MMMCCCCXXVIIII DCCLXXXV MMMDCCCVI LI CLXXXVI MMMMCCCLXXVI MCCCLXVI CCXXXIX MMDXXXXI MMDCCCXLI DCCCLXXXVIII MMMMDCCCIV MDCCCCXV MMCMVI MMMMCMLXXXXV MMDCCLVI MMMMCCXLVIII DCCCCIIII MMCCCCIII MMMDCCLXXXVIIII MDCCCLXXXXV DVII MMMV DCXXV MMDCCCXCV DCVIII MMCDLXVI MCXXVIII MDCCXCVIII MMDCLX MMMDCCLXIV MMCDLXXVII MMDLXXXIIII MMMMCCCXXII MMMDCCCXLIIII DCCCCLXVII MMMCLXXXXIII MCCXV MMMMDCXI MMMMDCLXXXXV MMMCCCLII MMCMIX MMDCCXXV MMDLXXXVI MMMMDCXXVIIII DCCCCXXXVIIII MMCCXXXIIII MMDCCLXXVIII MDCCLXVIIII MMCCLXXXV MMMMDCCCLXXXVIII MMCMXCI MDXLII MMMMDCCXIV MMMMLI DXXXXIII MMDCCXI MMMMCCLXXXIII MMMDCCCLXXIII MDCLVII MMCD MCCCXXVII MMMMDCCIIII MMMDCCXLVI MMMCLXXXVII MMMCCVIIII MCCCCLXXIX DL DCCCLXXVI MMDXCI MMMMDCCCCXXXVI MMCII MMMDCCCXXXXV MMMCDXLV MMDCXXXXIV MMD MDCCCLXXXX MMDCXLIII MMCCXXXII MMDCXXXXVIIII DCCCLXXI MDXCVIIII MMMMCCLXXVIII MDCLVIIII MMMCCCLXXXIX MDCLXXXV MDLVIII MMMMCCVII MMMMDCXIV MMMCCCLXIIII MMIIII MMMMCCCLXXIII CCIII MMMCCLV MMMDXIII MMMCCCXC MMMDCCCXXI MMMMCCCCXXXII CCCLVI MMMCCCLXXXVI MXVIIII MMMCCCCXIIII CLXVII MMMCCLXX CCCCLXIV MMXXXXII MMMMCCLXXXX MXL CCXVI CCCCLVIIII MMCCCII MCCCLVIII MMMMCCCX MCDLXXXXIV MDCCCXIII MMDCCCXL MMMMCCCXXIII DXXXIV CVI MMMMDCLXXX DCCCVII MMCMLXIIII MMMDCCCXXXIII DCCC MDIII MMCCCLXVI MMMCCCCLXXI MMDCCCCXVIII CCXXXVII CCCXXV MDCCCXII MMMCMV MMMMCMXV MMMMDCXCI DXXI MMCCXLVIIII MMMMCMLII MDLXXX MMDCLXVI CXXI MMMDCCCLIIII MMMCXXI MCCIII MMDCXXXXI CCXCII MMMMDXXXV MMMCCCLXV MMMMDLXV MMMCCCCXXXII MMMCCCVIII DCCCCLXXXXII MMCLXIV MMMMCXI MLXXXXVII MMMCDXXXVIII MDXXII MLV MMMMDLXVI MMMCXII XXXIII MMMMDCCCXXVI MMMLXVIIII MMMLX MMMCDLXVII MDCCCLVII MMCXXXVII MDCCCCXXX MMDCCCLXIII MMMMDCXLIX MMMMCMXLVIII DCCCLXXVIIII MDCCCLIII MMMCMLXI MMMMCCLXI MMDCCCLIII MMMDCCCVI MMDXXXXIX MMCLXXXXV MMDXXX MMMXIII DCLXXIX 
DCCLXII MMMMDCCLXVIII MDCCXXXXIII CCXXXII MMMMDCXXV MMMCCCXXVIII MDCVIII MMMCLXXXXIIII CLXXXI MDCCCCXXXIII MMMMDCXXX MMMDCXXIV MMMCCXXXVII MCCCXXXXIIII CXVIII MMDCCCCIV MMMMCDLXXV MMMDLXIV MDXCIII MCCLXXXI MMMDCCCXXIV MCXLIII MMMDCCCI MCCLXXX CCXV MMDCCLXXI MMDLXXXIII MMMMDCXVII MMMCMLXV MCLXVIII MMMMCCLXXVI MMMDCCLXVIIII MMMMDCCCIX DLXXXXIX DCCCXXII MMMMIII MMMMCCCLXXVI DCCCXCIII DXXXI MXXXIIII CCXII MMMDCCLXXXIIII MMMCXX MMMCMXXVII DCCCXXXX MMCDXXXVIIII MMMMDCCXVIII LV MMMDCCCCVI MCCCII MMCMLXVIIII MDCCXI MMMMDLXVII MMCCCCLXI MMDCCV MMMCCCXXXIIII MMMMDI MMMDCCCXCV MMDCCLXXXXI MMMDXXVI MMMDCCCLVI MMDCXXX MCCCVII MMMMCCCLXII MMMMXXV MMCMXXV MMLVI MMDXXX MMMMCVII MDC MCCIII MMMMDCC MMCCLXXV MMDCCCXXXXVI MMMMCCCLXV CDXIIII MLXIIII CCV MMMCMXXXI CCCCLXVI MDXXXII MMMMCCCLVIII MMV MMMCLII MCMLI MMDCCXX MMMMCCCCXXXVI MCCLXXXI MMMCMVI DCCXXX MMMMCCCLXV DCCCXI MMMMDCCCXIV CCCXXI MMDLXXV CCCCLXXXX MCCCLXXXXII MMDCIX DCCXLIIII DXIV MMMMCLII CDLXI MMMCXXVII MMMMDCCCCLXIII MMMDCLIIII MCCCCXXXXII MMCCCLX CCCCLIII MDCCLXXVI MCMXXIII MMMMDLXXVIII MMDCCCCLX MMMCCCLXXXX MMMCDXXVI MMMDLVIII CCCLXI MMMMDCXXII MMDCCCXXI MMDCCXIII MMMMCLXXXVI MDCCCCXXVI MDV MMDCCCCLXXVI MMMMCCXXXVII MMMDCCLXXVIIII MMMCCCCLXVII DCCXLI MMCLXXXVIII MCCXXXVI MMDCXLVIII MMMMCXXXII MMMMDCCLXVI MMMMCMLI MMMMCLXV MMMMDCCCXCIV MCCLXXVII LXXVIIII DCCLII MMMCCCXCVI MMMCLV MMDCCCXXXXVIII DCCCXV MXC MMDCCLXXXXVII MMMMCML MMDCCCLXXVIII DXXI MCCCXLI DCLXXXXI MMCCCLXXXXVIII MDCCCCLXXVIII MMMMDXXV MMMDCXXXVI MMMCMXCVII MMXVIIII MMMDCCLXXIV MMMCXXV DXXXVIII MMMMCLXVI MDXII MMCCCLXX CCLXXI DXIV MMMCLIII DLII MMMCCCXLIX MMCCCCXXVI MMDCXLIII MXXXXII CCCLXXXV MDCLXXVI MDCXII MMMCCCLXXXIII MMDCCCCLXXXII MMMMCCCLXXXV MMDCXXI DCCCXXX MMMDCCCCLII MMMDCCXXII MMMMCDXCVIII MMMCCLXVIIII MMXXV MMMMCDXIX MMMMCCCX MMMCCCCLXVI MMMMDCLXXVIIII MMMMDCXXXXIV MMMCMXII MMMMXXXIII MMMMDLXXXII DCCCLIV MDXVIIII MMMCLXXXXV CCCCXX MMDIX MMCMLXXXVIII DCCXLIII DCCLX D MCCCVII MMMMCCCLXXXIII MDCCCLXXIIII MMMDCCCCLXXXVII MMMMCCCVII MMMDCCLXXXXVI CDXXXIV MCCLXVIII MMMMDLX MMMMDXII MMMMCCCCLIIII MCMLXXXXIII MMMMDCCCIII MMDCLXXXIII MDCCCXXXXIV XXXXVII MMMDCCCXXXII MMMDCCCXLII MCXXXV MDCXXVIIII MMMCXXXXIIII MMMMCDXVII MMMDXXIII MMMMCCCCLXI DCLXXXXVIIII LXXXXI CXXXIII MCDX MCCLVII MDCXXXXII MMMCXXIV MMMMLXXXX MMDCCCCXLV MLXXX MMDCCCCLX MCDLIII MMMCCCLXVII MMMMCCCLXXIV MMMDCVIII DCCCCXXIII MMXCI MMDCCIV MMMMDCCCXXXIV CCCLXXI MCCLXXXII MCMIII CCXXXI DCCXXXVIII MMMMDCCXLVIIII MMMMCMXXXV DCCCLXXV DCCXCI MMMMDVII MMMMDCCCLXVIIII CCCXCV MMMMDCCXX MCCCCII MMMCCCXC MMMCCCII MMDCCLXXVII MMDCLIIII CCXLIII MMMDCXVIII MMMCCCIX MCXV MMCCXXV MLXXIIII MDCCXXVI MMMCCCXX MMDLXX MMCCCCVI MMDCCXX MMMMDCCCCXCV MDCCCXXXII MMMMDCCCCXXXX XCIV MMCCCCLX MMXVII MLXXI MMMDXXVIII MDCCCCII MMMCMLVII MMCLXXXXVIII MDCCCCLV MCCCCLXXIIII MCCCLII MCDXLVI MMMMDXVIII DCCLXXXIX MMMDCCLXIV MDCCCCXLIII CLXXXXV MMMMCCXXXVI MMMDCCCXXI MMMMCDLXXVII MCDLIII MMCCXLVI DCCCLV MCDLXX DCLXXVIII MMDCXXXIX MMMMDCLX MMDCCLI MMCXXXV MMMCCXII MMMMCMLXII MMMMCCV MCCCCLXIX MMMMCCIII CLXVII MCCCLXXXXIIII MMMMDCVIII MMDCCCLXI MMLXXIX CMLXIX MMDCCCXLVIIII DCLXII MMMCCCXLVII MDCCCXXXV MMMMDCCXCVI DCXXX XXVI MMLXIX MMCXI DCXXXVII MMMMCCCXXXXVIII MMMMDCLXI MMMMDCLXXIIII MMMMVIII MMMMDCCCLXII MDCXCI MMCCCXXIIII CCCCXXXXV MMDCCCXXI MCVI MMDCCLXVIII MMMMCXL MLXVIII CMXXVII CCCLV MDCCLXXXIX MMMCCCCLXV MMDCCLXII MDLXVI MMMCCCXVIII MMMMCCLXXXI MMCXXVII MMDCCCLXVIII MMMCXCII MMMMDCLVIII MMMMDCCCXXXXII MMDCCCCLXXXXVI MDCCXL MDCCLVII MMMMDCCCLXXXVI DCCXXXIII MMMMDCCCCLXXXV MMCCXXXXVIII MMMCCLXXVIII MMMDCLXXVIII DCCCI MMMMLXXXXVIIII MMMCCCCLXXII 
MMCLXXXVII CCLXVI MCDXLIII MMCXXVIII MDXIV CCCXCVIII CLXXVIII MMCXXXXVIIII MMMDCLXXXIV CMLVIII MCDLIX MMMMDCCCXXXII MMMMDCXXXIIII MDCXXI MMMDCXLV MCLXXVIII MCDXXII IV MCDLXXXXIII MMMMDCCLXV CCLI MMMMDCCCXXXVIII DCLXII MCCCLXVII MMMMDCCCXXXVI MMDCCXLI MLXI MMMCDLXVIII MCCCCXCIII XXXIII MMMDCLXIII MMMMDCL DCCCXXXXIIII MMDLVII DXXXVII MCCCCXXIIII MCVII MMMMDCCXL MMMMCXXXXIIII MCCCCXXIV MMCLXVIII MMXCIII MDCCLXXX MCCCLIIII MMDCLXXI MXI MCMLIV MMMCCIIII DCCLXXXVIIII MDCLIV MMMDCXIX CMLXXXI DCCLXXXVII XXV MMMXXXVI MDVIIII CLXIII MMMCDLVIIII MMCCCCVII MMMLXX MXXXXII MMMMCCCLXVIII MMDCCCXXVIII MMMMDCXXXXI MMMMDCCCXXXXV MMMXV MMMMCCXVIIII MMDCCXIIII MMMXXVII MDCCLVIIII MMCXXIIII MCCCLXXIV DCLVIII MMMLVII MMMCXLV MMXCVII MMMCCCLXXXVII MMMMCCXXII DXII MMMDLV MCCCLXXVIII MMMCLIIII MMMMCLXXXX MMMCLXXXIIII MDCXXIII MMMMCCXVI MMMMDLXXXIII MMMDXXXXIII MMMMCCCCLV MMMDLXXXI MMMCCLXXVI MMMMXX MMMMDLVI MCCCCLXXX MMMXXII MMXXII MMDCCCCXXXI MMMDXXV MMMDCLXXXVIIII MMMDLXXXXVII MDLXIIII CMXC MMMXXXVIII MDLXXXVIII MCCCLXXVI MMCDLIX MMDCCCXVIII MDCCCXXXXVI MMMMCMIV MMMMDCIIII MMCCXXXV XXXXVI MMMMCCXVII MMCCXXIV MCMLVIIII MLXXXIX MMMMLXXXIX CLXXXXIX MMMDCCCCLVIII MMMMCCLXXIII MCCCC DCCCLIX MMMCCCLXXXII MMMCCLXVIIII MCLXXXV CDLXXXVII DCVI MMX MMCCXIII MMMMDCXX MMMMXXVIII DCCCLXII MMMMCCCXLIII MMMMCLXV DXCI MMMMCLXXX MMMDCCXXXXI MMMMXXXXVI DCLX MMMCCCXI MCCLXXX MMCDLXXII DCCLXXI MMMCCCXXXVI MCCCCLXXXVIIII CDLVIII DCCLVI MMMMDCXXXVIII MMCCCLXXXIII MMMMDCCLXXV MMMXXXVI CCCLXXXXIX CV CCCCXIII CCCCXVI MDCCCLXXXIIII MMDCCLXXXII MMMMCCCCLXXXI MXXV MMCCCLXXVIIII MMMCCXII MMMMCCXXXIII MMCCCLXXXVI MMMDCCCLVIIII MCCXXXVII MDCLXXV XXXV MMDLI MMMCCXXX MMMMCXXXXV CCCCLIX MMMMDCCCLXXIII MMCCCXVII DCCCXVI MMMCCCXXXXV MDCCCCXCV CLXXXI MMMMDCCLXX MMMDCCCIII MMCLXXVII MMMDCCXXIX MMDCCCXCIIII MMMCDXXIIII MMMMXXVIII MMMMDCCCCLXVIII MDCCCXX MMMMCDXXI MMMMDLXXXIX CCXVI MDVIII MMCCLXXI MMMDCCCLXXI MMMCCCLXXVI MMCCLXI MMMMDCCCXXXIV DLXXXVI MMMMDXXXII MMMXXIIII MMMMCDIV MMMMCCCXLVIII MMMMCXXXVIII MMMCCCLXVI MDCCXVIII MMCXX CCCLIX MMMMDCCLXXII MDCCCLXXV MMMMDCCCXXIV DCCCXXXXVIII MMMDCCCCXXXVIIII MMMMCCXXXV MDCLXXXIII MMCCLXXXIV MCLXXXXIIII DXXXXIII MCCCXXXXVIII MMCLXXIX MMMMCCLXIV MXXII MMMCXIX MDCXXXVII MMDCCVI MCLXXXXVIII MMMCXVI MCCCLX MMMCDX CCLXVIIII MMMCCLX MCXXVIII LXXXII MCCCCLXXXI MMMI MMMCCCLXIV MMMCCCXXVIIII CXXXVIII MMCCCXX MMMCCXXVIIII MCCLXVI MMMCCCCXXXXVI MMDCCXCIX MCMLXXI MMCCLXVIII CDLXXXXIII MMMMDCCXXII MMMMDCCLXXXVII MMMDCCLIV MMCCLXIII MDXXXVII DCCXXXIIII MCII MMMDCCCLXXI MMMLXXIII MDCCCLIII MMXXXVIII MDCCXVIIII MDCCCCXXXVII MMCCCXVI MCMXXII MMMCCCLVIII MMMMDCCCXX MCXXIII MMMDLXI MMMMDXXII MDCCCX MMDXCVIIII MMMDCCCCVIII MMMMDCCCCXXXXVI MMDCCCXXXV MMCXCIV MCMLXXXXIII MMMCCCLXXVI MMMMDCLXXXV CMLXIX DCXCII MMXXVIII MMMMCCCXXX XXXXVIIII
MMMMDCLXXII MMDCCCLXXXIII MMMDLXVIIII MMMMDXCV DCCCLXXII MMCCCVI MMMCDLXXXVII MMMMCCXXI MMMCCXX MMMMDCCCLXXIII MMMCCXXXVII MMCCCLXXXXIX MDCCCXXIIII MMCXCVI CCXCVIII MMMCCCXXXII MDCCXXX MMMDCCCL MMMMCCLXXXVI MMDCCCXCVI MMMDCII MMMCCXII MMMMDCCCCI MMDCCCXCII MDCXX CMLXXXVII MMMXXI MMMMCCCXIV MLXXII MCCLXXVIIII MMMMCCXXXXI MMDCCCLXXII MMMMXXXI MMMDCCLXXX MMDCCCLXXIX MMMMLXXXV MCXXI MDCCCXXXVII MMCCCLXVII MCDXXXV CCXXXIII CMXX MMMCLXIV MCCCLXXXVI DCCCXCVIII MMMDCCCCXXXIV CDXVIIII MMCCXXXV MDCCCXXXII MMMMD MMDCCLXIX MMMMCCCLXXXXVI MMDCCXLII MMMDCCCVIIII DCCLXXXIIII MDCCCCXXXII MMCXXVII DCCCXXX CCLXIX MMMXI MMMMCMLXXXXVIII MMMMDLXXXVII MMMMDCCCLX MMCCLIV CMIX MMDCCCLXXXIIII CLXXXII MMCCCCXXXXV MMMMDLXXXVIIII MMMDCCCXXI MMDCCCCLXXVI MCCCCLXX MMCDLVIIII MMMDCCCLIX MMMMCCCCXIX MMMDCCCLXXV XXXI CDLXXXIII MMMCXV MMDCCLXIII MMDXXX MMMMCCCLVII MMMDCI MMMMCDLXXXIIII MMMMCCCXVI CCCLXXXVIII MMMMCML MMMMXXIV MMMCCCCXXX DCCX MMMCCLX MMDXXXIII CCCLXIII MMDCCXIII MMMCCCXLIV CLXXXXI CXVI MMMMCXXXIII CLXX DCCCXVIII MLXVII DLXXXX MMDXXI MMMMDLXXXXVIII MXXII LXI DCCCCXLIII MMMMDV MMMMXXXIV MDCCCLVIII MMMCCLXXII MMMMDCCXXXVI MMMMLXXXIX MDCCCLXXXI MMMMDCCCXV MMMMCCCCXI MMMMCCCLIII MDCCCLXXI MMCCCCXI MLXV MMCDLXII MMMMDXXXXII MMMMDCCCXL MMMMCMLVI CCLXXXIV MMMDCCLXXXVI MMCLII MMMCCCCXV MMLXXXIII MMMV MMMV DCCLXII MMDCCCCXVI MMDCXLVIII CCLIIII CCCXXV MMDCCLXXXVIIII MMMMDCLXXVIII MMMMDCCCXCI MMMMCCCXX MMCCXLV MMMDCCCLXIX MMCCLXIIII MMMDCCCXLIX MMMMCCCLXIX CMLXXXXI MCMLXXXIX MMCDLXI MMDCLXXVIII MMMMDCCLXI MCDXXV DL CCCLXXII MXVIIII MCCCCLXVIII CIII MMMDCCLXXIIII MMMDVIII MMMMCCCLXXXXVII MMDXXVII MMDCCLXXXXV MMMMCXLVI MMMDCCLXXXII MMMDXXXVI MCXXII CLI DCLXXXIX MMMCLI MDCLXIII MMMMDCCXCVII MMCCCLXXXV MMMDCXXVIII MMMCDLX MMMCMLII MMMIV MMMMDCCCLVIII MMMDLXXXVIII MCXXIV MMMMLXXVI CLXXIX MMMCCCCXXVIIII DCCLXXXV MMMDCCCVI LI CLXXXVI MMMMCCCLXXVI MCCCLXVI CCXXXIX MMDXXXXI MMDCCCXLI DCCCLXXXVIII MMMMDCCCIV MDCCCCXV MMCMVI MMMMCMLXXXXV MMDCCLVI MMMMCCXLVIII DCCCCIIII MMCCCCIII MMMDCCLXXXVIIII MDCCCLXXXXV DVII MMMV DCXXV MMDCCCXCV DCVIII MMCDLXVI MCXXVIII MDCCXCVIII MMDCLX MMMDCCLXIV MMCDLXXVII MMDLXXXIIII MMMMCCCXXII MMMDCCCXLIIII DCCCCLXVII MMMCLXXXXIII MCCXV MMMMDCXI MMMMDCLXXXXV MMMCCCLII MMCMIX MMDCCXXV MMDLXXXVI MMMMDCXXVIIII DCCCCXXXVIIII MMCCXXXIIII MMDCCLXXVIII MDCCLXVIIII MMCCLXXXV MMMMDCCCLXXXVIII MMCMXCI MDXLII MMMMDCCXIV MMMMLI DXXXXIII MMDCCXI MMMMCCLXXXIII MMMDCCCLXXIII MDCLVII MMCD MCCCXXVII MMMMDCCIIII MMMDCCXLVI MMMCLXXXVII MMMCCVIIII MCCCCLXXIX DL DCCCLXXVI MMDXCI MMMMDCCCCXXXVI MMCII MMMDCCCXXXXV MMMCDXLV MMDCXXXXIV MMD MDCCCLXXXX MMDCXLIII MMCCXXXII MMDCXXXXVIIII DCCCLXXI MDXCVIIII MMMMCCLXXVIII MDCLVIIII MMMCCCLXXXIX MDCLXXXV MDLVIII MMMMCCVII MMMMDCXIV MMMCCCLXIIII MMIIII MMMMCCCLXXIII CCIII MMMCCLV MMMDXIII MMMCCCXC MMMDCCCXXI MMMMCCCCXXXII CCCLVI MMMCCCLXXXVI MXVIIII MMMCCCCXIIII CLXVII MMMCCLXX CCCCLXIV MMXXXXII MMMMCCLXXXX MXL CCXVI CCCCLVIIII MMCCCII MCCCLVIII MMMMCCCX MCDLXXXXIV MDCCCXIII MMDCCCXL MMMMCCCXXIII DXXXIV CVI MMMMDCLXXX DCCCVII MMCMLXIIII MMMDCCCXXXIII DCCC MDIII MMCCCLXVI MMMCCCCLXXI MMDCCCCXVIII CCXXXVII CCCXXV MDCCCXII MMMCMV MMMMCMXV MMMMDCXCI DXXI MMCCXLVIIII MMMMCMLII MDLXXX MMDCLXVI CXXI MMMDCCCLIIII MMMCXXI MCCIII MMDCXXXXI CCXCII MMMMDXXXV MMMCCCLXV MMMMDLXV MMMCCCCXXXII MMMCCCVIII DCCCCLXXXXII MMCLXIV MMMMCXI MLXXXXVII MMMCDXXXVIII MDXXII MLV MMMMDLXVI MMMCXII XXXIII MMMMDCCCXXVI MMMLXVIIII MMMLX MMMCDLXVII MDCCCLVII MMCXXXVII MDCCCCXXX MMDCCCLXIII MMMMDCXLIX MMMMCMXLVIII DCCCLXXVIIII MDCCCLIII MMMCMLXI MMMMCCLXI MMDCCCLIII MMMDCCCVI MMDXXXXIX MMCLXXXXV MMDXXX MMMXIII DCLXXIX 
DCCLXII MMMMDCCLXVIII MDCCXXXXIII CCXXXII MMMMDCXXV MMMCCCXXVIII MDCVIII MMMCLXXXXIIII CLXXXI MDCCCCXXXIII MMMMDCXXX MMMDCXXIV MMMCCXXXVII MCCCXXXXIIII CXVIII MMDCCCCIV MMMMCDLXXV MMMDLXIV MDXCIII MCCLXXXI MMMDCCCXXIV MCXLIII MMMDCCCI MCCLXXX CCXV MMDCCLXXI MMDLXXXIII MMMMDCXVII MMMCMLXV MCLXVIII MMMMCCLXXVI MMMDCCLXVIIII MMMMDCCCIX DLXXXXIX DCCCXXII MMMMIII MMMMCCCLXXVI DCCCXCIII DXXXI MXXXIIII CCXII MMMDCCLXXXIIII MMMCXX MMMCMXXVII DCCCXXXX MMCDXXXVIIII MMMMDCCXVIII LV MMMDCCCCVI MCCCII MMCMLXVIIII MDCCXI MMMMDLXVII MMCCCCLXI MMDCCV MMMCCCXXXIIII MMMMDI MMMDCCCXCV MMDCCLXXXXI MMMDXXVI MMMDCCCLVI MMDCXXX MCCCVII MMMMCCCLXII MMMMXXV MMCMXXV MMLVI MMDXXX MMMMCVII MDC MCCIII MMMMDCC MMCCLXXV MMDCCCXXXXVI MMMMCCCLXV CDXIIII MLXIIII CCV MMMCMXXXI CCCCLXVI MDXXXII MMMMCCCLVIII MMV MMMCLII MCMLI MMDCCXX MMMMCCCCXXXVI MCCLXXXI MMMCMVI DCCXXX MMMMCCCLXV DCCCXI MMMMDCCCXIV CCCXXI MMDLXXV CCCCLXXXX MCCCLXXXXII MMDCIX DCCXLIIII DXIV MMMMCLII CDLXI MMMCXXVII MMMMDCCCCLXIII MMMDCLIIII MCCCCXXXXII MMCCCLX CCCCLIII MDCCLXXVI MCMXXIII MMMMDLXXVIII MMDCCCCLX MMMCCCLXXXX MMMCDXXVI MMMDLVIII CCCLXI MMMMDCXXII MMDCCCXXI MMDCCXIII MMMMCLXXXVI MDCCCCXXVI MDV MMDCCCCLXXVI MMMMCCXXXVII MMMDCCLXXVIIII MMMCCCCLXVII DCCXLI MMCLXXXVIII MCCXXXVI MMDCXLVIII MMMMCXXXII MMMMDCCLXVI MMMMCMLI MMMMCLXV MMMMDCCCXCIV MCCLXXVII LXXVIIII DCCLII MMMCCCXCVI MMMCLV MMDCCCXXXXVIII DCCCXV MXC MMDCCLXXXXVII MMMMCML MMDCCCLXXVIII DXXI MCCCXLI DCLXXXXI MMCCCLXXXXVIII MDCCCCLXXVIII MMMMDXXV MMMDCXXXVI MMMCMXCVII MMXVIIII MMMDCCLXXIV MMMCXXV DXXXVIII MMMMCLXVI MDXII MMCCCLXX CCLXXI DXIV MMMCLIII DLII MMMCCCXLIX MMCCCCXXVI MMDCXLIII MXXXXII CCCLXXXV MDCLXXVI MDCXII MMMCCCLXXXIII MMDCCCCLXXXII MMMMCCCLXXXV MMDCXXI DCCCXXX MMMDCCCCLII MMMDCCXXII MMMMCDXCVIII MMMCCLXVIIII MMXXV MMMMCDXIX MMMMCCCX MMMCCCCLXVI MMMMDCLXXVIIII MMMMDCXXXXIV MMMCMXII MMMMXXXIII MMMMDLXXXII DCCCLIV MDXVIIII MMMCLXXXXV CCCCXX MMDIX MMCMLXXXVIII DCCXLIII DCCLX D MCCCVII MMMMCCCLXXXIII MDCCCLXXIIII MMMDCCCCLXXXVII MMMMCCCVII MMMDCCLXXXXVI CDXXXIV MCCLXVIII MMMMDLX MMMMDXII MMMMCCCCLIIII MCMLXXXXIII MMMMDCCCIII MMDCLXXXIII MDCCCXXXXIV XXXXVII MMMDCCCXXXII MMMDCCCXLII MCXXXV MDCXXVIIII MMMCXXXXIIII MMMMCDXVII MMMDXXIII MMMMCCCCLXI DCLXXXXVIIII LXXXXI CXXXIII MCDX MCCLVII MDCXXXXII MMMCXXIV MMMMLXXXX MMDCCCCXLV MLXXX MMDCCCCLX MCDLIII MMMCCCLXVII MMMMCCCLXXIV MMMDCVIII DCCCCXXIII MMXCI MMDCCIV MMMMDCCCXXXIV CCCLXXI MCCLXXXII MCMIII CCXXXI DCCXXXVIII MMMMDCCXLVIIII MMMMCMXXXV DCCCLXXV DCCXCI MMMMDVII MMMMDCCCLXVIIII CCCXCV MMMMDCCXX MCCCCII MMMCCCXC MMMCCCII MMDCCLXXVII MMDCLIIII CCXLIII MMMDCXVIII MMMCCCIX MCXV MMCCXXV MLXXIIII MDCCXXVI MMMCCCXX MMDLXX MMCCCCVI MMDCCXX MMMMDCCCCXCV MDCCCXXXII MMMMDCCCCXXXX XCIV MMCCCCLX MMXVII MLXXI MMMDXXVIII MDCCCCII MMMCMLVII MMCLXXXXVIII MDCCCCLV MCCCCLXXIIII MCCCLII MCDXLVI MMMMDXVIII DCCLXXXIX MMMDCCLXIV MDCCCCXLIII CLXXXXV MMMMCCXXXVI MMMDCCCXXI MMMMCDLXXVII MCDLIII MMCCXLVI DCCCLV MCDLXX DCLXXVIII MMDCXXXIX MMMMDCLX MMDCCLI MMCXXXV MMMCCXII MMMMCMLXII MMMMCCV MCCCCLXIX MMMMCCIII CLXVII MCCCLXXXXIIII MMMMDCVIII MMDCCCLXI MMLXXIX CMLXIX MMDCCCXLVIIII DCLXII MMMCCCXLVII MDCCCXXXV MMMMDCCXCVI DCXXX XXVI MMLXIX MMCXI DCXXXVII MMMMCCCXXXXVIII MMMMDCLXI MMMMDCLXXIIII MMMMVIII MMMMDCCCLXII MDCXCI MMCCCXXIIII CCCCXXXXV MMDCCCXXI MCVI MMDCCLXVIII MMMMCXL MLXVIII CMXXVII CCCLV MDCCLXXXIX MMMCCCCLXV MMDCCLXII MDLXVI MMMCCCXVIII MMMMCCLXXXI MMCXXVII MMDCCCLXVIII MMMCXCII MMMMDCLVIII MMMMDCCCXXXXII MMDCCCCLXXXXVI MDCCXL MDCCLVII MMMMDCCCLXXXVI DCCXXXIII MMMMDCCCCLXXXV MMCCXXXXVIII MMMCCLXXVIII MMMDCLXXVIII DCCCI MMMMLXXXXVIIII MMMCCCCLXXII 
MMCLXXXVII CCLXVI MCDXLIII MMCXXVIII MDXIV CCCXCVIII CLXXVIII MMCXXXXVIIII MMMDCLXXXIV CMLVIII MCDLIX MMMMDCCCXXXII MMMMDCXXXIIII MDCXXI MMMDCXLV MCLXXVIII MCDXXII IV MCDLXXXXIII MMMMDCCLXV CCLI MMMMDCCCXXXVIII DCLXII MCCCLXVII MMMMDCCCXXXVI MMDCCXLI MLXI MMMCDLXVIII MCCCCXCIII XXXIII MMMDCLXIII MMMMDCL DCCCXXXXIIII MMDLVII DXXXVII MCCCCXXIIII MCVII MMMMDCCXL MMMMCXXXXIIII MCCCCXXIV MMCLXVIII MMXCIII MDCCLXXX MCCCLIIII MMDCLXXI MXI MCMLIV MMMCCIIII DCCLXXXVIIII MDCLIV MMMDCXIX CMLXXXI DCCLXXXVII XXV MMMXXXVI MDVIIII CLXIII MMMCDLVIIII MMCCCCVII MMMLXX MXXXXII MMMMCCCLXVIII MMDCCCXXVIII MMMMDCXXXXI MMMMDCCCXXXXV MMMXV MMMMCCXVIIII MMDCCXIIII MMMXXVII MDCCLVIIII MMCXXIIII MCCCLXXIV DCLVIII MMMLVII MMMCXLV MMXCVII MMMCCCLXXXVII MMMMCCXXII DXII MMMDLV MCCCLXXVIII MMMCLIIII MMMMCLXXXX MMMCLXXXIIII MDCXXIII MMMMCCXVI MMMMDLXXXIII MMMDXXXXIII MMMMCCCCLV MMMDLXXXI MMMCCLXXVI MMMMXX MMMMDLVI MCCCCLXXX MMMXXII MMXXII MMDCCCCXXXI MMMDXXV MMMDCLXXXVIIII MMMDLXXXXVII MDLXIIII CMXC MMMXXXVIII MDLXXXVIII MCCCLXXVI MMCDLIX MMDCCCXVIII MDCCCXXXXVI MMMMCMIV MMMMDCIIII MMCCXXXV XXXXVI MMMMCCXVII MMCCXXIV MCMLVIIII MLXXXIX MMMMLXXXIX CLXXXXIX MMMDCCCCLVIII MMMMCCLXXIII MCCCC DCCCLIX MMMCCCLXXXII MMMCCLXVIIII MCLXXXV CDLXXXVII DCVI MMX MMCCXIII MMMMDCXX MMMMXXVIII DCCCLXII MMMMCCCXLIII MMMMCLXV DXCI MMMMCLXXX MMMDCCXXXXI MMMMXXXXVI DCLX MMMCCCXI MCCLXXX MMCDLXXII DCCLXXI MMMCCCXXXVI MCCCCLXXXVIIII CDLVIII DCCLVI MMMMDCXXXVIII MMCCCLXXXIII MMMMDCCLXXV MMMXXXVI CCCLXXXXIX CV CCCCXIII CCCCXVI MDCCCLXXXIIII MMDCCLXXXII MMMMCCCCLXXXI MXXV MMCCCLXXVIIII MMMCCXII MMMMCCXXXIII MMCCCLXXXVI MMMDCCCLVIIII MCCXXXVII MDCLXXV XXXV MMDLI MMMCCXXX MMMMCXXXXV CCCCLIX MMMMDCCCLXXIII MMCCCXVII DCCCXVI MMMCCCXXXXV MDCCCCXCV CLXXXI MMMMDCCLXX MMMDCCCIII MMCLXXVII MMMDCCXXIX MMDCCCXCIIII MMMCDXXIIII MMMMXXVIII MMMMDCCCCLXVIII MDCCCXX MMMMCDXXI MMMMDLXXXIX CCXVI MDVIII MMCCLXXI MMMDCCCLXXI MMMCCCLXXVI MMCCLXI MMMMDCCCXXXIV DLXXXVI MMMMDXXXII MMMXXIIII MMMMCDIV MMMMCCCXLVIII MMMMCXXXVIII MMMCCCLXVI MDCCXVIII MMCXX CCCLIX MMMMDCCLXXII MDCCCLXXV MMMMDCCCXXIV DCCCXXXXVIII MMMDCCCCXXXVIIII MMMMCCXXXV MDCLXXXIII MMCCLXXXIV MCLXXXXIIII DXXXXIII MCCCXXXXVIII MMCLXXIX MMMMCCLXIV MXXII MMMCXIX MDCXXXVII MMDCCVI MCLXXXXVIII MMMCXVI MCCCLX MMMCDX CCLXVIIII MMMCCLX MCXXVIII LXXXII MCCCCLXXXI MMMI MMMCCCLXIV MMMCCCXXVIIII CXXXVIII MMCCCXX MMMCCXXVIIII MCCLXVI MMMCCCCXXXXVI MMDCCXCIX MCMLXXI MMCCLXVIII CDLXXXXIII MMMMDCCXXII MMMMDCCLXXXVII MMMDCCLIV MMCCLXIII MDXXXVII DCCXXXIIII MCII MMMDCCCLXXI MMMLXXIII MDCCCLIII MMXXXVIII MDCCXVIIII MDCCCCXXXVII MMCCCXVI MCMXXII MMMCCCLVIII MMMMDCCCXX MCXXIII MMMDLXI MMMMDXXII MDCCCX MMDXCVIIII MMMDCCCCVIII MMMMDCCCCXXXXVI MMDCCCXXXV MMCXCIV MCMLXXXXIII MMMCCCLXXVI MMMMDCLXXXV CMLXIX DCXCII MMXXVIII MMMMCCCXXX XXXXVIIII
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
""" Calculates buoyant force on object submerged within static fluid. Discovered by greek mathematician, Archimedes. The principle is named after him. Equation for calculating buoyant force: Fb = ρ * V * g Source: - https://en.wikipedia.org/wiki/Archimedes%27_principle """ # Acceleration Constant on Earth (unit m/s^2) g = 9.80665 def archimedes_principle( fluid_density: float, volume: float, gravity: float = g ) -> float: """ Args: fluid_density: density of fluid (kg/m^3) volume: volume of object / liquid being displaced by object gravity: Acceleration from gravity. Gravitational force on system, Default is Earth Gravity returns: buoyant force on object in Newtons >>> archimedes_principle(fluid_density=997, volume=0.5, gravity=9.8) 4885.3 >>> archimedes_principle(fluid_density=997, volume=0.7) 6844.061035 """ if fluid_density <= 0: raise ValueError("Impossible fluid density") if volume < 0: raise ValueError("Impossible Object volume") if gravity <= 0: raise ValueError("Impossible Gravity") return fluid_density * gravity * volume if __name__ == "__main__": import doctest # run doctest doctest.testmod()
""" Calculates buoyant force on object submerged within static fluid. Discovered by greek mathematician, Archimedes. The principle is named after him. Equation for calculating buoyant force: Fb = ρ * V * g Source: - https://en.wikipedia.org/wiki/Archimedes%27_principle """ # Acceleration Constant on Earth (unit m/s^2) g = 9.80665 def archimedes_principle( fluid_density: float, volume: float, gravity: float = g ) -> float: """ Args: fluid_density: density of fluid (kg/m^3) volume: volume of object / liquid being displaced by object gravity: Acceleration from gravity. Gravitational force on system, Default is Earth Gravity returns: buoyant force on object in Newtons >>> archimedes_principle(fluid_density=997, volume=0.5, gravity=9.8) 4885.3 >>> archimedes_principle(fluid_density=997, volume=0.7) 6844.061035 """ if fluid_density <= 0: raise ValueError("Impossible fluid density") if volume < 0: raise ValueError("Impossible Object volume") if gravity <= 0: raise ValueError("Impossible Gravity") return fluid_density * gravity * volume if __name__ == "__main__": import doctest # run doctest doctest.testmod()
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
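A minimal usage sketch for the buoyancy formula Fb = ρ * V * g in the record above; the function name, water density, and volume below are illustrative assumptions, not values from the PR.

def buoyant_force(rho: float, volume: float, g: float = 9.80665) -> float:
    # Fb = rho * V * g, as in the archimedes_principle record above
    return rho * volume * g


# Fresh water (~997 kg/m^3) displacing 0.5 m^3
print(buoyant_force(997, 0.5))  # ~4888.6 N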
""" This algorithm was created for sdbm (a public-domain reimplementation of ndbm) database library. It was found to do well in scrambling bits, causing better distribution of the keys and fewer splits. It also happens to be a good general hashing function with good distribution. The actual function (pseudo code) is: for i in i..len(str): hash(i) = hash(i - 1) * 65599 + str[i]; What is included below is the faster version used in gawk. [there is even a faster, duff-device version] The magic constant 65599 was picked out of thin air while experimenting with different constants. It turns out to be a prime. This is one of the algorithms used in berkeley db (see sleepycat) and elsewhere. source: http://www.cse.yorku.ca/~oz/hash.html """ def sdbm(plain_text: str) -> int: """ Function implements sdbm hash, easy to use, great for bits scrambling. iterates over each character in the given string and applies function to each of them. >>> sdbm('Algorithms') 1462174910723540325254304520539387479031000036 >>> sdbm('scramble bits') 730247649148944819640658295400555317318720608290373040936089 """ hash_value = 0 for plain_chr in plain_text: hash_value = ( ord(plain_chr) + (hash_value << 6) + (hash_value << 16) - hash_value ) return hash_value
""" This algorithm was created for sdbm (a public-domain reimplementation of ndbm) database library. It was found to do well in scrambling bits, causing better distribution of the keys and fewer splits. It also happens to be a good general hashing function with good distribution. The actual function (pseudo code) is: for i in i..len(str): hash(i) = hash(i - 1) * 65599 + str[i]; What is included below is the faster version used in gawk. [there is even a faster, duff-device version] The magic constant 65599 was picked out of thin air while experimenting with different constants. It turns out to be a prime. This is one of the algorithms used in berkeley db (see sleepycat) and elsewhere. source: http://www.cse.yorku.ca/~oz/hash.html """ def sdbm(plain_text: str) -> int: """ Function implements sdbm hash, easy to use, great for bits scrambling. iterates over each character in the given string and applies function to each of them. >>> sdbm('Algorithms') 1462174910723540325254304520539387479031000036 >>> sdbm('scramble bits') 730247649148944819640658295400555317318720608290373040936089 """ hash_value = 0 for plain_chr in plain_text: hash_value = ( ord(plain_chr) + (hash_value << 6) + (hash_value << 16) - hash_value ) return hash_value
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
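A small sketch verifying the shift identity behind the sdbm record above: (h << 6) + (h << 16) - h equals 64*h + 65536*h - h = 65599*h, so the shifted form and the plain multiply-by-65599 recurrence hash identically. The function names are illustrative.

def sdbm_multiply(text: str) -> int:
    # Plain recurrence: hash(i) = hash(i - 1) * 65599 + str[i]
    h = 0
    for ch in text:
        h = h * 65599 + ord(ch)
    return h


def sdbm_shifts(text: str) -> int:
    # Shifted form used in the record above: 64h + 65536h - h == 65599h
    h = 0
    for ch in text:
        h = ord(ch) + (h << 6) + (h << 16) - h
    return h


assert sdbm_multiply("Algorithms") == sdbm_shifts("Algorithms")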
def get_highest_set_bit_position(number: int) -> int: """ Returns position of the highest set bit of a number. Ref - https://graphics.stanford.edu/~seander/bithacks.html#IntegerLogObvious >>> get_highest_set_bit_position(25) 5 >>> get_highest_set_bit_position(37) 6 >>> get_highest_set_bit_position(1) 1 >>> get_highest_set_bit_position(4) 3 >>> get_highest_set_bit_position(0) 0 >>> get_highest_set_bit_position(0.8) Traceback (most recent call last): ... TypeError: Input value must be an 'int' type """ if not isinstance(number, int): raise TypeError("Input value must be an 'int' type") position = 0 while number: position += 1 number >>= 1 return position if __name__ == "__main__": import doctest doctest.testmod()
def get_highest_set_bit_position(number: int) -> int: """ Returns position of the highest set bit of a number. Ref - https://graphics.stanford.edu/~seander/bithacks.html#IntegerLogObvious >>> get_highest_set_bit_position(25) 5 >>> get_highest_set_bit_position(37) 6 >>> get_highest_set_bit_position(1) 1 >>> get_highest_set_bit_position(4) 3 >>> get_highest_set_bit_position(0) 0 >>> get_highest_set_bit_position(0.8) Traceback (most recent call last): ... TypeError: Input value must be an 'int' type """ if not isinstance(number, int): raise TypeError("Input value must be an 'int' type") position = 0 while number: position += 1 number >>= 1 return position if __name__ == "__main__": import doctest doctest.testmod()
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
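A quick cross-check sketch for the record above: for non-negative integers, the shift loop computes the same value as Python's built-in int.bit_length(). The loop is restated here so the snippet runs on its own.

def highest_set_bit_position(number: int) -> int:
    # Same loop as the record: count shifts until the number is exhausted
    position = 0
    while number:
        position += 1
        number >>= 1
    return position


for n in (0, 1, 4, 25, 37):
    assert highest_set_bit_position(n) == n.bit_length()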
""" Description : Newton's second law of motion pertains to the behavior of objects for which all existing forces are not balanced. The second law states that the acceleration of an object is dependent upon two variables - the net force acting upon the object and the mass of the object. The acceleration of an object depends directly upon the net force acting upon the object, and inversely upon the mass of the object. As the force acting upon an object is increased, the acceleration of the object is increased. As the mass of an object is increased, the acceleration of the object is decreased. Source: https://www.physicsclassroom.com/class/newtlaws/Lesson-3/Newton-s-Second-Law Formulation: Fnet = m β€’ a Diagrammatic Explanation: Forces are unbalanced | | | V There is acceleration /\ / \ / \ / \ / \ / \ / \ __________________ ____ ________________ |The acceleration | |The acceleration | |depends directly | |depends inversely | |on the net Force | |upon the object's | |_________________| |mass_______________| Units: 1 Newton = 1 kg X meters / (seconds^2) How to use? Inputs: ___________________________________________________ |Name | Units | Type | |-------------|-------------------------|-----------| |mass | (in kgs) | float | |-------------|-------------------------|-----------| |acceleration | (in meters/(seconds^2)) | float | |_____________|_________________________|___________| Output: ___________________________________________________ |Name | Units | Type | |-------------|-------------------------|-----------| |force | (in Newtons) | float | |_____________|_________________________|___________| """ def newtons_second_law_of_motion(mass: float, acceleration: float) -> float: """ >>> newtons_second_law_of_motion(10, 10) 100 >>> newtons_second_law_of_motion(2.0, 1) 2.0 """ force = float() try: force = mass * acceleration except Exception: return -0.0 return force if __name__ == "__main__": import doctest # run doctest doctest.testmod() # demo mass = 12.5 acceleration = 10 force = newtons_second_law_of_motion(mass, acceleration) print("The force is ", force, "N")
""" Description : Newton's second law of motion pertains to the behavior of objects for which all existing forces are not balanced. The second law states that the acceleration of an object is dependent upon two variables - the net force acting upon the object and the mass of the object. The acceleration of an object depends directly upon the net force acting upon the object, and inversely upon the mass of the object. As the force acting upon an object is increased, the acceleration of the object is increased. As the mass of an object is increased, the acceleration of the object is decreased. Source: https://www.physicsclassroom.com/class/newtlaws/Lesson-3/Newton-s-Second-Law Formulation: Fnet = m β€’ a Diagrammatic Explanation: Forces are unbalanced | | | V There is acceleration /\ / \ / \ / \ / \ / \ / \ __________________ ____ ________________ |The acceleration | |The acceleration | |depends directly | |depends inversely | |on the net Force | |upon the object's | |_________________| |mass_______________| Units: 1 Newton = 1 kg X meters / (seconds^2) How to use? Inputs: ___________________________________________________ |Name | Units | Type | |-------------|-------------------------|-----------| |mass | (in kgs) | float | |-------------|-------------------------|-----------| |acceleration | (in meters/(seconds^2)) | float | |_____________|_________________________|___________| Output: ___________________________________________________ |Name | Units | Type | |-------------|-------------------------|-----------| |force | (in Newtons) | float | |_____________|_________________________|___________| """ def newtons_second_law_of_motion(mass: float, acceleration: float) -> float: """ >>> newtons_second_law_of_motion(10, 10) 100 >>> newtons_second_law_of_motion(2.0, 1) 2.0 """ force = float() try: force = mass * acceleration except Exception: return -0.0 return force if __name__ == "__main__": import doctest # run doctest doctest.testmod() # demo mass = 12.5 acceleration = 10 force = newtons_second_law_of_motion(mass, acceleration) print("The force is ", force, "N")
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
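The record above returns -0.0 when the multiplication raises, which silently masks bad input. A hedged sketch of the same Fnet = m * a computation that validates instead; this is a design variation for illustration, not the repository's implementation.

def net_force(mass: float, acceleration: float) -> float:
    # Fnet = m * a; negative mass is rejected rather than masked as -0.0
    if mass < 0:
        raise ValueError("mass must be non-negative")
    return mass * acceleration


assert net_force(12.5, 10) == 125.0  # matches the demo values in the record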
""" Testing here assumes that numpy and linalg is ALWAYS correct!!!! If running from PyCharm you can place the following line in "Additional Arguments" for the pytest run configuration -vv -m mat_ops -p no:cacheprovider """ import logging # standard libraries import sys import numpy as np import pytest # type: ignore # Custom/local libraries from matrix import matrix_operation as matop mat_a = [[12, 10], [3, 9]] mat_b = [[3, 4], [7, 4]] mat_c = [[3, 0, 2], [2, 0, -2], [0, 1, 1]] mat_d = [[3, 0, -2], [2, 0, 2], [0, 1, 1]] mat_e = [[3, 0, 2], [2, 0, -2], [0, 1, 1], [2, 0, -2]] mat_f = [1] mat_h = [2] logger = logging.getLogger() logger.level = logging.DEBUG stream_handler = logging.StreamHandler(sys.stdout) logger.addHandler(stream_handler) @pytest.mark.mat_ops @pytest.mark.parametrize( ("mat1", "mat2"), [(mat_a, mat_b), (mat_c, mat_d), (mat_d, mat_e), (mat_f, mat_h)] ) def test_addition(mat1, mat2): if (np.array(mat1)).shape < (2, 2) or (np.array(mat2)).shape < (2, 2): with pytest.raises(TypeError): logger.info(f"\n\t{test_addition.__name__} returned integer") matop.add(mat1, mat2) elif (np.array(mat1)).shape == (np.array(mat2)).shape: logger.info(f"\n\t{test_addition.__name__} with same matrix dims") act = (np.array(mat1) + np.array(mat2)).tolist() theo = matop.add(mat1, mat2) assert theo == act else: with pytest.raises(ValueError): logger.info(f"\n\t{test_addition.__name__} with different matrix dims") matop.add(mat1, mat2) @pytest.mark.mat_ops @pytest.mark.parametrize( ("mat1", "mat2"), [(mat_a, mat_b), (mat_c, mat_d), (mat_d, mat_e), (mat_f, mat_h)] ) def test_subtraction(mat1, mat2): if (np.array(mat1)).shape < (2, 2) or (np.array(mat2)).shape < (2, 2): with pytest.raises(TypeError): logger.info(f"\n\t{test_subtraction.__name__} returned integer") matop.subtract(mat1, mat2) elif (np.array(mat1)).shape == (np.array(mat2)).shape: logger.info(f"\n\t{test_subtraction.__name__} with same matrix dims") act = (np.array(mat1) - np.array(mat2)).tolist() theo = matop.subtract(mat1, mat2) assert theo == act else: with pytest.raises(ValueError): logger.info(f"\n\t{test_subtraction.__name__} with different matrix dims") assert matop.subtract(mat1, mat2) @pytest.mark.mat_ops @pytest.mark.parametrize( ("mat1", "mat2"), [(mat_a, mat_b), (mat_c, mat_d), (mat_d, mat_e), (mat_f, mat_h)] ) def test_multiplication(mat1, mat2): if (np.array(mat1)).shape < (2, 2) or (np.array(mat2)).shape < (2, 2): logger.info(f"\n\t{test_multiplication.__name__} returned integer") with pytest.raises(TypeError): matop.add(mat1, mat2) elif (np.array(mat1)).shape == (np.array(mat2)).shape: logger.info(f"\n\t{test_multiplication.__name__} meets dim requirements") act = (np.matmul(mat1, mat2)).tolist() theo = matop.multiply(mat1, mat2) assert theo == act else: with pytest.raises(ValueError): logger.info( f"\n\t{test_multiplication.__name__} does not meet dim requirements" ) assert matop.subtract(mat1, mat2) @pytest.mark.mat_ops def test_scalar_multiply(): act = (3.5 * np.array(mat_a)).tolist() theo = matop.scalar_multiply(mat_a, 3.5) assert theo == act @pytest.mark.mat_ops def test_identity(): act = (np.identity(5)).tolist() theo = matop.identity(5) assert theo == act @pytest.mark.mat_ops @pytest.mark.parametrize("mat", [mat_a, mat_b, mat_c, mat_d, mat_e, mat_f]) def test_transpose(mat): if (np.array(mat)).shape < (2, 2): with pytest.raises(TypeError): logger.info(f"\n\t{test_transpose.__name__} returned integer") matop.transpose(mat) else: act = (np.transpose(mat)).tolist() theo = matop.transpose(mat, return_map=False) assert 
theo == act
""" Testing here assumes that numpy and linalg is ALWAYS correct!!!! If running from PyCharm you can place the following line in "Additional Arguments" for the pytest run configuration -vv -m mat_ops -p no:cacheprovider """ import logging # standard libraries import sys import numpy as np import pytest # type: ignore # Custom/local libraries from matrix import matrix_operation as matop mat_a = [[12, 10], [3, 9]] mat_b = [[3, 4], [7, 4]] mat_c = [[3, 0, 2], [2, 0, -2], [0, 1, 1]] mat_d = [[3, 0, -2], [2, 0, 2], [0, 1, 1]] mat_e = [[3, 0, 2], [2, 0, -2], [0, 1, 1], [2, 0, -2]] mat_f = [1] mat_h = [2] logger = logging.getLogger() logger.level = logging.DEBUG stream_handler = logging.StreamHandler(sys.stdout) logger.addHandler(stream_handler) @pytest.mark.mat_ops @pytest.mark.parametrize( ("mat1", "mat2"), [(mat_a, mat_b), (mat_c, mat_d), (mat_d, mat_e), (mat_f, mat_h)] ) def test_addition(mat1, mat2): if (np.array(mat1)).shape < (2, 2) or (np.array(mat2)).shape < (2, 2): with pytest.raises(TypeError): logger.info(f"\n\t{test_addition.__name__} returned integer") matop.add(mat1, mat2) elif (np.array(mat1)).shape == (np.array(mat2)).shape: logger.info(f"\n\t{test_addition.__name__} with same matrix dims") act = (np.array(mat1) + np.array(mat2)).tolist() theo = matop.add(mat1, mat2) assert theo == act else: with pytest.raises(ValueError): logger.info(f"\n\t{test_addition.__name__} with different matrix dims") matop.add(mat1, mat2) @pytest.mark.mat_ops @pytest.mark.parametrize( ("mat1", "mat2"), [(mat_a, mat_b), (mat_c, mat_d), (mat_d, mat_e), (mat_f, mat_h)] ) def test_subtraction(mat1, mat2): if (np.array(mat1)).shape < (2, 2) or (np.array(mat2)).shape < (2, 2): with pytest.raises(TypeError): logger.info(f"\n\t{test_subtraction.__name__} returned integer") matop.subtract(mat1, mat2) elif (np.array(mat1)).shape == (np.array(mat2)).shape: logger.info(f"\n\t{test_subtraction.__name__} with same matrix dims") act = (np.array(mat1) - np.array(mat2)).tolist() theo = matop.subtract(mat1, mat2) assert theo == act else: with pytest.raises(ValueError): logger.info(f"\n\t{test_subtraction.__name__} with different matrix dims") assert matop.subtract(mat1, mat2) @pytest.mark.mat_ops @pytest.mark.parametrize( ("mat1", "mat2"), [(mat_a, mat_b), (mat_c, mat_d), (mat_d, mat_e), (mat_f, mat_h)] ) def test_multiplication(mat1, mat2): if (np.array(mat1)).shape < (2, 2) or (np.array(mat2)).shape < (2, 2): logger.info(f"\n\t{test_multiplication.__name__} returned integer") with pytest.raises(TypeError): matop.add(mat1, mat2) elif (np.array(mat1)).shape == (np.array(mat2)).shape: logger.info(f"\n\t{test_multiplication.__name__} meets dim requirements") act = (np.matmul(mat1, mat2)).tolist() theo = matop.multiply(mat1, mat2) assert theo == act else: with pytest.raises(ValueError): logger.info( f"\n\t{test_multiplication.__name__} does not meet dim requirements" ) assert matop.subtract(mat1, mat2) @pytest.mark.mat_ops def test_scalar_multiply(): act = (3.5 * np.array(mat_a)).tolist() theo = matop.scalar_multiply(mat_a, 3.5) assert theo == act @pytest.mark.mat_ops def test_identity(): act = (np.identity(5)).tolist() theo = matop.identity(5) assert theo == act @pytest.mark.mat_ops @pytest.mark.parametrize("mat", [mat_a, mat_b, mat_c, mat_d, mat_e, mat_f]) def test_transpose(mat): if (np.array(mat)).shape < (2, 2): with pytest.raises(TypeError): logger.info(f"\n\t{test_transpose.__name__} returned integer") matop.transpose(mat) else: act = (np.transpose(mat)).tolist() theo = matop.transpose(mat, return_map=False) assert 
theo == act
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
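The test module above pairs pytest.raises with numpy as an oracle. A self-contained sketch of that pattern, with the matop dependency replaced by a local pure-Python add() so the snippet runs on its own (the local function is an assumption, not the repository's matrix_operation module):

import numpy as np
import pytest


def add(mat1, mat2):
    # Local stand-in for matrix.matrix_operation.add
    if len(mat1) != len(mat2) or len(mat1[0]) != len(mat2[0]):
        raise ValueError("matrix dimensions differ")
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(mat1, mat2)]


@pytest.mark.parametrize(
    ("mat1", "mat2"), [([[12, 10], [3, 9]], [[3, 4], [7, 4]])]
)
def test_add_matches_numpy(mat1, mat2):
    # numpy computes the expected result; the local code must agree
    assert add(mat1, mat2) == (np.array(mat1) + np.array(mat2)).tolist()


def test_add_rejects_mismatched_dims():
    with pytest.raises(ValueError):
        add([[1, 2]], [[1, 2], [3, 4]])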
""" Project Euler Problem 3: https://projecteuler.net/problem=3 Largest prime factor The prime factors of 13195 are 5, 7, 13 and 29. What is the largest prime factor of the number 600851475143? References: - https://en.wikipedia.org/wiki/Prime_number#Unique_factorization """ def solution(n: int = 600851475143) -> int: """ Returns the largest prime factor of a given number n. >>> solution(13195) 29 >>> solution(10) 5 >>> solution(17) 17 >>> solution(3.4) 3 >>> solution(0) Traceback (most recent call last): ... ValueError: Parameter n must be greater than or equal to one. >>> solution(-17) Traceback (most recent call last): ... ValueError: Parameter n must be greater than or equal to one. >>> solution([]) Traceback (most recent call last): ... TypeError: Parameter n must be int or castable to int. >>> solution("asd") Traceback (most recent call last): ... TypeError: Parameter n must be int or castable to int. """ try: n = int(n) except (TypeError, ValueError): raise TypeError("Parameter n must be int or castable to int.") if n <= 0: raise ValueError("Parameter n must be greater than or equal to one.") i = 2 ans = 0 if n == 2: return 2 while n > 2: while n % i != 0: i += 1 ans = i while n % i == 0: n = n // i i += 1 return int(ans) if __name__ == "__main__": print(f"{solution() = }")
""" Project Euler Problem 3: https://projecteuler.net/problem=3 Largest prime factor The prime factors of 13195 are 5, 7, 13 and 29. What is the largest prime factor of the number 600851475143? References: - https://en.wikipedia.org/wiki/Prime_number#Unique_factorization """ def solution(n: int = 600851475143) -> int: """ Returns the largest prime factor of a given number n. >>> solution(13195) 29 >>> solution(10) 5 >>> solution(17) 17 >>> solution(3.4) 3 >>> solution(0) Traceback (most recent call last): ... ValueError: Parameter n must be greater than or equal to one. >>> solution(-17) Traceback (most recent call last): ... ValueError: Parameter n must be greater than or equal to one. >>> solution([]) Traceback (most recent call last): ... TypeError: Parameter n must be int or castable to int. >>> solution("asd") Traceback (most recent call last): ... TypeError: Parameter n must be int or castable to int. """ try: n = int(n) except (TypeError, ValueError): raise TypeError("Parameter n must be int or castable to int.") if n <= 0: raise ValueError("Parameter n must be greater than or equal to one.") i = 2 ans = 0 if n == 2: return 2 while n > 2: while n % i != 0: i += 1 ans = i while n % i == 0: n = n // i i += 1 return int(ans) if __name__ == "__main__": print(f"{solution() = }")
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
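The solution above divides out each factor by trial division; an equivalent sketch that also stops once i * i exceeds n, kept separate from the repository's solution:

def largest_prime_factor(n: int) -> int:
    # Divide out each factor; once i * i > n, the remaining n is prime
    i = 2
    while i * i <= n:
        if n % i:
            i += 1
        else:
            n //= i
    return n


assert largest_prime_factor(13195) == 29
assert largest_prime_factor(600851475143) == 6857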
0000000000000000000000000000000000000000 9caf4784aada17dc75348f77cc8c356df503c0f3 jupyter <[email protected]> 1704811012 +0000 clone: from https://github.com/TheAlgorithms/Python.git
0000000000000000000000000000000000000000 9caf4784aada17dc75348f77cc8c356df503c0f3 jupyter <[email protected]> 1704811012 +0000 clone: from https://github.com/TheAlgorithms/Python.git
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
""" Calculate sin function. It's not a perfect function so I am rounding the result to 10 decimal places by default. Formula: sin(x) = x - x^3/3! + x^5/5! - x^7/7! + ... Where: x = angle in randians. Source: https://www.homeschoolmath.net/teaching/sine_calculator.php """ from math import factorial, radians def sin( angle_in_degrees: float, accuracy: int = 18, rounded_values_count: int = 10 ) -> float: """ Implement sin function. >>> sin(0.0) 0.0 >>> sin(90.0) 1.0 >>> sin(180.0) 0.0 >>> sin(270.0) -1.0 >>> sin(0.68) 0.0118679603 >>> sin(1.97) 0.0343762121 >>> sin(64.0) 0.8987940463 >>> sin(9999.0) -0.9876883406 >>> sin(-689.0) 0.5150380749 >>> sin(89.7) 0.9999862922 """ # Simplify the angle to be between 360 and -360 degrees. angle_in_degrees = angle_in_degrees - ((angle_in_degrees // 360.0) * 360.0) # Converting from degrees to radians angle_in_radians = radians(angle_in_degrees) result = angle_in_radians a = 3 b = -1 for _ in range(accuracy): result += (b * (angle_in_radians**a)) / factorial(a) b = -b # One positive term and the next will be negative and so on... a += 2 # Increased by 2 for every term. return round(result, rounded_values_count) if __name__ == "__main__": __import__("doctest").testmod()
""" Calculate sin function. It's not a perfect function so I am rounding the result to 10 decimal places by default. Formula: sin(x) = x - x^3/3! + x^5/5! - x^7/7! + ... Where: x = angle in randians. Source: https://www.homeschoolmath.net/teaching/sine_calculator.php """ from math import factorial, radians def sin( angle_in_degrees: float, accuracy: int = 18, rounded_values_count: int = 10 ) -> float: """ Implement sin function. >>> sin(0.0) 0.0 >>> sin(90.0) 1.0 >>> sin(180.0) 0.0 >>> sin(270.0) -1.0 >>> sin(0.68) 0.0118679603 >>> sin(1.97) 0.0343762121 >>> sin(64.0) 0.8987940463 >>> sin(9999.0) -0.9876883406 >>> sin(-689.0) 0.5150380749 >>> sin(89.7) 0.9999862922 """ # Simplify the angle to be between 360 and -360 degrees. angle_in_degrees = angle_in_degrees - ((angle_in_degrees // 360.0) * 360.0) # Converting from degrees to radians angle_in_radians = radians(angle_in_degrees) result = angle_in_radians a = 3 b = -1 for _ in range(accuracy): result += (b * (angle_in_radians**a)) / factorial(a) b = -b # One positive term and the next will be negative and so on... a += 2 # Increased by 2 for every term. return round(result, rounded_values_count) if __name__ == "__main__": __import__("doctest").testmod()
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
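A cross-check sketch for the Taylor series in the record above against math.sin; the term count of 18 mirrors the record's default accuracy, and the angle reduction uses the equivalent Python modulo:

import math


def taylor_sin(angle_in_degrees: float, terms: int = 18) -> float:
    # sin(x) = x - x^3/3! + x^5/5! - x^7/7! + ..., after reducing the angle
    x = math.radians(angle_in_degrees % 360.0)
    result, sign, power = x, -1.0, 3
    for _ in range(terms):
        result += sign * x**power / math.factorial(power)
        sign, power = -sign, power + 2
    return result


assert abs(taylor_sin(64.0) - math.sin(math.radians(64.0))) < 1e-9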
""" This is a Python implementation for questions involving task assignments between people. Here Bitmasking and DP are used for solving this. Question :- We have N tasks and M people. Each person in M can do only certain of these tasks. Also a person can do only one task and a task is performed only by one person. Find the total no of ways in which the tasks can be distributed. """ from collections import defaultdict class AssignmentUsingBitmask: def __init__(self, task_performed, total): self.total_tasks = total # total no of tasks (N) # DP table will have a dimension of (2^M)*N # initially all values are set to -1 self.dp = [ [-1 for i in range(total + 1)] for j in range(2 ** len(task_performed)) ] self.task = defaultdict(list) # stores the list of persons for each task # final_mask is used to check if all persons are included by setting all bits # to 1 self.final_mask = (1 << len(task_performed)) - 1 def count_ways_until(self, mask, task_no): # if mask == self.finalmask all persons are distributed tasks, return 1 if mask == self.final_mask: return 1 # if not everyone gets the task and no more tasks are available, return 0 if task_no > self.total_tasks: return 0 # if case already considered if self.dp[mask][task_no] != -1: return self.dp[mask][task_no] # Number of ways when we don't this task in the arrangement total_ways_util = self.count_ways_until(mask, task_no + 1) # now assign the tasks one by one to all possible persons and recursively # assign for the remaining tasks. if task_no in self.task: for p in self.task[task_no]: # if p is already given a task if mask & (1 << p): continue # assign this task to p and change the mask value. And recursively # assign tasks with the new mask value. total_ways_util += self.count_ways_until(mask | (1 << p), task_no + 1) # save the value. self.dp[mask][task_no] = total_ways_util return self.dp[mask][task_no] def count_no_of_ways(self, task_performed): # Store the list of persons for each task for i in range(len(task_performed)): for j in task_performed[i]: self.task[j].append(i) # call the function to fill the DP table, final answer is stored in dp[0][1] return self.count_ways_until(0, 1) if __name__ == "__main__": total_tasks = 5 # total no of tasks (the value of N) # the list of tasks that can be done by M persons. task_performed = [[1, 3, 4], [1, 2, 5], [3, 4]] print( AssignmentUsingBitmask(task_performed, total_tasks).count_no_of_ways( task_performed ) ) """ For the particular example the tasks can be distributed as (1,2,3), (1,2,4), (1,5,3), (1,5,4), (3,1,4), (3,2,4), (3,5,4), (4,1,3), (4,2,3), (4,5,3) total 10 """
""" This is a Python implementation for questions involving task assignments between people. Here Bitmasking and DP are used for solving this. Question :- We have N tasks and M people. Each person in M can do only certain of these tasks. Also a person can do only one task and a task is performed only by one person. Find the total no of ways in which the tasks can be distributed. """ from collections import defaultdict class AssignmentUsingBitmask: def __init__(self, task_performed, total): self.total_tasks = total # total no of tasks (N) # DP table will have a dimension of (2^M)*N # initially all values are set to -1 self.dp = [ [-1 for i in range(total + 1)] for j in range(2 ** len(task_performed)) ] self.task = defaultdict(list) # stores the list of persons for each task # final_mask is used to check if all persons are included by setting all bits # to 1 self.final_mask = (1 << len(task_performed)) - 1 def count_ways_until(self, mask, task_no): # if mask == self.finalmask all persons are distributed tasks, return 1 if mask == self.final_mask: return 1 # if not everyone gets the task and no more tasks are available, return 0 if task_no > self.total_tasks: return 0 # if case already considered if self.dp[mask][task_no] != -1: return self.dp[mask][task_no] # Number of ways when we don't this task in the arrangement total_ways_util = self.count_ways_until(mask, task_no + 1) # now assign the tasks one by one to all possible persons and recursively # assign for the remaining tasks. if task_no in self.task: for p in self.task[task_no]: # if p is already given a task if mask & (1 << p): continue # assign this task to p and change the mask value. And recursively # assign tasks with the new mask value. total_ways_util += self.count_ways_until(mask | (1 << p), task_no + 1) # save the value. self.dp[mask][task_no] = total_ways_util return self.dp[mask][task_no] def count_no_of_ways(self, task_performed): # Store the list of persons for each task for i in range(len(task_performed)): for j in task_performed[i]: self.task[j].append(i) # call the function to fill the DP table, final answer is stored in dp[0][1] return self.count_ways_until(0, 1) if __name__ == "__main__": total_tasks = 5 # total no of tasks (the value of N) # the list of tasks that can be done by M persons. task_performed = [[1, 3, 4], [1, 2, 5], [3, 4]] print( AssignmentUsingBitmask(task_performed, total_tasks).count_no_of_ways( task_performed ) ) """ For the particular example the tasks can be distributed as (1,2,3), (1,2,4), (1,5,3), (1,5,4), (3,1,4), (3,2,4), (3,5,4), (4,1,3), (4,2,3), (4,5,3) total 10 """
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
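The DP above leans on three mask operations: building the all-assigned mask, testing whether a person is taken, and marking a person as assigned. A sketch of them in isolation, with an illustrative three-person example:

people = 3
final_mask = (1 << people) - 1  # 0b111: every person has a task
mask = 0b101                    # persons 0 and 2 already assigned

assert final_mask == 7
assert mask & (1 << 2)                 # person 2 is taken, so skip them
assert not mask & (1 << 1)             # person 1 is still free
assert mask | (1 << 1) == final_mask   # assigning person 1 completes the mask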
from __future__ import annotations class Node: """ A Node has data variable and pointers to Nodes to its left and right. """ def __init__(self, data: int) -> None: self.data = data self.left: Node | None = None self.right: Node | None = None def display(tree: Node | None) -> None: # In Order traversal of the tree """ >>> root = Node(1) >>> root.left = Node(0) >>> root.right = Node(2) >>> display(root) 0 1 2 >>> display(root.right) 2 """ if tree: display(tree.left) print(tree.data) display(tree.right) def depth_of_tree(tree: Node | None) -> int: """ Recursive function that returns the depth of a binary tree. >>> root = Node(0) >>> depth_of_tree(root) 1 >>> root.left = Node(0) >>> depth_of_tree(root) 2 >>> root.right = Node(0) >>> depth_of_tree(root) 2 >>> root.left.right = Node(0) >>> depth_of_tree(root) 3 >>> depth_of_tree(root.left) 2 """ return 1 + max(depth_of_tree(tree.left), depth_of_tree(tree.right)) if tree else 0 def is_full_binary_tree(tree: Node) -> bool: """ Returns True if this is a full binary tree >>> root = Node(0) >>> is_full_binary_tree(root) True >>> root.left = Node(0) >>> is_full_binary_tree(root) False >>> root.right = Node(0) >>> is_full_binary_tree(root) True >>> root.left.left = Node(0) >>> is_full_binary_tree(root) False >>> root.right.right = Node(0) >>> is_full_binary_tree(root) False """ if not tree: return True if tree.left and tree.right: return is_full_binary_tree(tree.left) and is_full_binary_tree(tree.right) else: return not tree.left and not tree.right def main() -> None: # Main function for testing. tree = Node(1) tree.left = Node(2) tree.right = Node(3) tree.left.left = Node(4) tree.left.right = Node(5) tree.left.right.left = Node(6) tree.right.left = Node(7) tree.right.left.left = Node(8) tree.right.left.left.right = Node(9) print(is_full_binary_tree(tree)) print(depth_of_tree(tree)) print("Tree is: ") display(tree) if __name__ == "__main__": main()
from __future__ import annotations class Node: """ A Node has data variable and pointers to Nodes to its left and right. """ def __init__(self, data: int) -> None: self.data = data self.left: Node | None = None self.right: Node | None = None def display(tree: Node | None) -> None: # In Order traversal of the tree """ >>> root = Node(1) >>> root.left = Node(0) >>> root.right = Node(2) >>> display(root) 0 1 2 >>> display(root.right) 2 """ if tree: display(tree.left) print(tree.data) display(tree.right) def depth_of_tree(tree: Node | None) -> int: """ Recursive function that returns the depth of a binary tree. >>> root = Node(0) >>> depth_of_tree(root) 1 >>> root.left = Node(0) >>> depth_of_tree(root) 2 >>> root.right = Node(0) >>> depth_of_tree(root) 2 >>> root.left.right = Node(0) >>> depth_of_tree(root) 3 >>> depth_of_tree(root.left) 2 """ return 1 + max(depth_of_tree(tree.left), depth_of_tree(tree.right)) if tree else 0 def is_full_binary_tree(tree: Node) -> bool: """ Returns True if this is a full binary tree >>> root = Node(0) >>> is_full_binary_tree(root) True >>> root.left = Node(0) >>> is_full_binary_tree(root) False >>> root.right = Node(0) >>> is_full_binary_tree(root) True >>> root.left.left = Node(0) >>> is_full_binary_tree(root) False >>> root.right.right = Node(0) >>> is_full_binary_tree(root) False """ if not tree: return True if tree.left and tree.right: return is_full_binary_tree(tree.left) and is_full_binary_tree(tree.right) else: return not tree.left and not tree.right def main() -> None: # Main function for testing. tree = Node(1) tree.left = Node(2) tree.right = Node(3) tree.left.left = Node(4) tree.left.right = Node(5) tree.left.right.left = Node(6) tree.right.left = Node(7) tree.right.left.left = Node(8) tree.right.left.left.right = Node(9) print(is_full_binary_tree(tree)) print(depth_of_tree(tree)) print("Tree is: ") display(tree) if __name__ == "__main__": main()
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
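An iterative counterpart sketch to the recursive display() in the record above, using an explicit stack for the in-order traversal; the tiny Node class is restated so the snippet runs on its own:

from __future__ import annotations


class Node:
    def __init__(self, data: int) -> None:
        self.data = data
        self.left = self.right = None


def inorder(root: Node | None) -> list[int]:
    # Left subtree, node, right subtree -- the same order display() prints
    stack, node, out = [], root, []
    while stack or node:
        while node:
            stack.append(node)
            node = node.left
        node = stack.pop()
        out.append(node.data)
        node = node.right
    return out


root = Node(1)
root.left, root.right = Node(0), Node(2)
assert inorder(root) == [0, 1, 2]  # matches the display() doctest above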
import webbrowser from sys import argv from urllib.parse import parse_qs, quote import requests from bs4 import BeautifulSoup from fake_useragent import UserAgent if __name__ == "__main__": query = "%20".join(argv[1:]) if len(argv) > 1 else quote(str(input("Search: "))) print("Googling.....") url = f"https://www.google.com/search?q={query}&num=100" res = requests.get( url, headers={"User-Agent": str(UserAgent().random)}, ) try: link = ( BeautifulSoup(res.text, "html.parser") .find("div", attrs={"class": "yuRUbf"}) .find("a") .get("href") ) except AttributeError: link = parse_qs( BeautifulSoup(res.text, "html.parser") .find("div", attrs={"class": "kCrYT"}) .find("a") .get("href") )["url"][0] webbrowser.open(link)
import webbrowser from sys import argv from urllib.parse import parse_qs, quote import requests from bs4 import BeautifulSoup from fake_useragent import UserAgent if __name__ == "__main__": query = "%20".join(argv[1:]) if len(argv) > 1 else quote(str(input("Search: "))) print("Googling.....") url = f"https://www.google.com/search?q={query}&num=100" res = requests.get( url, headers={"User-Agent": str(UserAgent().random)}, ) try: link = ( BeautifulSoup(res.text, "html.parser") .find("div", attrs={"class": "yuRUbf"}) .find("a") .get("href") ) except AttributeError: link = parse_qs( BeautifulSoup(res.text, "html.parser") .find("div", attrs={"class": "kCrYT"}) .find("a") .get("href") )["url"][0] webbrowser.open(link)
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
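The scraper above depends on Google's result-div class names ("yuRUbf", "kCrYT"), which are obfuscated and change without notice, so its except branch is a fallback rather than a guarantee. A hedged sketch of just the request step with explicit params, a timeout, and status checking; the static User-Agent is an assumption (the record uses fake_useragent):

import requests


def fetch_results_page(query: str) -> str:
    # Same URL shape as the record, expressed via params
    response = requests.get(
        "https://www.google.com/search",
        params={"q": query, "num": 100},
        headers={"User-Agent": "Mozilla/5.0"},
        timeout=10,
    )
    response.raise_for_status()
    return response.text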
#!/bin/sh # # An example hook script to make use of push options. # The example simply echoes all push options that start with 'echoback=' # and rejects all pushes when the "reject" push option is used. # # To enable this hook, rename this file to "pre-receive". if test -n "$GIT_PUSH_OPTION_COUNT" then i=0 while test "$i" -lt "$GIT_PUSH_OPTION_COUNT" do eval "value=\$GIT_PUSH_OPTION_$i" case "$value" in echoback=*) echo "echo from the pre-receive-hook: ${value#*=}" >&2 ;; reject) exit 1 esac i=$((i + 1)) done fi
#!/bin/sh # # An example hook script to make use of push options. # The example simply echoes all push options that start with 'echoback=' # and rejects all pushes when the "reject" push option is used. # # To enable this hook, rename this file to "pre-receive". if test -n "$GIT_PUSH_OPTION_COUNT" then i=0 while test "$i" -lt "$GIT_PUSH_OPTION_COUNT" do eval "value=\$GIT_PUSH_OPTION_$i" case "$value" in echoback=*) echo "echo from the pre-receive-hook: ${value#*=}" >&2 ;; reject) exit 1 esac i=$((i + 1)) done fi
-1
TheAlgorithms/Python
8,177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start--> updates: - [github.com/charliermarsh/ruff-pre-commit: v0.0.254 β†’ v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255) - [github.com/pre-commit/mirrors-mypy: v1.0.1 β†’ v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1) - [github.com/codespell-project/codespell: v2.2.2 β†’ v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4) <!--pre-commit.ci end-->
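The sample hook above is POSIX shell, but git hooks may be any executable. A sketch of the same logic as a Python pre-receive hook, reading the GIT_PUSH_OPTION_COUNT and GIT_PUSH_OPTION_<n> environment variables that git sets:

import os
import sys

count = int(os.environ.get("GIT_PUSH_OPTION_COUNT", "0"))
for i in range(count):
    value = os.environ.get(f"GIT_PUSH_OPTION_{i}", "")
    if value.startswith("echoback="):
        # Mirror the shell hook: echo the option back on stderr
        print(f"echo from the pre-receive-hook: {value.split('=', 1)[1]}", file=sys.stderr)
    elif value == "reject":
        sys.exit(1)  # a non-zero exit rejects the push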
""" A Radix Tree is a data structure that represents a space-optimized trie (prefix tree) in whicheach node that is the only child is merged with its parent [https://en.wikipedia.org/wiki/Radix_tree] """ class RadixNode: def __init__(self, prefix: str = "", is_leaf: bool = False) -> None: # Mapping from the first character of the prefix of the node self.nodes: dict[str, RadixNode] = {} # A node will be a leaf if the tree contains its word self.is_leaf = is_leaf self.prefix = prefix def match(self, word: str) -> tuple[str, str, str]: """Compute the common substring of the prefix of the node and a word Args: word (str): word to compare Returns: (str, str, str): common substring, remaining prefix, remaining word >>> RadixNode("myprefix").match("mystring") ('my', 'prefix', 'string') """ x = 0 for q, w in zip(self.prefix, word): if q != w: break x += 1 return self.prefix[:x], self.prefix[x:], word[x:] def insert_many(self, words: list[str]) -> None: """Insert many words in the tree Args: words (list[str]): list of words >>> RadixNode("myprefix").insert_many(["mystring", "hello"]) """ for word in words: self.insert(word) def insert(self, word: str) -> None: """Insert a word into the tree Args: word (str): word to insert >>> RadixNode("myprefix").insert("mystring") """ # Case 1: If the word is the prefix of the node # Solution: We set the current node as leaf if self.prefix == word: self.is_leaf = True # Case 2: The node has no edges that have a prefix to the word # Solution: We create an edge from the current node to a new one # containing the word elif word[0] not in self.nodes: self.nodes[word[0]] = RadixNode(prefix=word, is_leaf=True) else: incoming_node = self.nodes[word[0]] matching_string, remaining_prefix, remaining_word = incoming_node.match( word ) # Case 3: The node prefix is equal to the matching # Solution: We insert remaining word on the next node if remaining_prefix == "": self.nodes[matching_string[0]].insert(remaining_word) # Case 4: The word is greater equal to the matching # Solution: Create a node in between both nodes, change # prefixes and add the new node for the remaining word else: incoming_node.prefix = remaining_prefix aux_node = self.nodes[matching_string[0]] self.nodes[matching_string[0]] = RadixNode(matching_string, False) self.nodes[matching_string[0]].nodes[remaining_prefix[0]] = aux_node if remaining_word == "": self.nodes[matching_string[0]].is_leaf = True else: self.nodes[matching_string[0]].insert(remaining_word) def find(self, word: str) -> bool: """Returns if the word is on the tree Args: word (str): word to check Returns: bool: True if the word appears on the tree >>> RadixNode("myprefix").find("mystring") False """ incoming_node = self.nodes.get(word[0], None) if not incoming_node: return False else: matching_string, remaining_prefix, remaining_word = incoming_node.match( word ) # If there is remaining prefix, the word can't be on the tree if remaining_prefix != "": return False # This applies when the word and the prefix are equal elif remaining_word == "": return incoming_node.is_leaf # We have word remaining so we check the next node else: return incoming_node.find(remaining_word) def delete(self, word: str) -> bool: """Deletes a word from the tree if it exists Args: word (str): word to be deleted Returns: bool: True if the word was found and deleted. 
False if word is not found >>> RadixNode("myprefix").delete("mystring") False """ incoming_node = self.nodes.get(word[0], None) if not incoming_node: return False else: matching_string, remaining_prefix, remaining_word = incoming_node.match( word ) # If there is remaining prefix, the word can't be on the tree if remaining_prefix != "": return False # We have word remaining so we check the next node elif remaining_word != "": return incoming_node.delete(remaining_word) else: # If it is not a leaf, we don't have to delete if not incoming_node.is_leaf: return False else: # We delete the nodes if no edges go from it if len(incoming_node.nodes) == 0: del self.nodes[word[0]] # We merge the current node with its only child if len(self.nodes) == 1 and not self.is_leaf: merging_node = list(self.nodes.values())[0] self.is_leaf = merging_node.is_leaf self.prefix += merging_node.prefix self.nodes = merging_node.nodes # If there is more than 1 edge, we just mark it as non-leaf elif len(incoming_node.nodes) > 1: incoming_node.is_leaf = False # If there is 1 edge, we merge it with its child else: merging_node = list(incoming_node.nodes.values())[0] incoming_node.is_leaf = merging_node.is_leaf incoming_node.prefix += merging_node.prefix incoming_node.nodes = merging_node.nodes return True def print_tree(self, height: int = 0) -> None: """Print the tree Args: height (int, optional): Height of the printed node """ if self.prefix != "": print("-" * height, self.prefix, " (leaf)" if self.is_leaf else "") for value in self.nodes.values(): value.print_tree(height + 1) def test_trie() -> bool: words = "banana bananas bandana band apple all beast".split() root = RadixNode() root.insert_many(words) assert all(root.find(word) for word in words) assert not root.find("bandanas") assert not root.find("apps") root.delete("all") assert not root.find("all") root.delete("banana") assert not root.find("banana") assert root.find("bananas") return True def pytests() -> None: assert test_trie() def main() -> None: """ >>> pytests() """ root = RadixNode() words = "banana bananas bandanas bandana band apple all beast".split() root.insert_many(words) print("Words:", words) print("Tree:") root.print_tree() if __name__ == "__main__": main()
""" A Radix Tree is a data structure that represents a space-optimized trie (prefix tree) in whicheach node that is the only child is merged with its parent [https://en.wikipedia.org/wiki/Radix_tree] """ class RadixNode: def __init__(self, prefix: str = "", is_leaf: bool = False) -> None: # Mapping from the first character of the prefix of the node self.nodes: dict[str, RadixNode] = {} # A node will be a leaf if the tree contains its word self.is_leaf = is_leaf self.prefix = prefix def match(self, word: str) -> tuple[str, str, str]: """Compute the common substring of the prefix of the node and a word Args: word (str): word to compare Returns: (str, str, str): common substring, remaining prefix, remaining word >>> RadixNode("myprefix").match("mystring") ('my', 'prefix', 'string') """ x = 0 for q, w in zip(self.prefix, word): if q != w: break x += 1 return self.prefix[:x], self.prefix[x:], word[x:] def insert_many(self, words: list[str]) -> None: """Insert many words in the tree Args: words (list[str]): list of words >>> RadixNode("myprefix").insert_many(["mystring", "hello"]) """ for word in words: self.insert(word) def insert(self, word: str) -> None: """Insert a word into the tree Args: word (str): word to insert >>> RadixNode("myprefix").insert("mystring") """ # Case 1: If the word is the prefix of the node # Solution: We set the current node as leaf if self.prefix == word: self.is_leaf = True # Case 2: The node has no edges that have a prefix to the word # Solution: We create an edge from the current node to a new one # containing the word elif word[0] not in self.nodes: self.nodes[word[0]] = RadixNode(prefix=word, is_leaf=True) else: incoming_node = self.nodes[word[0]] matching_string, remaining_prefix, remaining_word = incoming_node.match( word ) # Case 3: The node prefix is equal to the matching # Solution: We insert remaining word on the next node if remaining_prefix == "": self.nodes[matching_string[0]].insert(remaining_word) # Case 4: The word is greater equal to the matching # Solution: Create a node in between both nodes, change # prefixes and add the new node for the remaining word else: incoming_node.prefix = remaining_prefix aux_node = self.nodes[matching_string[0]] self.nodes[matching_string[0]] = RadixNode(matching_string, False) self.nodes[matching_string[0]].nodes[remaining_prefix[0]] = aux_node if remaining_word == "": self.nodes[matching_string[0]].is_leaf = True else: self.nodes[matching_string[0]].insert(remaining_word) def find(self, word: str) -> bool: """Returns if the word is on the tree Args: word (str): word to check Returns: bool: True if the word appears on the tree >>> RadixNode("myprefix").find("mystring") False """ incoming_node = self.nodes.get(word[0], None) if not incoming_node: return False else: matching_string, remaining_prefix, remaining_word = incoming_node.match( word ) # If there is remaining prefix, the word can't be on the tree if remaining_prefix != "": return False # This applies when the word and the prefix are equal elif remaining_word == "": return incoming_node.is_leaf # We have word remaining so we check the next node else: return incoming_node.find(remaining_word) def delete(self, word: str) -> bool: """Deletes a word from the tree if it exists Args: word (str): word to be deleted Returns: bool: True if the word was found and deleted. 
False if word is not found >>> RadixNode("myprefix").delete("mystring") False """ incoming_node = self.nodes.get(word[0], None) if not incoming_node: return False else: matching_string, remaining_prefix, remaining_word = incoming_node.match( word ) # If there is remaining prefix, the word can't be on the tree if remaining_prefix != "": return False # We have word remaining so we check the next node elif remaining_word != "": return incoming_node.delete(remaining_word) else: # If it is not a leaf, we don't have to delete if not incoming_node.is_leaf: return False else: # We delete the nodes if no edges go from it if len(incoming_node.nodes) == 0: del self.nodes[word[0]] # We merge the current node with its only child if len(self.nodes) == 1 and not self.is_leaf: merging_node = list(self.nodes.values())[0] self.is_leaf = merging_node.is_leaf self.prefix += merging_node.prefix self.nodes = merging_node.nodes # If there is more than 1 edge, we just mark it as non-leaf elif len(incoming_node.nodes) > 1: incoming_node.is_leaf = False # If there is 1 edge, we merge it with its child else: merging_node = list(incoming_node.nodes.values())[0] incoming_node.is_leaf = merging_node.is_leaf incoming_node.prefix += merging_node.prefix incoming_node.nodes = merging_node.nodes return True def print_tree(self, height: int = 0) -> None: """Print the tree Args: height (int, optional): Height of the printed node """ if self.prefix != "": print("-" * height, self.prefix, " (leaf)" if self.is_leaf else "") for value in self.nodes.values(): value.print_tree(height + 1) def test_trie() -> bool: words = "banana bananas bandana band apple all beast".split() root = RadixNode() root.insert_many(words) assert all(root.find(word) for word in words) assert not root.find("bandanas") assert not root.find("apps") root.delete("all") assert not root.find("all") root.delete("banana") assert not root.find("banana") assert root.find("bananas") return True def pytests() -> None: assert test_trie() def main() -> None: """ >>> pytests() """ root = RadixNode() words = "banana bananas bandanas bandana band apple all beast".split() root.insert_many(words) print("Words:", words) print("Tree:") root.print_tree() if __name__ == "__main__": main()
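To make the behaviour of the (unchanged) radix tree above concrete, here is a minimal usage sketch. It is not part of the PR, and the `from radix_tree import RadixNode` import is an assumption that the class above is saved as `radix_tree.py` on the import path (in the repo it lives at data_structures/trie/radix_tree.py):

```python
# Hypothetical demo; assumes the RadixNode class above is saved as radix_tree.py.
from radix_tree import RadixNode

root = RadixNode()
root.insert_many(["banana", "bananas", "band"])

# The shared prefix "ban" becomes an internal, non-leaf node via the split in
# insert() (Case 4), so it is not findable as a word.
assert root.find("banana") and root.find("band")
assert not root.find("ban")

# match() splits a word against a node's prefix, as in the doctest above.
assert RadixNode("myprefix").match("mystring") == ("my", "prefix", "string")

# delete() merges single-child chains back together, so "bananas" survives.
root.delete("banana")
assert not root.find("banana")
assert root.find("bananas")

root.print_tree()  # prints the remaining prefixes, one level per dash
```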
-1
TheAlgorithms/Python
8177
[pre-commit.ci] pre-commit autoupdate
<!--pre-commit.ci start-->
updates:
- [github.com/charliermarsh/ruff-pre-commit: v0.0.254 → v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255)
- [github.com/pre-commit/mirrors-mypy: v1.0.1 → v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1)
- [github.com/codespell-project/codespell: v2.2.2 → v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4)
<!--pre-commit.ci end-->
pre-commit-ci[bot]
"2023-03-13T21:31:29Z"
"2023-03-13T22:18:36Z"
f9cc25221c1521a0da9ee27d6a9bea1f14f4c986
8959211100ba7a612d42a6e7db4755303b78c5a7
[pre-commit.ci] pre-commit autoupdate.

<!--pre-commit.ci start-->
updates:
- [github.com/charliermarsh/ruff-pre-commit: v0.0.254 → v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255)
- [github.com/pre-commit/mirrors-mypy: v1.0.1 → v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1)
- [github.com/codespell-project/codespell: v2.2.2 → v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4)
<!--pre-commit.ci end-->
import os
from itertools import chain
from random import randrange, shuffle

import pytest

from .sol1 import PokerHand

SORTED_HANDS = (
    "4S 3H 2C 7S 5H",
    "9D 8H 2C 6S 7H",
    "2D 6D 9D TH 7D",
    "TC 8C 2S JH 6C",
    "JH 8S TH AH QH",
    "TS KS 5S 9S AC",
    "KD 6S 9D TH AD",
    "KS 8D 4D 9S 4S",  # pair
    "8C 4S KH JS 4D",  # pair
    "QH 8H KD JH 8S",  # pair
    "KC 4H KS 2H 8D",  # pair
    "KD 4S KC 3H 8S",  # pair
    "AH 8S AS KC JH",  # pair
    "3H 4C 4H 3S 2H",  # 2 pairs
    "5S 5D 2C KH KH",  # 2 pairs
    "3C KH 5D 5S KH",  # 2 pairs
    "AS 3C KH AD KH",  # 2 pairs
    "7C 7S 3S 7H 5S",  # 3 of a kind
    "7C 7S KH 2H 7H",  # 3 of a kind
    "AC KH QH AH AS",  # 3 of a kind
    "2H 4D 3C AS 5S",  # straight (low ace)
    "3C 5C 4C 2C 6H",  # straight
    "6S 8S 7S 5H 9H",  # straight
    "JS QS 9H TS KH",  # straight
    "QC KH TS JS AH",  # straight (high ace)
    "8C 9C 5C 3C TC",  # flush
    "3S 8S 9S 5S KS",  # flush
    "4C 5C 9C 8C KC",  # flush
    "JH 8H AH KH QH",  # flush
    "3D 2H 3H 2C 2D",  # full house
    "2H 2C 3S 3H 3D",  # full house
    "KH KC 3S 3H 3D",  # full house
    "JC 6H JS JD JH",  # 4 of a kind
    "JC 7H JS JD JH",  # 4 of a kind
    "JC KH JS JD JH",  # 4 of a kind
    "2S AS 4S 5S 3S",  # straight flush (low ace)
    "2D 6D 3D 4D 5D",  # straight flush
    "5C 6C 3C 7C 4C",  # straight flush
    "JH 9H TH KH QH",  # straight flush
    "JH AH TH KH QH",  # royal flush (high ace straight flush)
)

TEST_COMPARE = (
    ("2H 3H 4H 5H 6H", "KS AS TS QS JS", "Loss"),
    ("2H 3H 4H 5H 6H", "AS AD AC AH JD", "Win"),
    ("AS AH 2H AD AC", "JS JD JC JH 3D", "Win"),
    ("2S AH 2H AS AC", "JS JD JC JH AD", "Loss"),
    ("2S AH 2H AS AC", "2H 3H 5H 6H 7H", "Win"),
    ("AS 3S 4S 8S 2S", "2H 3H 5H 6H 7H", "Win"),
    ("2H 3H 5H 6H 7H", "2S 3H 4H 5S 6C", "Win"),
    ("2S 3H 4H 5S 6C", "3D 4C 5H 6H 2S", "Tie"),
    ("2S 3H 4H 5S 6C", "AH AC 5H 6H AS", "Win"),
    ("2S 2H 4H 5S 4C", "AH AC 5H 6H AS", "Loss"),
    ("2S 2H 4H 5S 4C", "AH AC 5H 6H 7S", "Win"),
    ("6S AD 7H 4S AS", "AH AC 5H 6H 7S", "Loss"),
    ("2S AH 4H 5S KC", "AH AC 5H 6H 7S", "Loss"),
    ("2S 3H 6H 7S 9C", "7H 3C TH 6H 9S", "Loss"),
    ("4S 5H 6H TS AC", "3S 5H 6H TS AC", "Win"),
    ("2S AH 4H 5S 6C", "AD 4C 5H 6H 2C", "Tie"),
    ("AS AH 3H AD AC", "AS AH 2H AD AC", "Win"),
    ("AH AC 5H 5C QS", "AH AC 5H 5C KS", "Loss"),
    ("AH AC 5H 5C QS", "KH KC 5H 5C QS", "Win"),
    ("7C 7S KH 2H 7H", "3C 3S AH 2H 3H", "Win"),
    ("3C 3S AH 2H 3H", "7C 7S KH 2H 7H", "Loss"),
    ("6H 5H 4H 3H 2H", "5H 4H 3H 2H AH", "Win"),
    ("5H 4H 3H 2H AH", "5H 4H 3H 2H AH", "Tie"),
    ("5H 4H 3H 2H AH", "6H 5H 4H 3H 2H", "Loss"),
    ("AH AD KS KC AC", "AH KD KH AC KC", "Win"),
    ("2H 4D 3C AS 5S", "2H 4D 3C 6S 5S", "Loss"),
    ("2H 3S 3C 3H 2S", "3S 3C 2S 2H 2D", "Win"),
    ("4D 6D 5D 2D JH", "3S 8S 3H TC KH", "Loss"),
    ("4S 6C 8S 3S 7S", "AD KS 2D 7D 7C", "Loss"),
    ("6S 4C 7H 8C 3H", "5H JC AH 9D 9C", "Loss"),
    ("9D 9H JH TC QH", "3C 2S JS 5C 7H", "Win"),
    ("2H TC 8S AD 9S", "4H TS 7H 2C 5C", "Win"),
    ("9D 3S 2C 7S 7C", "JC TD 3C TC 9H", "Loss"),
)

TEST_FLUSH = (
    ("2H 3H 4H 5H 6H", True),
    ("AS AH 2H AD AC", False),
    ("2H 3H 5H 6H 7H", True),
    ("KS AS TS QS JS", True),
    ("8H 9H QS JS TH", False),
    ("AS 3S 4S 8S 2S", True),
)

TEST_STRAIGHT = (
    ("2H 3H 4H 5H 6H", True),
    ("AS AH 2H AD AC", False),
    ("2H 3H 5H 6H 7H", False),
    ("KS AS TS QS JS", True),
    ("8H 9H QS JS TH", True),
)

TEST_FIVE_HIGH_STRAIGHT = (
    ("2H 4D 3C AS 5S", True, [5, 4, 3, 2, 14]),
    ("2H 5D 3C AS 5S", False, [14, 5, 5, 3, 2]),
    ("JH QD KC AS TS", False, [14, 13, 12, 11, 10]),
    ("9D 3S 2C 7S 7C", False, [9, 7, 7, 3, 2]),
)

TEST_KIND = (
    ("JH AH TH KH QH", 0),
    ("JH 9H TH KH QH", 0),
    ("JC KH JS JD JH", 7),
    ("KH KC 3S 3H 3D", 6),
    ("8C 9C 5C 3C TC", 0),
    ("JS QS 9H TS KH", 0),
    ("7C 7S KH 2H 7H", 3),
    ("3C KH 5D 5S KH", 2),
    ("QH 8H KD JH 8S", 1),
    ("2D 6D 9D TH 7D", 0),
)

TEST_TYPES = (
    ("JH AH TH KH QH", 23),
    ("JH 9H TH KH QH", 22),
    ("JC KH JS JD JH", 21),
    ("KH KC 3S 3H 3D", 20),
    ("8C 9C 5C 3C TC", 19),
    ("JS QS 9H TS KH", 18),
    ("7C 7S KH 2H 7H", 17),
    ("3C KH 5D 5S KH", 16),
    ("QH 8H KD JH 8S", 15),
    ("2D 6D 9D TH 7D", 14),
)


def generate_random_hand():
    play, oppo = randrange(len(SORTED_HANDS)), randrange(len(SORTED_HANDS))
    expected = ["Loss", "Tie", "Win"][(play >= oppo) + (play > oppo)]
    hand, other = SORTED_HANDS[play], SORTED_HANDS[oppo]
    return hand, other, expected


def generate_random_hands(number_of_hands: int = 100):
    return (generate_random_hand() for _ in range(number_of_hands))


@pytest.mark.parametrize("hand, expected", TEST_FLUSH)
def test_hand_is_flush(hand, expected):
    assert PokerHand(hand)._is_flush() == expected


@pytest.mark.parametrize("hand, expected", TEST_STRAIGHT)
def test_hand_is_straight(hand, expected):
    assert PokerHand(hand)._is_straight() == expected


@pytest.mark.parametrize("hand, expected, card_values", TEST_FIVE_HIGH_STRAIGHT)
def test_hand_is_five_high_straight(hand, expected, card_values):
    player = PokerHand(hand)
    assert player._is_five_high_straight() == expected
    assert player._card_values == card_values


@pytest.mark.parametrize("hand, expected", TEST_KIND)
def test_hand_is_same_kind(hand, expected):
    assert PokerHand(hand)._is_same_kind() == expected


@pytest.mark.parametrize("hand, expected", TEST_TYPES)
def test_hand_values(hand, expected):
    assert PokerHand(hand)._hand_type == expected


@pytest.mark.parametrize("hand, other, expected", TEST_COMPARE)
def test_compare_simple(hand, other, expected):
    assert PokerHand(hand).compare_with(PokerHand(other)) == expected


@pytest.mark.parametrize("hand, other, expected", generate_random_hands())
def test_compare_random(hand, other, expected):
    assert PokerHand(hand).compare_with(PokerHand(other)) == expected


def test_hand_sorted():
    poker_hands = [PokerHand(hand) for hand in SORTED_HANDS]
    list_copy = poker_hands.copy()
    shuffle(list_copy)
    user_sorted = chain(sorted(list_copy))
    for index, hand in enumerate(user_sorted):
        assert hand == poker_hands[index]


def test_custom_sort_five_high_straight():
    # Test that five high straights are compared correctly.
    pokerhands = [PokerHand("2D AC 3H 4H 5S"), PokerHand("2S 3H 4H 5S 6C")]
    pokerhands.sort(reverse=True)
    assert pokerhands[0].__str__() == "2S 3H 4H 5S 6C"


def test_multiple_calls_five_high_straight():
    # Multiple calls to five_high_straight function should still return True
    # and shouldn't mutate the list in every call other than the first.
    pokerhand = PokerHand("2C 4S AS 3D 5C")
    expected = True
    expected_card_values = [5, 4, 3, 2, 14]
    for _ in range(10):
        assert pokerhand._is_five_high_straight() == expected
        assert pokerhand._card_values == expected_card_values


def test_euler_project():
    # Problem number 54 from Project Euler
    # Testing from poker_hands.txt file
    answer = 0
    script_dir = os.path.abspath(os.path.dirname(__file__))
    poker_hands = os.path.join(script_dir, "poker_hands.txt")
    with open(poker_hands) as file_hand:
        for line in file_hand:
            player_hand = line[:14].strip()
            opponent_hand = line[15:].strip()
            player, opponent = PokerHand(player_hand), PokerHand(opponent_hand)
            output = player.compare_with(opponent)
            if output == "Win":
                answer += 1
    assert answer == 376
import os
from itertools import chain
from random import randrange, shuffle

import pytest

from .sol1 import PokerHand

SORTED_HANDS = (
    "4S 3H 2C 7S 5H",
    "9D 8H 2C 6S 7H",
    "2D 6D 9D TH 7D",
    "TC 8C 2S JH 6C",
    "JH 8S TH AH QH",
    "TS KS 5S 9S AC",
    "KD 6S 9D TH AD",
    "KS 8D 4D 9S 4S",  # pair
    "8C 4S KH JS 4D",  # pair
    "QH 8H KD JH 8S",  # pair
    "KC 4H KS 2H 8D",  # pair
    "KD 4S KC 3H 8S",  # pair
    "AH 8S AS KC JH",  # pair
    "3H 4C 4H 3S 2H",  # 2 pairs
    "5S 5D 2C KH KH",  # 2 pairs
    "3C KH 5D 5S KH",  # 2 pairs
    "AS 3C KH AD KH",  # 2 pairs
    "7C 7S 3S 7H 5S",  # 3 of a kind
    "7C 7S KH 2H 7H",  # 3 of a kind
    "AC KH QH AH AS",  # 3 of a kind
    "2H 4D 3C AS 5S",  # straight (low ace)
    "3C 5C 4C 2C 6H",  # straight
    "6S 8S 7S 5H 9H",  # straight
    "JS QS 9H TS KH",  # straight
    "QC KH TS JS AH",  # straight (high ace)
    "8C 9C 5C 3C TC",  # flush
    "3S 8S 9S 5S KS",  # flush
    "4C 5C 9C 8C KC",  # flush
    "JH 8H AH KH QH",  # flush
    "3D 2H 3H 2C 2D",  # full house
    "2H 2C 3S 3H 3D",  # full house
    "KH KC 3S 3H 3D",  # full house
    "JC 6H JS JD JH",  # 4 of a kind
    "JC 7H JS JD JH",  # 4 of a kind
    "JC KH JS JD JH",  # 4 of a kind
    "2S AS 4S 5S 3S",  # straight flush (low ace)
    "2D 6D 3D 4D 5D",  # straight flush
    "5C 6C 3C 7C 4C",  # straight flush
    "JH 9H TH KH QH",  # straight flush
    "JH AH TH KH QH",  # royal flush (high ace straight flush)
)

TEST_COMPARE = (
    ("2H 3H 4H 5H 6H", "KS AS TS QS JS", "Loss"),
    ("2H 3H 4H 5H 6H", "AS AD AC AH JD", "Win"),
    ("AS AH 2H AD AC", "JS JD JC JH 3D", "Win"),
    ("2S AH 2H AS AC", "JS JD JC JH AD", "Loss"),
    ("2S AH 2H AS AC", "2H 3H 5H 6H 7H", "Win"),
    ("AS 3S 4S 8S 2S", "2H 3H 5H 6H 7H", "Win"),
    ("2H 3H 5H 6H 7H", "2S 3H 4H 5S 6C", "Win"),
    ("2S 3H 4H 5S 6C", "3D 4C 5H 6H 2S", "Tie"),
    ("2S 3H 4H 5S 6C", "AH AC 5H 6H AS", "Win"),
    ("2S 2H 4H 5S 4C", "AH AC 5H 6H AS", "Loss"),
    ("2S 2H 4H 5S 4C", "AH AC 5H 6H 7S", "Win"),
    ("6S AD 7H 4S AS", "AH AC 5H 6H 7S", "Loss"),
    ("2S AH 4H 5S KC", "AH AC 5H 6H 7S", "Loss"),
    ("2S 3H 6H 7S 9C", "7H 3C TH 6H 9S", "Loss"),
    ("4S 5H 6H TS AC", "3S 5H 6H TS AC", "Win"),
    ("2S AH 4H 5S 6C", "AD 4C 5H 6H 2C", "Tie"),
    ("AS AH 3H AD AC", "AS AH 2H AD AC", "Win"),
    ("AH AC 5H 5C QS", "AH AC 5H 5C KS", "Loss"),
    ("AH AC 5H 5C QS", "KH KC 5H 5C QS", "Win"),
    ("7C 7S KH 2H 7H", "3C 3S AH 2H 3H", "Win"),
    ("3C 3S AH 2H 3H", "7C 7S KH 2H 7H", "Loss"),
    ("6H 5H 4H 3H 2H", "5H 4H 3H 2H AH", "Win"),
    ("5H 4H 3H 2H AH", "5H 4H 3H 2H AH", "Tie"),
    ("5H 4H 3H 2H AH", "6H 5H 4H 3H 2H", "Loss"),
    ("AH AD KS KC AC", "AH KD KH AC KC", "Win"),
    ("2H 4D 3C AS 5S", "2H 4D 3C 6S 5S", "Loss"),
    ("2H 3S 3C 3H 2S", "3S 3C 2S 2H 2D", "Win"),
    ("4D 6D 5D 2D JH", "3S 8S 3H TC KH", "Loss"),
    ("4S 6C 8S 3S 7S", "AD KS 2D 7D 7C", "Loss"),
    ("6S 4C 7H 8C 3H", "5H JC AH 9D 9C", "Loss"),
    ("9D 9H JH TC QH", "3C 2S JS 5C 7H", "Win"),
    ("2H TC 8S AD 9S", "4H TS 7H 2C 5C", "Win"),
    ("9D 3S 2C 7S 7C", "JC TD 3C TC 9H", "Loss"),
)

TEST_FLUSH = (
    ("2H 3H 4H 5H 6H", True),
    ("AS AH 2H AD AC", False),
    ("2H 3H 5H 6H 7H", True),
    ("KS AS TS QS JS", True),
    ("8H 9H QS JS TH", False),
    ("AS 3S 4S 8S 2S", True),
)

TEST_STRAIGHT = (
    ("2H 3H 4H 5H 6H", True),
    ("AS AH 2H AD AC", False),
    ("2H 3H 5H 6H 7H", False),
    ("KS AS TS QS JS", True),
    ("8H 9H QS JS TH", True),
)

TEST_FIVE_HIGH_STRAIGHT = (
    ("2H 4D 3C AS 5S", True, [5, 4, 3, 2, 14]),
    ("2H 5D 3C AS 5S", False, [14, 5, 5, 3, 2]),
    ("JH QD KC AS TS", False, [14, 13, 12, 11, 10]),
    ("9D 3S 2C 7S 7C", False, [9, 7, 7, 3, 2]),
)

TEST_KIND = (
    ("JH AH TH KH QH", 0),
    ("JH 9H TH KH QH", 0),
    ("JC KH JS JD JH", 7),
    ("KH KC 3S 3H 3D", 6),
    ("8C 9C 5C 3C TC", 0),
    ("JS QS 9H TS KH", 0),
    ("7C 7S KH 2H 7H", 3),
    ("3C KH 5D 5S KH", 2),
    ("QH 8H KD JH 8S", 1),
    ("2D 6D 9D TH 7D", 0),
)

TEST_TYPES = (
    ("JH AH TH KH QH", 23),
    ("JH 9H TH KH QH", 22),
    ("JC KH JS JD JH", 21),
    ("KH KC 3S 3H 3D", 20),
    ("8C 9C 5C 3C TC", 19),
    ("JS QS 9H TS KH", 18),
    ("7C 7S KH 2H 7H", 17),
    ("3C KH 5D 5S KH", 16),
    ("QH 8H KD JH 8S", 15),
    ("2D 6D 9D TH 7D", 14),
)


def generate_random_hand():
    play, oppo = randrange(len(SORTED_HANDS)), randrange(len(SORTED_HANDS))
    expected = ["Loss", "Tie", "Win"][(play >= oppo) + (play > oppo)]
    hand, other = SORTED_HANDS[play], SORTED_HANDS[oppo]
    return hand, other, expected


def generate_random_hands(number_of_hands: int = 100):
    return (generate_random_hand() for _ in range(number_of_hands))


@pytest.mark.parametrize("hand, expected", TEST_FLUSH)
def test_hand_is_flush(hand, expected):
    assert PokerHand(hand)._is_flush() == expected


@pytest.mark.parametrize("hand, expected", TEST_STRAIGHT)
def test_hand_is_straight(hand, expected):
    assert PokerHand(hand)._is_straight() == expected


@pytest.mark.parametrize("hand, expected, card_values", TEST_FIVE_HIGH_STRAIGHT)
def test_hand_is_five_high_straight(hand, expected, card_values):
    player = PokerHand(hand)
    assert player._is_five_high_straight() == expected
    assert player._card_values == card_values


@pytest.mark.parametrize("hand, expected", TEST_KIND)
def test_hand_is_same_kind(hand, expected):
    assert PokerHand(hand)._is_same_kind() == expected


@pytest.mark.parametrize("hand, expected", TEST_TYPES)
def test_hand_values(hand, expected):
    assert PokerHand(hand)._hand_type == expected


@pytest.mark.parametrize("hand, other, expected", TEST_COMPARE)
def test_compare_simple(hand, other, expected):
    assert PokerHand(hand).compare_with(PokerHand(other)) == expected


@pytest.mark.parametrize("hand, other, expected", generate_random_hands())
def test_compare_random(hand, other, expected):
    assert PokerHand(hand).compare_with(PokerHand(other)) == expected


def test_hand_sorted():
    poker_hands = [PokerHand(hand) for hand in SORTED_HANDS]
    list_copy = poker_hands.copy()
    shuffle(list_copy)
    user_sorted = chain(sorted(list_copy))
    for index, hand in enumerate(user_sorted):
        assert hand == poker_hands[index]


def test_custom_sort_five_high_straight():
    # Test that five high straights are compared correctly.
    pokerhands = [PokerHand("2D AC 3H 4H 5S"), PokerHand("2S 3H 4H 5S 6C")]
    pokerhands.sort(reverse=True)
    assert pokerhands[0].__str__() == "2S 3H 4H 5S 6C"


def test_multiple_calls_five_high_straight():
    # Multiple calls to five_high_straight function should still return True
    # and shouldn't mutate the list in every call other than the first.
    pokerhand = PokerHand("2C 4S AS 3D 5C")
    expected = True
    expected_card_values = [5, 4, 3, 2, 14]
    for _ in range(10):
        assert pokerhand._is_five_high_straight() == expected
        assert pokerhand._card_values == expected_card_values


def test_euler_project():
    # Problem number 54 from Project Euler
    # Testing from poker_hands.txt file
    answer = 0
    script_dir = os.path.abspath(os.path.dirname(__file__))
    poker_hands = os.path.join(script_dir, "poker_hands.txt")
    with open(poker_hands) as file_hand:
        for line in file_hand:
            player_hand = line[:14].strip()
            opponent_hand = line[15:].strip()
            player, opponent = PokerHand(player_hand), PokerHand(opponent_hand)
            output = player.compare_with(opponent)
            if output == "Win":
                answer += 1
    assert answer == 376
-1
TheAlgorithms/Python
8007
Upgrade to flake8 v6
### Describe your change:

The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8`

`flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280

* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?

### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6.

### Describe your change:

The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8`

`flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280

* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?

### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
[flake8]
max-line-length = 88
# max-complexity should be 10
max-complexity = 19
extend-ignore =
    # Formatting style for `black`
    E203  # Whitespace before ':'
    W503  # Line break occurred before a binary operator
[flake8]
max-line-length = 88
# max-complexity should be 10
max-complexity = 19
extend-ignore =
    # Formatting style for `black`
    # E203 is whitespace before ':'
    E203,
    # W503 is line break occurred before a binary operator
    W503
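For readers unfamiliar with the two ignored codes: both mark places where `black`'s output conflicts with pycodestyle defaults. The snippet below is my own illustration (not part of the PR) of code, formatted the way `black` emits it, that would otherwise trip them:

```python
ham = list(range(10))
offset = 3

# E203 (whitespace before ':'): black spaces out the slice colon when a bound
# is a complex expression, treating ':' like a binary operator per PEP 8.
tail = ham[offset + 1 :]

# W503 (line break occurred before a binary operator): black wraps long
# expressions by breaking *before* the operator, which PEP 8 now recommends
# but W503 historically flagged.
total = (
    sum(tail)
    + len(ham)
)
print(tail, total)
```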
1
TheAlgorithms/Python
8007
Upgrade to flake8 v6
### Describe your change:

The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8`

`flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280

* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?

### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6.

### Describe your change:

The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8`

`flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280

* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?

### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.3.0
    hooks:
      - id: check-executables-have-shebangs
      - id: check-yaml
      - id: end-of-file-fixer
        types: [python]
      - id: trailing-whitespace
      - id: requirements-txt-fixer

  - repo: https://github.com/MarcoGorelli/auto-walrus
    rev: v0.2.1
    hooks:
      - id: auto-walrus

  - repo: https://github.com/psf/black
    rev: 22.10.0
    hooks:
      - id: black

  - repo: https://github.com/PyCQA/isort
    rev: 5.10.1
    hooks:
      - id: isort
        args:
          - --profile=black

  - repo: https://github.com/asottile/pyupgrade
    rev: v3.2.2
    hooks:
      - id: pyupgrade
        args:
          - --py311-plus

  - repo: https://github.com/PyCQA/flake8
    rev: 5.0.4
    hooks:
      - id: flake8  # See .flake8 for args
        additional_dependencies: &flake8-plugins
          - flake8-bugbear
          - flake8-builtins
          - flake8-broken-line
          - flake8-comprehensions
          - pep8-naming

  - repo: https://github.com/asottile/yesqa
    rev: v1.4.0
    hooks:
      - id: yesqa
        additional_dependencies: *flake8-plugins

  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v0.991
    hooks:
      - id: mypy
        args:
          - --ignore-missing-imports
          - --install-types  # See mirrors-mypy README.md
          - --non-interactive
        additional_dependencies: [types-requests]

  - repo: https://github.com/codespell-project/codespell
    rev: v2.2.2
    hooks:
      - id: codespell
        args:
          - --ignore-words-list=ans,crate,damon,fo,followings,hist,iff,mater,secant,som,sur,tim,zar
        exclude: |
          (?x)^(
              ciphers/prehistoric_men.txt |
              strings/dictionary.txt |
              strings/words.txt |
              project_euler/problem_022/p022_names.txt
          )$

  - repo: local
    hooks:
      - id: validate-filenames
        name: Validate filenames
        entry: ./scripts/validate_filenames.py
        language: script
        pass_filenames: false
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0
    hooks:
      - id: check-executables-have-shebangs
      - id: check-yaml
      - id: end-of-file-fixer
        types: [python]
      - id: trailing-whitespace
      - id: requirements-txt-fixer

  - repo: https://github.com/MarcoGorelli/auto-walrus
    rev: v0.2.1
    hooks:
      - id: auto-walrus

  - repo: https://github.com/psf/black
    rev: 22.10.0
    hooks:
      - id: black

  - repo: https://github.com/PyCQA/isort
    rev: 5.10.1
    hooks:
      - id: isort
        args:
          - --profile=black

  - repo: https://github.com/asottile/pyupgrade
    rev: v3.2.2
    hooks:
      - id: pyupgrade
        args:
          - --py311-plus

  - repo: https://github.com/PyCQA/flake8
    rev: 6.0.0
    hooks:
      - id: flake8  # See .flake8 for args
        additional_dependencies: &flake8-plugins
          - flake8-bugbear
          - flake8-builtins
          # - flake8-broken-line
          - flake8-comprehensions
          - pep8-naming

  - repo: https://github.com/asottile/yesqa
    rev: v1.4.0
    hooks:
      - id: yesqa
        additional_dependencies: *flake8-plugins

  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v0.991
    hooks:
      - id: mypy
        args:
          - --ignore-missing-imports
          - --install-types  # See mirrors-mypy README.md
          - --non-interactive
        additional_dependencies: [types-requests]

  - repo: https://github.com/codespell-project/codespell
    rev: v2.2.2
    hooks:
      - id: codespell
        args:
          - --ignore-words-list=ans,crate,damon,fo,followings,hist,iff,mater,secant,som,sur,tim,zar
        exclude: |
          (?x)^(
              ciphers/prehistoric_men.txt |
              strings/dictionary.txt |
              strings/words.txt |
              project_euler/problem_022/p022_names.txt
          )$

  - repo: local
    hooks:
      - id: validate-filenames
        name: Validate filenames
        entry: ./scripts/validate_filenames.py
        language: script
        pass_filenames: false
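A side note on one piece of syntax in that file: `&flake8-plugins` defines a YAML anchor on the flake8 hook's `additional_dependencies`, and `*flake8-plugins` aliases it for yesqa, so commenting out `flake8-broken-line` once removes it from both hooks. A quick sketch of how anchors resolve, using PyYAML as the parser (my choice for illustration, not something the PR depends on):

```python
import yaml  # PyYAML

doc = """
flake8_deps: &flake8-plugins
  - flake8-bugbear
  - flake8-builtins
yesqa_deps: *flake8-plugins
"""

data = yaml.safe_load(doc)
# The alias resolves to the same list, so the two keys cannot drift apart.
assert data["yesqa_deps"] == data["flake8_deps"]
print(data["yesqa_deps"])  # ['flake8-bugbear', 'flake8-builtins']
```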
1
TheAlgorithms/Python
8007
Upgrade to flake8 v6
### Describe your change:

The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8`

`flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280

* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?

### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6.

### Describe your change:

The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8`

`flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280

* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?

### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
## Arithmetic Analysis
  * [Bisection](arithmetic_analysis/bisection.py)
  * [Gaussian Elimination](arithmetic_analysis/gaussian_elimination.py)
  * [In Static Equilibrium](arithmetic_analysis/in_static_equilibrium.py)
  * [Intersection](arithmetic_analysis/intersection.py)
  * [Jacobi Iteration Method](arithmetic_analysis/jacobi_iteration_method.py)
  * [Lu Decomposition](arithmetic_analysis/lu_decomposition.py)
  * [Newton Forward Interpolation](arithmetic_analysis/newton_forward_interpolation.py)
  * [Newton Method](arithmetic_analysis/newton_method.py)
  * [Newton Raphson](arithmetic_analysis/newton_raphson.py)
  * [Newton Raphson New](arithmetic_analysis/newton_raphson_new.py)
  * [Secant Method](arithmetic_analysis/secant_method.py)

## Audio Filters
  * [Butterworth Filter](audio_filters/butterworth_filter.py)
  * [Equal Loudness Filter](audio_filters/equal_loudness_filter.py)
  * [Iir Filter](audio_filters/iir_filter.py)
  * [Show Response](audio_filters/show_response.py)

## Backtracking
  * [All Combinations](backtracking/all_combinations.py)
  * [All Permutations](backtracking/all_permutations.py)
  * [All Subsequences](backtracking/all_subsequences.py)
  * [Coloring](backtracking/coloring.py)
  * [Combination Sum](backtracking/combination_sum.py)
  * [Hamiltonian Cycle](backtracking/hamiltonian_cycle.py)
  * [Knight Tour](backtracking/knight_tour.py)
  * [Minimax](backtracking/minimax.py)
  * [Minmax](backtracking/minmax.py)
  * [N Queens](backtracking/n_queens.py)
  * [N Queens Math](backtracking/n_queens_math.py)
  * [Rat In Maze](backtracking/rat_in_maze.py)
  * [Sudoku](backtracking/sudoku.py)
  * [Sum Of Subsets](backtracking/sum_of_subsets.py)

## Bit Manipulation
  * [Binary And Operator](bit_manipulation/binary_and_operator.py)
  * [Binary Count Setbits](bit_manipulation/binary_count_setbits.py)
  * [Binary Count Trailing Zeros](bit_manipulation/binary_count_trailing_zeros.py)
  * [Binary Or Operator](bit_manipulation/binary_or_operator.py)
  * [Binary Shifts](bit_manipulation/binary_shifts.py)
  * [Binary Twos Complement](bit_manipulation/binary_twos_complement.py)
  * [Binary Xor Operator](bit_manipulation/binary_xor_operator.py)
  * [Count 1S Brian Kernighan Method](bit_manipulation/count_1s_brian_kernighan_method.py)
  * [Count Number Of One Bits](bit_manipulation/count_number_of_one_bits.py)
  * [Gray Code Sequence](bit_manipulation/gray_code_sequence.py)
  * [Highest Set Bit](bit_manipulation/highest_set_bit.py)
  * [Index Of Rightmost Set Bit](bit_manipulation/index_of_rightmost_set_bit.py)
  * [Is Even](bit_manipulation/is_even.py)
  * [Is Power Of Two](bit_manipulation/is_power_of_two.py)
  * [Reverse Bits](bit_manipulation/reverse_bits.py)
  * [Single Bit Manipulation Operations](bit_manipulation/single_bit_manipulation_operations.py)

## Blockchain
  * [Chinese Remainder Theorem](blockchain/chinese_remainder_theorem.py)
  * [Diophantine Equation](blockchain/diophantine_equation.py)
  * [Modular Division](blockchain/modular_division.py)

## Boolean Algebra
  * [And Gate](boolean_algebra/and_gate.py)
  * [Nand Gate](boolean_algebra/nand_gate.py)
  * [Norgate](boolean_algebra/norgate.py)
  * [Not Gate](boolean_algebra/not_gate.py)
  * [Or Gate](boolean_algebra/or_gate.py)
  * [Quine Mc Cluskey](boolean_algebra/quine_mc_cluskey.py)
  * [Xnor Gate](boolean_algebra/xnor_gate.py)
  * [Xor Gate](boolean_algebra/xor_gate.py)

## Cellular Automata
  * [Conways Game Of Life](cellular_automata/conways_game_of_life.py)
  * [Game Of Life](cellular_automata/game_of_life.py)
  * [Nagel Schrekenberg](cellular_automata/nagel_schrekenberg.py)
  * [One Dimensional](cellular_automata/one_dimensional.py)

## Ciphers
  * [A1Z26](ciphers/a1z26.py)
  * [Affine Cipher](ciphers/affine_cipher.py)
  * [Atbash](ciphers/atbash.py)
  * [Baconian Cipher](ciphers/baconian_cipher.py)
  * [Base16](ciphers/base16.py)
  * [Base32](ciphers/base32.py)
  * [Base64](ciphers/base64.py)
  * [Base85](ciphers/base85.py)
  * [Beaufort Cipher](ciphers/beaufort_cipher.py)
  * [Bifid](ciphers/bifid.py)
  * [Brute Force Caesar Cipher](ciphers/brute_force_caesar_cipher.py)
  * [Caesar Cipher](ciphers/caesar_cipher.py)
  * [Cryptomath Module](ciphers/cryptomath_module.py)
  * [Decrypt Caesar With Chi Squared](ciphers/decrypt_caesar_with_chi_squared.py)
  * [Deterministic Miller Rabin](ciphers/deterministic_miller_rabin.py)
  * [Diffie](ciphers/diffie.py)
  * [Diffie Hellman](ciphers/diffie_hellman.py)
  * [Elgamal Key Generator](ciphers/elgamal_key_generator.py)
  * [Enigma Machine2](ciphers/enigma_machine2.py)
  * [Hill Cipher](ciphers/hill_cipher.py)
  * [Mixed Keyword Cypher](ciphers/mixed_keyword_cypher.py)
  * [Mono Alphabetic Ciphers](ciphers/mono_alphabetic_ciphers.py)
  * [Morse Code](ciphers/morse_code.py)
  * [Onepad Cipher](ciphers/onepad_cipher.py)
  * [Playfair Cipher](ciphers/playfair_cipher.py)
  * [Polybius](ciphers/polybius.py)
  * [Porta Cipher](ciphers/porta_cipher.py)
  * [Rabin Miller](ciphers/rabin_miller.py)
  * [Rail Fence Cipher](ciphers/rail_fence_cipher.py)
  * [Rot13](ciphers/rot13.py)
  * [Rsa Cipher](ciphers/rsa_cipher.py)
  * [Rsa Factorization](ciphers/rsa_factorization.py)
  * [Rsa Key Generator](ciphers/rsa_key_generator.py)
  * [Shuffled Shift Cipher](ciphers/shuffled_shift_cipher.py)
  * [Simple Keyword Cypher](ciphers/simple_keyword_cypher.py)
  * [Simple Substitution Cipher](ciphers/simple_substitution_cipher.py)
  * [Trafid Cipher](ciphers/trafid_cipher.py)
  * [Transposition Cipher](ciphers/transposition_cipher.py)
  * [Transposition Cipher Encrypt Decrypt File](ciphers/transposition_cipher_encrypt_decrypt_file.py)
  * [Vigenere Cipher](ciphers/vigenere_cipher.py)
  * [Xor Cipher](ciphers/xor_cipher.py)

## Compression
  * [Burrows Wheeler](compression/burrows_wheeler.py)
  * [Huffman](compression/huffman.py)
  * [Lempel Ziv](compression/lempel_ziv.py)
  * [Lempel Ziv Decompress](compression/lempel_ziv_decompress.py)
  * [Peak Signal To Noise Ratio](compression/peak_signal_to_noise_ratio.py)
  * [Run Length Encoding](compression/run_length_encoding.py)

## Computer Vision
  * [Cnn Classification](computer_vision/cnn_classification.py)
  * [Flip Augmentation](computer_vision/flip_augmentation.py)
  * [Harris Corner](computer_vision/harris_corner.py)
  * [Horn Schunck](computer_vision/horn_schunck.py)
  * [Mean Threshold](computer_vision/mean_threshold.py)
  * [Mosaic Augmentation](computer_vision/mosaic_augmentation.py)
  * [Pooling Functions](computer_vision/pooling_functions.py)

## Conversions
  * [Astronomical Length Scale Conversion](conversions/astronomical_length_scale_conversion.py)
  * [Binary To Decimal](conversions/binary_to_decimal.py)
  * [Binary To Hexadecimal](conversions/binary_to_hexadecimal.py)
  * [Binary To Octal](conversions/binary_to_octal.py)
  * [Decimal To Any](conversions/decimal_to_any.py)
  * [Decimal To Binary](conversions/decimal_to_binary.py)
  * [Decimal To Binary Recursion](conversions/decimal_to_binary_recursion.py)
  * [Decimal To Hexadecimal](conversions/decimal_to_hexadecimal.py)
  * [Decimal To Octal](conversions/decimal_to_octal.py)
  * [Excel Title To Column](conversions/excel_title_to_column.py)
  * [Hex To Bin](conversions/hex_to_bin.py)
  * [Hexadecimal To Decimal](conversions/hexadecimal_to_decimal.py)
  * [Length Conversion](conversions/length_conversion.py)
  * [Molecular Chemistry](conversions/molecular_chemistry.py)
  * [Octal To Decimal](conversions/octal_to_decimal.py)
  * [Prefix Conversions](conversions/prefix_conversions.py)
  * [Prefix Conversions String](conversions/prefix_conversions_string.py)
  * [Pressure Conversions](conversions/pressure_conversions.py)
  * [Rgb Hsv Conversion](conversions/rgb_hsv_conversion.py)
  * [Roman Numerals](conversions/roman_numerals.py)
  * [Speed Conversions](conversions/speed_conversions.py)
  * [Temperature Conversions](conversions/temperature_conversions.py)
  * [Volume Conversions](conversions/volume_conversions.py)
  * [Weight Conversion](conversions/weight_conversion.py)

## Data Structures
  * Arrays
    * [Permutations](data_structures/arrays/permutations.py)
    * [Prefix Sum](data_structures/arrays/prefix_sum.py)
  * Binary Tree
    * [Avl Tree](data_structures/binary_tree/avl_tree.py)
    * [Basic Binary Tree](data_structures/binary_tree/basic_binary_tree.py)
    * [Binary Search Tree](data_structures/binary_tree/binary_search_tree.py)
    * [Binary Search Tree Recursive](data_structures/binary_tree/binary_search_tree_recursive.py)
    * [Binary Tree Mirror](data_structures/binary_tree/binary_tree_mirror.py)
    * [Binary Tree Node Sum](data_structures/binary_tree/binary_tree_node_sum.py)
    * [Binary Tree Path Sum](data_structures/binary_tree/binary_tree_path_sum.py)
    * [Binary Tree Traversals](data_structures/binary_tree/binary_tree_traversals.py)
    * [Diff Views Of Binary Tree](data_structures/binary_tree/diff_views_of_binary_tree.py)
    * [Distribute Coins](data_structures/binary_tree/distribute_coins.py)
    * [Fenwick Tree](data_structures/binary_tree/fenwick_tree.py)
    * [Inorder Tree Traversal 2022](data_structures/binary_tree/inorder_tree_traversal_2022.py)
    * [Is Bst](data_structures/binary_tree/is_bst.py)
    * [Lazy Segment Tree](data_structures/binary_tree/lazy_segment_tree.py)
    * [Lowest Common Ancestor](data_structures/binary_tree/lowest_common_ancestor.py)
    * [Maximum Fenwick Tree](data_structures/binary_tree/maximum_fenwick_tree.py)
    * [Merge Two Binary Trees](data_structures/binary_tree/merge_two_binary_trees.py)
    * [Non Recursive Segment Tree](data_structures/binary_tree/non_recursive_segment_tree.py)
    * [Number Of Possible Binary Trees](data_structures/binary_tree/number_of_possible_binary_trees.py)
    * [Red Black Tree](data_structures/binary_tree/red_black_tree.py)
    * [Segment Tree](data_structures/binary_tree/segment_tree.py)
    * [Segment Tree Other](data_structures/binary_tree/segment_tree_other.py)
    * [Treap](data_structures/binary_tree/treap.py)
    * [Wavelet Tree](data_structures/binary_tree/wavelet_tree.py)
  * Disjoint Set
    * [Alternate Disjoint Set](data_structures/disjoint_set/alternate_disjoint_set.py)
    * [Disjoint Set](data_structures/disjoint_set/disjoint_set.py)
  * Hashing
    * [Double Hash](data_structures/hashing/double_hash.py)
    * [Hash Table](data_structures/hashing/hash_table.py)
    * [Hash Table With Linked List](data_structures/hashing/hash_table_with_linked_list.py)
    * Number Theory
      * [Prime Numbers](data_structures/hashing/number_theory/prime_numbers.py)
    * [Quadratic Probing](data_structures/hashing/quadratic_probing.py)
  * Heap
    * [Binomial Heap](data_structures/heap/binomial_heap.py)
    * [Heap](data_structures/heap/heap.py)
    * [Heap Generic](data_structures/heap/heap_generic.py)
    * [Max Heap](data_structures/heap/max_heap.py)
    * [Min Heap](data_structures/heap/min_heap.py)
    * [Randomized Heap](data_structures/heap/randomized_heap.py)
    * [Skew Heap](data_structures/heap/skew_heap.py)
  * Linked List
    * [Circular Linked List](data_structures/linked_list/circular_linked_list.py)
    * [Deque Doubly](data_structures/linked_list/deque_doubly.py)
    * [Doubly Linked List](data_structures/linked_list/doubly_linked_list.py)
    * [Doubly Linked List Two](data_structures/linked_list/doubly_linked_list_two.py)
    * [From Sequence](data_structures/linked_list/from_sequence.py)
    * [Has Loop](data_structures/linked_list/has_loop.py)
    * [Is Palindrome](data_structures/linked_list/is_palindrome.py)
    * [Merge Two Lists](data_structures/linked_list/merge_two_lists.py)
    * [Middle Element Of Linked List](data_structures/linked_list/middle_element_of_linked_list.py)
    * [Print Reverse](data_structures/linked_list/print_reverse.py)
    * [Singly Linked List](data_structures/linked_list/singly_linked_list.py)
    * [Skip List](data_structures/linked_list/skip_list.py)
    * [Swap Nodes](data_structures/linked_list/swap_nodes.py)
  * Queue
    * [Circular Queue](data_structures/queue/circular_queue.py)
    * [Circular Queue Linked List](data_structures/queue/circular_queue_linked_list.py)
    * [Double Ended Queue](data_structures/queue/double_ended_queue.py)
    * [Linked Queue](data_structures/queue/linked_queue.py)
    * [Priority Queue Using List](data_structures/queue/priority_queue_using_list.py)
    * [Queue On List](data_structures/queue/queue_on_list.py)
    * [Queue On Pseudo Stack](data_structures/queue/queue_on_pseudo_stack.py)
  * Stacks
    * [Balanced Parentheses](data_structures/stacks/balanced_parentheses.py)
    * [Dijkstras Two Stack Algorithm](data_structures/stacks/dijkstras_two_stack_algorithm.py)
    * [Evaluate Postfix Notations](data_structures/stacks/evaluate_postfix_notations.py)
    * [Infix To Postfix Conversion](data_structures/stacks/infix_to_postfix_conversion.py)
    * [Infix To Prefix Conversion](data_structures/stacks/infix_to_prefix_conversion.py)
    * [Next Greater Element](data_structures/stacks/next_greater_element.py)
    * [Postfix Evaluation](data_structures/stacks/postfix_evaluation.py)
    * [Prefix Evaluation](data_structures/stacks/prefix_evaluation.py)
    * [Stack](data_structures/stacks/stack.py)
    * [Stack With Doubly Linked List](data_structures/stacks/stack_with_doubly_linked_list.py)
    * [Stack With Singly Linked List](data_structures/stacks/stack_with_singly_linked_list.py)
    * [Stock Span Problem](data_structures/stacks/stock_span_problem.py)
  * Trie
    * [Radix Tree](data_structures/trie/radix_tree.py)
    * [Trie](data_structures/trie/trie.py)

## Digital Image Processing
  * [Change Brightness](digital_image_processing/change_brightness.py)
  * [Change Contrast](digital_image_processing/change_contrast.py)
  * [Convert To Negative](digital_image_processing/convert_to_negative.py)
  * Dithering
    * [Burkes](digital_image_processing/dithering/burkes.py)
  * Edge Detection
    * [Canny](digital_image_processing/edge_detection/canny.py)
  * Filters
    * [Bilateral Filter](digital_image_processing/filters/bilateral_filter.py)
    * [Convolve](digital_image_processing/filters/convolve.py)
    * [Gabor Filter](digital_image_processing/filters/gabor_filter.py)
    * [Gaussian Filter](digital_image_processing/filters/gaussian_filter.py)
    * [Local Binary Pattern](digital_image_processing/filters/local_binary_pattern.py)
    * [Median Filter](digital_image_processing/filters/median_filter.py)
    * [Sobel Filter](digital_image_processing/filters/sobel_filter.py)
  * Histogram Equalization
    * [Histogram Stretch](digital_image_processing/histogram_equalization/histogram_stretch.py)
  * [Index Calculation](digital_image_processing/index_calculation.py)
  * Morphological Operations
    * [Dilation Operation](digital_image_processing/morphological_operations/dilation_operation.py)
    * [Erosion Operation](digital_image_processing/morphological_operations/erosion_operation.py)
  * Resize
    * [Resize](digital_image_processing/resize/resize.py)
  * Rotation
    * [Rotation](digital_image_processing/rotation/rotation.py)
  * [Sepia](digital_image_processing/sepia.py)
  * [Test Digital Image Processing](digital_image_processing/test_digital_image_processing.py)

## Divide And Conquer
  * [Closest Pair Of Points](divide_and_conquer/closest_pair_of_points.py)
  * [Convex Hull](divide_and_conquer/convex_hull.py)
  * [Heaps Algorithm](divide_and_conquer/heaps_algorithm.py)
  * [Heaps Algorithm Iterative](divide_and_conquer/heaps_algorithm_iterative.py)
  * [Inversions](divide_and_conquer/inversions.py)
  * [Kth Order Statistic](divide_and_conquer/kth_order_statistic.py)
  * [Max Difference Pair](divide_and_conquer/max_difference_pair.py)
  * [Max Subarray Sum](divide_and_conquer/max_subarray_sum.py)
  * [Mergesort](divide_and_conquer/mergesort.py)
  * [Peak](divide_and_conquer/peak.py)
  * [Power](divide_and_conquer/power.py)
  * [Strassen Matrix Multiplication](divide_and_conquer/strassen_matrix_multiplication.py)

## Dynamic Programming
  * [Abbreviation](dynamic_programming/abbreviation.py)
  * [All Construct](dynamic_programming/all_construct.py)
  * [Bitmask](dynamic_programming/bitmask.py)
  * [Catalan Numbers](dynamic_programming/catalan_numbers.py)
  * [Climbing Stairs](dynamic_programming/climbing_stairs.py)
  * [Combination Sum Iv](dynamic_programming/combination_sum_iv.py)
  * [Edit Distance](dynamic_programming/edit_distance.py)
  * [Factorial](dynamic_programming/factorial.py)
  * [Fast Fibonacci](dynamic_programming/fast_fibonacci.py)
  * [Fibonacci](dynamic_programming/fibonacci.py)
  * [Fizz Buzz](dynamic_programming/fizz_buzz.py)
  * [Floyd Warshall](dynamic_programming/floyd_warshall.py)
  * [Integer Partition](dynamic_programming/integer_partition.py)
  * [Iterating Through Submasks](dynamic_programming/iterating_through_submasks.py)
  * [Knapsack](dynamic_programming/knapsack.py)
  * [Longest Common Subsequence](dynamic_programming/longest_common_subsequence.py)
  * [Longest Common Substring](dynamic_programming/longest_common_substring.py)
  * [Longest Increasing Subsequence](dynamic_programming/longest_increasing_subsequence.py)
  * [Longest Increasing Subsequence O(Nlogn)](dynamic_programming/longest_increasing_subsequence_o(nlogn).py)
  * [Longest Sub Array](dynamic_programming/longest_sub_array.py)
  * [Matrix Chain Order](dynamic_programming/matrix_chain_order.py)
  * [Max Non Adjacent Sum](dynamic_programming/max_non_adjacent_sum.py)
  * [Max Sub Array](dynamic_programming/max_sub_array.py)
  * [Max Sum Contiguous Subsequence](dynamic_programming/max_sum_contiguous_subsequence.py)
  * [Min Distance Up Bottom](dynamic_programming/min_distance_up_bottom.py)
  * [Minimum Coin Change](dynamic_programming/minimum_coin_change.py)
  * [Minimum Cost Path](dynamic_programming/minimum_cost_path.py)
  * [Minimum Partition](dynamic_programming/minimum_partition.py)
  * [Minimum Squares To Represent A Number](dynamic_programming/minimum_squares_to_represent_a_number.py)
  * [Minimum Steps To One](dynamic_programming/minimum_steps_to_one.py)
  * [Minimum Tickets Cost](dynamic_programming/minimum_tickets_cost.py)
  * [Optimal Binary Search Tree](dynamic_programming/optimal_binary_search_tree.py)
  * [Palindrome Partitioning](dynamic_programming/palindrome_partitioning.py)
  * [Rod Cutting](dynamic_programming/rod_cutting.py)
  * [Subset Generation](dynamic_programming/subset_generation.py)
  * [Sum Of Subset](dynamic_programming/sum_of_subset.py)
  * [Viterbi](dynamic_programming/viterbi.py)

## Electronics
  * [Builtin Voltage](electronics/builtin_voltage.py)
  * [Carrier Concentration](electronics/carrier_concentration.py)
  * [Coulombs Law](electronics/coulombs_law.py)
  * [Electric Conductivity](electronics/electric_conductivity.py)
  * [Electric Power](electronics/electric_power.py)
  * [Electrical Impedance](electronics/electrical_impedance.py)
  * [Ind Reactance](electronics/ind_reactance.py)
  * [Ohms Law](electronics/ohms_law.py)
  * [Resistor Equivalence](electronics/resistor_equivalence.py)
  * [Resonant Frequency](electronics/resonant_frequency.py)

## File Transfer
  * [Receive File](file_transfer/receive_file.py)
  * [Send File](file_transfer/send_file.py)
  * Tests
    * [Test Send File](file_transfer/tests/test_send_file.py)

## Financial
  * [Equated Monthly Installments](financial/equated_monthly_installments.py)
  * [Interest](financial/interest.py)
  * [Price Plus Tax](financial/price_plus_tax.py)

## Fractals
  * [Julia Sets](fractals/julia_sets.py)
  * [Koch Snowflake](fractals/koch_snowflake.py)
  * [Mandelbrot](fractals/mandelbrot.py)
  * [Sierpinski Triangle](fractals/sierpinski_triangle.py)

## Fuzzy Logic
  * [Fuzzy Operations](fuzzy_logic/fuzzy_operations.py)

## Genetic Algorithm
  * [Basic String](genetic_algorithm/basic_string.py)

## Geodesy
  * [Haversine Distance](geodesy/haversine_distance.py)
  * [Lamberts Ellipsoidal Distance](geodesy/lamberts_ellipsoidal_distance.py)

## Graphics
  * [Bezier Curve](graphics/bezier_curve.py)
  * [Vector3 For 2D Rendering](graphics/vector3_for_2d_rendering.py)

## Graphs
  * [A Star](graphs/a_star.py)
  * [Articulation Points](graphs/articulation_points.py)
  * [Basic Graphs](graphs/basic_graphs.py)
  * [Bellman Ford](graphs/bellman_ford.py)
  * [Bidirectional A Star](graphs/bidirectional_a_star.py)
  * [Bidirectional Breadth First Search](graphs/bidirectional_breadth_first_search.py)
  * [Boruvka](graphs/boruvka.py)
  * [Breadth First Search](graphs/breadth_first_search.py)
  * [Breadth First Search 2](graphs/breadth_first_search_2.py)
  * [Breadth First Search Shortest Path](graphs/breadth_first_search_shortest_path.py)
  * [Breadth First Search Shortest Path 2](graphs/breadth_first_search_shortest_path_2.py)
  * [Breadth First Search Zero One Shortest Path](graphs/breadth_first_search_zero_one_shortest_path.py)
  * [Check Bipartite Graph Bfs](graphs/check_bipartite_graph_bfs.py)
  * [Check Bipartite Graph Dfs](graphs/check_bipartite_graph_dfs.py)
  * [Check Cycle](graphs/check_cycle.py)
  * [Connected Components](graphs/connected_components.py)
  * [Depth First Search](graphs/depth_first_search.py)
  * [Depth First Search 2](graphs/depth_first_search_2.py)
  * [Dijkstra](graphs/dijkstra.py)
  * [Dijkstra 2](graphs/dijkstra_2.py)
  * [Dijkstra Algorithm](graphs/dijkstra_algorithm.py)
  * [Dijkstra Alternate](graphs/dijkstra_alternate.py)
  * [Dinic](graphs/dinic.py)
  * [Directed And Undirected (Weighted) Graph](graphs/directed_and_undirected_(weighted)_graph.py)
  * [Edmonds Karp Multiple Source And Sink](graphs/edmonds_karp_multiple_source_and_sink.py)
  * [Eulerian Path And Circuit For Undirected Graph](graphs/eulerian_path_and_circuit_for_undirected_graph.py)
  * [Even Tree](graphs/even_tree.py)
  * [Finding Bridges](graphs/finding_bridges.py)
  * [Frequent Pattern Graph Miner](graphs/frequent_pattern_graph_miner.py)
  * [G Topological Sort](graphs/g_topological_sort.py)
  * [Gale Shapley Bigraph](graphs/gale_shapley_bigraph.py)
  * [Graph List](graphs/graph_list.py)
  * [Graph Matrix](graphs/graph_matrix.py)
  * [Graphs Floyd Warshall](graphs/graphs_floyd_warshall.py)
  * [Greedy Best First](graphs/greedy_best_first.py)
  * [Greedy Min Vertex Cover](graphs/greedy_min_vertex_cover.py)
  * [Kahns Algorithm Long](graphs/kahns_algorithm_long.py)
  * [Kahns Algorithm Topo](graphs/kahns_algorithm_topo.py)
  * [Karger](graphs/karger.py)
  * [Markov Chain](graphs/markov_chain.py)
  * [Matching Min Vertex Cover](graphs/matching_min_vertex_cover.py)
  * [Minimum Path Sum](graphs/minimum_path_sum.py)
  * [Minimum Spanning Tree Boruvka](graphs/minimum_spanning_tree_boruvka.py)
  * [Minimum Spanning Tree Kruskal](graphs/minimum_spanning_tree_kruskal.py)
  * [Minimum Spanning Tree Kruskal2](graphs/minimum_spanning_tree_kruskal2.py)
  * [Minimum Spanning Tree Prims](graphs/minimum_spanning_tree_prims.py)
  * [Minimum Spanning Tree Prims2](graphs/minimum_spanning_tree_prims2.py)
  * [Multi Heuristic Astar](graphs/multi_heuristic_astar.py)
  * [Page Rank](graphs/page_rank.py)
  * [Prim](graphs/prim.py)
  * [Random Graph Generator](graphs/random_graph_generator.py)
  * [Scc Kosaraju](graphs/scc_kosaraju.py)
  * [Strongly Connected Components](graphs/strongly_connected_components.py)
  * [Tarjans Scc](graphs/tarjans_scc.py)
  * Tests
    * [Test Min Spanning Tree Kruskal](graphs/tests/test_min_spanning_tree_kruskal.py)
    * [Test Min Spanning Tree Prim](graphs/tests/test_min_spanning_tree_prim.py)

## Greedy Methods
  * [Fractional Knapsack](greedy_methods/fractional_knapsack.py)
  * [Fractional Knapsack 2](greedy_methods/fractional_knapsack_2.py)
  * [Optimal Merge Pattern](greedy_methods/optimal_merge_pattern.py)

## Hashes
  * [Adler32](hashes/adler32.py)
  * [Chaos Machine](hashes/chaos_machine.py)
  * [Djb2](hashes/djb2.py)
  * [Elf](hashes/elf.py)
  * [Enigma Machine](hashes/enigma_machine.py)
  * [Hamming Code](hashes/hamming_code.py)
  * [Luhn](hashes/luhn.py)
  * [Md5](hashes/md5.py)
  * [Sdbm](hashes/sdbm.py)
  * [Sha1](hashes/sha1.py)
  * [Sha256](hashes/sha256.py)

## Knapsack
  * [Greedy Knapsack](knapsack/greedy_knapsack.py)
  * [Knapsack](knapsack/knapsack.py)
  * [Recursive Approach Knapsack](knapsack/recursive_approach_knapsack.py)
  * Tests
    * [Test Greedy Knapsack](knapsack/tests/test_greedy_knapsack.py)
    * [Test Knapsack](knapsack/tests/test_knapsack.py)

## Linear Algebra
  * Src
    * [Conjugate Gradient](linear_algebra/src/conjugate_gradient.py)
    * [Lib](linear_algebra/src/lib.py)
    * [Polynom For Points](linear_algebra/src/polynom_for_points.py)
    * [Power Iteration](linear_algebra/src/power_iteration.py)
    * [Rayleigh Quotient](linear_algebra/src/rayleigh_quotient.py)
    * [Schur Complement](linear_algebra/src/schur_complement.py)
    * [Test Linear Algebra](linear_algebra/src/test_linear_algebra.py)
    * [Transformations 2D](linear_algebra/src/transformations_2d.py)

## Machine Learning
  * [Astar](machine_learning/astar.py)
  * [Data Transformations](machine_learning/data_transformations.py)
  * [Decision Tree](machine_learning/decision_tree.py)
  * Forecasting
    * [Run](machine_learning/forecasting/run.py)
  * [Gaussian Naive Bayes](machine_learning/gaussian_naive_bayes.py)
  * [Gradient Boosting Regressor](machine_learning/gradient_boosting_regressor.py)
  * [Gradient Descent](machine_learning/gradient_descent.py)
  * [K Means Clust](machine_learning/k_means_clust.py)
  * [K Nearest Neighbours](machine_learning/k_nearest_neighbours.py)
  * [Knn Sklearn](machine_learning/knn_sklearn.py)
  * [Linear Discriminant Analysis](machine_learning/linear_discriminant_analysis.py)
  * [Linear Regression](machine_learning/linear_regression.py)
  * Local Weighted Learning
    * [Local Weighted Learning](machine_learning/local_weighted_learning/local_weighted_learning.py)
  * [Logistic Regression](machine_learning/logistic_regression.py)
  * Lstm
    * [Lstm Prediction](machine_learning/lstm/lstm_prediction.py)
  * [Multilayer Perceptron Classifier](machine_learning/multilayer_perceptron_classifier.py)
  * [Polymonial Regression](machine_learning/polymonial_regression.py)
  * [Random Forest Classifier](machine_learning/random_forest_classifier.py)
  * [Random Forest Regressor](machine_learning/random_forest_regressor.py)
  * [Scoring Functions](machine_learning/scoring_functions.py)
  * [Self Organizing Map](machine_learning/self_organizing_map.py)
  * [Sequential Minimum Optimization](machine_learning/sequential_minimum_optimization.py)
  * [Similarity Search](machine_learning/similarity_search.py)
  * [Support Vector Machines](machine_learning/support_vector_machines.py)
  * [Word Frequency Functions](machine_learning/word_frequency_functions.py)
  * [Xgboost Classifier](machine_learning/xgboost_classifier.py)
  * [Xgboost Regressor](machine_learning/xgboost_regressor.py)

## Maths
  * [3N Plus 1](maths/3n_plus_1.py)
  * [Abs](maths/abs.py)
  * [Add](maths/add.py)
  * [Addition Without Arithmetic](maths/addition_without_arithmetic.py)
  * [Aliquot Sum](maths/aliquot_sum.py)
  * [Allocation Number](maths/allocation_number.py)
  * [Arc Length](maths/arc_length.py)
  * [Area](maths/area.py)
  * [Area Under Curve](maths/area_under_curve.py)
  * [Armstrong Numbers](maths/armstrong_numbers.py)
  * [Automorphic Number](maths/automorphic_number.py)
  * [Average Absolute Deviation](maths/average_absolute_deviation.py)
  * [Average Mean](maths/average_mean.py)
  * [Average Median](maths/average_median.py)
  * [Average Mode](maths/average_mode.py)
  * [Bailey Borwein Plouffe](maths/bailey_borwein_plouffe.py)
  * [Basic Maths](maths/basic_maths.py)
  * [Binary Exp Mod](maths/binary_exp_mod.py)
  * [Binary Exponentiation](maths/binary_exponentiation.py)
  * [Binary Exponentiation 2](maths/binary_exponentiation_2.py)
  * [Binary Exponentiation 3](maths/binary_exponentiation_3.py)
  * [Binomial Coefficient](maths/binomial_coefficient.py)
  * [Binomial Distribution](maths/binomial_distribution.py)
  * [Bisection](maths/bisection.py)
  * [Carmichael Number](maths/carmichael_number.py)
  * [Catalan Number](maths/catalan_number.py)
  * [Ceil](maths/ceil.py)
  * [Check Polygon](maths/check_polygon.py)
  * [Chudnovsky Algorithm](maths/chudnovsky_algorithm.py)
  * [Collatz Sequence](maths/collatz_sequence.py)
  * [Combinations](maths/combinations.py)
  * [Decimal Isolate](maths/decimal_isolate.py)
  * [Dodecahedron](maths/dodecahedron.py)
  * [Double Factorial Iterative](maths/double_factorial_iterative.py)
  * [Double Factorial Recursive](maths/double_factorial_recursive.py)
  * [Entropy](maths/entropy.py)
  * [Euclidean Distance](maths/euclidean_distance.py)
  * [Euclidean Gcd](maths/euclidean_gcd.py)
  * [Euler Method](maths/euler_method.py)
  * [Euler Modified](maths/euler_modified.py)
  * [Eulers Totient](maths/eulers_totient.py)
  * [Extended Euclidean Algorithm](maths/extended_euclidean_algorithm.py)
  * [Factorial Iterative](maths/factorial_iterative.py)
  * [Factorial Recursive](maths/factorial_recursive.py)
  * [Factors](maths/factors.py)
  * [Fermat Little Theorem](maths/fermat_little_theorem.py)
  * [Fibonacci](maths/fibonacci.py)
  * [Find Max](maths/find_max.py)
  * [Find Max Recursion](maths/find_max_recursion.py)
  * [Find Min](maths/find_min.py)
  * [Find Min Recursion](maths/find_min_recursion.py)
  * [Floor](maths/floor.py)
  * [Gamma](maths/gamma.py)
  * [Gamma Recursive](maths/gamma_recursive.py)
  * [Gaussian](maths/gaussian.py)
  * [Gaussian Error Linear Unit](maths/gaussian_error_linear_unit.py)
  * [Greatest Common Divisor](maths/greatest_common_divisor.py)
  * [Greedy Coin Change](maths/greedy_coin_change.py)
  * [Hamming Numbers](maths/hamming_numbers.py)
  * [Hardy Ramanujanalgo](maths/hardy_ramanujanalgo.py)
  * [Integration By Simpson Approx](maths/integration_by_simpson_approx.py)
  * [Is Ip V4 Address Valid](maths/is_ip_v4_address_valid.py)
  * [Is Square Free](maths/is_square_free.py)
  * [Jaccard Similarity](maths/jaccard_similarity.py)
  * [Kadanes](maths/kadanes.py)
  * [Karatsuba](maths/karatsuba.py)
  * [Krishnamurthy Number](maths/krishnamurthy_number.py)
  * [Kth Lexicographic Permutation](maths/kth_lexicographic_permutation.py)
  * [Largest Of Very Large Numbers](maths/largest_of_very_large_numbers.py)
  * [Largest Subarray Sum](maths/largest_subarray_sum.py)
  * [Least Common Multiple](maths/least_common_multiple.py)
  * [Line Length](maths/line_length.py)
  * [Liouville Lambda](maths/liouville_lambda.py)
  * [Lucas Lehmer Primality Test](maths/lucas_lehmer_primality_test.py)
  * [Lucas Series](maths/lucas_series.py)
  * [Maclaurin Series](maths/maclaurin_series.py)
  * [Manhattan Distance](maths/manhattan_distance.py)
  * [Matrix Exponentiation](maths/matrix_exponentiation.py)
  * [Max Sum Sliding Window](maths/max_sum_sliding_window.py)
  * [Median Of Two Arrays](maths/median_of_two_arrays.py)
  * [Miller Rabin](maths/miller_rabin.py)
  * [Mobius Function](maths/mobius_function.py)
  * [Modular Exponential](maths/modular_exponential.py)
  * [Monte Carlo](maths/monte_carlo.py)
  * [Monte Carlo Dice](maths/monte_carlo_dice.py)
  * [Nevilles Method](maths/nevilles_method.py)
  * [Newton Raphson](maths/newton_raphson.py)
  * [Number Of Digits](maths/number_of_digits.py)
  * [Numerical Integration](maths/numerical_integration.py)
  * [Perfect Cube](maths/perfect_cube.py)
  * [Perfect Number](maths/perfect_number.py)
  * [Perfect Square](maths/perfect_square.py)
  * [Persistence](maths/persistence.py)
  * [Pi Monte Carlo Estimation](maths/pi_monte_carlo_estimation.py)
  * [Points Are Collinear 3D](maths/points_are_collinear_3d.py)
  * [Pollard Rho](maths/pollard_rho.py)
  * [Polynomial Evaluation](maths/polynomial_evaluation.py)
  * Polynomials
    * [Single Indeterminate Operations](maths/polynomials/single_indeterminate_operations.py)
  * [Power Using Recursion](maths/power_using_recursion.py)
  * [Prime Check](maths/prime_check.py)
  * [Prime Factors](maths/prime_factors.py)
  * [Prime Numbers](maths/prime_numbers.py)
  * [Prime Sieve Eratosthenes](maths/prime_sieve_eratosthenes.py)
  * [Primelib](maths/primelib.py)
  * [Print Multiplication Table](maths/print_multiplication_table.py)
  * [Pronic Number](maths/pronic_number.py)
  * [Proth Number](maths/proth_number.py)
  * [Pythagoras](maths/pythagoras.py)
  * [Qr Decomposition](maths/qr_decomposition.py)
  * [Quadratic Equations Complex Numbers](maths/quadratic_equations_complex_numbers.py)
  * [Radians](maths/radians.py)
  * [Radix2 Fft](maths/radix2_fft.py)
  * [Relu](maths/relu.py)
  * [Runge Kutta](maths/runge_kutta.py)
  * [Segmented Sieve](maths/segmented_sieve.py)
  * Series
    * [Arithmetic](maths/series/arithmetic.py)
    * [Geometric](maths/series/geometric.py)
    * [Geometric Series](maths/series/geometric_series.py)
    * [Harmonic](maths/series/harmonic.py)
    * [Harmonic Series](maths/series/harmonic_series.py)
    * [Hexagonal Numbers](maths/series/hexagonal_numbers.py)
    * [P Series](maths/series/p_series.py)
  * [Sieve Of Eratosthenes](maths/sieve_of_eratosthenes.py)
  * [Sigmoid](maths/sigmoid.py)
  * [Sigmoid Linear
Unit](maths/sigmoid_linear_unit.py) * [Signum](maths/signum.py) * [Simpson Rule](maths/simpson_rule.py) * [Sin](maths/sin.py) * [Sock Merchant](maths/sock_merchant.py) * [Softmax](maths/softmax.py) * [Square Root](maths/square_root.py) * [Sum Of Arithmetic Series](maths/sum_of_arithmetic_series.py) * [Sum Of Digits](maths/sum_of_digits.py) * [Sum Of Geometric Progression](maths/sum_of_geometric_progression.py) * [Sum Of Harmonic Series](maths/sum_of_harmonic_series.py) * [Sumset](maths/sumset.py) * [Sylvester Sequence](maths/sylvester_sequence.py) * [Test Prime Check](maths/test_prime_check.py) * [Trapezoidal Rule](maths/trapezoidal_rule.py) * [Triplet Sum](maths/triplet_sum.py) * [Twin Prime](maths/twin_prime.py) * [Two Pointer](maths/two_pointer.py) * [Two Sum](maths/two_sum.py) * [Ugly Numbers](maths/ugly_numbers.py) * [Volume](maths/volume.py) * [Weird Number](maths/weird_number.py) * [Zellers Congruence](maths/zellers_congruence.py) ## Matrix * [Binary Search Matrix](matrix/binary_search_matrix.py) * [Count Islands In Matrix](matrix/count_islands_in_matrix.py) * [Count Paths](matrix/count_paths.py) * [Cramers Rule 2X2](matrix/cramers_rule_2x2.py) * [Inverse Of Matrix](matrix/inverse_of_matrix.py) * [Largest Square Area In Matrix](matrix/largest_square_area_in_matrix.py) * [Matrix Class](matrix/matrix_class.py) * [Matrix Operation](matrix/matrix_operation.py) * [Max Area Of Island](matrix/max_area_of_island.py) * [Nth Fibonacci Using Matrix Exponentiation](matrix/nth_fibonacci_using_matrix_exponentiation.py) * [Pascal Triangle](matrix/pascal_triangle.py) * [Rotate Matrix](matrix/rotate_matrix.py) * [Searching In Sorted Matrix](matrix/searching_in_sorted_matrix.py) * [Sherman Morrison](matrix/sherman_morrison.py) * [Spiral Print](matrix/spiral_print.py) * Tests * [Test Matrix Operation](matrix/tests/test_matrix_operation.py) ## Networking Flow * [Ford Fulkerson](networking_flow/ford_fulkerson.py) * [Minimum Cut](networking_flow/minimum_cut.py) ## Neural Network * [2 Hidden Layers Neural Network](neural_network/2_hidden_layers_neural_network.py) * [Back Propagation Neural Network](neural_network/back_propagation_neural_network.py) * [Convolution Neural Network](neural_network/convolution_neural_network.py) * [Perceptron](neural_network/perceptron.py) * [Simple Neural Network](neural_network/simple_neural_network.py) ## Other * [Activity Selection](other/activity_selection.py) * [Alternative List Arrange](other/alternative_list_arrange.py) * [Davisb Putnamb Logemannb Loveland](other/davisb_putnamb_logemannb_loveland.py) * [Dijkstra Bankers Algorithm](other/dijkstra_bankers_algorithm.py) * [Doomsday](other/doomsday.py) * [Fischer Yates Shuffle](other/fischer_yates_shuffle.py) * [Gauss Easter](other/gauss_easter.py) * [Graham Scan](other/graham_scan.py) * [Greedy](other/greedy.py) * [Least Recently Used](other/least_recently_used.py) * [Lfu Cache](other/lfu_cache.py) * [Linear Congruential Generator](other/linear_congruential_generator.py) * [Lru Cache](other/lru_cache.py) * [Magicdiamondpattern](other/magicdiamondpattern.py) * [Maximum Subarray](other/maximum_subarray.py) * [Nested Brackets](other/nested_brackets.py) * [Password](other/password.py) * [Quine](other/quine.py) * [Scoring Algorithm](other/scoring_algorithm.py) * [Sdes](other/sdes.py) * [Tower Of Hanoi](other/tower_of_hanoi.py) ## Physics * [Archimedes Principle](physics/archimedes_principle.py) * [Casimir Effect](physics/casimir_effect.py) * [Centripetal Force](physics/centripetal_force.py) * [Horizontal Projectile 
Motion](physics/horizontal_projectile_motion.py) * [Hubble Parameter](physics/hubble_parameter.py) * [Ideal Gas Law](physics/ideal_gas_law.py) * [Kinetic Energy](physics/kinetic_energy.py) * [Lorentz Transformation Four Vector](physics/lorentz_transformation_four_vector.py) * [Malus Law](physics/malus_law.py) * [N Body Simulation](physics/n_body_simulation.py) * [Newtons Law Of Gravitation](physics/newtons_law_of_gravitation.py) * [Newtons Second Law Of Motion](physics/newtons_second_law_of_motion.py) * [Potential Energy](physics/potential_energy.py) * [Rms Speed Of Molecule](physics/rms_speed_of_molecule.py) * [Shear Stress](physics/shear_stress.py) ## Project Euler * Problem 001 * [Sol1](project_euler/problem_001/sol1.py) * [Sol2](project_euler/problem_001/sol2.py) * [Sol3](project_euler/problem_001/sol3.py) * [Sol4](project_euler/problem_001/sol4.py) * [Sol5](project_euler/problem_001/sol5.py) * [Sol6](project_euler/problem_001/sol6.py) * [Sol7](project_euler/problem_001/sol7.py) * Problem 002 * [Sol1](project_euler/problem_002/sol1.py) * [Sol2](project_euler/problem_002/sol2.py) * [Sol3](project_euler/problem_002/sol3.py) * [Sol4](project_euler/problem_002/sol4.py) * [Sol5](project_euler/problem_002/sol5.py) * Problem 003 * [Sol1](project_euler/problem_003/sol1.py) * [Sol2](project_euler/problem_003/sol2.py) * [Sol3](project_euler/problem_003/sol3.py) * Problem 004 * [Sol1](project_euler/problem_004/sol1.py) * [Sol2](project_euler/problem_004/sol2.py) * Problem 005 * [Sol1](project_euler/problem_005/sol1.py) * [Sol2](project_euler/problem_005/sol2.py) * Problem 006 * [Sol1](project_euler/problem_006/sol1.py) * [Sol2](project_euler/problem_006/sol2.py) * [Sol3](project_euler/problem_006/sol3.py) * [Sol4](project_euler/problem_006/sol4.py) * Problem 007 * [Sol1](project_euler/problem_007/sol1.py) * [Sol2](project_euler/problem_007/sol2.py) * [Sol3](project_euler/problem_007/sol3.py) * Problem 008 * [Sol1](project_euler/problem_008/sol1.py) * [Sol2](project_euler/problem_008/sol2.py) * [Sol3](project_euler/problem_008/sol3.py) * Problem 009 * [Sol1](project_euler/problem_009/sol1.py) * [Sol2](project_euler/problem_009/sol2.py) * [Sol3](project_euler/problem_009/sol3.py) * Problem 010 * [Sol1](project_euler/problem_010/sol1.py) * [Sol2](project_euler/problem_010/sol2.py) * [Sol3](project_euler/problem_010/sol3.py) * Problem 011 * [Sol1](project_euler/problem_011/sol1.py) * [Sol2](project_euler/problem_011/sol2.py) * Problem 012 * [Sol1](project_euler/problem_012/sol1.py) * [Sol2](project_euler/problem_012/sol2.py) * Problem 013 * [Sol1](project_euler/problem_013/sol1.py) * Problem 014 * [Sol1](project_euler/problem_014/sol1.py) * [Sol2](project_euler/problem_014/sol2.py) * Problem 015 * [Sol1](project_euler/problem_015/sol1.py) * Problem 016 * [Sol1](project_euler/problem_016/sol1.py) * [Sol2](project_euler/problem_016/sol2.py) * Problem 017 * [Sol1](project_euler/problem_017/sol1.py) * Problem 018 * [Solution](project_euler/problem_018/solution.py) * Problem 019 * [Sol1](project_euler/problem_019/sol1.py) * Problem 020 * [Sol1](project_euler/problem_020/sol1.py) * [Sol2](project_euler/problem_020/sol2.py) * [Sol3](project_euler/problem_020/sol3.py) * [Sol4](project_euler/problem_020/sol4.py) * Problem 021 * [Sol1](project_euler/problem_021/sol1.py) * Problem 022 * [Sol1](project_euler/problem_022/sol1.py) * [Sol2](project_euler/problem_022/sol2.py) * Problem 023 * [Sol1](project_euler/problem_023/sol1.py) * Problem 024 * [Sol1](project_euler/problem_024/sol1.py) * Problem 025 * 
[Sol1](project_euler/problem_025/sol1.py) * [Sol2](project_euler/problem_025/sol2.py) * [Sol3](project_euler/problem_025/sol3.py) * Problem 026 * [Sol1](project_euler/problem_026/sol1.py) * Problem 027 * [Sol1](project_euler/problem_027/sol1.py) * Problem 028 * [Sol1](project_euler/problem_028/sol1.py) * Problem 029 * [Sol1](project_euler/problem_029/sol1.py) * Problem 030 * [Sol1](project_euler/problem_030/sol1.py) * Problem 031 * [Sol1](project_euler/problem_031/sol1.py) * [Sol2](project_euler/problem_031/sol2.py) * Problem 032 * [Sol32](project_euler/problem_032/sol32.py) * Problem 033 * [Sol1](project_euler/problem_033/sol1.py) * Problem 034 * [Sol1](project_euler/problem_034/sol1.py) * Problem 035 * [Sol1](project_euler/problem_035/sol1.py) * Problem 036 * [Sol1](project_euler/problem_036/sol1.py) * Problem 037 * [Sol1](project_euler/problem_037/sol1.py) * Problem 038 * [Sol1](project_euler/problem_038/sol1.py) * Problem 039 * [Sol1](project_euler/problem_039/sol1.py) * Problem 040 * [Sol1](project_euler/problem_040/sol1.py) * Problem 041 * [Sol1](project_euler/problem_041/sol1.py) * Problem 042 * [Solution42](project_euler/problem_042/solution42.py) * Problem 043 * [Sol1](project_euler/problem_043/sol1.py) * Problem 044 * [Sol1](project_euler/problem_044/sol1.py) * Problem 045 * [Sol1](project_euler/problem_045/sol1.py) * Problem 046 * [Sol1](project_euler/problem_046/sol1.py) * Problem 047 * [Sol1](project_euler/problem_047/sol1.py) * Problem 048 * [Sol1](project_euler/problem_048/sol1.py) * Problem 049 * [Sol1](project_euler/problem_049/sol1.py) * Problem 050 * [Sol1](project_euler/problem_050/sol1.py) * Problem 051 * [Sol1](project_euler/problem_051/sol1.py) * Problem 052 * [Sol1](project_euler/problem_052/sol1.py) * Problem 053 * [Sol1](project_euler/problem_053/sol1.py) * Problem 054 * [Sol1](project_euler/problem_054/sol1.py) * [Test Poker Hand](project_euler/problem_054/test_poker_hand.py) * Problem 055 * [Sol1](project_euler/problem_055/sol1.py) * Problem 056 * [Sol1](project_euler/problem_056/sol1.py) * Problem 057 * [Sol1](project_euler/problem_057/sol1.py) * Problem 058 * [Sol1](project_euler/problem_058/sol1.py) * Problem 059 * [Sol1](project_euler/problem_059/sol1.py) * Problem 062 * [Sol1](project_euler/problem_062/sol1.py) * Problem 063 * [Sol1](project_euler/problem_063/sol1.py) * Problem 064 * [Sol1](project_euler/problem_064/sol1.py) * Problem 065 * [Sol1](project_euler/problem_065/sol1.py) * Problem 067 * [Sol1](project_euler/problem_067/sol1.py) * [Sol2](project_euler/problem_067/sol2.py) * Problem 068 * [Sol1](project_euler/problem_068/sol1.py) * Problem 069 * [Sol1](project_euler/problem_069/sol1.py) * Problem 070 * [Sol1](project_euler/problem_070/sol1.py) * Problem 071 * [Sol1](project_euler/problem_071/sol1.py) * Problem 072 * [Sol1](project_euler/problem_072/sol1.py) * [Sol2](project_euler/problem_072/sol2.py) * Problem 073 * [Sol1](project_euler/problem_073/sol1.py) * Problem 074 * [Sol1](project_euler/problem_074/sol1.py) * [Sol2](project_euler/problem_074/sol2.py) * Problem 075 * [Sol1](project_euler/problem_075/sol1.py) * Problem 076 * [Sol1](project_euler/problem_076/sol1.py) * Problem 077 * [Sol1](project_euler/problem_077/sol1.py) * Problem 078 * [Sol1](project_euler/problem_078/sol1.py) * Problem 080 * [Sol1](project_euler/problem_080/sol1.py) * Problem 081 * [Sol1](project_euler/problem_081/sol1.py) * Problem 085 * [Sol1](project_euler/problem_085/sol1.py) * Problem 086 * [Sol1](project_euler/problem_086/sol1.py) * Problem 087 * 
[Sol1](project_euler/problem_087/sol1.py) * Problem 089 * [Sol1](project_euler/problem_089/sol1.py) * Problem 091 * [Sol1](project_euler/problem_091/sol1.py) * Problem 092 * [Sol1](project_euler/problem_092/sol1.py) * Problem 097 * [Sol1](project_euler/problem_097/sol1.py) * Problem 099 * [Sol1](project_euler/problem_099/sol1.py) * Problem 101 * [Sol1](project_euler/problem_101/sol1.py) * Problem 102 * [Sol1](project_euler/problem_102/sol1.py) * Problem 104 * [Sol1](project_euler/problem_104/sol1.py) * Problem 107 * [Sol1](project_euler/problem_107/sol1.py) * Problem 109 * [Sol1](project_euler/problem_109/sol1.py) * Problem 112 * [Sol1](project_euler/problem_112/sol1.py) * Problem 113 * [Sol1](project_euler/problem_113/sol1.py) * Problem 114 * [Sol1](project_euler/problem_114/sol1.py) * Problem 115 * [Sol1](project_euler/problem_115/sol1.py) * Problem 116 * [Sol1](project_euler/problem_116/sol1.py) * Problem 119 * [Sol1](project_euler/problem_119/sol1.py) * Problem 120 * [Sol1](project_euler/problem_120/sol1.py) * Problem 121 * [Sol1](project_euler/problem_121/sol1.py) * Problem 123 * [Sol1](project_euler/problem_123/sol1.py) * Problem 125 * [Sol1](project_euler/problem_125/sol1.py) * Problem 129 * [Sol1](project_euler/problem_129/sol1.py) * Problem 135 * [Sol1](project_euler/problem_135/sol1.py) * Problem 144 * [Sol1](project_euler/problem_144/sol1.py) * Problem 145 * [Sol1](project_euler/problem_145/sol1.py) * Problem 173 * [Sol1](project_euler/problem_173/sol1.py) * Problem 174 * [Sol1](project_euler/problem_174/sol1.py) * Problem 180 * [Sol1](project_euler/problem_180/sol1.py) * Problem 188 * [Sol1](project_euler/problem_188/sol1.py) * Problem 191 * [Sol1](project_euler/problem_191/sol1.py) * Problem 203 * [Sol1](project_euler/problem_203/sol1.py) * Problem 205 * [Sol1](project_euler/problem_205/sol1.py) * Problem 206 * [Sol1](project_euler/problem_206/sol1.py) * Problem 207 * [Sol1](project_euler/problem_207/sol1.py) * Problem 234 * [Sol1](project_euler/problem_234/sol1.py) * Problem 301 * [Sol1](project_euler/problem_301/sol1.py) * Problem 493 * [Sol1](project_euler/problem_493/sol1.py) * Problem 551 * [Sol1](project_euler/problem_551/sol1.py) * Problem 587 * [Sol1](project_euler/problem_587/sol1.py) * Problem 686 * [Sol1](project_euler/problem_686/sol1.py) ## Quantum * [Bb84](quantum/bb84.py) * [Deutsch Jozsa](quantum/deutsch_jozsa.py) * [Half Adder](quantum/half_adder.py) * [Not Gate](quantum/not_gate.py) * [Q Fourier Transform](quantum/q_fourier_transform.py) * [Q Full Adder](quantum/q_full_adder.py) * [Quantum Entanglement](quantum/quantum_entanglement.py) * [Quantum Teleportation](quantum/quantum_teleportation.py) * [Ripple Adder Classic](quantum/ripple_adder_classic.py) * [Single Qubit Measure](quantum/single_qubit_measure.py) * [Superdense Coding](quantum/superdense_coding.py) ## Scheduling * [First Come First Served](scheduling/first_come_first_served.py) * [Highest Response Ratio Next](scheduling/highest_response_ratio_next.py) * [Job Sequencing With Deadline](scheduling/job_sequencing_with_deadline.py) * [Multi Level Feedback Queue](scheduling/multi_level_feedback_queue.py) * [Non Preemptive Shortest Job First](scheduling/non_preemptive_shortest_job_first.py) * [Round Robin](scheduling/round_robin.py) * [Shortest Job First](scheduling/shortest_job_first.py) ## Searches * [Binary Search](searches/binary_search.py) * [Binary Tree Traversal](searches/binary_tree_traversal.py) * [Double Linear Search](searches/double_linear_search.py) * [Double Linear Search 
Recursion](searches/double_linear_search_recursion.py) * [Fibonacci Search](searches/fibonacci_search.py) * [Hill Climbing](searches/hill_climbing.py) * [Interpolation Search](searches/interpolation_search.py) * [Jump Search](searches/jump_search.py) * [Linear Search](searches/linear_search.py) * [Quick Select](searches/quick_select.py) * [Sentinel Linear Search](searches/sentinel_linear_search.py) * [Simple Binary Search](searches/simple_binary_search.py) * [Simulated Annealing](searches/simulated_annealing.py) * [Tabu Search](searches/tabu_search.py) * [Ternary Search](searches/ternary_search.py) ## Sorts * [Bead Sort](sorts/bead_sort.py) * [Bitonic Sort](sorts/bitonic_sort.py) * [Bogo Sort](sorts/bogo_sort.py) * [Bubble Sort](sorts/bubble_sort.py) * [Bucket Sort](sorts/bucket_sort.py) * [Circle Sort](sorts/circle_sort.py) * [Cocktail Shaker Sort](sorts/cocktail_shaker_sort.py) * [Comb Sort](sorts/comb_sort.py) * [Counting Sort](sorts/counting_sort.py) * [Cycle Sort](sorts/cycle_sort.py) * [Double Sort](sorts/double_sort.py) * [Dutch National Flag Sort](sorts/dutch_national_flag_sort.py) * [Exchange Sort](sorts/exchange_sort.py) * [External Sort](sorts/external_sort.py) * [Gnome Sort](sorts/gnome_sort.py) * [Heap Sort](sorts/heap_sort.py) * [Insertion Sort](sorts/insertion_sort.py) * [Intro Sort](sorts/intro_sort.py) * [Iterative Merge Sort](sorts/iterative_merge_sort.py) * [Merge Insertion Sort](sorts/merge_insertion_sort.py) * [Merge Sort](sorts/merge_sort.py) * [Msd Radix Sort](sorts/msd_radix_sort.py) * [Natural Sort](sorts/natural_sort.py) * [Odd Even Sort](sorts/odd_even_sort.py) * [Odd Even Transposition Parallel](sorts/odd_even_transposition_parallel.py) * [Odd Even Transposition Single Threaded](sorts/odd_even_transposition_single_threaded.py) * [Pancake Sort](sorts/pancake_sort.py) * [Patience Sort](sorts/patience_sort.py) * [Pigeon Sort](sorts/pigeon_sort.py) * [Pigeonhole Sort](sorts/pigeonhole_sort.py) * [Quick Sort](sorts/quick_sort.py) * [Quick Sort 3 Partition](sorts/quick_sort_3_partition.py) * [Radix Sort](sorts/radix_sort.py) * [Random Normal Distribution Quicksort](sorts/random_normal_distribution_quicksort.py) * [Random Pivot Quick Sort](sorts/random_pivot_quick_sort.py) * [Recursive Bubble Sort](sorts/recursive_bubble_sort.py) * [Recursive Insertion Sort](sorts/recursive_insertion_sort.py) * [Recursive Mergesort Array](sorts/recursive_mergesort_array.py) * [Recursive Quick Sort](sorts/recursive_quick_sort.py) * [Selection Sort](sorts/selection_sort.py) * [Shell Sort](sorts/shell_sort.py) * [Shrink Shell Sort](sorts/shrink_shell_sort.py) * [Slowsort](sorts/slowsort.py) * [Stooge Sort](sorts/stooge_sort.py) * [Strand Sort](sorts/strand_sort.py) * [Tim Sort](sorts/tim_sort.py) * [Topological Sort](sorts/topological_sort.py) * [Tree Sort](sorts/tree_sort.py) * [Unknown Sort](sorts/unknown_sort.py) * [Wiggle Sort](sorts/wiggle_sort.py) ## Strings * [Aho Corasick](strings/aho_corasick.py) * [Alternative String Arrange](strings/alternative_string_arrange.py) * [Anagrams](strings/anagrams.py) * [Autocomplete Using Trie](strings/autocomplete_using_trie.py) * [Barcode Validator](strings/barcode_validator.py) * [Boyer Moore Search](strings/boyer_moore_search.py) * [Can String Be Rearranged As Palindrome](strings/can_string_be_rearranged_as_palindrome.py) * [Capitalize](strings/capitalize.py) * [Check Anagrams](strings/check_anagrams.py) * [Credit Card Validator](strings/credit_card_validator.py) * [Detecting English 
Programmatically](strings/detecting_english_programmatically.py) * [Dna](strings/dna.py) * [Frequency Finder](strings/frequency_finder.py) * [Hamming Distance](strings/hamming_distance.py) * [Indian Phone Validator](strings/indian_phone_validator.py) * [Is Contains Unique Chars](strings/is_contains_unique_chars.py) * [Is Isogram](strings/is_isogram.py) * [Is Palindrome](strings/is_palindrome.py) * [Is Pangram](strings/is_pangram.py) * [Is Spain National Id](strings/is_spain_national_id.py) * [Is Srilankan Phone Number](strings/is_srilankan_phone_number.py) * [Jaro Winkler](strings/jaro_winkler.py) * [Join](strings/join.py) * [Knuth Morris Pratt](strings/knuth_morris_pratt.py) * [Levenshtein Distance](strings/levenshtein_distance.py) * [Lower](strings/lower.py) * [Manacher](strings/manacher.py) * [Min Cost String Conversion](strings/min_cost_string_conversion.py) * [Naive String Search](strings/naive_string_search.py) * [Ngram](strings/ngram.py) * [Palindrome](strings/palindrome.py) * [Prefix Function](strings/prefix_function.py) * [Rabin Karp](strings/rabin_karp.py) * [Remove Duplicate](strings/remove_duplicate.py) * [Reverse Letters](strings/reverse_letters.py) * [Reverse Long Words](strings/reverse_long_words.py) * [Reverse Words](strings/reverse_words.py) * [Snake Case To Camel Pascal Case](strings/snake_case_to_camel_pascal_case.py) * [Split](strings/split.py) * [Text Justification](strings/text_justification.py) * [Upper](strings/upper.py) * [Wave](strings/wave.py) * [Wildcard Pattern Matching](strings/wildcard_pattern_matching.py) * [Word Occurrence](strings/word_occurrence.py) * [Word Patterns](strings/word_patterns.py) * [Z Function](strings/z_function.py) ## Web Programming * [Co2 Emission](web_programming/co2_emission.py) * [Convert Number To Words](web_programming/convert_number_to_words.py) * [Covid Stats Via Xpath](web_programming/covid_stats_via_xpath.py) * [Crawl Google Results](web_programming/crawl_google_results.py) * [Crawl Google Scholar Citation](web_programming/crawl_google_scholar_citation.py) * [Currency Converter](web_programming/currency_converter.py) * [Current Stock Price](web_programming/current_stock_price.py) * [Current Weather](web_programming/current_weather.py) * [Daily Horoscope](web_programming/daily_horoscope.py) * [Download Images From Google Query](web_programming/download_images_from_google_query.py) * [Emails From Url](web_programming/emails_from_url.py) * [Fetch Anime And Play](web_programming/fetch_anime_and_play.py) * [Fetch Bbc News](web_programming/fetch_bbc_news.py) * [Fetch Github Info](web_programming/fetch_github_info.py) * [Fetch Jobs](web_programming/fetch_jobs.py) * [Fetch Quotes](web_programming/fetch_quotes.py) * [Fetch Well Rx Price](web_programming/fetch_well_rx_price.py) * [Get Amazon Product Data](web_programming/get_amazon_product_data.py) * [Get Imdb Top 250 Movies Csv](web_programming/get_imdb_top_250_movies_csv.py) * [Get Imdbtop](web_programming/get_imdbtop.py) * [Get Top Billioners](web_programming/get_top_billioners.py) * [Get Top Hn Posts](web_programming/get_top_hn_posts.py) * [Get User Tweets](web_programming/get_user_tweets.py) * [Giphy](web_programming/giphy.py) * [Instagram Crawler](web_programming/instagram_crawler.py) * [Instagram Pic](web_programming/instagram_pic.py) * [Instagram Video](web_programming/instagram_video.py) * [Nasa Data](web_programming/nasa_data.py) * [Open Google Results](web_programming/open_google_results.py) * [Random Anime Character](web_programming/random_anime_character.py) * [Recaptcha 
Verification](web_programming/recaptcha_verification.py)
  * [Reddit](web_programming/reddit.py)
  * [Search Books By Isbn](web_programming/search_books_by_isbn.py)
  * [Slack Message](web_programming/slack_message.py)
  * [Test Fetch Github Info](web_programming/test_fetch_github_info.py)
  * [World Covid19 Stats](web_programming/world_covid19_stats.py)
## Arithmetic Analysis
  * [Bisection](arithmetic_analysis/bisection.py)
  * [Gaussian Elimination](arithmetic_analysis/gaussian_elimination.py)
  * [In Static Equilibrium](arithmetic_analysis/in_static_equilibrium.py)
  * [Intersection](arithmetic_analysis/intersection.py)
  * [Jacobi Iteration Method](arithmetic_analysis/jacobi_iteration_method.py)
  * [Lu Decomposition](arithmetic_analysis/lu_decomposition.py)
  * [Newton Forward Interpolation](arithmetic_analysis/newton_forward_interpolation.py)
  * [Newton Method](arithmetic_analysis/newton_method.py)
  * [Newton Raphson](arithmetic_analysis/newton_raphson.py)
  * [Newton Raphson New](arithmetic_analysis/newton_raphson_new.py)
  * [Secant Method](arithmetic_analysis/secant_method.py)

## Audio Filters
  * [Butterworth Filter](audio_filters/butterworth_filter.py)
  * [Equal Loudness Filter](audio_filters/equal_loudness_filter.py)
  * [Iir Filter](audio_filters/iir_filter.py)
  * [Show Response](audio_filters/show_response.py)

## Backtracking
  * [All Combinations](backtracking/all_combinations.py)
  * [All Permutations](backtracking/all_permutations.py)
  * [All Subsequences](backtracking/all_subsequences.py)
  * [Coloring](backtracking/coloring.py)
  * [Combination Sum](backtracking/combination_sum.py)
  * [Hamiltonian Cycle](backtracking/hamiltonian_cycle.py)
  * [Knight Tour](backtracking/knight_tour.py)
  * [Minimax](backtracking/minimax.py)
  * [Minmax](backtracking/minmax.py)
  * [N Queens](backtracking/n_queens.py)
  * [N Queens Math](backtracking/n_queens_math.py)
  * [Rat In Maze](backtracking/rat_in_maze.py)
  * [Sudoku](backtracking/sudoku.py)
  * [Sum Of Subsets](backtracking/sum_of_subsets.py)

## Bit Manipulation
  * [Binary And Operator](bit_manipulation/binary_and_operator.py)
  * [Binary Count Setbits](bit_manipulation/binary_count_setbits.py)
  * [Binary Count Trailing Zeros](bit_manipulation/binary_count_trailing_zeros.py)
  * [Binary Or Operator](bit_manipulation/binary_or_operator.py)
  * [Binary Shifts](bit_manipulation/binary_shifts.py)
  * [Binary Twos Complement](bit_manipulation/binary_twos_complement.py)
  * [Binary Xor Operator](bit_manipulation/binary_xor_operator.py)
  * [Count 1S Brian Kernighan Method](bit_manipulation/count_1s_brian_kernighan_method.py)
  * [Count Number Of One Bits](bit_manipulation/count_number_of_one_bits.py)
  * [Gray Code Sequence](bit_manipulation/gray_code_sequence.py)
  * [Highest Set Bit](bit_manipulation/highest_set_bit.py)
  * [Index Of Rightmost Set Bit](bit_manipulation/index_of_rightmost_set_bit.py)
  * [Is Even](bit_manipulation/is_even.py)
  * [Is Power Of Two](bit_manipulation/is_power_of_two.py)
  * [Reverse Bits](bit_manipulation/reverse_bits.py)
  * [Single Bit Manipulation Operations](bit_manipulation/single_bit_manipulation_operations.py)

## Blockchain
  * [Chinese Remainder Theorem](blockchain/chinese_remainder_theorem.py)
  * [Diophantine Equation](blockchain/diophantine_equation.py)
  * [Modular Division](blockchain/modular_division.py)

## Boolean Algebra
  * [And Gate](boolean_algebra/and_gate.py)
  * [Nand Gate](boolean_algebra/nand_gate.py)
  * [Norgate](boolean_algebra/norgate.py)
  * [Not Gate](boolean_algebra/not_gate.py)
  * [Or Gate](boolean_algebra/or_gate.py)
  * [Quine Mc Cluskey](boolean_algebra/quine_mc_cluskey.py)
  * [Xnor Gate](boolean_algebra/xnor_gate.py)
  * [Xor Gate](boolean_algebra/xor_gate.py)

## Cellular Automata
  * [Conways Game Of Life](cellular_automata/conways_game_of_life.py)
  * [Game Of Life](cellular_automata/game_of_life.py)
  * [Nagel Schrekenberg](cellular_automata/nagel_schrekenberg.py)
  * [One Dimensional](cellular_automata/one_dimensional.py)
## Ciphers
  * [A1Z26](ciphers/a1z26.py)
  * [Affine Cipher](ciphers/affine_cipher.py)
  * [Atbash](ciphers/atbash.py)
  * [Baconian Cipher](ciphers/baconian_cipher.py)
  * [Base16](ciphers/base16.py)
  * [Base32](ciphers/base32.py)
  * [Base64](ciphers/base64.py)
  * [Base85](ciphers/base85.py)
  * [Beaufort Cipher](ciphers/beaufort_cipher.py)
  * [Bifid](ciphers/bifid.py)
  * [Brute Force Caesar Cipher](ciphers/brute_force_caesar_cipher.py)
  * [Caesar Cipher](ciphers/caesar_cipher.py)
  * [Cryptomath Module](ciphers/cryptomath_module.py)
  * [Decrypt Caesar With Chi Squared](ciphers/decrypt_caesar_with_chi_squared.py)
  * [Deterministic Miller Rabin](ciphers/deterministic_miller_rabin.py)
  * [Diffie](ciphers/diffie.py)
  * [Diffie Hellman](ciphers/diffie_hellman.py)
  * [Elgamal Key Generator](ciphers/elgamal_key_generator.py)
  * [Enigma Machine2](ciphers/enigma_machine2.py)
  * [Hill Cipher](ciphers/hill_cipher.py)
  * [Mixed Keyword Cypher](ciphers/mixed_keyword_cypher.py)
  * [Mono Alphabetic Ciphers](ciphers/mono_alphabetic_ciphers.py)
  * [Morse Code](ciphers/morse_code.py)
  * [Onepad Cipher](ciphers/onepad_cipher.py)
  * [Playfair Cipher](ciphers/playfair_cipher.py)
  * [Polybius](ciphers/polybius.py)
  * [Porta Cipher](ciphers/porta_cipher.py)
  * [Rabin Miller](ciphers/rabin_miller.py)
  * [Rail Fence Cipher](ciphers/rail_fence_cipher.py)
  * [Rot13](ciphers/rot13.py)
  * [Rsa Cipher](ciphers/rsa_cipher.py)
  * [Rsa Factorization](ciphers/rsa_factorization.py)
  * [Rsa Key Generator](ciphers/rsa_key_generator.py)
  * [Shuffled Shift Cipher](ciphers/shuffled_shift_cipher.py)
  * [Simple Keyword Cypher](ciphers/simple_keyword_cypher.py)
  * [Simple Substitution Cipher](ciphers/simple_substitution_cipher.py)
  * [Trafid Cipher](ciphers/trafid_cipher.py)
  * [Transposition Cipher](ciphers/transposition_cipher.py)
  * [Transposition Cipher Encrypt Decrypt File](ciphers/transposition_cipher_encrypt_decrypt_file.py)
  * [Vigenere Cipher](ciphers/vigenere_cipher.py)
  * [Xor Cipher](ciphers/xor_cipher.py)

## Compression
  * [Burrows Wheeler](compression/burrows_wheeler.py)
  * [Huffman](compression/huffman.py)
  * [Lempel Ziv](compression/lempel_ziv.py)
  * [Lempel Ziv Decompress](compression/lempel_ziv_decompress.py)
  * [Peak Signal To Noise Ratio](compression/peak_signal_to_noise_ratio.py)
  * [Run Length Encoding](compression/run_length_encoding.py)

## Computer Vision
  * [Cnn Classification](computer_vision/cnn_classification.py)
  * [Flip Augmentation](computer_vision/flip_augmentation.py)
  * [Harris Corner](computer_vision/harris_corner.py)
  * [Horn Schunck](computer_vision/horn_schunck.py)
  * [Mean Threshold](computer_vision/mean_threshold.py)
  * [Mosaic Augmentation](computer_vision/mosaic_augmentation.py)
  * [Pooling Functions](computer_vision/pooling_functions.py)

## Conversions
  * [Astronomical Length Scale Conversion](conversions/astronomical_length_scale_conversion.py)
  * [Binary To Decimal](conversions/binary_to_decimal.py)
  * [Binary To Hexadecimal](conversions/binary_to_hexadecimal.py)
  * [Binary To Octal](conversions/binary_to_octal.py)
  * [Decimal To Any](conversions/decimal_to_any.py)
  * [Decimal To Binary](conversions/decimal_to_binary.py)
  * [Decimal To Binary Recursion](conversions/decimal_to_binary_recursion.py)
  * [Decimal To Hexadecimal](conversions/decimal_to_hexadecimal.py)
  * [Decimal To Octal](conversions/decimal_to_octal.py)
  * [Excel Title To Column](conversions/excel_title_to_column.py)
  * [Hex To Bin](conversions/hex_to_bin.py)
  * [Hexadecimal To Decimal](conversions/hexadecimal_to_decimal.py)
  * [Length Conversion](conversions/length_conversion.py)
  * [Molecular
Chemistry](conversions/molecular_chemistry.py) * [Octal To Decimal](conversions/octal_to_decimal.py) * [Prefix Conversions](conversions/prefix_conversions.py) * [Prefix Conversions String](conversions/prefix_conversions_string.py) * [Pressure Conversions](conversions/pressure_conversions.py) * [Rgb Hsv Conversion](conversions/rgb_hsv_conversion.py) * [Roman Numerals](conversions/roman_numerals.py) * [Speed Conversions](conversions/speed_conversions.py) * [Temperature Conversions](conversions/temperature_conversions.py) * [Volume Conversions](conversions/volume_conversions.py) * [Weight Conversion](conversions/weight_conversion.py) ## Data Structures * Arrays * [Permutations](data_structures/arrays/permutations.py) * [Prefix Sum](data_structures/arrays/prefix_sum.py) * Binary Tree * [Avl Tree](data_structures/binary_tree/avl_tree.py) * [Basic Binary Tree](data_structures/binary_tree/basic_binary_tree.py) * [Binary Search Tree](data_structures/binary_tree/binary_search_tree.py) * [Binary Search Tree Recursive](data_structures/binary_tree/binary_search_tree_recursive.py) * [Binary Tree Mirror](data_structures/binary_tree/binary_tree_mirror.py) * [Binary Tree Node Sum](data_structures/binary_tree/binary_tree_node_sum.py) * [Binary Tree Path Sum](data_structures/binary_tree/binary_tree_path_sum.py) * [Binary Tree Traversals](data_structures/binary_tree/binary_tree_traversals.py) * [Diff Views Of Binary Tree](data_structures/binary_tree/diff_views_of_binary_tree.py) * [Distribute Coins](data_structures/binary_tree/distribute_coins.py) * [Fenwick Tree](data_structures/binary_tree/fenwick_tree.py) * [Inorder Tree Traversal 2022](data_structures/binary_tree/inorder_tree_traversal_2022.py) * [Is Bst](data_structures/binary_tree/is_bst.py) * [Lazy Segment Tree](data_structures/binary_tree/lazy_segment_tree.py) * [Lowest Common Ancestor](data_structures/binary_tree/lowest_common_ancestor.py) * [Maximum Fenwick Tree](data_structures/binary_tree/maximum_fenwick_tree.py) * [Merge Two Binary Trees](data_structures/binary_tree/merge_two_binary_trees.py) * [Non Recursive Segment Tree](data_structures/binary_tree/non_recursive_segment_tree.py) * [Number Of Possible Binary Trees](data_structures/binary_tree/number_of_possible_binary_trees.py) * [Red Black Tree](data_structures/binary_tree/red_black_tree.py) * [Segment Tree](data_structures/binary_tree/segment_tree.py) * [Segment Tree Other](data_structures/binary_tree/segment_tree_other.py) * [Treap](data_structures/binary_tree/treap.py) * [Wavelet Tree](data_structures/binary_tree/wavelet_tree.py) * Disjoint Set * [Alternate Disjoint Set](data_structures/disjoint_set/alternate_disjoint_set.py) * [Disjoint Set](data_structures/disjoint_set/disjoint_set.py) * Hashing * [Double Hash](data_structures/hashing/double_hash.py) * [Hash Table](data_structures/hashing/hash_table.py) * [Hash Table With Linked List](data_structures/hashing/hash_table_with_linked_list.py) * Number Theory * [Prime Numbers](data_structures/hashing/number_theory/prime_numbers.py) * [Quadratic Probing](data_structures/hashing/quadratic_probing.py) * Heap * [Binomial Heap](data_structures/heap/binomial_heap.py) * [Heap](data_structures/heap/heap.py) * [Heap Generic](data_structures/heap/heap_generic.py) * [Max Heap](data_structures/heap/max_heap.py) * [Min Heap](data_structures/heap/min_heap.py) * [Randomized Heap](data_structures/heap/randomized_heap.py) * [Skew Heap](data_structures/heap/skew_heap.py) * Linked List * [Circular Linked 
List](data_structures/linked_list/circular_linked_list.py) * [Deque Doubly](data_structures/linked_list/deque_doubly.py) * [Doubly Linked List](data_structures/linked_list/doubly_linked_list.py) * [Doubly Linked List Two](data_structures/linked_list/doubly_linked_list_two.py) * [From Sequence](data_structures/linked_list/from_sequence.py) * [Has Loop](data_structures/linked_list/has_loop.py) * [Is Palindrome](data_structures/linked_list/is_palindrome.py) * [Merge Two Lists](data_structures/linked_list/merge_two_lists.py) * [Middle Element Of Linked List](data_structures/linked_list/middle_element_of_linked_list.py) * [Print Reverse](data_structures/linked_list/print_reverse.py) * [Singly Linked List](data_structures/linked_list/singly_linked_list.py) * [Skip List](data_structures/linked_list/skip_list.py) * [Swap Nodes](data_structures/linked_list/swap_nodes.py) * Queue * [Circular Queue](data_structures/queue/circular_queue.py) * [Circular Queue Linked List](data_structures/queue/circular_queue_linked_list.py) * [Double Ended Queue](data_structures/queue/double_ended_queue.py) * [Linked Queue](data_structures/queue/linked_queue.py) * [Priority Queue Using List](data_structures/queue/priority_queue_using_list.py) * [Queue On List](data_structures/queue/queue_on_list.py) * [Queue On Pseudo Stack](data_structures/queue/queue_on_pseudo_stack.py) * Stacks * [Balanced Parentheses](data_structures/stacks/balanced_parentheses.py) * [Dijkstras Two Stack Algorithm](data_structures/stacks/dijkstras_two_stack_algorithm.py) * [Evaluate Postfix Notations](data_structures/stacks/evaluate_postfix_notations.py) * [Infix To Postfix Conversion](data_structures/stacks/infix_to_postfix_conversion.py) * [Infix To Prefix Conversion](data_structures/stacks/infix_to_prefix_conversion.py) * [Next Greater Element](data_structures/stacks/next_greater_element.py) * [Postfix Evaluation](data_structures/stacks/postfix_evaluation.py) * [Prefix Evaluation](data_structures/stacks/prefix_evaluation.py) * [Stack](data_structures/stacks/stack.py) * [Stack With Doubly Linked List](data_structures/stacks/stack_with_doubly_linked_list.py) * [Stack With Singly Linked List](data_structures/stacks/stack_with_singly_linked_list.py) * [Stock Span Problem](data_structures/stacks/stock_span_problem.py) * Trie * [Radix Tree](data_structures/trie/radix_tree.py) * [Trie](data_structures/trie/trie.py) ## Digital Image Processing * [Change Brightness](digital_image_processing/change_brightness.py) * [Change Contrast](digital_image_processing/change_contrast.py) * [Convert To Negative](digital_image_processing/convert_to_negative.py) * Dithering * [Burkes](digital_image_processing/dithering/burkes.py) * Edge Detection * [Canny](digital_image_processing/edge_detection/canny.py) * Filters * [Bilateral Filter](digital_image_processing/filters/bilateral_filter.py) * [Convolve](digital_image_processing/filters/convolve.py) * [Gabor Filter](digital_image_processing/filters/gabor_filter.py) * [Gaussian Filter](digital_image_processing/filters/gaussian_filter.py) * [Local Binary Pattern](digital_image_processing/filters/local_binary_pattern.py) * [Median Filter](digital_image_processing/filters/median_filter.py) * [Sobel Filter](digital_image_processing/filters/sobel_filter.py) * Histogram Equalization * [Histogram Stretch](digital_image_processing/histogram_equalization/histogram_stretch.py) * [Index Calculation](digital_image_processing/index_calculation.py) * Morphological Operations * [Dilation 
Operation](digital_image_processing/morphological_operations/dilation_operation.py) * [Erosion Operation](digital_image_processing/morphological_operations/erosion_operation.py) * Resize * [Resize](digital_image_processing/resize/resize.py) * Rotation * [Rotation](digital_image_processing/rotation/rotation.py) * [Sepia](digital_image_processing/sepia.py) * [Test Digital Image Processing](digital_image_processing/test_digital_image_processing.py) ## Divide And Conquer * [Closest Pair Of Points](divide_and_conquer/closest_pair_of_points.py) * [Convex Hull](divide_and_conquer/convex_hull.py) * [Heaps Algorithm](divide_and_conquer/heaps_algorithm.py) * [Heaps Algorithm Iterative](divide_and_conquer/heaps_algorithm_iterative.py) * [Inversions](divide_and_conquer/inversions.py) * [Kth Order Statistic](divide_and_conquer/kth_order_statistic.py) * [Max Difference Pair](divide_and_conquer/max_difference_pair.py) * [Max Subarray Sum](divide_and_conquer/max_subarray_sum.py) * [Mergesort](divide_and_conquer/mergesort.py) * [Peak](divide_and_conquer/peak.py) * [Power](divide_and_conquer/power.py) * [Strassen Matrix Multiplication](divide_and_conquer/strassen_matrix_multiplication.py) ## Dynamic Programming * [Abbreviation](dynamic_programming/abbreviation.py) * [All Construct](dynamic_programming/all_construct.py) * [Bitmask](dynamic_programming/bitmask.py) * [Catalan Numbers](dynamic_programming/catalan_numbers.py) * [Climbing Stairs](dynamic_programming/climbing_stairs.py) * [Combination Sum Iv](dynamic_programming/combination_sum_iv.py) * [Edit Distance](dynamic_programming/edit_distance.py) * [Factorial](dynamic_programming/factorial.py) * [Fast Fibonacci](dynamic_programming/fast_fibonacci.py) * [Fibonacci](dynamic_programming/fibonacci.py) * [Fizz Buzz](dynamic_programming/fizz_buzz.py) * [Floyd Warshall](dynamic_programming/floyd_warshall.py) * [Integer Partition](dynamic_programming/integer_partition.py) * [Iterating Through Submasks](dynamic_programming/iterating_through_submasks.py) * [Knapsack](dynamic_programming/knapsack.py) * [Longest Common Subsequence](dynamic_programming/longest_common_subsequence.py) * [Longest Common Substring](dynamic_programming/longest_common_substring.py) * [Longest Increasing Subsequence](dynamic_programming/longest_increasing_subsequence.py) * [Longest Increasing Subsequence O(Nlogn)](dynamic_programming/longest_increasing_subsequence_o(nlogn).py) * [Longest Sub Array](dynamic_programming/longest_sub_array.py) * [Matrix Chain Order](dynamic_programming/matrix_chain_order.py) * [Max Non Adjacent Sum](dynamic_programming/max_non_adjacent_sum.py) * [Max Sub Array](dynamic_programming/max_sub_array.py) * [Max Sum Contiguous Subsequence](dynamic_programming/max_sum_contiguous_subsequence.py) * [Min Distance Up Bottom](dynamic_programming/min_distance_up_bottom.py) * [Minimum Coin Change](dynamic_programming/minimum_coin_change.py) * [Minimum Cost Path](dynamic_programming/minimum_cost_path.py) * [Minimum Partition](dynamic_programming/minimum_partition.py) * [Minimum Squares To Represent A Number](dynamic_programming/minimum_squares_to_represent_a_number.py) * [Minimum Steps To One](dynamic_programming/minimum_steps_to_one.py) * [Minimum Tickets Cost](dynamic_programming/minimum_tickets_cost.py) * [Optimal Binary Search Tree](dynamic_programming/optimal_binary_search_tree.py) * [Palindrome Partitioning](dynamic_programming/palindrome_partitioning.py) * [Rod Cutting](dynamic_programming/rod_cutting.py) * [Subset Generation](dynamic_programming/subset_generation.py) * 
[Sum Of Subset](dynamic_programming/sum_of_subset.py) * [Viterbi](dynamic_programming/viterbi.py) ## Electronics * [Builtin Voltage](electronics/builtin_voltage.py) * [Carrier Concentration](electronics/carrier_concentration.py) * [Coulombs Law](electronics/coulombs_law.py) * [Electric Conductivity](electronics/electric_conductivity.py) * [Electric Power](electronics/electric_power.py) * [Electrical Impedance](electronics/electrical_impedance.py) * [Ind Reactance](electronics/ind_reactance.py) * [Ohms Law](electronics/ohms_law.py) * [Resistor Equivalence](electronics/resistor_equivalence.py) * [Resonant Frequency](electronics/resonant_frequency.py) ## File Transfer * [Receive File](file_transfer/receive_file.py) * [Send File](file_transfer/send_file.py) * Tests * [Test Send File](file_transfer/tests/test_send_file.py) ## Financial * [Equated Monthly Installments](financial/equated_monthly_installments.py) * [Interest](financial/interest.py) * [Price Plus Tax](financial/price_plus_tax.py) ## Fractals * [Julia Sets](fractals/julia_sets.py) * [Koch Snowflake](fractals/koch_snowflake.py) * [Mandelbrot](fractals/mandelbrot.py) * [Sierpinski Triangle](fractals/sierpinski_triangle.py) ## Fuzzy Logic * [Fuzzy Operations](fuzzy_logic/fuzzy_operations.py) ## Genetic Algorithm * [Basic String](genetic_algorithm/basic_string.py) ## Geodesy * [Haversine Distance](geodesy/haversine_distance.py) * [Lamberts Ellipsoidal Distance](geodesy/lamberts_ellipsoidal_distance.py) ## Graphics * [Bezier Curve](graphics/bezier_curve.py) * [Vector3 For 2D Rendering](graphics/vector3_for_2d_rendering.py) ## Graphs * [A Star](graphs/a_star.py) * [Articulation Points](graphs/articulation_points.py) * [Basic Graphs](graphs/basic_graphs.py) * [Bellman Ford](graphs/bellman_ford.py) * [Bi Directional Dijkstra](graphs/bi_directional_dijkstra.py) * [Bidirectional A Star](graphs/bidirectional_a_star.py) * [Bidirectional Breadth First Search](graphs/bidirectional_breadth_first_search.py) * [Boruvka](graphs/boruvka.py) * [Breadth First Search](graphs/breadth_first_search.py) * [Breadth First Search 2](graphs/breadth_first_search_2.py) * [Breadth First Search Shortest Path](graphs/breadth_first_search_shortest_path.py) * [Breadth First Search Shortest Path 2](graphs/breadth_first_search_shortest_path_2.py) * [Breadth First Search Zero One Shortest Path](graphs/breadth_first_search_zero_one_shortest_path.py) * [Check Bipartite Graph Bfs](graphs/check_bipartite_graph_bfs.py) * [Check Bipartite Graph Dfs](graphs/check_bipartite_graph_dfs.py) * [Check Cycle](graphs/check_cycle.py) * [Connected Components](graphs/connected_components.py) * [Depth First Search](graphs/depth_first_search.py) * [Depth First Search 2](graphs/depth_first_search_2.py) * [Dijkstra](graphs/dijkstra.py) * [Dijkstra 2](graphs/dijkstra_2.py) * [Dijkstra Algorithm](graphs/dijkstra_algorithm.py) * [Dijkstra Alternate](graphs/dijkstra_alternate.py) * [Dinic](graphs/dinic.py) * [Directed And Undirected (Weighted) Graph](graphs/directed_and_undirected_(weighted)_graph.py) * [Edmonds Karp Multiple Source And Sink](graphs/edmonds_karp_multiple_source_and_sink.py) * [Eulerian Path And Circuit For Undirected Graph](graphs/eulerian_path_and_circuit_for_undirected_graph.py) * [Even Tree](graphs/even_tree.py) * [Finding Bridges](graphs/finding_bridges.py) * [Frequent Pattern Graph Miner](graphs/frequent_pattern_graph_miner.py) * [G Topological Sort](graphs/g_topological_sort.py) * [Gale Shapley Bigraph](graphs/gale_shapley_bigraph.py) * [Graph List](graphs/graph_list.py) * 
[Graph Matrix](graphs/graph_matrix.py) * [Graphs Floyd Warshall](graphs/graphs_floyd_warshall.py) * [Greedy Best First](graphs/greedy_best_first.py) * [Greedy Min Vertex Cover](graphs/greedy_min_vertex_cover.py) * [Kahns Algorithm Long](graphs/kahns_algorithm_long.py) * [Kahns Algorithm Topo](graphs/kahns_algorithm_topo.py) * [Karger](graphs/karger.py) * [Markov Chain](graphs/markov_chain.py) * [Matching Min Vertex Cover](graphs/matching_min_vertex_cover.py) * [Minimum Path Sum](graphs/minimum_path_sum.py) * [Minimum Spanning Tree Boruvka](graphs/minimum_spanning_tree_boruvka.py) * [Minimum Spanning Tree Kruskal](graphs/minimum_spanning_tree_kruskal.py) * [Minimum Spanning Tree Kruskal2](graphs/minimum_spanning_tree_kruskal2.py) * [Minimum Spanning Tree Prims](graphs/minimum_spanning_tree_prims.py) * [Minimum Spanning Tree Prims2](graphs/minimum_spanning_tree_prims2.py) * [Multi Heuristic Astar](graphs/multi_heuristic_astar.py) * [Page Rank](graphs/page_rank.py) * [Prim](graphs/prim.py) * [Random Graph Generator](graphs/random_graph_generator.py) * [Scc Kosaraju](graphs/scc_kosaraju.py) * [Strongly Connected Components](graphs/strongly_connected_components.py) * [Tarjans Scc](graphs/tarjans_scc.py) * Tests * [Test Min Spanning Tree Kruskal](graphs/tests/test_min_spanning_tree_kruskal.py) * [Test Min Spanning Tree Prim](graphs/tests/test_min_spanning_tree_prim.py) ## Greedy Methods * [Fractional Knapsack](greedy_methods/fractional_knapsack.py) * [Fractional Knapsack 2](greedy_methods/fractional_knapsack_2.py) * [Optimal Merge Pattern](greedy_methods/optimal_merge_pattern.py) ## Hashes * [Adler32](hashes/adler32.py) * [Chaos Machine](hashes/chaos_machine.py) * [Djb2](hashes/djb2.py) * [Elf](hashes/elf.py) * [Enigma Machine](hashes/enigma_machine.py) * [Hamming Code](hashes/hamming_code.py) * [Luhn](hashes/luhn.py) * [Md5](hashes/md5.py) * [Sdbm](hashes/sdbm.py) * [Sha1](hashes/sha1.py) * [Sha256](hashes/sha256.py) ## Knapsack * [Greedy Knapsack](knapsack/greedy_knapsack.py) * [Knapsack](knapsack/knapsack.py) * [Recursive Approach Knapsack](knapsack/recursive_approach_knapsack.py) * Tests * [Test Greedy Knapsack](knapsack/tests/test_greedy_knapsack.py) * [Test Knapsack](knapsack/tests/test_knapsack.py) ## Linear Algebra * Src * [Conjugate Gradient](linear_algebra/src/conjugate_gradient.py) * [Lib](linear_algebra/src/lib.py) * [Polynom For Points](linear_algebra/src/polynom_for_points.py) * [Power Iteration](linear_algebra/src/power_iteration.py) * [Rayleigh Quotient](linear_algebra/src/rayleigh_quotient.py) * [Schur Complement](linear_algebra/src/schur_complement.py) * [Test Linear Algebra](linear_algebra/src/test_linear_algebra.py) * [Transformations 2D](linear_algebra/src/transformations_2d.py) ## Machine Learning * [Astar](machine_learning/astar.py) * [Data Transformations](machine_learning/data_transformations.py) * [Decision Tree](machine_learning/decision_tree.py) * Forecasting * [Run](machine_learning/forecasting/run.py) * [Gaussian Naive Bayes](machine_learning/gaussian_naive_bayes.py) * [Gradient Boosting Regressor](machine_learning/gradient_boosting_regressor.py) * [Gradient Descent](machine_learning/gradient_descent.py) * [K Means Clust](machine_learning/k_means_clust.py) * [K Nearest Neighbours](machine_learning/k_nearest_neighbours.py) * [Knn Sklearn](machine_learning/knn_sklearn.py) * [Linear Discriminant Analysis](machine_learning/linear_discriminant_analysis.py) * [Linear Regression](machine_learning/linear_regression.py) * Local Weighted Learning * [Local Weighted 
Learning](machine_learning/local_weighted_learning/local_weighted_learning.py) * [Logistic Regression](machine_learning/logistic_regression.py) * Lstm * [Lstm Prediction](machine_learning/lstm/lstm_prediction.py) * [Multilayer Perceptron Classifier](machine_learning/multilayer_perceptron_classifier.py) * [Polymonial Regression](machine_learning/polymonial_regression.py) * [Random Forest Classifier](machine_learning/random_forest_classifier.py) * [Random Forest Regressor](machine_learning/random_forest_regressor.py) * [Scoring Functions](machine_learning/scoring_functions.py) * [Self Organizing Map](machine_learning/self_organizing_map.py) * [Sequential Minimum Optimization](machine_learning/sequential_minimum_optimization.py) * [Similarity Search](machine_learning/similarity_search.py) * [Support Vector Machines](machine_learning/support_vector_machines.py) * [Word Frequency Functions](machine_learning/word_frequency_functions.py) * [Xgboost Classifier](machine_learning/xgboost_classifier.py) * [Xgboost Regressor](machine_learning/xgboost_regressor.py) ## Maths * [3N Plus 1](maths/3n_plus_1.py) * [Abs](maths/abs.py) * [Add](maths/add.py) * [Addition Without Arithmetic](maths/addition_without_arithmetic.py) * [Aliquot Sum](maths/aliquot_sum.py) * [Allocation Number](maths/allocation_number.py) * [Arc Length](maths/arc_length.py) * [Area](maths/area.py) * [Area Under Curve](maths/area_under_curve.py) * [Armstrong Numbers](maths/armstrong_numbers.py) * [Automorphic Number](maths/automorphic_number.py) * [Average Absolute Deviation](maths/average_absolute_deviation.py) * [Average Mean](maths/average_mean.py) * [Average Median](maths/average_median.py) * [Average Mode](maths/average_mode.py) * [Bailey Borwein Plouffe](maths/bailey_borwein_plouffe.py) * [Basic Maths](maths/basic_maths.py) * [Binary Exp Mod](maths/binary_exp_mod.py) * [Binary Exponentiation](maths/binary_exponentiation.py) * [Binary Exponentiation 2](maths/binary_exponentiation_2.py) * [Binary Exponentiation 3](maths/binary_exponentiation_3.py) * [Binomial Coefficient](maths/binomial_coefficient.py) * [Binomial Distribution](maths/binomial_distribution.py) * [Bisection](maths/bisection.py) * [Carmichael Number](maths/carmichael_number.py) * [Catalan Number](maths/catalan_number.py) * [Ceil](maths/ceil.py) * [Check Polygon](maths/check_polygon.py) * [Chudnovsky Algorithm](maths/chudnovsky_algorithm.py) * [Collatz Sequence](maths/collatz_sequence.py) * [Combinations](maths/combinations.py) * [Decimal Isolate](maths/decimal_isolate.py) * [Dodecahedron](maths/dodecahedron.py) * [Double Factorial Iterative](maths/double_factorial_iterative.py) * [Double Factorial Recursive](maths/double_factorial_recursive.py) * [Entropy](maths/entropy.py) * [Euclidean Distance](maths/euclidean_distance.py) * [Euclidean Gcd](maths/euclidean_gcd.py) * [Euler Method](maths/euler_method.py) * [Euler Modified](maths/euler_modified.py) * [Eulers Totient](maths/eulers_totient.py) * [Extended Euclidean Algorithm](maths/extended_euclidean_algorithm.py) * [Factorial Iterative](maths/factorial_iterative.py) * [Factorial Recursive](maths/factorial_recursive.py) * [Factors](maths/factors.py) * [Fermat Little Theorem](maths/fermat_little_theorem.py) * [Fibonacci](maths/fibonacci.py) * [Find Max](maths/find_max.py) * [Find Max Recursion](maths/find_max_recursion.py) * [Find Min](maths/find_min.py) * [Find Min Recursion](maths/find_min_recursion.py) * [Floor](maths/floor.py) * [Gamma](maths/gamma.py) * [Gamma Recursive](maths/gamma_recursive.py) * 
[Gaussian](maths/gaussian.py) * [Gaussian Error Linear Unit](maths/gaussian_error_linear_unit.py) * [Greatest Common Divisor](maths/greatest_common_divisor.py) * [Greedy Coin Change](maths/greedy_coin_change.py) * [Hamming Numbers](maths/hamming_numbers.py) * [Hardy Ramanujanalgo](maths/hardy_ramanujanalgo.py) * [Integration By Simpson Approx](maths/integration_by_simpson_approx.py) * [Is Ip V4 Address Valid](maths/is_ip_v4_address_valid.py) * [Is Square Free](maths/is_square_free.py) * [Jaccard Similarity](maths/jaccard_similarity.py) * [Juggler Sequence](maths/juggler_sequence.py) * [Kadanes](maths/kadanes.py) * [Karatsuba](maths/karatsuba.py) * [Krishnamurthy Number](maths/krishnamurthy_number.py) * [Kth Lexicographic Permutation](maths/kth_lexicographic_permutation.py) * [Largest Of Very Large Numbers](maths/largest_of_very_large_numbers.py) * [Largest Subarray Sum](maths/largest_subarray_sum.py) * [Least Common Multiple](maths/least_common_multiple.py) * [Line Length](maths/line_length.py) * [Liouville Lambda](maths/liouville_lambda.py) * [Lucas Lehmer Primality Test](maths/lucas_lehmer_primality_test.py) * [Lucas Series](maths/lucas_series.py) * [Maclaurin Series](maths/maclaurin_series.py) * [Manhattan Distance](maths/manhattan_distance.py) * [Matrix Exponentiation](maths/matrix_exponentiation.py) * [Max Sum Sliding Window](maths/max_sum_sliding_window.py) * [Median Of Two Arrays](maths/median_of_two_arrays.py) * [Miller Rabin](maths/miller_rabin.py) * [Mobius Function](maths/mobius_function.py) * [Modular Exponential](maths/modular_exponential.py) * [Monte Carlo](maths/monte_carlo.py) * [Monte Carlo Dice](maths/monte_carlo_dice.py) * [Nevilles Method](maths/nevilles_method.py) * [Newton Raphson](maths/newton_raphson.py) * [Number Of Digits](maths/number_of_digits.py) * [Numerical Integration](maths/numerical_integration.py) * [Perfect Cube](maths/perfect_cube.py) * [Perfect Number](maths/perfect_number.py) * [Perfect Square](maths/perfect_square.py) * [Persistence](maths/persistence.py) * [Pi Monte Carlo Estimation](maths/pi_monte_carlo_estimation.py) * [Points Are Collinear 3D](maths/points_are_collinear_3d.py) * [Pollard Rho](maths/pollard_rho.py) * [Polynomial Evaluation](maths/polynomial_evaluation.py) * Polynomials * [Single Indeterminate Operations](maths/polynomials/single_indeterminate_operations.py) * [Power Using Recursion](maths/power_using_recursion.py) * [Prime Check](maths/prime_check.py) * [Prime Factors](maths/prime_factors.py) * [Prime Numbers](maths/prime_numbers.py) * [Prime Sieve Eratosthenes](maths/prime_sieve_eratosthenes.py) * [Primelib](maths/primelib.py) * [Print Multiplication Table](maths/print_multiplication_table.py) * [Pronic Number](maths/pronic_number.py) * [Proth Number](maths/proth_number.py) * [Pythagoras](maths/pythagoras.py) * [Qr Decomposition](maths/qr_decomposition.py) * [Quadratic Equations Complex Numbers](maths/quadratic_equations_complex_numbers.py) * [Radians](maths/radians.py) * [Radix2 Fft](maths/radix2_fft.py) * [Relu](maths/relu.py) * [Runge Kutta](maths/runge_kutta.py) * [Segmented Sieve](maths/segmented_sieve.py) * Series * [Arithmetic](maths/series/arithmetic.py) * [Geometric](maths/series/geometric.py) * [Geometric Series](maths/series/geometric_series.py) * [Harmonic](maths/series/harmonic.py) * [Harmonic Series](maths/series/harmonic_series.py) * [Hexagonal Numbers](maths/series/hexagonal_numbers.py) * [P Series](maths/series/p_series.py) * [Sieve Of Eratosthenes](maths/sieve_of_eratosthenes.py) * [Sigmoid](maths/sigmoid.py) * 
[Sigmoid Linear Unit](maths/sigmoid_linear_unit.py) * [Signum](maths/signum.py) * [Simpson Rule](maths/simpson_rule.py) * [Sin](maths/sin.py) * [Sock Merchant](maths/sock_merchant.py) * [Softmax](maths/softmax.py) * [Square Root](maths/square_root.py) * [Sum Of Arithmetic Series](maths/sum_of_arithmetic_series.py) * [Sum Of Digits](maths/sum_of_digits.py) * [Sum Of Geometric Progression](maths/sum_of_geometric_progression.py) * [Sum Of Harmonic Series](maths/sum_of_harmonic_series.py) * [Sumset](maths/sumset.py) * [Sylvester Sequence](maths/sylvester_sequence.py) * [Test Prime Check](maths/test_prime_check.py) * [Trapezoidal Rule](maths/trapezoidal_rule.py) * [Triplet Sum](maths/triplet_sum.py) * [Twin Prime](maths/twin_prime.py) * [Two Pointer](maths/two_pointer.py) * [Two Sum](maths/two_sum.py) * [Ugly Numbers](maths/ugly_numbers.py) * [Volume](maths/volume.py) * [Weird Number](maths/weird_number.py) * [Zellers Congruence](maths/zellers_congruence.py) ## Matrix * [Binary Search Matrix](matrix/binary_search_matrix.py) * [Count Islands In Matrix](matrix/count_islands_in_matrix.py) * [Count Paths](matrix/count_paths.py) * [Cramers Rule 2X2](matrix/cramers_rule_2x2.py) * [Inverse Of Matrix](matrix/inverse_of_matrix.py) * [Largest Square Area In Matrix](matrix/largest_square_area_in_matrix.py) * [Matrix Class](matrix/matrix_class.py) * [Matrix Operation](matrix/matrix_operation.py) * [Max Area Of Island](matrix/max_area_of_island.py) * [Nth Fibonacci Using Matrix Exponentiation](matrix/nth_fibonacci_using_matrix_exponentiation.py) * [Pascal Triangle](matrix/pascal_triangle.py) * [Rotate Matrix](matrix/rotate_matrix.py) * [Searching In Sorted Matrix](matrix/searching_in_sorted_matrix.py) * [Sherman Morrison](matrix/sherman_morrison.py) * [Spiral Print](matrix/spiral_print.py) * Tests * [Test Matrix Operation](matrix/tests/test_matrix_operation.py) ## Networking Flow * [Ford Fulkerson](networking_flow/ford_fulkerson.py) * [Minimum Cut](networking_flow/minimum_cut.py) ## Neural Network * [2 Hidden Layers Neural Network](neural_network/2_hidden_layers_neural_network.py) * [Back Propagation Neural Network](neural_network/back_propagation_neural_network.py) * [Convolution Neural Network](neural_network/convolution_neural_network.py) * [Perceptron](neural_network/perceptron.py) * [Simple Neural Network](neural_network/simple_neural_network.py) ## Other * [Activity Selection](other/activity_selection.py) * [Alternative List Arrange](other/alternative_list_arrange.py) * [Davisb Putnamb Logemannb Loveland](other/davisb_putnamb_logemannb_loveland.py) * [Dijkstra Bankers Algorithm](other/dijkstra_bankers_algorithm.py) * [Doomsday](other/doomsday.py) * [Fischer Yates Shuffle](other/fischer_yates_shuffle.py) * [Gauss Easter](other/gauss_easter.py) * [Graham Scan](other/graham_scan.py) * [Greedy](other/greedy.py) * [Least Recently Used](other/least_recently_used.py) * [Lfu Cache](other/lfu_cache.py) * [Linear Congruential Generator](other/linear_congruential_generator.py) * [Lru Cache](other/lru_cache.py) * [Magicdiamondpattern](other/magicdiamondpattern.py) * [Maximum Subarray](other/maximum_subarray.py) * [Nested Brackets](other/nested_brackets.py) * [Password](other/password.py) * [Quine](other/quine.py) * [Scoring Algorithm](other/scoring_algorithm.py) * [Sdes](other/sdes.py) * [Tower Of Hanoi](other/tower_of_hanoi.py) ## Physics * [Archimedes Principle](physics/archimedes_principle.py) * [Casimir Effect](physics/casimir_effect.py) * [Centripetal Force](physics/centripetal_force.py) * [Horizontal 
Projectile Motion](physics/horizontal_projectile_motion.py) * [Hubble Parameter](physics/hubble_parameter.py) * [Ideal Gas Law](physics/ideal_gas_law.py) * [Kinetic Energy](physics/kinetic_energy.py) * [Lorentz Transformation Four Vector](physics/lorentz_transformation_four_vector.py) * [Malus Law](physics/malus_law.py) * [N Body Simulation](physics/n_body_simulation.py) * [Newtons Law Of Gravitation](physics/newtons_law_of_gravitation.py) * [Newtons Second Law Of Motion](physics/newtons_second_law_of_motion.py) * [Potential Energy](physics/potential_energy.py) * [Rms Speed Of Molecule](physics/rms_speed_of_molecule.py) * [Shear Stress](physics/shear_stress.py) ## Project Euler * Problem 001 * [Sol1](project_euler/problem_001/sol1.py) * [Sol2](project_euler/problem_001/sol2.py) * [Sol3](project_euler/problem_001/sol3.py) * [Sol4](project_euler/problem_001/sol4.py) * [Sol5](project_euler/problem_001/sol5.py) * [Sol6](project_euler/problem_001/sol6.py) * [Sol7](project_euler/problem_001/sol7.py) * Problem 002 * [Sol1](project_euler/problem_002/sol1.py) * [Sol2](project_euler/problem_002/sol2.py) * [Sol3](project_euler/problem_002/sol3.py) * [Sol4](project_euler/problem_002/sol4.py) * [Sol5](project_euler/problem_002/sol5.py) * Problem 003 * [Sol1](project_euler/problem_003/sol1.py) * [Sol2](project_euler/problem_003/sol2.py) * [Sol3](project_euler/problem_003/sol3.py) * Problem 004 * [Sol1](project_euler/problem_004/sol1.py) * [Sol2](project_euler/problem_004/sol2.py) * Problem 005 * [Sol1](project_euler/problem_005/sol1.py) * [Sol2](project_euler/problem_005/sol2.py) * Problem 006 * [Sol1](project_euler/problem_006/sol1.py) * [Sol2](project_euler/problem_006/sol2.py) * [Sol3](project_euler/problem_006/sol3.py) * [Sol4](project_euler/problem_006/sol4.py) * Problem 007 * [Sol1](project_euler/problem_007/sol1.py) * [Sol2](project_euler/problem_007/sol2.py) * [Sol3](project_euler/problem_007/sol3.py) * Problem 008 * [Sol1](project_euler/problem_008/sol1.py) * [Sol2](project_euler/problem_008/sol2.py) * [Sol3](project_euler/problem_008/sol3.py) * Problem 009 * [Sol1](project_euler/problem_009/sol1.py) * [Sol2](project_euler/problem_009/sol2.py) * [Sol3](project_euler/problem_009/sol3.py) * Problem 010 * [Sol1](project_euler/problem_010/sol1.py) * [Sol2](project_euler/problem_010/sol2.py) * [Sol3](project_euler/problem_010/sol3.py) * Problem 011 * [Sol1](project_euler/problem_011/sol1.py) * [Sol2](project_euler/problem_011/sol2.py) * Problem 012 * [Sol1](project_euler/problem_012/sol1.py) * [Sol2](project_euler/problem_012/sol2.py) * Problem 013 * [Sol1](project_euler/problem_013/sol1.py) * Problem 014 * [Sol1](project_euler/problem_014/sol1.py) * [Sol2](project_euler/problem_014/sol2.py) * Problem 015 * [Sol1](project_euler/problem_015/sol1.py) * Problem 016 * [Sol1](project_euler/problem_016/sol1.py) * [Sol2](project_euler/problem_016/sol2.py) * Problem 017 * [Sol1](project_euler/problem_017/sol1.py) * Problem 018 * [Solution](project_euler/problem_018/solution.py) * Problem 019 * [Sol1](project_euler/problem_019/sol1.py) * Problem 020 * [Sol1](project_euler/problem_020/sol1.py) * [Sol2](project_euler/problem_020/sol2.py) * [Sol3](project_euler/problem_020/sol3.py) * [Sol4](project_euler/problem_020/sol4.py) * Problem 021 * [Sol1](project_euler/problem_021/sol1.py) * Problem 022 * [Sol1](project_euler/problem_022/sol1.py) * [Sol2](project_euler/problem_022/sol2.py) * Problem 023 * [Sol1](project_euler/problem_023/sol1.py) * Problem 024 * [Sol1](project_euler/problem_024/sol1.py) * Problem 025 * 
[Sol1](project_euler/problem_025/sol1.py) * [Sol2](project_euler/problem_025/sol2.py) * [Sol3](project_euler/problem_025/sol3.py) * Problem 026 * [Sol1](project_euler/problem_026/sol1.py) * Problem 027 * [Sol1](project_euler/problem_027/sol1.py) * Problem 028 * [Sol1](project_euler/problem_028/sol1.py) * Problem 029 * [Sol1](project_euler/problem_029/sol1.py) * Problem 030 * [Sol1](project_euler/problem_030/sol1.py) * Problem 031 * [Sol1](project_euler/problem_031/sol1.py) * [Sol2](project_euler/problem_031/sol2.py) * Problem 032 * [Sol32](project_euler/problem_032/sol32.py) * Problem 033 * [Sol1](project_euler/problem_033/sol1.py) * Problem 034 * [Sol1](project_euler/problem_034/sol1.py) * Problem 035 * [Sol1](project_euler/problem_035/sol1.py) * Problem 036 * [Sol1](project_euler/problem_036/sol1.py) * Problem 037 * [Sol1](project_euler/problem_037/sol1.py) * Problem 038 * [Sol1](project_euler/problem_038/sol1.py) * Problem 039 * [Sol1](project_euler/problem_039/sol1.py) * Problem 040 * [Sol1](project_euler/problem_040/sol1.py) * Problem 041 * [Sol1](project_euler/problem_041/sol1.py) * Problem 042 * [Solution42](project_euler/problem_042/solution42.py) * Problem 043 * [Sol1](project_euler/problem_043/sol1.py) * Problem 044 * [Sol1](project_euler/problem_044/sol1.py) * Problem 045 * [Sol1](project_euler/problem_045/sol1.py) * Problem 046 * [Sol1](project_euler/problem_046/sol1.py) * Problem 047 * [Sol1](project_euler/problem_047/sol1.py) * Problem 048 * [Sol1](project_euler/problem_048/sol1.py) * Problem 049 * [Sol1](project_euler/problem_049/sol1.py) * Problem 050 * [Sol1](project_euler/problem_050/sol1.py) * Problem 051 * [Sol1](project_euler/problem_051/sol1.py) * Problem 052 * [Sol1](project_euler/problem_052/sol1.py) * Problem 053 * [Sol1](project_euler/problem_053/sol1.py) * Problem 054 * [Sol1](project_euler/problem_054/sol1.py) * [Test Poker Hand](project_euler/problem_054/test_poker_hand.py) * Problem 055 * [Sol1](project_euler/problem_055/sol1.py) * Problem 056 * [Sol1](project_euler/problem_056/sol1.py) * Problem 057 * [Sol1](project_euler/problem_057/sol1.py) * Problem 058 * [Sol1](project_euler/problem_058/sol1.py) * Problem 059 * [Sol1](project_euler/problem_059/sol1.py) * Problem 062 * [Sol1](project_euler/problem_062/sol1.py) * Problem 063 * [Sol1](project_euler/problem_063/sol1.py) * Problem 064 * [Sol1](project_euler/problem_064/sol1.py) * Problem 065 * [Sol1](project_euler/problem_065/sol1.py) * Problem 067 * [Sol1](project_euler/problem_067/sol1.py) * [Sol2](project_euler/problem_067/sol2.py) * Problem 068 * [Sol1](project_euler/problem_068/sol1.py) * Problem 069 * [Sol1](project_euler/problem_069/sol1.py) * Problem 070 * [Sol1](project_euler/problem_070/sol1.py) * Problem 071 * [Sol1](project_euler/problem_071/sol1.py) * Problem 072 * [Sol1](project_euler/problem_072/sol1.py) * [Sol2](project_euler/problem_072/sol2.py) * Problem 073 * [Sol1](project_euler/problem_073/sol1.py) * Problem 074 * [Sol1](project_euler/problem_074/sol1.py) * [Sol2](project_euler/problem_074/sol2.py) * Problem 075 * [Sol1](project_euler/problem_075/sol1.py) * Problem 076 * [Sol1](project_euler/problem_076/sol1.py) * Problem 077 * [Sol1](project_euler/problem_077/sol1.py) * Problem 078 * [Sol1](project_euler/problem_078/sol1.py) * Problem 080 * [Sol1](project_euler/problem_080/sol1.py) * Problem 081 * [Sol1](project_euler/problem_081/sol1.py) * Problem 085 * [Sol1](project_euler/problem_085/sol1.py) * Problem 086 * [Sol1](project_euler/problem_086/sol1.py) * Problem 087 * 
[Sol1](project_euler/problem_087/sol1.py) * Problem 089 * [Sol1](project_euler/problem_089/sol1.py) * Problem 091 * [Sol1](project_euler/problem_091/sol1.py) * Problem 092 * [Sol1](project_euler/problem_092/sol1.py) * Problem 097 * [Sol1](project_euler/problem_097/sol1.py) * Problem 099 * [Sol1](project_euler/problem_099/sol1.py) * Problem 101 * [Sol1](project_euler/problem_101/sol1.py) * Problem 102 * [Sol1](project_euler/problem_102/sol1.py) * Problem 104 * [Sol1](project_euler/problem_104/sol1.py) * Problem 107 * [Sol1](project_euler/problem_107/sol1.py) * Problem 109 * [Sol1](project_euler/problem_109/sol1.py) * Problem 112 * [Sol1](project_euler/problem_112/sol1.py) * Problem 113 * [Sol1](project_euler/problem_113/sol1.py) * Problem 114 * [Sol1](project_euler/problem_114/sol1.py) * Problem 115 * [Sol1](project_euler/problem_115/sol1.py) * Problem 116 * [Sol1](project_euler/problem_116/sol1.py) * Problem 119 * [Sol1](project_euler/problem_119/sol1.py) * Problem 120 * [Sol1](project_euler/problem_120/sol1.py) * Problem 121 * [Sol1](project_euler/problem_121/sol1.py) * Problem 123 * [Sol1](project_euler/problem_123/sol1.py) * Problem 125 * [Sol1](project_euler/problem_125/sol1.py) * Problem 129 * [Sol1](project_euler/problem_129/sol1.py) * Problem 135 * [Sol1](project_euler/problem_135/sol1.py) * Problem 144 * [Sol1](project_euler/problem_144/sol1.py) * Problem 145 * [Sol1](project_euler/problem_145/sol1.py) * Problem 173 * [Sol1](project_euler/problem_173/sol1.py) * Problem 174 * [Sol1](project_euler/problem_174/sol1.py) * Problem 180 * [Sol1](project_euler/problem_180/sol1.py) * Problem 188 * [Sol1](project_euler/problem_188/sol1.py) * Problem 191 * [Sol1](project_euler/problem_191/sol1.py) * Problem 203 * [Sol1](project_euler/problem_203/sol1.py) * Problem 205 * [Sol1](project_euler/problem_205/sol1.py) * Problem 206 * [Sol1](project_euler/problem_206/sol1.py) * Problem 207 * [Sol1](project_euler/problem_207/sol1.py) * Problem 234 * [Sol1](project_euler/problem_234/sol1.py) * Problem 301 * [Sol1](project_euler/problem_301/sol1.py) * Problem 493 * [Sol1](project_euler/problem_493/sol1.py) * Problem 551 * [Sol1](project_euler/problem_551/sol1.py) * Problem 587 * [Sol1](project_euler/problem_587/sol1.py) * Problem 686 * [Sol1](project_euler/problem_686/sol1.py) ## Quantum * [Bb84](quantum/bb84.py) * [Deutsch Jozsa](quantum/deutsch_jozsa.py) * [Half Adder](quantum/half_adder.py) * [Not Gate](quantum/not_gate.py) * [Q Fourier Transform](quantum/q_fourier_transform.py) * [Q Full Adder](quantum/q_full_adder.py) * [Quantum Entanglement](quantum/quantum_entanglement.py) * [Quantum Teleportation](quantum/quantum_teleportation.py) * [Ripple Adder Classic](quantum/ripple_adder_classic.py) * [Single Qubit Measure](quantum/single_qubit_measure.py) * [Superdense Coding](quantum/superdense_coding.py) ## Scheduling * [First Come First Served](scheduling/first_come_first_served.py) * [Highest Response Ratio Next](scheduling/highest_response_ratio_next.py) * [Job Sequencing With Deadline](scheduling/job_sequencing_with_deadline.py) * [Multi Level Feedback Queue](scheduling/multi_level_feedback_queue.py) * [Non Preemptive Shortest Job First](scheduling/non_preemptive_shortest_job_first.py) * [Round Robin](scheduling/round_robin.py) * [Shortest Job First](scheduling/shortest_job_first.py) ## Searches * [Binary Search](searches/binary_search.py) * [Binary Tree Traversal](searches/binary_tree_traversal.py) * [Double Linear Search](searches/double_linear_search.py) * [Double Linear Search 
Recursion](searches/double_linear_search_recursion.py) * [Fibonacci Search](searches/fibonacci_search.py) * [Hill Climbing](searches/hill_climbing.py) * [Interpolation Search](searches/interpolation_search.py) * [Jump Search](searches/jump_search.py) * [Linear Search](searches/linear_search.py) * [Quick Select](searches/quick_select.py) * [Sentinel Linear Search](searches/sentinel_linear_search.py) * [Simple Binary Search](searches/simple_binary_search.py) * [Simulated Annealing](searches/simulated_annealing.py) * [Tabu Search](searches/tabu_search.py) * [Ternary Search](searches/ternary_search.py) ## Sorts * [Bead Sort](sorts/bead_sort.py) * [Bitonic Sort](sorts/bitonic_sort.py) * [Bogo Sort](sorts/bogo_sort.py) * [Bubble Sort](sorts/bubble_sort.py) * [Bucket Sort](sorts/bucket_sort.py) * [Circle Sort](sorts/circle_sort.py) * [Cocktail Shaker Sort](sorts/cocktail_shaker_sort.py) * [Comb Sort](sorts/comb_sort.py) * [Counting Sort](sorts/counting_sort.py) * [Cycle Sort](sorts/cycle_sort.py) * [Double Sort](sorts/double_sort.py) * [Dutch National Flag Sort](sorts/dutch_national_flag_sort.py) * [Exchange Sort](sorts/exchange_sort.py) * [External Sort](sorts/external_sort.py) * [Gnome Sort](sorts/gnome_sort.py) * [Heap Sort](sorts/heap_sort.py) * [Insertion Sort](sorts/insertion_sort.py) * [Intro Sort](sorts/intro_sort.py) * [Iterative Merge Sort](sorts/iterative_merge_sort.py) * [Merge Insertion Sort](sorts/merge_insertion_sort.py) * [Merge Sort](sorts/merge_sort.py) * [Msd Radix Sort](sorts/msd_radix_sort.py) * [Natural Sort](sorts/natural_sort.py) * [Odd Even Sort](sorts/odd_even_sort.py) * [Odd Even Transposition Parallel](sorts/odd_even_transposition_parallel.py) * [Odd Even Transposition Single Threaded](sorts/odd_even_transposition_single_threaded.py) * [Pancake Sort](sorts/pancake_sort.py) * [Patience Sort](sorts/patience_sort.py) * [Pigeon Sort](sorts/pigeon_sort.py) * [Pigeonhole Sort](sorts/pigeonhole_sort.py) * [Quick Sort](sorts/quick_sort.py) * [Quick Sort 3 Partition](sorts/quick_sort_3_partition.py) * [Radix Sort](sorts/radix_sort.py) * [Random Normal Distribution Quicksort](sorts/random_normal_distribution_quicksort.py) * [Random Pivot Quick Sort](sorts/random_pivot_quick_sort.py) * [Recursive Bubble Sort](sorts/recursive_bubble_sort.py) * [Recursive Insertion Sort](sorts/recursive_insertion_sort.py) * [Recursive Mergesort Array](sorts/recursive_mergesort_array.py) * [Recursive Quick Sort](sorts/recursive_quick_sort.py) * [Selection Sort](sorts/selection_sort.py) * [Shell Sort](sorts/shell_sort.py) * [Shrink Shell Sort](sorts/shrink_shell_sort.py) * [Slowsort](sorts/slowsort.py) * [Stooge Sort](sorts/stooge_sort.py) * [Strand Sort](sorts/strand_sort.py) * [Tim Sort](sorts/tim_sort.py) * [Topological Sort](sorts/topological_sort.py) * [Tree Sort](sorts/tree_sort.py) * [Unknown Sort](sorts/unknown_sort.py) * [Wiggle Sort](sorts/wiggle_sort.py) ## Strings * [Aho Corasick](strings/aho_corasick.py) * [Alternative String Arrange](strings/alternative_string_arrange.py) * [Anagrams](strings/anagrams.py) * [Autocomplete Using Trie](strings/autocomplete_using_trie.py) * [Barcode Validator](strings/barcode_validator.py) * [Boyer Moore Search](strings/boyer_moore_search.py) * [Can String Be Rearranged As Palindrome](strings/can_string_be_rearranged_as_palindrome.py) * [Capitalize](strings/capitalize.py) * [Check Anagrams](strings/check_anagrams.py) * [Credit Card Validator](strings/credit_card_validator.py) * [Detecting English 
Programmatically](strings/detecting_english_programmatically.py) * [Dna](strings/dna.py) * [Frequency Finder](strings/frequency_finder.py) * [Hamming Distance](strings/hamming_distance.py) * [Indian Phone Validator](strings/indian_phone_validator.py) * [Is Contains Unique Chars](strings/is_contains_unique_chars.py) * [Is Isogram](strings/is_isogram.py) * [Is Palindrome](strings/is_palindrome.py) * [Is Pangram](strings/is_pangram.py) * [Is Spain National Id](strings/is_spain_national_id.py) * [Is Srilankan Phone Number](strings/is_srilankan_phone_number.py) * [Jaro Winkler](strings/jaro_winkler.py) * [Join](strings/join.py) * [Knuth Morris Pratt](strings/knuth_morris_pratt.py) * [Levenshtein Distance](strings/levenshtein_distance.py) * [Lower](strings/lower.py) * [Manacher](strings/manacher.py) * [Min Cost String Conversion](strings/min_cost_string_conversion.py) * [Naive String Search](strings/naive_string_search.py) * [Ngram](strings/ngram.py) * [Palindrome](strings/palindrome.py) * [Prefix Function](strings/prefix_function.py) * [Rabin Karp](strings/rabin_karp.py) * [Remove Duplicate](strings/remove_duplicate.py) * [Reverse Letters](strings/reverse_letters.py) * [Reverse Long Words](strings/reverse_long_words.py) * [Reverse Words](strings/reverse_words.py) * [Snake Case To Camel Pascal Case](strings/snake_case_to_camel_pascal_case.py) * [Split](strings/split.py) * [Text Justification](strings/text_justification.py) * [Upper](strings/upper.py) * [Wave](strings/wave.py) * [Wildcard Pattern Matching](strings/wildcard_pattern_matching.py) * [Word Occurrence](strings/word_occurrence.py) * [Word Patterns](strings/word_patterns.py) * [Z Function](strings/z_function.py) ## Web Programming * [Co2 Emission](web_programming/co2_emission.py) * [Convert Number To Words](web_programming/convert_number_to_words.py) * [Covid Stats Via Xpath](web_programming/covid_stats_via_xpath.py) * [Crawl Google Results](web_programming/crawl_google_results.py) * [Crawl Google Scholar Citation](web_programming/crawl_google_scholar_citation.py) * [Currency Converter](web_programming/currency_converter.py) * [Current Stock Price](web_programming/current_stock_price.py) * [Current Weather](web_programming/current_weather.py) * [Daily Horoscope](web_programming/daily_horoscope.py) * [Download Images From Google Query](web_programming/download_images_from_google_query.py) * [Emails From Url](web_programming/emails_from_url.py) * [Fetch Anime And Play](web_programming/fetch_anime_and_play.py) * [Fetch Bbc News](web_programming/fetch_bbc_news.py) * [Fetch Github Info](web_programming/fetch_github_info.py) * [Fetch Jobs](web_programming/fetch_jobs.py) * [Fetch Quotes](web_programming/fetch_quotes.py) * [Fetch Well Rx Price](web_programming/fetch_well_rx_price.py) * [Get Amazon Product Data](web_programming/get_amazon_product_data.py) * [Get Imdb Top 250 Movies Csv](web_programming/get_imdb_top_250_movies_csv.py) * [Get Imdbtop](web_programming/get_imdbtop.py) * [Get Top Billioners](web_programming/get_top_billioners.py) * [Get Top Hn Posts](web_programming/get_top_hn_posts.py) * [Get User Tweets](web_programming/get_user_tweets.py) * [Giphy](web_programming/giphy.py) * [Instagram Crawler](web_programming/instagram_crawler.py) * [Instagram Pic](web_programming/instagram_pic.py) * [Instagram Video](web_programming/instagram_video.py) * [Nasa Data](web_programming/nasa_data.py) * [Open Google Results](web_programming/open_google_results.py) * [Random Anime Character](web_programming/random_anime_character.py) * [Recaptcha 
Verification](web_programming/recaptcha_verification.py) * [Reddit](web_programming/reddit.py) * [Search Books By Isbn](web_programming/search_books_by_isbn.py) * [Slack Message](web_programming/slack_message.py) * [Test Fetch Github Info](web_programming/test_fetch_github_info.py) * [World Covid19 Stats](web_programming/world_covid19_stats.py)
1
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
from __future__ import annotations

import sys


class Letter:
    def __init__(self, letter: str, freq: int):
        self.letter: str = letter
        self.freq: int = freq
        self.bitstring: dict[str, str] = {}

    def __repr__(self) -> str:
        return f"{self.letter}:{self.freq}"


class TreeNode:
    def __init__(self, freq: int, left: Letter | TreeNode, right: Letter | TreeNode):
        self.freq: int = freq
        self.left: Letter | TreeNode = left
        self.right: Letter | TreeNode = right


def parse_file(file_path: str) -> list[Letter]:
    """
    Read the file and build a dict of all letters and their
    frequencies, then convert the dict into a list of Letters.
    """
    chars: dict[str, int] = {}
    with open(file_path) as f:
        while True:
            c = f.read(1)
            if not c:
                break
            chars[c] = chars[c] + 1 if c in chars else 1
    return sorted((Letter(c, f) for c, f in chars.items()), key=lambda l: l.freq)


def build_tree(letters: list[Letter]) -> Letter | TreeNode:
    """
    Run through the list of Letters and build the min heap
    for the Huffman Tree.
    """
    response: list[Letter | TreeNode] = letters  # type: ignore
    while len(response) > 1:
        left = response.pop(0)
        right = response.pop(0)
        total_freq = left.freq + right.freq
        node = TreeNode(total_freq, left, right)
        response.append(node)
        response.sort(key=lambda l: l.freq)
    return response[0]


def traverse_tree(root: Letter | TreeNode, bitstring: str) -> list[Letter]:
    """
    Recursively traverse the Huffman Tree to set each
    Letter's bitstring dictionary, and return the list of Letters
    """
    if isinstance(root, Letter):
        root.bitstring[root.letter] = bitstring
        return [root]
    treenode: TreeNode = root  # type: ignore
    letters = []
    letters += traverse_tree(treenode.left, bitstring + "0")
    letters += traverse_tree(treenode.right, bitstring + "1")
    return letters


def huffman(file_path: str) -> None:
    """
    Parse the file, build the tree, then run through the file
    again, using the letters dictionary to find and print out the
    bitstring for each letter.
    """
    letters_list = parse_file(file_path)
    root = build_tree(letters_list)
    letters = {
        k: v for letter in traverse_tree(root, "") for k, v in letter.bitstring.items()
    }
    print(f"Huffman Coding of {file_path}: ")
    with open(file_path) as f:
        while True:
            c = f.read(1)
            if not c:
                break
            print(letters[c], end=" ")
    print()


if __name__ == "__main__":
    # pass the file path to the huffman function
    huffman(sys.argv[1])
from __future__ import annotations

import sys


class Letter:
    def __init__(self, letter: str, freq: int):
        self.letter: str = letter
        self.freq: int = freq
        self.bitstring: dict[str, str] = {}

    def __repr__(self) -> str:
        return f"{self.letter}:{self.freq}"


class TreeNode:
    def __init__(self, freq: int, left: Letter | TreeNode, right: Letter | TreeNode):
        self.freq: int = freq
        self.left: Letter | TreeNode = left
        self.right: Letter | TreeNode = right


def parse_file(file_path: str) -> list[Letter]:
    """
    Read the file and build a dict of all letters and their
    frequencies, then convert the dict into a list of Letters.
    """
    chars: dict[str, int] = {}
    with open(file_path) as f:
        while True:
            c = f.read(1)
            if not c:
                break
            chars[c] = chars[c] + 1 if c in chars else 1
    return sorted((Letter(c, f) for c, f in chars.items()), key=lambda x: x.freq)


def build_tree(letters: list[Letter]) -> Letter | TreeNode:
    """
    Run through the list of Letters and build the min heap
    for the Huffman Tree.
    """
    response: list[Letter | TreeNode] = letters  # type: ignore
    while len(response) > 1:
        left = response.pop(0)
        right = response.pop(0)
        total_freq = left.freq + right.freq
        node = TreeNode(total_freq, left, right)
        response.append(node)
        response.sort(key=lambda x: x.freq)
    return response[0]


def traverse_tree(root: Letter | TreeNode, bitstring: str) -> list[Letter]:
    """
    Recursively traverse the Huffman Tree to set each
    Letter's bitstring dictionary, and return the list of Letters
    """
    if isinstance(root, Letter):
        root.bitstring[root.letter] = bitstring
        return [root]
    treenode: TreeNode = root  # type: ignore
    letters = []
    letters += traverse_tree(treenode.left, bitstring + "0")
    letters += traverse_tree(treenode.right, bitstring + "1")
    return letters


def huffman(file_path: str) -> None:
    """
    Parse the file, build the tree, then run through the file
    again, using the letters dictionary to find and print out the
    bitstring for each letter.
    """
    letters_list = parse_file(file_path)
    root = build_tree(letters_list)
    letters = {
        k: v for letter in traverse_tree(root, "") for k, v in letter.bitstring.items()
    }
    print(f"Huffman Coding of {file_path}: ")
    with open(file_path) as f:
        while True:
            c = f.read(1)
            if not c:
                break
            print(letters[c], end=" ")
    print()


if __name__ == "__main__":
    # pass the file path to the huffman function
    huffman(sys.argv[1])
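A quick way to see the three helpers above working together is a tiny driver. This is a minimal sketch, not part of the PR: it assumes the module above is importable as `huffman` (the file name is an assumption) and uses a made-up sample string.

# Hypothetical driver: build codes for a tiny corpus and print the code table.
import tempfile

from huffman import build_tree, parse_file, traverse_tree  # assumed module name

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("abracadabra")  # sample text: frequent letters get shorter codes
    path = tmp.name

letters = parse_file(path)   # Letter objects sorted by ascending frequency
root = build_tree(letters)   # pairwise merging of the two lowest-frequency nodes
for letter in traverse_tree(root, ""):
    print(letter.letter, letter.bitstring[letter.letter])

The before/after pair above differs only in renaming the lambda parameter `l` to `x`; a lowercase `l` is what flake8's E741 rule flags as an ambiguous name.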
1
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" A non-recursive Segment Tree implementation with range query and single element update, works virtually with any list of the same type of elements with a "commutative" combiner. Explanation: https://www.geeksforgeeks.org/iterative-segment-tree-range-minimum-query/ https://www.geeksforgeeks.org/segment-tree-efficient-implementation/ >>> SegmentTree([1, 2, 3], lambda a, b: a + b).query(0, 2) 6 >>> SegmentTree([3, 1, 2], min).query(0, 2) 1 >>> SegmentTree([2, 3, 1], max).query(0, 2) 3 >>> st = SegmentTree([1, 5, 7, -1, 6], lambda a, b: a + b) >>> st.update(1, -1) >>> st.update(2, 3) >>> st.query(1, 2) 2 >>> st.query(1, 1) -1 >>> st.update(4, 1) >>> st.query(3, 4) 0 >>> st = SegmentTree([[1, 2, 3], [3, 2, 1], [1, 1, 1]], lambda a, b: [a[i] + b[i] for i ... in range(len(a))]) >>> st.query(0, 1) [4, 4, 4] >>> st.query(1, 2) [4, 3, 2] >>> st.update(1, [-1, -1, -1]) >>> st.query(1, 2) [0, 0, 0] >>> st.query(0, 2) [1, 2, 3] """ from __future__ import annotations from collections.abc import Callable from typing import Any, Generic, TypeVar T = TypeVar("T") class SegmentTree(Generic[T]): def __init__(self, arr: list[T], fnc: Callable[[T, T], T]) -> None: """ Segment Tree constructor, it works just with commutative combiner. :param arr: list of elements for the segment tree :param fnc: commutative function for combine two elements >>> SegmentTree(['a', 'b', 'c'], lambda a, b: f'{a}{b}').query(0, 2) 'abc' >>> SegmentTree([(1, 2), (2, 3), (3, 4)], ... lambda a, b: (a[0] + b[0], a[1] + b[1])).query(0, 2) (6, 9) """ any_type: Any | T = None self.N: int = len(arr) self.st: list[T] = [any_type for _ in range(self.N)] + arr self.fn = fnc self.build() def build(self) -> None: for p in range(self.N - 1, 0, -1): self.st[p] = self.fn(self.st[p * 2], self.st[p * 2 + 1]) def update(self, p: int, v: T) -> None: """ Update an element in log(N) time :param p: position to be update :param v: new value >>> st = SegmentTree([3, 1, 2, 4], min) >>> st.query(0, 3) 1 >>> st.update(2, -1) >>> st.query(0, 3) -1 """ p += self.N self.st[p] = v while p > 1: p = p // 2 self.st[p] = self.fn(self.st[p * 2], self.st[p * 2 + 1]) def query(self, l: int, r: int) -> T | None: # noqa: E741 """ Get range query value in log(N) time :param l: left element index :param r: right element index :return: element combined in the range [l, r] >>> st = SegmentTree([1, 2, 3, 4], lambda a, b: a + b) >>> st.query(0, 2) 6 >>> st.query(1, 2) 5 >>> st.query(0, 3) 10 >>> st.query(2, 3) 7 """ l, r = l + self.N, r + self.N res: T | None = None while l <= r: # noqa: E741 if l % 2 == 1: res = self.st[l] if res is None else self.fn(res, self.st[l]) if r % 2 == 0: res = self.st[r] if res is None else self.fn(res, self.st[r]) l, r = (l + 1) // 2, (r - 1) // 2 return res if __name__ == "__main__": from functools import reduce test_array = [1, 10, -2, 9, -3, 8, 4, -7, 5, 6, 11, -12] test_updates = { 0: 7, 1: 2, 2: 6, 3: -14, 4: 5, 5: 4, 6: 7, 7: -10, 8: 9, 9: 10, 10: 12, 11: 1, } min_segment_tree = SegmentTree(test_array, min) max_segment_tree = SegmentTree(test_array, max) sum_segment_tree = SegmentTree(test_array, lambda a, b: a + b) def test_all_segments() -> None: """ Test all possible segments """ for i in range(len(test_array)): for j in range(i, len(test_array)): min_range = reduce(min, test_array[i : j + 1]) max_range = reduce(max, test_array[i : j + 1]) sum_range = reduce(lambda a, b: a + b, test_array[i : j + 1]) assert min_range == min_segment_tree.query(i, j) assert max_range == max_segment_tree.query(i, j) assert sum_range == 
sum_segment_tree.query(i, j) test_all_segments() for index, value in test_updates.items(): test_array[index] = value min_segment_tree.update(index, value) max_segment_tree.update(index, value) sum_segment_tree.update(index, value) test_all_segments()
""" A non-recursive Segment Tree implementation with range query and single element update, works virtually with any list of the same type of elements with a "commutative" combiner. Explanation: https://www.geeksforgeeks.org/iterative-segment-tree-range-minimum-query/ https://www.geeksforgeeks.org/segment-tree-efficient-implementation/ >>> SegmentTree([1, 2, 3], lambda a, b: a + b).query(0, 2) 6 >>> SegmentTree([3, 1, 2], min).query(0, 2) 1 >>> SegmentTree([2, 3, 1], max).query(0, 2) 3 >>> st = SegmentTree([1, 5, 7, -1, 6], lambda a, b: a + b) >>> st.update(1, -1) >>> st.update(2, 3) >>> st.query(1, 2) 2 >>> st.query(1, 1) -1 >>> st.update(4, 1) >>> st.query(3, 4) 0 >>> st = SegmentTree([[1, 2, 3], [3, 2, 1], [1, 1, 1]], lambda a, b: [a[i] + b[i] for i ... in range(len(a))]) >>> st.query(0, 1) [4, 4, 4] >>> st.query(1, 2) [4, 3, 2] >>> st.update(1, [-1, -1, -1]) >>> st.query(1, 2) [0, 0, 0] >>> st.query(0, 2) [1, 2, 3] """ from __future__ import annotations from collections.abc import Callable from typing import Any, Generic, TypeVar T = TypeVar("T") class SegmentTree(Generic[T]): def __init__(self, arr: list[T], fnc: Callable[[T, T], T]) -> None: """ Segment Tree constructor, it works just with commutative combiner. :param arr: list of elements for the segment tree :param fnc: commutative function for combine two elements >>> SegmentTree(['a', 'b', 'c'], lambda a, b: f'{a}{b}').query(0, 2) 'abc' >>> SegmentTree([(1, 2), (2, 3), (3, 4)], ... lambda a, b: (a[0] + b[0], a[1] + b[1])).query(0, 2) (6, 9) """ any_type: Any | T = None self.N: int = len(arr) self.st: list[T] = [any_type for _ in range(self.N)] + arr self.fn = fnc self.build() def build(self) -> None: for p in range(self.N - 1, 0, -1): self.st[p] = self.fn(self.st[p * 2], self.st[p * 2 + 1]) def update(self, p: int, v: T) -> None: """ Update an element in log(N) time :param p: position to be update :param v: new value >>> st = SegmentTree([3, 1, 2, 4], min) >>> st.query(0, 3) 1 >>> st.update(2, -1) >>> st.query(0, 3) -1 """ p += self.N self.st[p] = v while p > 1: p = p // 2 self.st[p] = self.fn(self.st[p * 2], self.st[p * 2 + 1]) def query(self, l: int, r: int) -> T | None: # noqa: E741 """ Get range query value in log(N) time :param l: left element index :param r: right element index :return: element combined in the range [l, r] >>> st = SegmentTree([1, 2, 3, 4], lambda a, b: a + b) >>> st.query(0, 2) 6 >>> st.query(1, 2) 5 >>> st.query(0, 3) 10 >>> st.query(2, 3) 7 """ l, r = l + self.N, r + self.N res: T | None = None while l <= r: if l % 2 == 1: res = self.st[l] if res is None else self.fn(res, self.st[l]) if r % 2 == 0: res = self.st[r] if res is None else self.fn(res, self.st[r]) l, r = (l + 1) // 2, (r - 1) // 2 return res if __name__ == "__main__": from functools import reduce test_array = [1, 10, -2, 9, -3, 8, 4, -7, 5, 6, 11, -12] test_updates = { 0: 7, 1: 2, 2: 6, 3: -14, 4: 5, 5: 4, 6: 7, 7: -10, 8: 9, 9: 10, 10: 12, 11: 1, } min_segment_tree = SegmentTree(test_array, min) max_segment_tree = SegmentTree(test_array, max) sum_segment_tree = SegmentTree(test_array, lambda a, b: a + b) def test_all_segments() -> None: """ Test all possible segments """ for i in range(len(test_array)): for j in range(i, len(test_array)): min_range = reduce(min, test_array[i : j + 1]) max_range = reduce(max, test_array[i : j + 1]) sum_range = reduce(lambda a, b: a + b, test_array[i : j + 1]) assert min_range == min_segment_tree.query(i, j) assert max_range == max_segment_tree.query(i, j) assert sum_range == sum_segment_tree.query(i, j) 
test_all_segments() for index, value in test_updates.items(): test_array[index] = value min_segment_tree.update(index, value) max_segment_tree.update(index, value) sum_segment_tree.update(index, value) test_all_segments()
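Because the tree above is stored iteratively in a flat list, its layout is easy to inspect. A minimal sketch, assuming it runs in the same module as the `SegmentTree` class above: leaves sit at `st[N:2 * N]`, and each internal node `p` holds `fn(st[2 * p], st[2 * p + 1])`, so `st[1]` is the combined value of the whole array.

# Peek at the flat array layout for a 4-element sum tree.
st = SegmentTree([1, 2, 3, 4], lambda a, b: a + b)
print(st.st)  # [None, 10, 3, 7, 1, 2, 3, 4] -- st[1] is the root (total sum 10)

This layout is why `update` only walks `p //= 2` up to the root, and why `query` can climb from both ends of the range toward each other in O(log N).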
1
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
import math


class SegmentTree:
    def __init__(self, a):
        self.N = len(a)
        self.st = [0] * (
            4 * self.N
        )  # approximate the overall size of segment tree with array N
        self.build(1, 0, self.N - 1)

    def left(self, idx):
        return idx * 2

    def right(self, idx):
        return idx * 2 + 1

    def build(self, idx, l, r):  # noqa: E741
        if l == r:  # noqa: E741
            self.st[idx] = A[l]
        else:
            mid = (l + r) // 2
            self.build(self.left(idx), l, mid)
            self.build(self.right(idx), mid + 1, r)
            self.st[idx] = max(self.st[self.left(idx)], self.st[self.right(idx)])

    def update(self, a, b, val):
        return self.update_recursive(1, 0, self.N - 1, a - 1, b - 1, val)

    def update_recursive(self, idx, l, r, a, b, val):  # noqa: E741
        """
        update(1, 1, N, a, b, v) for update val v to [a,b]
        """
        if r < a or l > b:
            return True
        if l == r:  # noqa: E741
            self.st[idx] = val
            return True
        mid = (l + r) // 2
        self.update_recursive(self.left(idx), l, mid, a, b, val)
        self.update_recursive(self.right(idx), mid + 1, r, a, b, val)
        self.st[idx] = max(self.st[self.left(idx)], self.st[self.right(idx)])
        return True

    def query(self, a, b):
        return self.query_recursive(1, 0, self.N - 1, a - 1, b - 1)

    def query_recursive(self, idx, l, r, a, b):  # noqa: E741
        """
        query(1, 1, N, a, b) for query max of [a,b]
        """
        if r < a or l > b:
            return -math.inf
        if l >= a and r <= b:  # noqa: E741
            return self.st[idx]
        mid = (l + r) // 2
        q1 = self.query_recursive(self.left(idx), l, mid, a, b)
        q2 = self.query_recursive(self.right(idx), mid + 1, r, a, b)
        return max(q1, q2)

    def show_data(self):
        show_list = []
        for i in range(1, N + 1):
            show_list += [self.query(i, i)]
        print(show_list)


if __name__ == "__main__":
    A = [1, 2, -4, 7, 3, -5, 6, 11, -20, 9, 14, 15, 5, 2, -8]
    N = 15
    segt = SegmentTree(A)
    print(segt.query(4, 6))
    print(segt.query(7, 11))
    print(segt.query(7, 12))
    segt.update(1, 3, 111)
    print(segt.query(1, 15))
    segt.update(7, 8, 235)
    segt.show_data()
import math


class SegmentTree:
    def __init__(self, a):
        self.N = len(a)
        self.st = [0] * (
            4 * self.N
        )  # approximate the overall size of segment tree with array N
        self.build(1, 0, self.N - 1)

    def left(self, idx):
        return idx * 2

    def right(self, idx):
        return idx * 2 + 1

    def build(self, idx, l, r):  # noqa: E741
        if l == r:
            self.st[idx] = A[l]
        else:
            mid = (l + r) // 2
            self.build(self.left(idx), l, mid)
            self.build(self.right(idx), mid + 1, r)
            self.st[idx] = max(self.st[self.left(idx)], self.st[self.right(idx)])

    def update(self, a, b, val):
        return self.update_recursive(1, 0, self.N - 1, a - 1, b - 1, val)

    def update_recursive(self, idx, l, r, a, b, val):  # noqa: E741
        """
        update(1, 1, N, a, b, v) for update val v to [a,b]
        """
        if r < a or l > b:
            return True
        if l == r:
            self.st[idx] = val
            return True
        mid = (l + r) // 2
        self.update_recursive(self.left(idx), l, mid, a, b, val)
        self.update_recursive(self.right(idx), mid + 1, r, a, b, val)
        self.st[idx] = max(self.st[self.left(idx)], self.st[self.right(idx)])
        return True

    def query(self, a, b):
        return self.query_recursive(1, 0, self.N - 1, a - 1, b - 1)

    def query_recursive(self, idx, l, r, a, b):  # noqa: E741
        """
        query(1, 1, N, a, b) for query max of [a,b]
        """
        if r < a or l > b:
            return -math.inf
        if l >= a and r <= b:
            return self.st[idx]
        mid = (l + r) // 2
        q1 = self.query_recursive(self.left(idx), l, mid, a, b)
        q2 = self.query_recursive(self.right(idx), mid + 1, r, a, b)
        return max(q1, q2)

    def show_data(self):
        show_list = []
        for i in range(1, N + 1):
            show_list += [self.query(i, i)]
        print(show_list)


if __name__ == "__main__":
    A = [1, 2, -4, 7, 3, -5, 6, 11, -20, 9, 14, 15, 5, 2, -8]
    N = 15
    segt = SegmentTree(A)
    print(segt.query(4, 6))
    print(segt.query(7, 11))
    print(segt.query(7, 12))
    segt.update(1, 3, 111)
    print(segt.query(1, 15))
    segt.update(7, 8, 235)
    segt.show_data()
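Note that this class reads the module-level globals `A` and `N` inside `build` and `show_data`, so both must exist before constructing or printing a tree. A toy sketch of standalone use, assuming it replaces the `__main__` block above (the sample values here are made up):

# Hypothetical standalone use: the globals A and N are a quirk of this class.
A = [5, 2, 8, 1]
N = len(A)
segt = SegmentTree(A)
print(segt.query(1, 4))  # 8 -- 1-based, inclusive range maximum over all of A
segt.update(2, 2, 10)    # assign 10 to position 2
print(segt.query(1, 4))  # 10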
1
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" Implementation of sequential minimal optimization (SMO) for support vector machines (SVM). Sequential minimal optimization (SMO) is an algorithm for solving the quadratic programming (QP) problem that arises during the training of support vector machines. It was invented by John Platt in 1998. Input: 0: type: numpy.ndarray. 1: first column of ndarray must be tags of samples, must be 1 or -1. 2: rows of ndarray represent samples. Usage: Command: python3 sequential_minimum_optimization.py Code: from sequential_minimum_optimization import SmoSVM, Kernel kernel = Kernel(kernel='poly', degree=3., coef0=1., gamma=0.5) init_alphas = np.zeros(train.shape[0]) SVM = SmoSVM(train=train, alpha_list=init_alphas, kernel_func=kernel, cost=0.4, b=0.0, tolerance=0.001) SVM.fit() predict = SVM.predict(test_samples) Reference: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/smo-book.pdf https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-98-14.pdf """ import os import sys import urllib.request import numpy as np import pandas as pd from matplotlib import pyplot as plt from sklearn.datasets import make_blobs, make_circles from sklearn.preprocessing import StandardScaler CANCER_DATASET_URL = ( "https://archive.ics.uci.edu/ml/machine-learning-databases/" "breast-cancer-wisconsin/wdbc.data" ) class SmoSVM: def __init__( self, train, kernel_func, alpha_list=None, cost=0.4, b=0.0, tolerance=0.001, auto_norm=True, ): self._init = True self._auto_norm = auto_norm self._c = np.float64(cost) self._b = np.float64(b) self._tol = np.float64(tolerance) if tolerance > 0.0001 else np.float64(0.001) self.tags = train[:, 0] self.samples = self._norm(train[:, 1:]) if self._auto_norm else train[:, 1:] self.alphas = alpha_list if alpha_list is not None else np.zeros(train.shape[0]) self.Kernel = kernel_func self._eps = 0.001 self._all_samples = list(range(self.length)) self._K_matrix = self._calculate_k_matrix() self._error = np.zeros(self.length) self._unbound = [] self.choose_alpha = self._choose_alphas() # Calculate alphas using SMO algorithm def fit(self): k = self._k state = None while True: # 1: Find alpha1, alpha2 try: i1, i2 = self.choose_alpha.send(state) state = None except StopIteration: print("Optimization done!\nEvery sample satisfy the KKT condition!") break # 2: calculate new alpha2 and new alpha1 y1, y2 = self.tags[i1], self.tags[i2] a1, a2 = self.alphas[i1].copy(), self.alphas[i2].copy() e1, e2 = self._e(i1), self._e(i2) args = (i1, i2, a1, a2, e1, e2, y1, y2) a1_new, a2_new = self._get_new_alpha(*args) if not a1_new and not a2_new: state = False continue self.alphas[i1], self.alphas[i2] = a1_new, a2_new # 3: update threshold(b) b1_new = np.float64( -e1 - y1 * k(i1, i1) * (a1_new - a1) - y2 * k(i2, i1) * (a2_new - a2) + self._b ) b2_new = np.float64( -e2 - y2 * k(i2, i2) * (a2_new - a2) - y1 * k(i1, i2) * (a1_new - a1) + self._b ) if 0.0 < a1_new < self._c: b = b1_new if 0.0 < a2_new < self._c: b = b2_new if not (np.float64(0) < a2_new < self._c) and not ( np.float64(0) < a1_new < self._c ): b = (b1_new + b2_new) / 2.0 b_old = self._b self._b = b # 4: update error value,here we only calculate those non-bound samples' # error self._unbound = [i for i in self._all_samples if self._is_unbound(i)] for s in self.unbound: if s == i1 or s == i2: continue self._error[s] += ( y1 * (a1_new - a1) * k(i1, s) + y2 * (a2_new - a2) * k(i2, s) + (self._b - b_old) ) # if i1 or i2 is non-bound,update there error value to zero if self._is_unbound(i1): self._error[i1] = 0 if 
self._is_unbound(i2): self._error[i2] = 0 # Predict test samples def predict(self, test_samples, classify=True): if test_samples.shape[1] > self.samples.shape[1]: raise ValueError( "Test samples' feature length does not equal to that of train samples" ) if self._auto_norm: test_samples = self._norm(test_samples) results = [] for test_sample in test_samples: result = self._predict(test_sample) if classify: results.append(1 if result > 0 else -1) else: results.append(result) return np.array(results) # Check if alpha violate KKT condition def _check_obey_kkt(self, index): alphas = self.alphas tol = self._tol r = self._e(index) * self.tags[index] c = self._c return (r < -tol and alphas[index] < c) or (r > tol and alphas[index] > 0.0) # Get value calculated from kernel function def _k(self, i1, i2): # for test samples,use Kernel function if isinstance(i2, np.ndarray): return self.Kernel(self.samples[i1], i2) # for train samples,Kernel values have been saved in matrix else: return self._K_matrix[i1, i2] # Get sample's error def _e(self, index): """ Two cases: 1:Sample[index] is non-bound,Fetch error from list: _error 2:sample[index] is bound,Use predicted value deduct true value: g(xi) - yi """ # get from error data if self._is_unbound(index): return self._error[index] # get by g(xi) - yi else: gx = np.dot(self.alphas * self.tags, self._K_matrix[:, index]) + self._b yi = self.tags[index] return gx - yi # Calculate Kernel matrix of all possible i1,i2 ,saving time def _calculate_k_matrix(self): k_matrix = np.zeros([self.length, self.length]) for i in self._all_samples: for j in self._all_samples: k_matrix[i, j] = np.float64( self.Kernel(self.samples[i, :], self.samples[j, :]) ) return k_matrix # Predict test sample's tag def _predict(self, sample): k = self._k predicted_value = ( np.sum( [ self.alphas[i1] * self.tags[i1] * k(i1, sample) for i1 in self._all_samples ] ) + self._b ) return predicted_value # Choose alpha1 and alpha2 def _choose_alphas(self): locis = yield from self._choose_a1() if not locis: return return locis def _choose_a1(self): """ Choose first alpha ;steps: 1:First loop over all sample 2:Second loop over all non-bound samples till all non-bound samples does not voilate kkt condition. 3:Repeat this two process endlessly,till all samples does not voilate kkt condition samples after first loop. """ while True: all_not_obey = True # all sample print("scanning all sample!") for i1 in [i for i in self._all_samples if self._check_obey_kkt(i)]: all_not_obey = False yield from self._choose_a2(i1) # non-bound sample print("scanning non-bound sample!") while True: not_obey = True for i1 in [ i for i in self._all_samples if self._check_obey_kkt(i) and self._is_unbound(i) ]: not_obey = False yield from self._choose_a2(i1) if not_obey: print("all non-bound samples fit the KKT condition!") break if all_not_obey: print("all samples fit the KKT condition! Optimization done!") break return False def _choose_a2(self, i1): """ Choose the second alpha by using heuristic algorithm ;steps: 1: Choose alpha2 which gets the maximum step size (|E1 - E2|). 2: Start in a random point,loop over all non-bound samples till alpha1 and alpha2 are optimized. 3: Start in a random point,loop over all samples till alpha1 and alpha2 are optimized. 
""" self._unbound = [i for i in self._all_samples if self._is_unbound(i)] if len(self.unbound) > 0: tmp_error = self._error.copy().tolist() tmp_error_dict = { index: value for index, value in enumerate(tmp_error) if self._is_unbound(index) } if self._e(i1) >= 0: i2 = min(tmp_error_dict, key=lambda index: tmp_error_dict[index]) else: i2 = max(tmp_error_dict, key=lambda index: tmp_error_dict[index]) cmd = yield i1, i2 if cmd is None: return for i2 in np.roll(self.unbound, np.random.choice(self.length)): cmd = yield i1, i2 if cmd is None: return for i2 in np.roll(self._all_samples, np.random.choice(self.length)): cmd = yield i1, i2 if cmd is None: return # Get the new alpha2 and new alpha1 def _get_new_alpha(self, i1, i2, a1, a2, e1, e2, y1, y2): k = self._k if i1 == i2: return None, None # calculate L and H which bound the new alpha2 s = y1 * y2 if s == -1: l, h = max(0.0, a2 - a1), min(self._c, self._c + a2 - a1) else: l, h = max(0.0, a2 + a1 - self._c), min(self._c, a2 + a1) if l == h: # noqa: E741 return None, None # calculate eta k11 = k(i1, i1) k22 = k(i2, i2) k12 = k(i1, i2) # select the new alpha2 which could get the minimal objectives if (eta := k11 + k22 - 2.0 * k12) > 0.0: a2_new_unc = a2 + (y2 * (e1 - e2)) / eta # a2_new has a boundary if a2_new_unc >= h: a2_new = h elif a2_new_unc <= l: a2_new = l else: a2_new = a2_new_unc else: b = self._b l1 = a1 + s * (a2 - l) h1 = a1 + s * (a2 - h) # way 1 f1 = y1 * (e1 + b) - a1 * k(i1, i1) - s * a2 * k(i1, i2) f2 = y2 * (e2 + b) - a2 * k(i2, i2) - s * a1 * k(i1, i2) ol = ( l1 * f1 + l * f2 + 1 / 2 * l1**2 * k(i1, i1) + 1 / 2 * l**2 * k(i2, i2) + s * l * l1 * k(i1, i2) ) oh = ( h1 * f1 + h * f2 + 1 / 2 * h1**2 * k(i1, i1) + 1 / 2 * h**2 * k(i2, i2) + s * h * h1 * k(i1, i2) ) """ # way 2 Use objective function check which alpha2 new could get the minimal objectives """ if ol < (oh - self._eps): a2_new = l elif ol > oh + self._eps: a2_new = h else: a2_new = a2 # a1_new has a boundary too a1_new = a1 + s * (a2 - a2_new) if a1_new < 0: a2_new += s * a1_new a1_new = 0 if a1_new > self._c: a2_new += s * (a1_new - self._c) a1_new = self._c return a1_new, a2_new # Normalise data using min_max way def _norm(self, data): if self._init: self._min = np.min(data, axis=0) self._max = np.max(data, axis=0) self._init = False return (data - self._min) / (self._max - self._min) else: return (data - self._min) / (self._max - self._min) def _is_unbound(self, index): return bool(0.0 < self.alphas[index] < self._c) def _is_support(self, index): return bool(self.alphas[index] > 0) @property def unbound(self): return self._unbound @property def support(self): return [i for i in range(self.length) if self._is_support(i)] @property def length(self): return self.samples.shape[0] class Kernel: def __init__(self, kernel, degree=1.0, coef0=0.0, gamma=1.0): self.degree = np.float64(degree) self.coef0 = np.float64(coef0) self.gamma = np.float64(gamma) self._kernel_name = kernel self._kernel = self._get_kernel(kernel_name=kernel) self._check() def _polynomial(self, v1, v2): return (self.gamma * np.inner(v1, v2) + self.coef0) ** self.degree def _linear(self, v1, v2): return np.inner(v1, v2) + self.coef0 def _rbf(self, v1, v2): return np.exp(-1 * (self.gamma * np.linalg.norm(v1 - v2) ** 2)) def _check(self): if self._kernel == self._rbf: if self.gamma < 0: raise ValueError("gamma value must greater than 0") def _get_kernel(self, kernel_name): maps = {"linear": self._linear, "poly": self._polynomial, "rbf": self._rbf} return maps[kernel_name] def __call__(self, v1, v2): return 
self._kernel(v1, v2) def __repr__(self): return self._kernel_name def count_time(func): def call_func(*args, **kwargs): import time start_time = time.time() func(*args, **kwargs) end_time = time.time() print(f"smo algorithm cost {end_time - start_time} seconds") return call_func @count_time def test_cancel_data(): print("Hello!\nStart test svm by smo algorithm!") # 0: download dataset and load into pandas' dataframe if not os.path.exists(r"cancel_data.csv"): request = urllib.request.Request( CANCER_DATASET_URL, headers={"User-Agent": "Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)"}, ) response = urllib.request.urlopen(request) content = response.read().decode("utf-8") with open(r"cancel_data.csv", "w") as f: f.write(content) data = pd.read_csv(r"cancel_data.csv", header=None) # 1: pre-processing data del data[data.columns.tolist()[0]] data = data.dropna(axis=0) data = data.replace({"M": np.float64(1), "B": np.float64(-1)}) samples = np.array(data)[:, :] # 2: dividing data into train_data and test_data train_data, test_data = samples[:328, :], samples[328:, :] test_tags, test_samples = test_data[:, 0], test_data[:, 1:] # 3: choose kernel function,and set initial alphas to zero(optional) mykernel = Kernel(kernel="rbf", degree=5, coef0=1, gamma=0.5) al = np.zeros(train_data.shape[0]) # 4: calculating best alphas using SMO algorithm and predict test_data samples mysvm = SmoSVM( train=train_data, kernel_func=mykernel, alpha_list=al, cost=0.4, b=0.0, tolerance=0.001, ) mysvm.fit() predict = mysvm.predict(test_samples) # 5: check accuracy score = 0 test_num = test_tags.shape[0] for i in range(test_tags.shape[0]): if test_tags[i] == predict[i]: score += 1 print(f"\nall: {test_num}\nright: {score}\nfalse: {test_num - score}") print(f"Rough Accuracy: {score / test_tags.shape[0]}") def test_demonstration(): # change stdout print("\nStart plot,please wait!!!") sys.stdout = open(os.devnull, "w") ax1 = plt.subplot2grid((2, 2), (0, 0)) ax2 = plt.subplot2grid((2, 2), (0, 1)) ax3 = plt.subplot2grid((2, 2), (1, 0)) ax4 = plt.subplot2grid((2, 2), (1, 1)) ax1.set_title("linear svm,cost:0.1") test_linear_kernel(ax1, cost=0.1) ax2.set_title("linear svm,cost:500") test_linear_kernel(ax2, cost=500) ax3.set_title("rbf kernel svm,cost:0.1") test_rbf_kernel(ax3, cost=0.1) ax4.set_title("rbf kernel svm,cost:500") test_rbf_kernel(ax4, cost=500) sys.stdout = sys.__stdout__ print("Plot done!!!") def test_linear_kernel(ax, cost): train_x, train_y = make_blobs( n_samples=500, centers=2, n_features=2, random_state=1 ) train_y[train_y == 0] = -1 scaler = StandardScaler() train_x_scaled = scaler.fit_transform(train_x, train_y) train_data = np.hstack((train_y.reshape(500, 1), train_x_scaled)) mykernel = Kernel(kernel="linear", degree=5, coef0=1, gamma=0.5) mysvm = SmoSVM( train=train_data, kernel_func=mykernel, cost=cost, tolerance=0.001, auto_norm=False, ) mysvm.fit() plot_partition_boundary(mysvm, train_data, ax=ax) def test_rbf_kernel(ax, cost): train_x, train_y = make_circles( n_samples=500, noise=0.1, factor=0.1, random_state=1 ) train_y[train_y == 0] = -1 scaler = StandardScaler() train_x_scaled = scaler.fit_transform(train_x, train_y) train_data = np.hstack((train_y.reshape(500, 1), train_x_scaled)) mykernel = Kernel(kernel="rbf", degree=5, coef0=1, gamma=0.5) mysvm = SmoSVM( train=train_data, kernel_func=mykernel, cost=cost, tolerance=0.001, auto_norm=False, ) mysvm.fit() plot_partition_boundary(mysvm, train_data, ax=ax) def plot_partition_boundary( model, train_data, ax, resolution=100, colors=("b", "k", 
"r") ): """ We can not get the optimum w of our kernel svm model which is different from linear svm. For this reason, we generate randomly distributed points with high desity and prediced values of these points are calculated by using our tained model. Then we could use this prediced values to draw contour map. And this contour map can represent svm's partition boundary. """ train_data_x = train_data[:, 1] train_data_y = train_data[:, 2] train_data_tags = train_data[:, 0] xrange = np.linspace(train_data_x.min(), train_data_x.max(), resolution) yrange = np.linspace(train_data_y.min(), train_data_y.max(), resolution) test_samples = np.array([(x, y) for x in xrange for y in yrange]).reshape( resolution * resolution, 2 ) test_tags = model.predict(test_samples, classify=False) grid = test_tags.reshape((len(xrange), len(yrange))) # Plot contour map which represents the partition boundary ax.contour( xrange, yrange, np.mat(grid).T, levels=(-1, 0, 1), linestyles=("--", "-", "--"), linewidths=(1, 1, 1), colors=colors, ) # Plot all train samples ax.scatter( train_data_x, train_data_y, c=train_data_tags, cmap=plt.cm.Dark2, lw=0, alpha=0.5, ) # Plot support vectors support = model.support ax.scatter( train_data_x[support], train_data_y[support], c=train_data_tags[support], cmap=plt.cm.Dark2, ) if __name__ == "__main__": test_cancel_data() test_demonstration() plt.show()
""" Implementation of sequential minimal optimization (SMO) for support vector machines (SVM). Sequential minimal optimization (SMO) is an algorithm for solving the quadratic programming (QP) problem that arises during the training of support vector machines. It was invented by John Platt in 1998. Input: 0: type: numpy.ndarray. 1: first column of ndarray must be tags of samples, must be 1 or -1. 2: rows of ndarray represent samples. Usage: Command: python3 sequential_minimum_optimization.py Code: from sequential_minimum_optimization import SmoSVM, Kernel kernel = Kernel(kernel='poly', degree=3., coef0=1., gamma=0.5) init_alphas = np.zeros(train.shape[0]) SVM = SmoSVM(train=train, alpha_list=init_alphas, kernel_func=kernel, cost=0.4, b=0.0, tolerance=0.001) SVM.fit() predict = SVM.predict(test_samples) Reference: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/smo-book.pdf https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-98-14.pdf """ import os import sys import urllib.request import numpy as np import pandas as pd from matplotlib import pyplot as plt from sklearn.datasets import make_blobs, make_circles from sklearn.preprocessing import StandardScaler CANCER_DATASET_URL = ( "https://archive.ics.uci.edu/ml/machine-learning-databases/" "breast-cancer-wisconsin/wdbc.data" ) class SmoSVM: def __init__( self, train, kernel_func, alpha_list=None, cost=0.4, b=0.0, tolerance=0.001, auto_norm=True, ): self._init = True self._auto_norm = auto_norm self._c = np.float64(cost) self._b = np.float64(b) self._tol = np.float64(tolerance) if tolerance > 0.0001 else np.float64(0.001) self.tags = train[:, 0] self.samples = self._norm(train[:, 1:]) if self._auto_norm else train[:, 1:] self.alphas = alpha_list if alpha_list is not None else np.zeros(train.shape[0]) self.Kernel = kernel_func self._eps = 0.001 self._all_samples = list(range(self.length)) self._K_matrix = self._calculate_k_matrix() self._error = np.zeros(self.length) self._unbound = [] self.choose_alpha = self._choose_alphas() # Calculate alphas using SMO algorithm def fit(self): k = self._k state = None while True: # 1: Find alpha1, alpha2 try: i1, i2 = self.choose_alpha.send(state) state = None except StopIteration: print("Optimization done!\nEvery sample satisfy the KKT condition!") break # 2: calculate new alpha2 and new alpha1 y1, y2 = self.tags[i1], self.tags[i2] a1, a2 = self.alphas[i1].copy(), self.alphas[i2].copy() e1, e2 = self._e(i1), self._e(i2) args = (i1, i2, a1, a2, e1, e2, y1, y2) a1_new, a2_new = self._get_new_alpha(*args) if not a1_new and not a2_new: state = False continue self.alphas[i1], self.alphas[i2] = a1_new, a2_new # 3: update threshold(b) b1_new = np.float64( -e1 - y1 * k(i1, i1) * (a1_new - a1) - y2 * k(i2, i1) * (a2_new - a2) + self._b ) b2_new = np.float64( -e2 - y2 * k(i2, i2) * (a2_new - a2) - y1 * k(i1, i2) * (a1_new - a1) + self._b ) if 0.0 < a1_new < self._c: b = b1_new if 0.0 < a2_new < self._c: b = b2_new if not (np.float64(0) < a2_new < self._c) and not ( np.float64(0) < a1_new < self._c ): b = (b1_new + b2_new) / 2.0 b_old = self._b self._b = b # 4: update error value,here we only calculate those non-bound samples' # error self._unbound = [i for i in self._all_samples if self._is_unbound(i)] for s in self.unbound: if s == i1 or s == i2: continue self._error[s] += ( y1 * (a1_new - a1) * k(i1, s) + y2 * (a2_new - a2) * k(i2, s) + (self._b - b_old) ) # if i1 or i2 is non-bound,update there error value to zero if self._is_unbound(i1): self._error[i1] = 0 if 
self._is_unbound(i2): self._error[i2] = 0 # Predict test samples def predict(self, test_samples, classify=True): if test_samples.shape[1] > self.samples.shape[1]: raise ValueError( "Test samples' feature length does not equal to that of train samples" ) if self._auto_norm: test_samples = self._norm(test_samples) results = [] for test_sample in test_samples: result = self._predict(test_sample) if classify: results.append(1 if result > 0 else -1) else: results.append(result) return np.array(results) # Check if alpha violates the KKT condition def _check_obey_kkt(self, index): alphas = self.alphas tol = self._tol r = self._e(index) * self.tags[index] c = self._c return (r < -tol and alphas[index] < c) or (r > tol and alphas[index] > 0.0) # Get value calculated from kernel function def _k(self, i1, i2): # for test samples,use Kernel function if isinstance(i2, np.ndarray): return self.Kernel(self.samples[i1], i2) # for train samples,Kernel values have been saved in matrix else: return self._K_matrix[i1, i2] # Get sample's error def _e(self, index): """ Two cases: 1:Sample[index] is non-bound,Fetch error from list: _error 2:sample[index] is bound,Use predicted value minus true value: g(xi) - yi """ # get from error data if self._is_unbound(index): return self._error[index] # get by g(xi) - yi else: gx = np.dot(self.alphas * self.tags, self._K_matrix[:, index]) + self._b yi = self.tags[index] return gx - yi # Calculate Kernel matrix of all possible i1,i2, saving time def _calculate_k_matrix(self): k_matrix = np.zeros([self.length, self.length]) for i in self._all_samples: for j in self._all_samples: k_matrix[i, j] = np.float64( self.Kernel(self.samples[i, :], self.samples[j, :]) ) return k_matrix # Predict test sample's tag def _predict(self, sample): k = self._k predicted_value = ( np.sum( [ self.alphas[i1] * self.tags[i1] * k(i1, sample) for i1 in self._all_samples ] ) + self._b ) return predicted_value # Choose alpha1 and alpha2 def _choose_alphas(self): locis = yield from self._choose_a1() if not locis: return return locis def _choose_a1(self): """ Choose first alpha; steps: 1:First loop over all samples 2:Second loop over all non-bound samples till no non-bound sample violates the KKT condition. 3:Repeat these two processes endlessly, till no sample violates the KKT condition after the first loop. """ while True: all_not_obey = True # all sample print("scanning all samples!") for i1 in [i for i in self._all_samples if self._check_obey_kkt(i)]: all_not_obey = False yield from self._choose_a2(i1) # non-bound sample print("scanning non-bound samples!") while True: not_obey = True for i1 in [ i for i in self._all_samples if self._check_obey_kkt(i) and self._is_unbound(i) ]: not_obey = False yield from self._choose_a2(i1) if not_obey: print("all non-bound samples fit the KKT condition!") break if all_not_obey: print("all samples fit the KKT condition! Optimization done!") break return False def _choose_a2(self, i1): """ Choose the second alpha by using a heuristic algorithm; steps: 1: Choose alpha2 which gets the maximum step size (|E1 - E2|). 2: Start at a random point, loop over all non-bound samples till alpha1 and alpha2 are optimized. 3: Start at a random point, loop over all samples till alpha1 and alpha2 are optimized. 
""" self._unbound = [i for i in self._all_samples if self._is_unbound(i)] if len(self.unbound) > 0: tmp_error = self._error.copy().tolist() tmp_error_dict = { index: value for index, value in enumerate(tmp_error) if self._is_unbound(index) } if self._e(i1) >= 0: i2 = min(tmp_error_dict, key=lambda index: tmp_error_dict[index]) else: i2 = max(tmp_error_dict, key=lambda index: tmp_error_dict[index]) cmd = yield i1, i2 if cmd is None: return for i2 in np.roll(self.unbound, np.random.choice(self.length)): cmd = yield i1, i2 if cmd is None: return for i2 in np.roll(self._all_samples, np.random.choice(self.length)): cmd = yield i1, i2 if cmd is None: return # Get the new alpha2 and new alpha1 def _get_new_alpha(self, i1, i2, a1, a2, e1, e2, y1, y2): k = self._k if i1 == i2: return None, None # calculate L and H which bound the new alpha2 s = y1 * y2 if s == -1: l, h = max(0.0, a2 - a1), min(self._c, self._c + a2 - a1) else: l, h = max(0.0, a2 + a1 - self._c), min(self._c, a2 + a1) if l == h: return None, None # calculate eta k11 = k(i1, i1) k22 = k(i2, i2) k12 = k(i1, i2) # select the new alpha2 which could get the minimal objectives if (eta := k11 + k22 - 2.0 * k12) > 0.0: a2_new_unc = a2 + (y2 * (e1 - e2)) / eta # a2_new has a boundary if a2_new_unc >= h: a2_new = h elif a2_new_unc <= l: a2_new = l else: a2_new = a2_new_unc else: b = self._b l1 = a1 + s * (a2 - l) h1 = a1 + s * (a2 - h) # way 1 f1 = y1 * (e1 + b) - a1 * k(i1, i1) - s * a2 * k(i1, i2) f2 = y2 * (e2 + b) - a2 * k(i2, i2) - s * a1 * k(i1, i2) ol = ( l1 * f1 + l * f2 + 1 / 2 * l1**2 * k(i1, i1) + 1 / 2 * l**2 * k(i2, i2) + s * l * l1 * k(i1, i2) ) oh = ( h1 * f1 + h * f2 + 1 / 2 * h1**2 * k(i1, i1) + 1 / 2 * h**2 * k(i2, i2) + s * h * h1 * k(i1, i2) ) """ # way 2 Use objective function check which alpha2 new could get the minimal objectives """ if ol < (oh - self._eps): a2_new = l elif ol > oh + self._eps: a2_new = h else: a2_new = a2 # a1_new has a boundary too a1_new = a1 + s * (a2 - a2_new) if a1_new < 0: a2_new += s * a1_new a1_new = 0 if a1_new > self._c: a2_new += s * (a1_new - self._c) a1_new = self._c return a1_new, a2_new # Normalise data using min_max way def _norm(self, data): if self._init: self._min = np.min(data, axis=0) self._max = np.max(data, axis=0) self._init = False return (data - self._min) / (self._max - self._min) else: return (data - self._min) / (self._max - self._min) def _is_unbound(self, index): return bool(0.0 < self.alphas[index] < self._c) def _is_support(self, index): return bool(self.alphas[index] > 0) @property def unbound(self): return self._unbound @property def support(self): return [i for i in range(self.length) if self._is_support(i)] @property def length(self): return self.samples.shape[0] class Kernel: def __init__(self, kernel, degree=1.0, coef0=0.0, gamma=1.0): self.degree = np.float64(degree) self.coef0 = np.float64(coef0) self.gamma = np.float64(gamma) self._kernel_name = kernel self._kernel = self._get_kernel(kernel_name=kernel) self._check() def _polynomial(self, v1, v2): return (self.gamma * np.inner(v1, v2) + self.coef0) ** self.degree def _linear(self, v1, v2): return np.inner(v1, v2) + self.coef0 def _rbf(self, v1, v2): return np.exp(-1 * (self.gamma * np.linalg.norm(v1 - v2) ** 2)) def _check(self): if self._kernel == self._rbf: if self.gamma < 0: raise ValueError("gamma value must greater than 0") def _get_kernel(self, kernel_name): maps = {"linear": self._linear, "poly": self._polynomial, "rbf": self._rbf} return maps[kernel_name] def __call__(self, v1, v2): return 
self._kernel(v1, v2) def __repr__(self): return self._kernel_name def count_time(func): def call_func(*args, **kwargs): import time start_time = time.time() func(*args, **kwargs) end_time = time.time() print(f"smo algorithm cost {end_time - start_time} seconds") return call_func @count_time def test_cancel_data(): print("Hello!\nStart test svm by smo algorithm!") # 0: download dataset and load into pandas' dataframe if not os.path.exists(r"cancel_data.csv"): request = urllib.request.Request( CANCER_DATASET_URL, headers={"User-Agent": "Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)"}, ) response = urllib.request.urlopen(request) content = response.read().decode("utf-8") with open(r"cancel_data.csv", "w") as f: f.write(content) data = pd.read_csv(r"cancel_data.csv", header=None) # 1: pre-processing data del data[data.columns.tolist()[0]] data = data.dropna(axis=0) data = data.replace({"M": np.float64(1), "B": np.float64(-1)}) samples = np.array(data)[:, :] # 2: dividing data into train_data and test_data train_data, test_data = samples[:328, :], samples[328:, :] test_tags, test_samples = test_data[:, 0], test_data[:, 1:] # 3: choose kernel function,and set initial alphas to zero(optional) mykernel = Kernel(kernel="rbf", degree=5, coef0=1, gamma=0.5) al = np.zeros(train_data.shape[0]) # 4: calculating best alphas using SMO algorithm and predict test_data samples mysvm = SmoSVM( train=train_data, kernel_func=mykernel, alpha_list=al, cost=0.4, b=0.0, tolerance=0.001, ) mysvm.fit() predict = mysvm.predict(test_samples) # 5: check accuracy score = 0 test_num = test_tags.shape[0] for i in range(test_tags.shape[0]): if test_tags[i] == predict[i]: score += 1 print(f"\nall: {test_num}\nright: {score}\nfalse: {test_num - score}") print(f"Rough Accuracy: {score / test_tags.shape[0]}") def test_demonstration(): # change stdout print("\nStart plot,please wait!!!") sys.stdout = open(os.devnull, "w") ax1 = plt.subplot2grid((2, 2), (0, 0)) ax2 = plt.subplot2grid((2, 2), (0, 1)) ax3 = plt.subplot2grid((2, 2), (1, 0)) ax4 = plt.subplot2grid((2, 2), (1, 1)) ax1.set_title("linear svm,cost:0.1") test_linear_kernel(ax1, cost=0.1) ax2.set_title("linear svm,cost:500") test_linear_kernel(ax2, cost=500) ax3.set_title("rbf kernel svm,cost:0.1") test_rbf_kernel(ax3, cost=0.1) ax4.set_title("rbf kernel svm,cost:500") test_rbf_kernel(ax4, cost=500) sys.stdout = sys.__stdout__ print("Plot done!!!") def test_linear_kernel(ax, cost): train_x, train_y = make_blobs( n_samples=500, centers=2, n_features=2, random_state=1 ) train_y[train_y == 0] = -1 scaler = StandardScaler() train_x_scaled = scaler.fit_transform(train_x, train_y) train_data = np.hstack((train_y.reshape(500, 1), train_x_scaled)) mykernel = Kernel(kernel="linear", degree=5, coef0=1, gamma=0.5) mysvm = SmoSVM( train=train_data, kernel_func=mykernel, cost=cost, tolerance=0.001, auto_norm=False, ) mysvm.fit() plot_partition_boundary(mysvm, train_data, ax=ax) def test_rbf_kernel(ax, cost): train_x, train_y = make_circles( n_samples=500, noise=0.1, factor=0.1, random_state=1 ) train_y[train_y == 0] = -1 scaler = StandardScaler() train_x_scaled = scaler.fit_transform(train_x, train_y) train_data = np.hstack((train_y.reshape(500, 1), train_x_scaled)) mykernel = Kernel(kernel="rbf", degree=5, coef0=1, gamma=0.5) mysvm = SmoSVM( train=train_data, kernel_func=mykernel, cost=cost, tolerance=0.001, auto_norm=False, ) mysvm.fit() plot_partition_boundary(mysvm, train_data, ax=ax) def plot_partition_boundary( model, train_data, ax, resolution=100, colors=("b", "k", 
"r") ): """ We can not get the optimum w of our kernel svm model which is different from linear svm. For this reason, we generate randomly distributed points with high desity and prediced values of these points are calculated by using our tained model. Then we could use this prediced values to draw contour map. And this contour map can represent svm's partition boundary. """ train_data_x = train_data[:, 1] train_data_y = train_data[:, 2] train_data_tags = train_data[:, 0] xrange = np.linspace(train_data_x.min(), train_data_x.max(), resolution) yrange = np.linspace(train_data_y.min(), train_data_y.max(), resolution) test_samples = np.array([(x, y) for x in xrange for y in yrange]).reshape( resolution * resolution, 2 ) test_tags = model.predict(test_samples, classify=False) grid = test_tags.reshape((len(xrange), len(yrange))) # Plot contour map which represents the partition boundary ax.contour( xrange, yrange, np.mat(grid).T, levels=(-1, 0, 1), linestyles=("--", "-", "--"), linewidths=(1, 1, 1), colors=colors, ) # Plot all train samples ax.scatter( train_data_x, train_data_y, c=train_data_tags, cmap=plt.cm.Dark2, lw=0, alpha=0.5, ) # Plot support vectors support = model.support ax.scatter( train_data_x[support], train_data_y[support], c=train_data_tags[support], cmap=plt.cm.Dark2, ) if __name__ == "__main__": test_cancel_data() test_demonstration() plt.show()
1
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" The following undirected network consists of seven vertices and twelve edges with a total weight of 243. οΏΌ The same network can be represented by the matrix below. A B C D E F G A - 16 12 21 - - - B 16 - - 17 20 - - C 12 - - 28 - 31 - D 21 17 28 - 18 19 23 E - 20 - 18 - - 11 F - - 31 19 - - 27 G - - - 23 11 27 - However, it is possible to optimise the network by removing some edges and still ensure that all points on the network remain connected. The network which achieves the maximum saving is shown below. It has a weight of 93, representing a saving of 243 - 93 = 150 from the original network. Using network.txt (right click and 'Save Link/Target As...'), a 6K text file containing a network with forty vertices, and given in matrix form, find the maximum saving which can be achieved by removing redundant edges whilst ensuring that the network remains connected. Solution: We use Prim's algorithm to find a Minimum Spanning Tree. Reference: https://en.wikipedia.org/wiki/Prim%27s_algorithm """ from __future__ import annotations import os from collections.abc import Mapping EdgeT = tuple[int, int] class Graph: """ A class representing an undirected weighted graph. """ def __init__(self, vertices: set[int], edges: Mapping[EdgeT, int]) -> None: self.vertices: set[int] = vertices self.edges: dict[EdgeT, int] = { (min(edge), max(edge)): weight for edge, weight in edges.items() } def add_edge(self, edge: EdgeT, weight: int) -> None: """ Add a new edge to the graph. >>> graph = Graph({1, 2}, {(2, 1): 4}) >>> graph.add_edge((3, 1), 5) >>> sorted(graph.vertices) [1, 2, 3] >>> sorted([(v,k) for k,v in graph.edges.items()]) [(4, (1, 2)), (5, (1, 3))] """ self.vertices.add(edge[0]) self.vertices.add(edge[1]) self.edges[(min(edge), max(edge))] = weight def prims_algorithm(self) -> Graph: """ Run Prim's algorithm to find the minimum spanning tree. Reference: https://en.wikipedia.org/wiki/Prim%27s_algorithm >>> graph = Graph({1,2,3,4},{(1,2):5, (1,3):10, (1,4):20, (2,4):30, (3,4):1}) >>> mst = graph.prims_algorithm() >>> sorted(mst.vertices) [1, 2, 3, 4] >>> sorted(mst.edges) [(1, 2), (1, 3), (3, 4)] """ subgraph: Graph = Graph({min(self.vertices)}, {}) min_edge: EdgeT min_weight: int edge: EdgeT weight: int while len(subgraph.vertices) < len(self.vertices): min_weight = max(self.edges.values()) + 1 for edge, weight in self.edges.items(): if (edge[0] in subgraph.vertices) ^ (edge[1] in subgraph.vertices): if weight < min_weight: min_edge = edge min_weight = weight subgraph.add_edge(min_edge, min_weight) return subgraph def solution(filename: str = "p107_network.txt") -> int: """ Find the maximum saving which can be achieved by removing redundant edges whilst ensuring that the network remains connected. 
>>> solution("test_network.txt") 150 """ script_dir: str = os.path.abspath(os.path.dirname(__file__)) network_file: str = os.path.join(script_dir, filename) adjacency_matrix: list[list[str]] edges: dict[EdgeT, int] = {} data: list[str] edge1: int edge2: int with open(network_file) as f: data = f.read().strip().split("\n") adjaceny_matrix = [line.split(",") for line in data] for edge1 in range(1, len(adjaceny_matrix)): for edge2 in range(edge1): if adjaceny_matrix[edge1][edge2] != "-": edges[(edge2, edge1)] = int(adjaceny_matrix[edge1][edge2]) graph: Graph = Graph(set(range(len(adjaceny_matrix))), edges) subgraph: Graph = graph.prims_algorithm() initial_total: int = sum(graph.edges.values()) optimal_total: int = sum(subgraph.edges.values()) return initial_total - optimal_total if __name__ == "__main__": print(f"{solution() = }")
""" The following undirected network consists of seven vertices and twelve edges with a total weight of 243. οΏΌ The same network can be represented by the matrix below. A B C D E F G A - 16 12 21 - - - B 16 - - 17 20 - - C 12 - - 28 - 31 - D 21 17 28 - 18 19 23 E - 20 - 18 - - 11 F - - 31 19 - - 27 G - - - 23 11 27 - However, it is possible to optimise the network by removing some edges and still ensure that all points on the network remain connected. The network which achieves the maximum saving is shown below. It has a weight of 93, representing a saving of 243 - 93 = 150 from the original network. Using network.txt (right click and 'Save Link/Target As...'), a 6K text file containing a network with forty vertices, and given in matrix form, find the maximum saving which can be achieved by removing redundant edges whilst ensuring that the network remains connected. Solution: We use Prim's algorithm to find a Minimum Spanning Tree. Reference: https://en.wikipedia.org/wiki/Prim%27s_algorithm """ from __future__ import annotations import os from collections.abc import Mapping EdgeT = tuple[int, int] class Graph: """ A class representing an undirected weighted graph. """ def __init__(self, vertices: set[int], edges: Mapping[EdgeT, int]) -> None: self.vertices: set[int] = vertices self.edges: dict[EdgeT, int] = { (min(edge), max(edge)): weight for edge, weight in edges.items() } def add_edge(self, edge: EdgeT, weight: int) -> None: """ Add a new edge to the graph. >>> graph = Graph({1, 2}, {(2, 1): 4}) >>> graph.add_edge((3, 1), 5) >>> sorted(graph.vertices) [1, 2, 3] >>> sorted([(v,k) for k,v in graph.edges.items()]) [(4, (1, 2)), (5, (1, 3))] """ self.vertices.add(edge[0]) self.vertices.add(edge[1]) self.edges[(min(edge), max(edge))] = weight def prims_algorithm(self) -> Graph: """ Run Prim's algorithm to find the minimum spanning tree. Reference: https://en.wikipedia.org/wiki/Prim%27s_algorithm >>> graph = Graph({1,2,3,4},{(1,2):5, (1,3):10, (1,4):20, (2,4):30, (3,4):1}) >>> mst = graph.prims_algorithm() >>> sorted(mst.vertices) [1, 2, 3, 4] >>> sorted(mst.edges) [(1, 2), (1, 3), (3, 4)] """ subgraph: Graph = Graph({min(self.vertices)}, {}) min_edge: EdgeT min_weight: int edge: EdgeT weight: int while len(subgraph.vertices) < len(self.vertices): min_weight = max(self.edges.values()) + 1 for edge, weight in self.edges.items(): if (edge[0] in subgraph.vertices) ^ (edge[1] in subgraph.vertices): if weight < min_weight: min_edge = edge min_weight = weight subgraph.add_edge(min_edge, min_weight) return subgraph def solution(filename: str = "p107_network.txt") -> int: """ Find the maximum saving which can be achieved by removing redundant edges whilst ensuring that the network remains connected. >>> solution("test_network.txt") 150 """ script_dir: str = os.path.abspath(os.path.dirname(__file__)) network_file: str = os.path.join(script_dir, filename) edges: dict[EdgeT, int] = {} data: list[str] edge1: int edge2: int with open(network_file) as f: data = f.read().strip().split("\n") adjaceny_matrix = [line.split(",") for line in data] for edge1 in range(1, len(adjaceny_matrix)): for edge2 in range(edge1): if adjaceny_matrix[edge1][edge2] != "-": edges[(edge2, edge1)] = int(adjaceny_matrix[edge1][edge2]) graph: Graph = Graph(set(range(len(adjaceny_matrix))), edges) subgraph: Graph = graph.prims_algorithm() initial_total: int = sum(graph.edges.values()) optimal_total: int = sum(subgraph.edges.values()) return initial_total - optimal_total if __name__ == "__main__": print(f"{solution() = }")
1
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
def hex_to_bin(hex_num: str) -> int: """ Convert a hexadecimal value to its binary equivalent #https://stackoverflow.com/questions/1425493/convert-hex-to-binary Here, we have used the bitwise right shift operator: >> Shifts the bits of the number to the right and fills 0 on voids left as a result. This has a similar effect to dividing the number by some power of two. Example: a = 10 a >> 1 = 5 >>> hex_to_bin("AC") 10101100 >>> hex_to_bin("9A4") 100110100100 >>> hex_to_bin(" 12f ") 100101111 >>> hex_to_bin("FfFf") 1111111111111111 >>> hex_to_bin("-fFfF") -1111111111111111 >>> hex_to_bin("F-f") Traceback (most recent call last): ... ValueError: Invalid value was passed to the function >>> hex_to_bin("") Traceback (most recent call last): ... ValueError: No value was passed to the function """ hex_num = hex_num.strip() if not hex_num: raise ValueError("No value was passed to the function") is_negative = hex_num[0] == "-" if is_negative: hex_num = hex_num[1:] try: int_num = int(hex_num, 16) except ValueError: raise ValueError("Invalid value was passed to the function") bin_str = "" while int_num > 0: bin_str = str(int_num % 2) + bin_str int_num >>= 1 return int(("-" + bin_str) if is_negative else bin_str) if __name__ == "__main__": import doctest doctest.testmod()
def hex_to_bin(hex_num: str) -> int: """ Convert a hexadecimal value to its binary equivalent #https://stackoverflow.com/questions/1425493/convert-hex-to-binary Here, we have used the bitwise right shift operator: >> Shifts the bits of the number to the right and fills 0 on voids left as a result. This has a similar effect to dividing the number by some power of two. Example: a = 10 a >> 1 = 5 >>> hex_to_bin("AC") 10101100 >>> hex_to_bin("9A4") 100110100100 >>> hex_to_bin(" 12f ") 100101111 >>> hex_to_bin("FfFf") 1111111111111111 >>> hex_to_bin("-fFfF") -1111111111111111 >>> hex_to_bin("F-f") Traceback (most recent call last): ... ValueError: Invalid value was passed to the function >>> hex_to_bin("") Traceback (most recent call last): ... ValueError: No value was passed to the function """ hex_num = hex_num.strip() if not hex_num: raise ValueError("No value was passed to the function") is_negative = hex_num[0] == "-" if is_negative: hex_num = hex_num[1:] try: int_num = int(hex_num, 16) except ValueError: raise ValueError("Invalid value was passed to the function") bin_str = "" while int_num > 0: bin_str = str(int_num % 2) + bin_str int_num >>= 1 return int(("-" + bin_str) if is_negative else bin_str) if __name__ == "__main__": import doctest doctest.testmod()
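The manual divide-by-two loop above can be cross-checked against the standard library: format(n, "b") yields the binary digits of n without a "0b" prefix. A short sketch with an illustrative helper name follows; note it also accepts "0" (returning 0), which the listing above rejects because its loop body never runs.

# Standard-library cross-check of hex_to_bin; the helper name is illustrative.
def hex_to_bin_oneliner(hex_num: str) -> int:
    n = int(hex_num.strip(), 16)  # raises ValueError on bad input, like above
    return int(format(abs(n), "b")) * (-1 if n < 0 else 1)

assert hex_to_bin_oneliner("AC") == 10101100
assert hex_to_bin_oneliner(" 12f ") == 100101111
assert hex_to_bin_oneliner("-fFfF") == -1111111111111111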
-1
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" Print all the Catalan numbers from 0 to n, n being the user input. * The Catalan numbers are a sequence of positive integers that * appear in many counting problems in combinatorics [1]. Such * problems include counting [2]: * - The number of Dyck words of length 2n * - The number well-formed expressions with n pairs of parentheses * (e.g., `()()` is valid but `())(` is not) * - The number of different ways n + 1 factors can be completely * parenthesized (e.g., for n = 2, C(n) = 2 and (ab)c and a(bc) * are the two valid ways to parenthesize. * - The number of full binary trees with n + 1 leaves * A Catalan number satisfies the following recurrence relation * which we will use in this algorithm [1]. * C(0) = C(1) = 1 * C(n) = sum(C(i).C(n-i-1)), from i = 0 to n-1 * In addition, the n-th Catalan number can be calculated using * the closed form formula below [1]: * C(n) = (1 / (n + 1)) * (2n choose n) * Sources: * [1] https://brilliant.org/wiki/catalan-numbers/ * [2] https://en.wikipedia.org/wiki/Catalan_number """ def catalan_numbers(upper_limit: int) -> "list[int]": """ Return a list of the Catalan number sequence from 0 through `upper_limit`. >>> catalan_numbers(5) [1, 1, 2, 5, 14, 42] >>> catalan_numbers(2) [1, 1, 2] >>> catalan_numbers(-1) Traceback (most recent call last): ValueError: Limit for the Catalan sequence must be β‰₯ 0 """ if upper_limit < 0: raise ValueError("Limit for the Catalan sequence must be β‰₯ 0") catalan_list = [0] * (upper_limit + 1) # Base case: C(0) = C(1) = 1 catalan_list[0] = 1 if upper_limit > 0: catalan_list[1] = 1 # Recurrence relation: C(i) = sum(C(j).C(i-j-1)), from j = 0 to i for i in range(2, upper_limit + 1): for j in range(i): catalan_list[i] += catalan_list[j] * catalan_list[i - j - 1] return catalan_list if __name__ == "__main__": print("\n********* Catalan Numbers Using Dynamic Programming ************\n") print("\n*** Enter -1 at any time to quit ***") print("\nEnter the upper limit (β‰₯ 0) for the Catalan number sequence: ", end="") try: while True: N = int(input().strip()) if N < 0: print("\n********* Goodbye!! ************") break else: print(f"The Catalan numbers from 0 through {N} are:") print(catalan_numbers(N)) print("Try another upper limit for the sequence: ", end="") except (NameError, ValueError): print("\n********* Invalid input, goodbye! ************\n") import doctest doctest.testmod()
""" Print all the Catalan numbers from 0 to n, n being the user input. * The Catalan numbers are a sequence of positive integers that * appear in many counting problems in combinatorics [1]. Such * problems include counting [2]: * - The number of Dyck words of length 2n * - The number well-formed expressions with n pairs of parentheses * (e.g., `()()` is valid but `())(` is not) * - The number of different ways n + 1 factors can be completely * parenthesized (e.g., for n = 2, C(n) = 2 and (ab)c and a(bc) * are the two valid ways to parenthesize. * - The number of full binary trees with n + 1 leaves * A Catalan number satisfies the following recurrence relation * which we will use in this algorithm [1]. * C(0) = C(1) = 1 * C(n) = sum(C(i).C(n-i-1)), from i = 0 to n-1 * In addition, the n-th Catalan number can be calculated using * the closed form formula below [1]: * C(n) = (1 / (n + 1)) * (2n choose n) * Sources: * [1] https://brilliant.org/wiki/catalan-numbers/ * [2] https://en.wikipedia.org/wiki/Catalan_number """ def catalan_numbers(upper_limit: int) -> "list[int]": """ Return a list of the Catalan number sequence from 0 through `upper_limit`. >>> catalan_numbers(5) [1, 1, 2, 5, 14, 42] >>> catalan_numbers(2) [1, 1, 2] >>> catalan_numbers(-1) Traceback (most recent call last): ValueError: Limit for the Catalan sequence must be β‰₯ 0 """ if upper_limit < 0: raise ValueError("Limit for the Catalan sequence must be β‰₯ 0") catalan_list = [0] * (upper_limit + 1) # Base case: C(0) = C(1) = 1 catalan_list[0] = 1 if upper_limit > 0: catalan_list[1] = 1 # Recurrence relation: C(i) = sum(C(j).C(i-j-1)), from j = 0 to i for i in range(2, upper_limit + 1): for j in range(i): catalan_list[i] += catalan_list[j] * catalan_list[i - j - 1] return catalan_list if __name__ == "__main__": print("\n********* Catalan Numbers Using Dynamic Programming ************\n") print("\n*** Enter -1 at any time to quit ***") print("\nEnter the upper limit (β‰₯ 0) for the Catalan number sequence: ", end="") try: while True: N = int(input().strip()) if N < 0: print("\n********* Goodbye!! ************") break else: print(f"The Catalan numbers from 0 through {N} are:") print(catalan_numbers(N)) print("Try another upper limit for the sequence: ", end="") except (NameError, ValueError): print("\n********* Invalid input, goodbye! ************\n") import doctest doctest.testmod()
-1
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
from collections import deque def tarjan(g): """ Tarjan's algorithm for finding strongly connected components in a directed graph Uses two main attributes of each node to track reachability, the index of that node within a component(index), and the lowest index reachable from that node(lowlink). We then perform a dfs of each component making sure to update these parameters for each node and saving the nodes we visit on the way. If ever we find that the lowest reachable node from a current node is equal to the index of the current node then it must be the root of a strongly connected component and so we save it and its equireachable vertices as a strongly connected component. Complexity: strong_connect() is called at most once for each node and has a complexity of O(|E|) as it is DFS. Therefore this has complexity O(|V| + |E|) for a graph G = (V, E) """ n = len(g) stack = deque() on_stack = [False for _ in range(n)] index_of = [-1 for _ in range(n)] lowlink_of = index_of[:] def strong_connect(v, index, components): index_of[v] = index # the number when this node is seen lowlink_of[v] = index # lowest rank node reachable from here index += 1 stack.append(v) on_stack[v] = True for w in g[v]: if index_of[w] == -1: index = strong_connect(w, index, components) lowlink_of[v] = ( lowlink_of[w] if lowlink_of[w] < lowlink_of[v] else lowlink_of[v] ) elif on_stack[w]: lowlink_of[v] = ( lowlink_of[w] if lowlink_of[w] < lowlink_of[v] else lowlink_of[v] ) if lowlink_of[v] == index_of[v]: component = [] w = stack.pop() on_stack[w] = False component.append(w) while w != v: w = stack.pop() on_stack[w] = False component.append(w) components.append(component) return index components = [] for v in range(n): if index_of[v] == -1: strong_connect(v, 0, components) return components def create_graph(n, edges): g = [[] for _ in range(n)] for u, v in edges: g[u].append(v) return g if __name__ == "__main__": # Test n_vertices = 7 source = [0, 0, 1, 2, 3, 3, 4, 4, 6] target = [1, 3, 2, 0, 1, 4, 5, 6, 5] edges = [(u, v) for u, v in zip(source, target)] g = create_graph(n_vertices, edges) assert [[5], [6], [4], [3, 2, 1, 0]] == tarjan(g)
from collections import deque def tarjan(g): """ Tarjan's algorithm for finding strongly connected components in a directed graph Uses two main attributes of each node to track reachability, the index of that node within a component(index), and the lowest index reachable from that node(lowlink). We then perform a dfs of each component making sure to update these parameters for each node and saving the nodes we visit on the way. If ever we find that the lowest reachable node from a current node is equal to the index of the current node then it must be the root of a strongly connected component and so we save it and its equireachable vertices as a strongly connected component. Complexity: strong_connect() is called at most once for each node and has a complexity of O(|E|) as it is DFS. Therefore this has complexity O(|V| + |E|) for a graph G = (V, E) """ n = len(g) stack = deque() on_stack = [False for _ in range(n)] index_of = [-1 for _ in range(n)] lowlink_of = index_of[:] def strong_connect(v, index, components): index_of[v] = index # the number when this node is seen lowlink_of[v] = index # lowest rank node reachable from here index += 1 stack.append(v) on_stack[v] = True for w in g[v]: if index_of[w] == -1: index = strong_connect(w, index, components) lowlink_of[v] = ( lowlink_of[w] if lowlink_of[w] < lowlink_of[v] else lowlink_of[v] ) elif on_stack[w]: lowlink_of[v] = ( lowlink_of[w] if lowlink_of[w] < lowlink_of[v] else lowlink_of[v] ) if lowlink_of[v] == index_of[v]: component = [] w = stack.pop() on_stack[w] = False component.append(w) while w != v: w = stack.pop() on_stack[w] = False component.append(w) components.append(component) return index components = [] for v in range(n): if index_of[v] == -1: strong_connect(v, 0, components) return components def create_graph(n, edges): g = [[] for _ in range(n)] for u, v in edges: g[u].append(v) return g if __name__ == "__main__": # Test n_vertices = 7 source = [0, 0, 1, 2, 3, 3, 4, 4, 6] target = [1, 3, 2, 0, 1, 4, 5, 6, 5] edges = [(u, v) for u, v in zip(source, target)] g = create_graph(n_vertices, edges) assert [[5], [6], [4], [3, 2, 1, 0]] == tarjan(g)
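Tarjan's single-pass recursion above can be cross-checked with Kosaraju's two-pass method, which runs one DFS to record a finish order and a second DFS over the transposed graph. A compact, self-contained sketch follows; it returns the same components in a different order, so the check compares sets. The adjacency-list literal mirrors the test graph above.

# Compact Kosaraju sketch as an independent cross-check of tarjan().
def kosaraju(g):
    n = len(g)
    order, visited = [], [False] * n

    def dfs(v):  # first pass: record finish order
        visited[v] = True
        for w in g[v]:
            if not visited[w]:
                dfs(w)
        order.append(v)

    for v in range(n):
        if not visited[v]:
            dfs(v)
    transpose = [[] for _ in range(n)]
    for u in range(n):
        for v in g[u]:
            transpose[v].append(u)
    component_of, components = [-1] * n, []
    for v in reversed(order):  # second pass on the transposed graph
        if component_of[v] == -1:
            stack, current = [v], []
            component_of[v] = len(components)
            while stack:
                x = stack.pop()
                current.append(x)
                for w in transpose[x]:
                    if component_of[w] == -1:
                        component_of[w] = len(components)
                        stack.append(w)
            components.append(current)
    return components

g = [[1, 3], [2], [0], [1, 4], [5, 6], [], [5]]  # same edges as the test above
assert {frozenset(c) for c in kosaraju(g)} == {
    frozenset({0, 1, 2, 3}), frozenset({4}), frozenset({5}), frozenset({6})
}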
-1
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
from xml.dom import NotFoundErr import requests from bs4 import BeautifulSoup, NavigableString from fake_useragent import UserAgent BASE_URL = "https://ww1.gogoanime2.org" def search_scraper(anime_name: str) -> list: """[summary] Take a URL and return a list of anime after scraping the site. >>> type(search_scraper("demon_slayer")) <class 'list'> Args: anime_name (str): [Name of anime] Raises: e: [Raises exception on failure] Returns: [list]: [List of animes] """ # concat the name to form the search url. search_url = f"{BASE_URL}/search/{anime_name}" response = requests.get( search_url, headers={"User-Agent": UserAgent().chrome} ) # request the url. # Is the response ok? response.raise_for_status() # parse with soup. soup = BeautifulSoup(response.text, "html.parser") # get list of anime anime_ul = soup.find("ul", {"class": "items"}) anime_li = anime_ul.children # for each anime, insert to list. the name and url. anime_list = [] for anime in anime_li: if not isinstance(anime, NavigableString): try: anime_url, anime_title = ( anime.find("a")["href"], anime.find("a")["title"], ) anime_list.append( { "title": anime_title, "url": anime_url, } ) except (NotFoundErr, KeyError): pass return anime_list def search_anime_episode_list(episode_endpoint: str) -> list: """[summary] Take a URL and return a list of episodes after scraping the site for a URL. >>> type(search_anime_episode_list("/anime/kimetsu-no-yaiba")) <class 'list'> Args: episode_endpoint (str): [Endpoint of episode] Raises: e: [description] Returns: [list]: [List of episodes] """ request_url = f"{BASE_URL}{episode_endpoint}" response = requests.get(url=request_url, headers={"User-Agent": UserAgent().chrome}) response.raise_for_status() soup = BeautifulSoup(response.text, "html.parser") # With this id. get the episode list. episode_page_ul = soup.find("ul", {"id": "episode_related"}) episode_page_li = episode_page_ul.children episode_list = [] for episode in episode_page_li: try: if not isinstance(episode, NavigableString): episode_list.append( { "title": episode.find("div", {"class": "name"}).text.replace( " ", "" ), "url": episode.find("a")["href"], } ) except (KeyError, NotFoundErr): pass return episode_list def get_anime_episode(episode_endpoint: str) -> list: """[summary] Get click URL and download URL from episode URL >>> type(get_anime_episode("/watch/kimetsu-no-yaiba/1")) <class 'list'> Args: episode_endpoint (str): [Endpoint of episode] Raises: e: [description] Returns: [list]: [List of download and watch url] """ episode_page_url = f"{BASE_URL}{episode_endpoint}" response = requests.get( url=episode_page_url, headers={"User-Agent": UserAgent().chrome} ) response.raise_for_status() soup = BeautifulSoup(response.text, "html.parser") try: episode_url = soup.find("iframe", {"id": "playerframe"})["src"] download_url = episode_url.replace("/embed/", "/playlist/") + ".m3u8" except (KeyError, NotFoundErr) as e: raise e return [f"{BASE_URL}{episode_url}", f"{BASE_URL}{download_url}"] if __name__ == "__main__": anime_name = input("Enter anime name: ").strip() anime_list = search_scraper(anime_name) print("\n") if len(anime_list) == 0: print("No anime found with this name") else: print(f"Found {len(anime_list)} results: ") for i, anime in enumerate(anime_list): anime_title = anime["title"] print(f"{i+1}. {anime_title}") anime_choice = int(input("\nPlease choose from the following list: ").strip()) chosen_anime = anime_list[anime_choice - 1] print(f"You chose {chosen_anime['title']}. 
Searching for episodes...") episode_list = search_anime_episode_list(chosen_anime["url"]) if len(episode_list) == 0: print("No episode found for this anime") else: print(f"Found {len(episode_list)} results: ") for i, episode in enumerate(episode_list): print(f"{i+1}. {episode['title']}") episode_choice = int(input("\nChoose an episode by serial no: ").strip()) chosen_episode = episode_list[episode_choice - 1] print(f"You chose {chosen_episode['title']}. Searching...") episode_url, download_url = get_anime_episode(chosen_episode["url"]) print(f"\nTo watch, ctrl+click on {episode_url}.") print(f"To download, ctrl+click on {download_url}.")
from xml.dom import NotFoundErr import requests from bs4 import BeautifulSoup, NavigableString from fake_useragent import UserAgent BASE_URL = "https://ww1.gogoanime2.org" def search_scraper(anime_name: str) -> list: """[summary] Take a URL and return a list of anime after scraping the site. >>> type(search_scraper("demon_slayer")) <class 'list'> Args: anime_name (str): [Name of anime] Raises: e: [Raises exception on failure] Returns: [list]: [List of animes] """ # concat the name to form the search url. search_url = f"{BASE_URL}/search/{anime_name}" response = requests.get( search_url, headers={"User-Agent": UserAgent().chrome} ) # request the url. # Is the response ok? response.raise_for_status() # parse with soup. soup = BeautifulSoup(response.text, "html.parser") # get list of anime anime_ul = soup.find("ul", {"class": "items"}) anime_li = anime_ul.children # for each anime, insert to list. the name and url. anime_list = [] for anime in anime_li: if not isinstance(anime, NavigableString): try: anime_url, anime_title = ( anime.find("a")["href"], anime.find("a")["title"], ) anime_list.append( { "title": anime_title, "url": anime_url, } ) except (NotFoundErr, KeyError): pass return anime_list def search_anime_episode_list(episode_endpoint: str) -> list: """[summary] Take a URL and return a list of episodes after scraping the site for a URL. >>> type(search_anime_episode_list("/anime/kimetsu-no-yaiba")) <class 'list'> Args: episode_endpoint (str): [Endpoint of episode] Raises: e: [description] Returns: [list]: [List of episodes] """ request_url = f"{BASE_URL}{episode_endpoint}" response = requests.get(url=request_url, headers={"User-Agent": UserAgent().chrome}) response.raise_for_status() soup = BeautifulSoup(response.text, "html.parser") # With this id. get the episode list. episode_page_ul = soup.find("ul", {"id": "episode_related"}) episode_page_li = episode_page_ul.children episode_list = [] for episode in episode_page_li: try: if not isinstance(episode, NavigableString): episode_list.append( { "title": episode.find("div", {"class": "name"}).text.replace( " ", "" ), "url": episode.find("a")["href"], } ) except (KeyError, NotFoundErr): pass return episode_list def get_anime_episode(episode_endpoint: str) -> list: """[summary] Get click URL and download URL from episode URL >>> type(get_anime_episode("/watch/kimetsu-no-yaiba/1")) <class 'list'> Args: episode_endpoint (str): [Endpoint of episode] Raises: e: [description] Returns: [list]: [List of download and watch url] """ episode_page_url = f"{BASE_URL}{episode_endpoint}" response = requests.get( url=episode_page_url, headers={"User-Agent": UserAgent().chrome} ) response.raise_for_status() soup = BeautifulSoup(response.text, "html.parser") try: episode_url = soup.find("iframe", {"id": "playerframe"})["src"] download_url = episode_url.replace("/embed/", "/playlist/") + ".m3u8" except (KeyError, NotFoundErr) as e: raise e return [f"{BASE_URL}{episode_url}", f"{BASE_URL}{download_url}"] if __name__ == "__main__": anime_name = input("Enter anime name: ").strip() anime_list = search_scraper(anime_name) print("\n") if len(anime_list) == 0: print("No anime found with this name") else: print(f"Found {len(anime_list)} results: ") for i, anime in enumerate(anime_list): anime_title = anime["title"] print(f"{i+1}. {anime_title}") anime_choice = int(input("\nPlease choose from the following list: ").strip()) chosen_anime = anime_list[anime_choice - 1] print(f"You chose {chosen_anime['title']}. 
Searching for episodes...") episode_list = search_anime_episode_list(chosen_anime["url"]) if len(episode_list) == 0: print("No episode found for this anime") else: print(f"Found {len(episode_list)} results: ") for i, episode in enumerate(episode_list): print(f"{i+1}. {episode['title']}") episode_choice = int(input("\nChoose an episode by serial no: ").strip()) chosen_episode = episode_list[episode_choice - 1] print(f"You chose {chosen_episode['title']}. Searching...") episode_url, download_url = get_anime_episode(chosen_episode["url"]) print(f"\nTo watch, ctrl+click on {episode_url}.") print(f"To download, ctrl+click on {download_url}.")
-1
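The record above leans on the standard requests + BeautifulSoup fetch/parse pattern; a minimal, self-contained sketch of that pattern follows (the URL and header value are placeholders, not endpoints from the scraper):

import requests
from bs4 import BeautifulSoup

# Fetch a page, fail fast on HTTP errors, then collect every anchor's href.
response = requests.get(
    "https://example.com", headers={"User-Agent": "Mozilla/5.0"}, timeout=10
)
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")
print([a["href"] for a in soup.find_all("a", href=True)])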
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
from __future__ import annotations import csv import requests from bs4 import BeautifulSoup def get_imdb_top_250_movies(url: str = "") -> dict[str, float]: url = url or "https://www.imdb.com/chart/top/?ref_=nv_mv_250" soup = BeautifulSoup(requests.get(url).text, "html.parser") titles = soup.find_all("td", attrs="titleColumn") ratings = soup.find_all("td", class_="ratingColumn imdbRating") return { title.a.text: float(rating.strong.text) for title, rating in zip(titles, ratings) } def write_movies(filename: str = "IMDb_Top_250_Movies.csv") -> None: movies = get_imdb_top_250_movies() with open(filename, "w", newline="") as out_file: writer = csv.writer(out_file) writer.writerow(["Movie title", "IMDb rating"]) for title, rating in movies.items(): writer.writerow([title, rating]) if __name__ == "__main__": write_movies()
from __future__ import annotations import csv import requests from bs4 import BeautifulSoup def get_imdb_top_250_movies(url: str = "") -> dict[str, float]: url = url or "https://www.imdb.com/chart/top/?ref_=nv_mv_250" soup = BeautifulSoup(requests.get(url).text, "html.parser") titles = soup.find_all("td", attrs="titleColumn") ratings = soup.find_all("td", class_="ratingColumn imdbRating") return { title.a.text: float(rating.strong.text) for title, rating in zip(titles, ratings) } def write_movies(filename: str = "IMDb_Top_250_Movies.csv") -> None: movies = get_imdb_top_250_movies() with open(filename, "w", newline="") as out_file: writer = csv.writer(out_file) writer.writerow(["Movie title", "IMDb rating"]) for title, rating in movies.items(): writer.writerow([title, rating]) if __name__ == "__main__": write_movies()
-1
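A hedged companion sketch for the record above: reading the CSV that write_movies() produces back into a dict (the filename matches the function's default; everything else here is assumed for illustration):

import csv

with open("IMDb_Top_250_Movies.csv", newline="") as in_file:
    # DictReader keys rows by the header row that write_movies() emitted.
    movies = {
        row["Movie title"]: float(row["IMDb rating"]) for row in csv.DictReader(in_file)
    }
print(len(movies))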
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
import pandas as pd from matplotlib import pyplot as plt from sklearn.linear_model import LinearRegression # Splitting the dataset into the Training set and Test set from sklearn.model_selection import train_test_split # Fitting Polynomial Regression to the dataset from sklearn.preprocessing import PolynomialFeatures # Importing the dataset dataset = pd.read_csv( "https://s3.us-west-2.amazonaws.com/public.gamelab.fun/dataset/" "position_salaries.csv" ) X = dataset.iloc[:, 1:2].values y = dataset.iloc[:, 2].values X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) poly_reg = PolynomialFeatures(degree=4) X_poly = poly_reg.fit_transform(X) pol_reg = LinearRegression() pol_reg.fit(X_poly, y) # Visualizing the Polynomial Regression results def viz_polynomial(): plt.scatter(X, y, color="red") plt.plot(X, pol_reg.predict(poly_reg.fit_transform(X)), color="blue") plt.title("Truth or Bluff (Polynomial Regression)") plt.xlabel("Position level") plt.ylabel("Salary") plt.show() if __name__ == "__main__": viz_polynomial() # Predicting a new result with Polynomial Regression pol_reg.predict(poly_reg.fit_transform([[5.5]])) # output should be 132148.43750003
import pandas as pd from matplotlib import pyplot as plt from sklearn.linear_model import LinearRegression # Splitting the dataset into the Training set and Test set from sklearn.model_selection import train_test_split # Fitting Polynomial Regression to the dataset from sklearn.preprocessing import PolynomialFeatures # Importing the dataset dataset = pd.read_csv( "https://s3.us-west-2.amazonaws.com/public.gamelab.fun/dataset/" "position_salaries.csv" ) X = dataset.iloc[:, 1:2].values y = dataset.iloc[:, 2].values X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) poly_reg = PolynomialFeatures(degree=4) X_poly = poly_reg.fit_transform(X) pol_reg = LinearRegression() pol_reg.fit(X_poly, y) # Visualizing the Polynomial Regression results def viz_polynomial(): plt.scatter(X, y, color="red") plt.plot(X, pol_reg.predict(poly_reg.fit_transform(X)), color="blue") plt.title("Truth or Bluff (Polynomial Regression)") plt.xlabel("Position level") plt.ylabel("Salary") plt.show() if __name__ == "__main__": viz_polynomial() # Predicting a new result with Polynomial Regression pol_reg.predict(poly_reg.fit_transform([[5.5]])) # output should be 132148.43750003
-1
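A small sketch of what the fit_transform step in the record above actually produces, shown for a single degree-4 input value (purely illustrative):

import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# The default include_bias=True prepends the constant 1 column.
features = PolynomialFeatures(degree=4).fit_transform(np.array([[2.0]]))
print(features)  # [[ 1.  2.  4.  8. 16.]] -> 1, x, x^2, x^3, x^4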
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" Author : Alexander Pantyukhin Date : November 7, 2022 Task: You are given a tree root of a binary tree with n nodes, where each node has node.data coins. There are exactly n coins in whole tree. In one move, we may choose two adjacent nodes and move one coin from one node to another. A move may be from parent to child, or from child to parent. Return the minimum number of moves required to make every node have exactly one coin. Example 1: 3 / \ 0 0 Result: 2 Example 2: 0 / \ 3 0 Result 3 leetcode: https://leetcode.com/problems/distribute-coins-in-binary-tree/ Implementation notes: User depth-first search approach. Let n is the number of nodes in tree Runtime: O(n) Space: O(1) """ from __future__ import annotations from collections import namedtuple from dataclasses import dataclass @dataclass class TreeNode: data: int left: TreeNode | None = None right: TreeNode | None = None CoinsDistribResult = namedtuple("CoinsDistribResult", "moves excess") def distribute_coins(root: TreeNode | None) -> int: """ >>> distribute_coins(TreeNode(3, TreeNode(0), TreeNode(0))) 2 >>> distribute_coins(TreeNode(0, TreeNode(3), TreeNode(0))) 3 >>> distribute_coins(TreeNode(0, TreeNode(0), TreeNode(3))) 3 >>> distribute_coins(None) 0 >>> distribute_coins(TreeNode(0, TreeNode(0), TreeNode(0))) Traceback (most recent call last): ... ValueError: The nodes number should be same as the number of coins >>> distribute_coins(TreeNode(0, TreeNode(1), TreeNode(1))) Traceback (most recent call last): ... ValueError: The nodes number should be same as the number of coins """ if root is None: return 0 # Validation def count_nodes(node: TreeNode | None) -> int: """ >>> count_nodes(None): 0 """ if node is None: return 0 return count_nodes(node.left) + count_nodes(node.right) + 1 def count_coins(node: TreeNode | None) -> int: """ >>> count_coins(None): 0 """ if node is None: return 0 return count_coins(node.left) + count_coins(node.right) + node.data if count_nodes(root) != count_coins(root): raise ValueError("The nodes number should be same as the number of coins") # Main calculation def get_distrib(node: TreeNode | None) -> CoinsDistribResult: """ >>> get_distrib(None) namedtuple("CoinsDistribResult", "0 2") """ if node is None: return CoinsDistribResult(0, 1) left_distrib_moves, left_distrib_excess = get_distrib(node.left) right_distrib_moves, right_distrib_excess = get_distrib(node.right) coins_to_left = 1 - left_distrib_excess coins_to_right = 1 - right_distrib_excess result_moves = ( left_distrib_moves + right_distrib_moves + abs(coins_to_left) + abs(coins_to_right) ) result_excess = node.data - coins_to_left - coins_to_right return CoinsDistribResult(result_moves, result_excess) return get_distrib(root)[0] if __name__ == "__main__": import doctest doctest.testmod()
""" Author : Alexander Pantyukhin Date : November 7, 2022 Task: You are given a tree root of a binary tree with n nodes, where each node has node.data coins. There are exactly n coins in whole tree. In one move, we may choose two adjacent nodes and move one coin from one node to another. A move may be from parent to child, or from child to parent. Return the minimum number of moves required to make every node have exactly one coin. Example 1: 3 / \ 0 0 Result: 2 Example 2: 0 / \ 3 0 Result 3 leetcode: https://leetcode.com/problems/distribute-coins-in-binary-tree/ Implementation notes: User depth-first search approach. Let n is the number of nodes in tree Runtime: O(n) Space: O(1) """ from __future__ import annotations from collections import namedtuple from dataclasses import dataclass @dataclass class TreeNode: data: int left: TreeNode | None = None right: TreeNode | None = None CoinsDistribResult = namedtuple("CoinsDistribResult", "moves excess") def distribute_coins(root: TreeNode | None) -> int: """ >>> distribute_coins(TreeNode(3, TreeNode(0), TreeNode(0))) 2 >>> distribute_coins(TreeNode(0, TreeNode(3), TreeNode(0))) 3 >>> distribute_coins(TreeNode(0, TreeNode(0), TreeNode(3))) 3 >>> distribute_coins(None) 0 >>> distribute_coins(TreeNode(0, TreeNode(0), TreeNode(0))) Traceback (most recent call last): ... ValueError: The nodes number should be same as the number of coins >>> distribute_coins(TreeNode(0, TreeNode(1), TreeNode(1))) Traceback (most recent call last): ... ValueError: The nodes number should be same as the number of coins """ if root is None: return 0 # Validation def count_nodes(node: TreeNode | None) -> int: """ >>> count_nodes(None): 0 """ if node is None: return 0 return count_nodes(node.left) + count_nodes(node.right) + 1 def count_coins(node: TreeNode | None) -> int: """ >>> count_coins(None): 0 """ if node is None: return 0 return count_coins(node.left) + count_coins(node.right) + node.data if count_nodes(root) != count_coins(root): raise ValueError("The nodes number should be same as the number of coins") # Main calculation def get_distrib(node: TreeNode | None) -> CoinsDistribResult: """ >>> get_distrib(None) namedtuple("CoinsDistribResult", "0 2") """ if node is None: return CoinsDistribResult(0, 1) left_distrib_moves, left_distrib_excess = get_distrib(node.left) right_distrib_moves, right_distrib_excess = get_distrib(node.right) coins_to_left = 1 - left_distrib_excess coins_to_right = 1 - right_distrib_excess result_moves = ( left_distrib_moves + right_distrib_moves + abs(coins_to_left) + abs(coins_to_right) ) result_excess = node.data - coins_to_left - coins_to_right return CoinsDistribResult(result_moves, result_excess) return get_distrib(root)[0] if __name__ == "__main__": import doctest doctest.testmod()
-1
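A short hand trace of get_distrib from the record above on Example 1, as a hedged sketch (assumes TreeNode and distribute_coins are in scope):

root = TreeNode(3, TreeNode(0), TreeNode(0))
# Each 0-coin leaf returns (moves=0, excess=0): it keeps nothing and needs one coin.
# At the root: coins_to_left = coins_to_right = 1 - 0 = 1, so moves = |1| + |1| = 2.
assert distribute_coins(root) == 2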
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" https://en.wikipedia.org/wiki/Euclidean_algorithm """ def euclidean_gcd(a: int, b: int) -> int: """ Examples: >>> euclidean_gcd(3, 5) 1 >>> euclidean_gcd(6, 3) 3 """ while b: a, b = b, a % b return a def euclidean_gcd_recursive(a: int, b: int) -> int: """ Recursive method for euclicedan gcd algorithm Examples: >>> euclidean_gcd_recursive(3, 5) 1 >>> euclidean_gcd_recursive(6, 3) 3 """ return a if b == 0 else euclidean_gcd_recursive(b, a % b) def main(): print(f"euclidean_gcd(3, 5) = {euclidean_gcd(3, 5)}") print(f"euclidean_gcd(5, 3) = {euclidean_gcd(5, 3)}") print(f"euclidean_gcd(1, 3) = {euclidean_gcd(1, 3)}") print(f"euclidean_gcd(3, 6) = {euclidean_gcd(3, 6)}") print(f"euclidean_gcd(6, 3) = {euclidean_gcd(6, 3)}") print(f"euclidean_gcd_recursive(3, 5) = {euclidean_gcd_recursive(3, 5)}") print(f"euclidean_gcd_recursive(5, 3) = {euclidean_gcd_recursive(5, 3)}") print(f"euclidean_gcd_recursive(1, 3) = {euclidean_gcd_recursive(1, 3)}") print(f"euclidean_gcd_recursive(3, 6) = {euclidean_gcd_recursive(3, 6)}") print(f"euclidean_gcd_recursive(6, 3) = {euclidean_gcd_recursive(6, 3)}") if __name__ == "__main__": main()
""" https://en.wikipedia.org/wiki/Euclidean_algorithm """ def euclidean_gcd(a: int, b: int) -> int: """ Examples: >>> euclidean_gcd(3, 5) 1 >>> euclidean_gcd(6, 3) 3 """ while b: a, b = b, a % b return a def euclidean_gcd_recursive(a: int, b: int) -> int: """ Recursive method for euclicedan gcd algorithm Examples: >>> euclidean_gcd_recursive(3, 5) 1 >>> euclidean_gcd_recursive(6, 3) 3 """ return a if b == 0 else euclidean_gcd_recursive(b, a % b) def main(): print(f"euclidean_gcd(3, 5) = {euclidean_gcd(3, 5)}") print(f"euclidean_gcd(5, 3) = {euclidean_gcd(5, 3)}") print(f"euclidean_gcd(1, 3) = {euclidean_gcd(1, 3)}") print(f"euclidean_gcd(3, 6) = {euclidean_gcd(3, 6)}") print(f"euclidean_gcd(6, 3) = {euclidean_gcd(6, 3)}") print(f"euclidean_gcd_recursive(3, 5) = {euclidean_gcd_recursive(3, 5)}") print(f"euclidean_gcd_recursive(5, 3) = {euclidean_gcd_recursive(5, 3)}") print(f"euclidean_gcd_recursive(1, 3) = {euclidean_gcd_recursive(1, 3)}") print(f"euclidean_gcd_recursive(3, 6) = {euclidean_gcd_recursive(3, 6)}") print(f"euclidean_gcd_recursive(6, 3) = {euclidean_gcd_recursive(6, 3)}") if __name__ == "__main__": main()
-1
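A hedged cross-check sketch for the two gcd variants above against the standard library (the random inputs are mine, not part of the original record):

import math
from random import randint

for _ in range(100):
    a, b = randint(0, 1000), randint(0, 1000)
    # Both variants must agree with math.gcd on every pair.
    assert euclidean_gcd(a, b) == euclidean_gcd_recursive(a, b) == math.gcd(a, b)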
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" Project Euler Problem 1: https://projecteuler.net/problem=1 Multiples of 3 and 5 If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000. """ def solution(n: int = 1000) -> int: """ This solution is based on the pattern that the successive numbers in the series follow: 0+3,+2,+1,+3,+1,+2,+3. Returns the sum of all the multiples of 3 or 5 below n. >>> solution(3) 0 >>> solution(4) 3 >>> solution(10) 23 >>> solution(600) 83700 """ total = 0 num = 0 while 1: num += 3 if num >= n: break total += num num += 2 if num >= n: break total += num num += 1 if num >= n: break total += num num += 3 if num >= n: break total += num num += 1 if num >= n: break total += num num += 2 if num >= n: break total += num num += 3 if num >= n: break total += num return total if __name__ == "__main__": print(f"{solution() = }")
""" Project Euler Problem 1: https://projecteuler.net/problem=1 Multiples of 3 and 5 If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000. """ def solution(n: int = 1000) -> int: """ This solution is based on the pattern that the successive numbers in the series follow: 0+3,+2,+1,+3,+1,+2,+3. Returns the sum of all the multiples of 3 or 5 below n. >>> solution(3) 0 >>> solution(4) 3 >>> solution(10) 23 >>> solution(600) 83700 """ total = 0 num = 0 while 1: num += 3 if num >= n: break total += num num += 2 if num >= n: break total += num num += 1 if num >= n: break total += num num += 3 if num >= n: break total += num num += 1 if num >= n: break total += num num += 2 if num >= n: break total += num num += 3 if num >= n: break total += num return total if __name__ == "__main__": print(f"{solution() = }")
-1
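The same answer also falls out of inclusion-exclusion in O(1); a hedged sketch follows (the function name is illustrative, not from the record):

def solution_closed_form(n: int = 1000) -> int:
    # Sum of multiples of k below n is k * m * (m + 1) / 2 with m = (n - 1) // k.
    def sum_multiples(k: int) -> int:
        m = (n - 1) // k
        return k * m * (m + 1) // 2

    # Add multiples of 3 and of 5, then subtract multiples of 15 counted twice.
    return sum_multiples(3) + sum_multiples(5) - sum_multiples(15)

assert solution_closed_form(10) == 23
assert solution_closed_form(1000) == 233168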
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" Isolate the Decimal part of a Number https://stackoverflow.com/questions/3886402/how-to-get-numbers-after-decimal-point """ def decimal_isolate(number: float, digit_amount: int) -> float: """ Isolates the decimal part of a number. If digitAmount > 0 round to that decimal place, else print the entire decimal. >>> decimal_isolate(1.53, 0) 0.53 >>> decimal_isolate(35.345, 1) 0.3 >>> decimal_isolate(35.345, 2) 0.34 >>> decimal_isolate(35.345, 3) 0.345 >>> decimal_isolate(-14.789, 3) -0.789 >>> decimal_isolate(0, 2) 0 >>> decimal_isolate(-14.123, 1) -0.1 >>> decimal_isolate(-14.123, 2) -0.12 >>> decimal_isolate(-14.123, 3) -0.123 """ if digit_amount > 0: return round(number - int(number), digit_amount) return number - int(number) if __name__ == "__main__": print(decimal_isolate(1.53, 0)) print(decimal_isolate(35.345, 1)) print(decimal_isolate(35.345, 2)) print(decimal_isolate(35.345, 3)) print(decimal_isolate(-14.789, 3)) print(decimal_isolate(0, 2)) print(decimal_isolate(-14.123, 1)) print(decimal_isolate(-14.123, 2)) print(decimal_isolate(-14.123, 3))
""" Isolate the Decimal part of a Number https://stackoverflow.com/questions/3886402/how-to-get-numbers-after-decimal-point """ def decimal_isolate(number: float, digit_amount: int) -> float: """ Isolates the decimal part of a number. If digitAmount > 0 round to that decimal place, else print the entire decimal. >>> decimal_isolate(1.53, 0) 0.53 >>> decimal_isolate(35.345, 1) 0.3 >>> decimal_isolate(35.345, 2) 0.34 >>> decimal_isolate(35.345, 3) 0.345 >>> decimal_isolate(-14.789, 3) -0.789 >>> decimal_isolate(0, 2) 0 >>> decimal_isolate(-14.123, 1) -0.1 >>> decimal_isolate(-14.123, 2) -0.12 >>> decimal_isolate(-14.123, 3) -0.123 """ if digit_amount > 0: return round(number - int(number), digit_amount) return number - int(number) if __name__ == "__main__": print(decimal_isolate(1.53, 0)) print(decimal_isolate(35.345, 1)) print(decimal_isolate(35.345, 2)) print(decimal_isolate(35.345, 3)) print(decimal_isolate(-14.789, 3)) print(decimal_isolate(0, 2)) print(decimal_isolate(-14.123, 1)) print(decimal_isolate(-14.123, 2)) print(decimal_isolate(-14.123, 3))
-1
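For contrast with the record above, the standard library exposes the raw fractional part via math.modf; a hedged sketch (the exact noise digits depend on binary float representation, so they are not spelled out here):

import math

# modf returns (fractional, integral) parts, both carrying the sign of the input.
# The fractional part is the raw binary value, so it typically shows float noise
# that the round() call in decimal_isolate suppresses.
fractional, integral = math.modf(35.345)
print(fractional, integral)        # roughly 0.345 and 35.0, with float noise
print(decimal_isolate(35.345, 3))  # 0.345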
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
"""Binary Exponentiation.""" # Author : Junth Basnet # Time Complexity : O(logn) def binary_exponentiation(a, n): if n == 0: return 1 elif n % 2 == 1: return binary_exponentiation(a, n - 1) * a else: b = binary_exponentiation(a, n / 2) return b * b if __name__ == "__main__": try: BASE = int(input("Enter Base : ").strip()) POWER = int(input("Enter Power : ").strip()) except ValueError: print("Invalid literal for integer") RESULT = binary_exponentiation(BASE, POWER) print(f"{BASE}^({POWER}) : {RESULT}")
"""Binary Exponentiation.""" # Author : Junth Basnet # Time Complexity : O(logn) def binary_exponentiation(a, n): if n == 0: return 1 elif n % 2 == 1: return binary_exponentiation(a, n - 1) * a else: b = binary_exponentiation(a, n / 2) return b * b if __name__ == "__main__": try: BASE = int(input("Enter Base : ").strip()) POWER = int(input("Enter Power : ").strip()) except ValueError: print("Invalid literal for integer") RESULT = binary_exponentiation(BASE, POWER) print(f"{BASE}^({POWER}) : {RESULT}")
-1
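An iterative variant of the same exponent-halving idea, as a hedged sketch (this function is illustrative and not part of the record above; built-in pow() serves as the reference):

def binary_exponentiation_iterative(a: int, n: int) -> int:
    result = 1
    while n > 0:
        if n % 2 == 1:  # odd exponent: fold one factor of a into the result
            result *= a
        a *= a          # square the base
        n //= 2         # halve the exponent
    return result

assert binary_exponentiation_iterative(3, 10) == pow(3, 10) == 59049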
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
-1
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
def bubble_sort(list_data: list, length: int = 0) -> list: """ It is similar to bubble sort but recursive. :param list_data: mutable ordered sequence of elements :param length: length of list data :return: the same list in ascending order >>> bubble_sort([0, 5, 2, 3, 2], 5) [0, 2, 2, 3, 5] >>> bubble_sort([], 0) [] >>> bubble_sort([-2, -45, -5], 3) [-45, -5, -2] >>> bubble_sort([-23, 0, 6, -4, 34], 5) [-23, -4, 0, 6, 34] >>> bubble_sort([-23, 0, 6, -4, 34], 5) == sorted([-23, 0, 6, -4, 34]) True >>> bubble_sort(['z','a','y','b','x','c'], 6) ['a', 'b', 'c', 'x', 'y', 'z'] >>> bubble_sort([1.1, 3.3, 5.5, 7.7, 2.2, 4.4, 6.6]) [1.1, 2.2, 3.3, 4.4, 5.5, 6.6, 7.7] """ length = length or len(list_data) swapped = False for i in range(length - 1): if list_data[i] > list_data[i + 1]: list_data[i], list_data[i + 1] = list_data[i + 1], list_data[i] swapped = True return list_data if not swapped else bubble_sort(list_data, length - 1) if __name__ == "__main__": import doctest doctest.testmod()
def bubble_sort(list_data: list, length: int = 0) -> list: """ It is similar to bubble sort but recursive. :param list_data: mutable ordered sequence of elements :param length: length of list data :return: the same list in ascending order >>> bubble_sort([0, 5, 2, 3, 2], 5) [0, 2, 2, 3, 5] >>> bubble_sort([], 0) [] >>> bubble_sort([-2, -45, -5], 3) [-45, -5, -2] >>> bubble_sort([-23, 0, 6, -4, 34], 5) [-23, -4, 0, 6, 34] >>> bubble_sort([-23, 0, 6, -4, 34], 5) == sorted([-23, 0, 6, -4, 34]) True >>> bubble_sort(['z','a','y','b','x','c'], 6) ['a', 'b', 'c', 'x', 'y', 'z'] >>> bubble_sort([1.1, 3.3, 5.5, 7.7, 2.2, 4.4, 6.6]) [1.1, 2.2, 3.3, 4.4, 5.5, 6.6, 7.7] """ length = length or len(list_data) swapped = False for i in range(length - 1): if list_data[i] > list_data[i + 1]: list_data[i], list_data[i + 1] = list_data[i + 1], list_data[i] swapped = True return list_data if not swapped else bubble_sort(list_data, length - 1) if __name__ == "__main__": import doctest doctest.testmod()
-1
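A hedged property check for the recursive sort above (the random data is mine; sorted() acts as the oracle):

from random import sample

data = sample(range(100), 10)
# bubble_sort mutates its argument, so sort a copy and compare against the oracle.
assert bubble_sort(data.copy()) == sorted(data)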
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" Project Euler Problem 1: https://projecteuler.net/problem=1 Multiples of 3 and 5 If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000. """ def solution(n: int = 1000) -> int: """ Returns the sum of all the multiples of 3 or 5 below n. A straightforward pythonic solution using list comprehension. >>> solution(3) 0 >>> solution(4) 3 >>> solution(10) 23 >>> solution(600) 83700 """ return sum(i for i in range(n) if i % 3 == 0 or i % 5 == 0) if __name__ == "__main__": print(f"{solution() = }")
""" Project Euler Problem 1: https://projecteuler.net/problem=1 Multiples of 3 and 5 If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000. """ def solution(n: int = 1000) -> int: """ Returns the sum of all the multiples of 3 or 5 below n. A straightforward pythonic solution using list comprehension. >>> solution(3) 0 >>> solution(4) 3 >>> solution(10) 23 >>> solution(600) 83700 """ return sum(i for i in range(n) if i % 3 == 0 or i % 5 == 0) if __name__ == "__main__": print(f"{solution() = }")
-1
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
from collections import deque from math import floor from random import random from time import time # the default weight is 1 if not assigned, but the whole implementation is weighted class DirectedGraph: def __init__(self): self.graph = {} # adding vertices and edges # adding the weight is optional # handles repetition def add_pair(self, u, v, w=1): if self.graph.get(u): if self.graph[u].count([w, v]) == 0: self.graph[u].append([w, v]) else: self.graph[u] = [[w, v]] if not self.graph.get(v): self.graph[v] = [] def all_nodes(self): return list(self.graph) # handles the case where the input does not exist def remove_pair(self, u, v): if self.graph.get(u): for _ in self.graph[u]: if _[1] == v: self.graph[u].remove(_) # if no destination is given, the default value is -1 def dfs(self, s=-2, d=-1): if s == d: return [] stack = [] visited = [] if s == -2: s = list(self.graph)[0] stack.append(s) visited.append(s) ss = s while True: # check if there are any non-isolated nodes if len(self.graph[s]) != 0: ss = s for node in self.graph[s]: if visited.count(node[1]) < 1: if node[1] == d: visited.append(d) return visited else: stack.append(node[1]) visited.append(node[1]) ss = node[1] break # check if all the children are visited if s == ss: stack.pop() if len(stack) != 0: s = stack[len(stack) - 1] else: s = ss # check if we have reached the starting point if len(stack) == 0: return visited # c is the count of nodes you want; if you leave it out or pass -1 to the function # the count will be a random number from 10 to 10000 def fill_graph_randomly(self, c=-1): if c == -1: c = floor(random() * 10000) + 10 for i in range(c): # every vertex has at most 100 edges for _ in range(floor(random() * 102) + 1): n = floor(random() * c) + 1 if n != i: self.add_pair(i, n, 1) def bfs(self, s=-2): d = deque() visited = [] if s == -2: s = list(self.graph)[0] d.append(s) visited.append(s) while d: s = d.popleft() if len(self.graph[s]) != 0: for node in self.graph[s]: if visited.count(node[1]) < 1: d.append(node[1]) visited.append(node[1]) return visited def in_degree(self, u): count = 0 for x in self.graph: for y in self.graph[x]: if y[1] == u: count += 1 return count def out_degree(self, u): return len(self.graph[u]) def topological_sort(self, s=-2): stack = [] visited = [] if s == -2: s = list(self.graph)[0] stack.append(s) visited.append(s) ss = s sorted_nodes = [] while True: # check if there are any non-isolated nodes if len(self.graph[s]) != 0: ss = s for node in self.graph[s]: if visited.count(node[1]) < 1: stack.append(node[1]) visited.append(node[1]) ss = node[1] break # check if all the children are visited if s == ss: sorted_nodes.append(stack.pop()) if len(stack) != 0: s = stack[len(stack) - 1] else: s = ss # check if we have reached the starting point if len(stack) == 0: return sorted_nodes def cycle_nodes(self): stack = [] visited = [] s = list(self.graph)[0] stack.append(s) visited.append(s) parent = -2 indirect_parents = [] ss = s on_the_way_back = False anticipating_nodes = set() while True: # check if there are any non-isolated nodes if len(self.graph[s]) != 0: ss = s for node in self.graph[s]: if ( visited.count(node[1]) > 0 and node[1] != parent and indirect_parents.count(node[1]) > 0 and not on_the_way_back ): len_stack = len(stack) - 1 while len_stack >= 0: if stack[len_stack] == node[1]: anticipating_nodes.add(node[1]) break else: anticipating_nodes.add(stack[len_stack]) len_stack -= 1 if visited.count(node[1]) < 1: stack.append(node[1]) visited.append(node[1]) ss = node[1] break # check if all the children are visited if s == 
ss: stack.pop() on_the_way_back = True if len(stack) != 0: s = stack[len(stack) - 1] else: on_the_way_back = False indirect_parents.append(parent) parent = s s = ss # check if we have reached the starting point if len(stack) == 0: return list(anticipating_nodes) def has_cycle(self): stack = [] visited = [] s = list(self.graph)[0] stack.append(s) visited.append(s) parent = -2 indirect_parents = [] ss = s on_the_way_back = False anticipating_nodes = set() while True: # check if there are any non-isolated nodes if len(self.graph[s]) != 0: ss = s for node in self.graph[s]: if ( visited.count(node[1]) > 0 and node[1] != parent and indirect_parents.count(node[1]) > 0 and not on_the_way_back ): len_stack_minus_one = len(stack) - 1 while len_stack_minus_one >= 0: if stack[len_stack_minus_one] == node[1]: anticipating_nodes.add(node[1]) break else: return True if visited.count(node[1]) < 1: stack.append(node[1]) visited.append(node[1]) ss = node[1] break # check if all the children are visited if s == ss: stack.pop() on_the_way_back = True if len(stack) != 0: s = stack[len(stack) - 1] else: on_the_way_back = False indirect_parents.append(parent) parent = s s = ss # check if we have reached the starting point if len(stack) == 0: return False def dfs_time(self, s=-2, e=-1): begin = time() self.dfs(s, e) end = time() return end - begin def bfs_time(self, s=-2): begin = time() self.bfs(s) end = time() return end - begin class Graph: def __init__(self): self.graph = {} # adding vertices and edges # adding the weight is optional # handles repetition def add_pair(self, u, v, w=1): # check if u exists if self.graph.get(u): # if there already is an edge if self.graph[u].count([w, v]) == 0: self.graph[u].append([w, v]) else: # if u does not exist self.graph[u] = [[w, v]] # add the other way if self.graph.get(v): # if there already is an edge if self.graph[v].count([w, u]) == 0: self.graph[v].append([w, u]) else: # if v does not exist self.graph[v] = [[w, u]] # handles the case where the input does not exist def remove_pair(self, u, v): if self.graph.get(u): for _ in self.graph[u]: if _[1] == v: self.graph[u].remove(_) # the other way round if self.graph.get(v): for _ in self.graph[v]: if _[1] == u: self.graph[v].remove(_) # if no destination is given, the default value is -1 def dfs(self, s=-2, d=-1): if s == d: return [] stack = [] visited = [] if s == -2: s = list(self.graph)[0] stack.append(s) visited.append(s) ss = s while True: # check if there are any non-isolated nodes if len(self.graph[s]) != 0: ss = s for node in self.graph[s]: if visited.count(node[1]) < 1: if node[1] == d: visited.append(d) return visited else: stack.append(node[1]) visited.append(node[1]) ss = node[1] break # check if all the children are visited if s == ss: stack.pop() if len(stack) != 0: s = stack[len(stack) - 1] else: s = ss # check if we have reached the starting point if len(stack) == 0: return visited # c is the count of nodes you want; if you leave it out or pass -1 to the function # the count will be a random number from 10 to 10000 def fill_graph_randomly(self, c=-1): if c == -1: c = floor(random() * 10000) + 10 for i in range(c): # every vertex has at most 100 edges for _ in range(floor(random() * 102) + 1): n = floor(random() * c) + 1 if n != i: self.add_pair(i, n, 1) def bfs(self, s=-2): d = deque() visited = [] if s == -2: s = list(self.graph)[0] d.append(s) visited.append(s) while d: s = d.popleft() if len(self.graph[s]) != 0: for node in self.graph[s]: if visited.count(node[1]) < 1: d.append(node[1]) visited.append(node[1]) return visited 
def degree(self, u): return len(self.graph[u]) def cycle_nodes(self): stack = [] visited = [] s = list(self.graph)[0] stack.append(s) visited.append(s) parent = -2 indirect_parents = [] ss = s on_the_way_back = False anticipating_nodes = set() while True: # check if there are any non-isolated nodes if len(self.graph[s]) != 0: ss = s for node in self.graph[s]: if ( visited.count(node[1]) > 0 and node[1] != parent and indirect_parents.count(node[1]) > 0 and not on_the_way_back ): len_stack = len(stack) - 1 while len_stack >= 0: if stack[len_stack] == node[1]: anticipating_nodes.add(node[1]) break else: anticipating_nodes.add(stack[len_stack]) len_stack -= 1 if visited.count(node[1]) < 1: stack.append(node[1]) visited.append(node[1]) ss = node[1] break # check if all the children are visited if s == ss: stack.pop() on_the_way_back = True if len(stack) != 0: s = stack[len(stack) - 1] else: on_the_way_back = False indirect_parents.append(parent) parent = s s = ss # check if we have reached the starting point if len(stack) == 0: return list(anticipating_nodes) def has_cycle(self): stack = [] visited = [] s = list(self.graph)[0] stack.append(s) visited.append(s) parent = -2 indirect_parents = [] ss = s on_the_way_back = False anticipating_nodes = set() while True: # check if there are any non-isolated nodes if len(self.graph[s]) != 0: ss = s for node in self.graph[s]: if ( visited.count(node[1]) > 0 and node[1] != parent and indirect_parents.count(node[1]) > 0 and not on_the_way_back ): len_stack_minus_one = len(stack) - 1 while len_stack_minus_one >= 0: if stack[len_stack_minus_one] == node[1]: anticipating_nodes.add(node[1]) break else: return True if visited.count(node[1]) < 1: stack.append(node[1]) visited.append(node[1]) ss = node[1] break # check if all the children are visited if s == ss: stack.pop() on_the_way_back = True if len(stack) != 0: s = stack[len(stack) - 1] else: on_the_way_back = False indirect_parents.append(parent) parent = s s = ss # check if we have reached the starting point if len(stack) == 0: return False def all_nodes(self): return list(self.graph) def dfs_time(self, s=-2, e=-1): begin = time() self.dfs(s, e) end = time() return end - begin def bfs_time(self, s=-2): begin = time() self.bfs(s) end = time() return end - begin
from collections import deque from math import floor from random import random from time import time # the default weight is 1 if not assigned, but the whole implementation is weighted class DirectedGraph: def __init__(self): self.graph = {} # adding vertices and edges # adding the weight is optional # handles repetition def add_pair(self, u, v, w=1): if self.graph.get(u): if self.graph[u].count([w, v]) == 0: self.graph[u].append([w, v]) else: self.graph[u] = [[w, v]] if not self.graph.get(v): self.graph[v] = [] def all_nodes(self): return list(self.graph) # handles the case where the input does not exist def remove_pair(self, u, v): if self.graph.get(u): for _ in self.graph[u]: if _[1] == v: self.graph[u].remove(_) # if no destination is given, the default value is -1 def dfs(self, s=-2, d=-1): if s == d: return [] stack = [] visited = [] if s == -2: s = list(self.graph)[0] stack.append(s) visited.append(s) ss = s while True: # check if there are any non-isolated nodes if len(self.graph[s]) != 0: ss = s for node in self.graph[s]: if visited.count(node[1]) < 1: if node[1] == d: visited.append(d) return visited else: stack.append(node[1]) visited.append(node[1]) ss = node[1] break # check if all the children are visited if s == ss: stack.pop() if len(stack) != 0: s = stack[len(stack) - 1] else: s = ss # check if we have reached the starting point if len(stack) == 0: return visited # c is the count of nodes you want; if you leave it out or pass -1 to the function # the count will be a random number from 10 to 10000 def fill_graph_randomly(self, c=-1): if c == -1: c = floor(random() * 10000) + 10 for i in range(c): # every vertex has at most 100 edges for _ in range(floor(random() * 102) + 1): n = floor(random() * c) + 1 if n != i: self.add_pair(i, n, 1) def bfs(self, s=-2): d = deque() visited = [] if s == -2: s = list(self.graph)[0] d.append(s) visited.append(s) while d: s = d.popleft() if len(self.graph[s]) != 0: for node in self.graph[s]: if visited.count(node[1]) < 1: d.append(node[1]) visited.append(node[1]) return visited def in_degree(self, u): count = 0 for x in self.graph: for y in self.graph[x]: if y[1] == u: count += 1 return count def out_degree(self, u): return len(self.graph[u]) def topological_sort(self, s=-2): stack = [] visited = [] if s == -2: s = list(self.graph)[0] stack.append(s) visited.append(s) ss = s sorted_nodes = [] while True: # check if there are any non-isolated nodes if len(self.graph[s]) != 0: ss = s for node in self.graph[s]: if visited.count(node[1]) < 1: stack.append(node[1]) visited.append(node[1]) ss = node[1] break # check if all the children are visited if s == ss: sorted_nodes.append(stack.pop()) if len(stack) != 0: s = stack[len(stack) - 1] else: s = ss # check if we have reached the starting point if len(stack) == 0: return sorted_nodes def cycle_nodes(self): stack = [] visited = [] s = list(self.graph)[0] stack.append(s) visited.append(s) parent = -2 indirect_parents = [] ss = s on_the_way_back = False anticipating_nodes = set() while True: # check if there are any non-isolated nodes if len(self.graph[s]) != 0: ss = s for node in self.graph[s]: if ( visited.count(node[1]) > 0 and node[1] != parent and indirect_parents.count(node[1]) > 0 and not on_the_way_back ): len_stack = len(stack) - 1 while len_stack >= 0: if stack[len_stack] == node[1]: anticipating_nodes.add(node[1]) break else: anticipating_nodes.add(stack[len_stack]) len_stack -= 1 if visited.count(node[1]) < 1: stack.append(node[1]) visited.append(node[1]) ss = node[1] break # check if all the children are visited if s == 
ss: stack.pop() on_the_way_back = True if len(stack) != 0: s = stack[len(stack) - 1] else: on_the_way_back = False indirect_parents.append(parent) parent = s s = ss # check if we have reached the starting point if len(stack) == 0: return list(anticipating_nodes) def has_cycle(self): stack = [] visited = [] s = list(self.graph)[0] stack.append(s) visited.append(s) parent = -2 indirect_parents = [] ss = s on_the_way_back = False anticipating_nodes = set() while True: # check if there are any non-isolated nodes if len(self.graph[s]) != 0: ss = s for node in self.graph[s]: if ( visited.count(node[1]) > 0 and node[1] != parent and indirect_parents.count(node[1]) > 0 and not on_the_way_back ): len_stack_minus_one = len(stack) - 1 while len_stack_minus_one >= 0: if stack[len_stack_minus_one] == node[1]: anticipating_nodes.add(node[1]) break else: return True if visited.count(node[1]) < 1: stack.append(node[1]) visited.append(node[1]) ss = node[1] break # check if all the children are visited if s == ss: stack.pop() on_the_way_back = True if len(stack) != 0: s = stack[len(stack) - 1] else: on_the_way_back = False indirect_parents.append(parent) parent = s s = ss # check if we have reached the starting point if len(stack) == 0: return False def dfs_time(self, s=-2, e=-1): begin = time() self.dfs(s, e) end = time() return end - begin def bfs_time(self, s=-2): begin = time() self.bfs(s) end = time() return end - begin class Graph: def __init__(self): self.graph = {} # adding vertices and edges # adding the weight is optional # handles repetition def add_pair(self, u, v, w=1): # check if u exists if self.graph.get(u): # if there already is an edge if self.graph[u].count([w, v]) == 0: self.graph[u].append([w, v]) else: # if u does not exist self.graph[u] = [[w, v]] # add the other way if self.graph.get(v): # if there already is an edge if self.graph[v].count([w, u]) == 0: self.graph[v].append([w, u]) else: # if v does not exist self.graph[v] = [[w, u]] # handles the case where the input does not exist def remove_pair(self, u, v): if self.graph.get(u): for _ in self.graph[u]: if _[1] == v: self.graph[u].remove(_) # the other way round if self.graph.get(v): for _ in self.graph[v]: if _[1] == u: self.graph[v].remove(_) # if no destination is given, the default value is -1 def dfs(self, s=-2, d=-1): if s == d: return [] stack = [] visited = [] if s == -2: s = list(self.graph)[0] stack.append(s) visited.append(s) ss = s while True: # check if there are any non-isolated nodes if len(self.graph[s]) != 0: ss = s for node in self.graph[s]: if visited.count(node[1]) < 1: if node[1] == d: visited.append(d) return visited else: stack.append(node[1]) visited.append(node[1]) ss = node[1] break # check if all the children are visited if s == ss: stack.pop() if len(stack) != 0: s = stack[len(stack) - 1] else: s = ss # check if we have reached the starting point if len(stack) == 0: return visited # c is the count of nodes you want; if you leave it out or pass -1 to the function # the count will be a random number from 10 to 10000 def fill_graph_randomly(self, c=-1): if c == -1: c = floor(random() * 10000) + 10 for i in range(c): # every vertex has at most 100 edges for _ in range(floor(random() * 102) + 1): n = floor(random() * c) + 1 if n != i: self.add_pair(i, n, 1) def bfs(self, s=-2): d = deque() visited = [] if s == -2: s = list(self.graph)[0] d.append(s) visited.append(s) while d: s = d.popleft() if len(self.graph[s]) != 0: for node in self.graph[s]: if visited.count(node[1]) < 1: d.append(node[1]) visited.append(node[1]) return visited 
def degree(self, u): return len(self.graph[u]) def cycle_nodes(self): stack = [] visited = [] s = list(self.graph)[0] stack.append(s) visited.append(s) parent = -2 indirect_parents = [] ss = s on_the_way_back = False anticipating_nodes = set() while True: # check if there are any non-isolated nodes if len(self.graph[s]) != 0: ss = s for node in self.graph[s]: if ( visited.count(node[1]) > 0 and node[1] != parent and indirect_parents.count(node[1]) > 0 and not on_the_way_back ): len_stack = len(stack) - 1 while len_stack >= 0: if stack[len_stack] == node[1]: anticipating_nodes.add(node[1]) break else: anticipating_nodes.add(stack[len_stack]) len_stack -= 1 if visited.count(node[1]) < 1: stack.append(node[1]) visited.append(node[1]) ss = node[1] break # check if all the children are visited if s == ss: stack.pop() on_the_way_back = True if len(stack) != 0: s = stack[len(stack) - 1] else: on_the_way_back = False indirect_parents.append(parent) parent = s s = ss # check if we have reached the starting point if len(stack) == 0: return list(anticipating_nodes) def has_cycle(self): stack = [] visited = [] s = list(self.graph)[0] stack.append(s) visited.append(s) parent = -2 indirect_parents = [] ss = s on_the_way_back = False anticipating_nodes = set() while True: # check if there are any non-isolated nodes if len(self.graph[s]) != 0: ss = s for node in self.graph[s]: if ( visited.count(node[1]) > 0 and node[1] != parent and indirect_parents.count(node[1]) > 0 and not on_the_way_back ): len_stack_minus_one = len(stack) - 1 while len_stack_minus_one >= 0: if stack[len_stack_minus_one] == node[1]: anticipating_nodes.add(node[1]) break else: return True if visited.count(node[1]) < 1: stack.append(node[1]) visited.append(node[1]) ss = node[1] break # check if all the children are visited if s == ss: stack.pop() on_the_way_back = True if len(stack) != 0: s = stack[len(stack) - 1] else: on_the_way_back = False indirect_parents.append(parent) parent = s s = ss # check if we have reached the starting point if len(stack) == 0: return False def all_nodes(self): return list(self.graph) def dfs_time(self, s=-2, e=-1): begin = time() self.dfs(s, e) end = time() return end - begin def bfs_time(self, s=-2): begin = time() self.bfs(s) end = time() return end - begin
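A minimal usage sketch for the classes above; the edge list is example data of my own, not from the repository. Note that topological_sort() appends nodes as the DFS finishes them, so the result reads back-to-front:

g = DirectedGraph()
for u, v in [(1, 2), (2, 3), (1, 3), (3, 4)]:
    g.add_pair(u, v)            # default weight is 1
print(g.all_nodes())            # [1, 2, 3, 4]
print(g.dfs(1))                 # [1, 2, 3, 4]
print(g.bfs(1))                 # [1, 2, 3, 4]
print(g.in_degree(3))           # 2 (edges arriving from nodes 1 and 2)
print(g.topological_sort(1))    # [4, 3, 2, 1] -- DFS finish order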
-1
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" Project Euler Problem 686: https://projecteuler.net/problem=686 2^7 = 128 is the first power of two whose leading digits are "12". The next power of two whose leading digits are "12" is 2^80. Define p(L,n) to be the nth-smallest value of j such that the base 10 representation of 2^j begins with the digits of L. So p(12, 1) = 7 and p(12, 2) = 80. You are given that p(123, 45) = 12710. Find p(123, 678910). """ import math def log_difference(number: int) -> float: """ This function returns the decimal value of a number multiplied with log(2) Since the problem is on powers of two, finding the powers of two with large exponents is time consuming. Hence we use log to reduce compute time. We can find out that the first power of 2 with starting digits 123 is 90. Computing 2^90 is time consuming. Hence we find log(2^90) = 90*log(2) = 27.092699609758302 But we require only the decimal part to determine whether the power starts with 123. So we just return the decimal part of the log product. Therefore we return 0.092699609758302 >>> log_difference(90) 0.092699609758302 >>> log_difference(379) 0.090368356648852 """ log_number = math.log(2, 10) * number difference = round((log_number - int(log_number)), 15) return difference def solution(number: int = 678910) -> int: """ This function calculates the power of two which is nth (n = number) smallest value of power of 2 such that the starting digits of the 2^power is 123. For example the powers of 2 for which starting digits is 123 are: 90, 379, 575, 864, 1060, 1545, 1741, 2030, 2226, 2515 and so on. 90 is the first power of 2 whose starting digits are 123, 379 is second power of 2 whose starting digits are 123, and so on. So if number = 10, then solution returns 2515 as we observe from above series. We will define a lowerbound and upperbound. lowerbound = log(1.23), upperbound = log(1.24) because we need to find the powers that yield 123 as starting digits. log(1.23) = 0.08990511143939792, log(1,24) = 0.09342168516223506. We use 1.23 and not 12.3 or 123, because log(1.23) yields only decimal value which is less than 1. log(12.3) will be same decimal value but 1 added to it which is log(12.3) = 1.093421685162235. We observe that decimal value remains same no matter 1.23 or 12.3 Since we use the function log_difference(), which returns the value that is only decimal part, using 1.23 is logical. If we see, 90*log(2) = 27.092699609758302, decimal part = 0.092699609758302, which is inside the range of lowerbound and upperbound. If we compute the difference between all the powers which lead to 123 starting digits is as follows: 379 - 90 = 289 575 - 379 = 196 864 - 575 = 289 1060 - 864 = 196 We see a pattern here. The difference is either 196 or 289 = 196 + 93. Hence to optimize the algorithm we will increment by 196 or 93 depending upon the log_difference() value. Let's take for example 90. Since 90 is the first power leading to staring digits as 123, we will increment iterator by 196. Because the difference between any two powers leading to 123 as staring digits is greater than or equal to 196. After incrementing by 196 we get 286. log_difference(286) = 0.09457875989861 which is greater than upperbound. The next power is 379, and we need to add 93 to get there. The iterator will now become 379, which is the next power leading to 123 as starting digits. Let's take 1060. We increment by 196, we get 1256. log_difference(1256) = 0.09367455396034, Which is greater than upperbound hence we increment by 93. Now iterator is 1349. 
log_difference(1349) = 0.08946415071057 which is less than lowerbound. The next power is 1545 and we need to add 196 to get 1545. Conditions are as follows: 1) If we find a power whose log_difference() is in the range of lower and upperbound, we will increment by 196. which implies that the power is a number which will lead to 123 as starting digits. 2) If we find a power, whose log_difference() is greater than or equal upperbound, we will increment by 93. 3) if log_difference() < lowerbound, we increment by 196. Reference to the above logic: https://math.stackexchange.com/questions/4093970/powers-of-2-starting-with-123-does-a-pattern-exist >>> solution(1000) 284168 >>> solution(56000) 15924915 >>> solution(678910) 193060223 """ power_iterator = 90 position = 0 lower_limit = math.log(1.23, 10) upper_limit = math.log(1.24, 10) previous_power = 0 while position < number: difference = log_difference(power_iterator) if difference >= upper_limit: power_iterator += 93 elif difference < lower_limit: power_iterator += 196 else: previous_power = power_iterator power_iterator += 196 position += 1 return previous_power if __name__ == "__main__": import doctest doctest.testmod() print(f"{solution() = }")
""" Project Euler Problem 686: https://projecteuler.net/problem=686 2^7 = 128 is the first power of two whose leading digits are "12". The next power of two whose leading digits are "12" is 2^80. Define p(L,n) to be the nth-smallest value of j such that the base 10 representation of 2^j begins with the digits of L. So p(12, 1) = 7 and p(12, 2) = 80. You are given that p(123, 45) = 12710. Find p(123, 678910). """ import math def log_difference(number: int) -> float: """ This function returns the decimal value of a number multiplied with log(2) Since the problem is on powers of two, finding the powers of two with large exponents is time consuming. Hence we use log to reduce compute time. We can find out that the first power of 2 with starting digits 123 is 90. Computing 2^90 is time consuming. Hence we find log(2^90) = 90*log(2) = 27.092699609758302 But we require only the decimal part to determine whether the power starts with 123. So we just return the decimal part of the log product. Therefore we return 0.092699609758302 >>> log_difference(90) 0.092699609758302 >>> log_difference(379) 0.090368356648852 """ log_number = math.log(2, 10) * number difference = round((log_number - int(log_number)), 15) return difference def solution(number: int = 678910) -> int: """ This function calculates the power of two which is nth (n = number) smallest value of power of 2 such that the starting digits of the 2^power is 123. For example the powers of 2 for which starting digits is 123 are: 90, 379, 575, 864, 1060, 1545, 1741, 2030, 2226, 2515 and so on. 90 is the first power of 2 whose starting digits are 123, 379 is second power of 2 whose starting digits are 123, and so on. So if number = 10, then solution returns 2515 as we observe from above series. We will define a lowerbound and upperbound. lowerbound = log(1.23), upperbound = log(1.24) because we need to find the powers that yield 123 as starting digits. log(1.23) = 0.08990511143939792, log(1,24) = 0.09342168516223506. We use 1.23 and not 12.3 or 123, because log(1.23) yields only decimal value which is less than 1. log(12.3) will be same decimal value but 1 added to it which is log(12.3) = 1.093421685162235. We observe that decimal value remains same no matter 1.23 or 12.3 Since we use the function log_difference(), which returns the value that is only decimal part, using 1.23 is logical. If we see, 90*log(2) = 27.092699609758302, decimal part = 0.092699609758302, which is inside the range of lowerbound and upperbound. If we compute the difference between all the powers which lead to 123 starting digits is as follows: 379 - 90 = 289 575 - 379 = 196 864 - 575 = 289 1060 - 864 = 196 We see a pattern here. The difference is either 196 or 289 = 196 + 93. Hence to optimize the algorithm we will increment by 196 or 93 depending upon the log_difference() value. Let's take for example 90. Since 90 is the first power leading to staring digits as 123, we will increment iterator by 196. Because the difference between any two powers leading to 123 as staring digits is greater than or equal to 196. After incrementing by 196 we get 286. log_difference(286) = 0.09457875989861 which is greater than upperbound. The next power is 379, and we need to add 93 to get there. The iterator will now become 379, which is the next power leading to 123 as starting digits. Let's take 1060. We increment by 196, we get 1256. log_difference(1256) = 0.09367455396034, Which is greater than upperbound hence we increment by 93. Now iterator is 1349. 
log_difference(1349) = 0.08946415071057 which is less than lowerbound. The next power is 1545 and we need to add 196 to get 1545. Conditions are as follows: 1) If we find a power whose log_difference() is in the range of lower and upperbound, we will increment by 196. which implies that the power is a number which will lead to 123 as starting digits. 2) If we find a power, whose log_difference() is greater than or equal upperbound, we will increment by 93. 3) if log_difference() < lowerbound, we increment by 196. Reference to the above logic: https://math.stackexchange.com/questions/4093970/powers-of-2-starting-with-123-does-a-pattern-exist >>> solution(1000) 284168 >>> solution(56000) 15924915 >>> solution(678910) 193060223 """ power_iterator = 90 position = 0 lower_limit = math.log(1.23, 10) upper_limit = math.log(1.24, 10) previous_power = 0 while position < number: difference = log_difference(power_iterator) if difference >= upper_limit: power_iterator += 93 elif difference < lower_limit: power_iterator += 196 else: previous_power = power_iterator power_iterator += 196 position += 1 return previous_power if __name__ == "__main__": import doctest doctest.testmod() print(f"{solution() = }")
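The 196/93 stepping described in the docstring can be cross-checked for small inputs by direct enumeration. The helper below is a hedged sketch of my own (slow, for validation only; it is not part of the project file) that counts powers of two whose decimal form starts with "123":

def brute_force_p123(n: int) -> int:
    # Return the exponent of the nth power of two starting with digits "123".
    count, exponent = 0, 0
    while True:
        exponent += 1
        if str(1 << exponent).startswith("123"):
            count += 1
            if count == n:
                return exponent

assert brute_force_p123(1) == 90   # first power of 2 starting with 123
assert brute_force_p123(2) == 379  # second such power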
-1
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" - A linked list is similar to an array, it holds values. However, links in a linked list do not have indexes. - This is an example of a double ended, doubly linked list. - Each link references the next link and the previous one. - A Doubly Linked List (DLL) contains an extra pointer, typically called previous pointer, together with next pointer and data which are there in singly linked list. - Advantages over SLL - It can be traversed in both forward and backward direction. Delete operation is more efficient """ class Node: def __init__(self, data: int, previous=None, next_node=None): self.data = data self.previous = previous self.next = next_node def __str__(self) -> str: return f"{self.data}" def get_data(self) -> int: return self.data def get_next(self): return self.next def get_previous(self): return self.previous class LinkedListIterator: def __init__(self, head): self.current = head def __iter__(self): return self def __next__(self): if not self.current: raise StopIteration else: value = self.current.get_data() self.current = self.current.get_next() return value class LinkedList: def __init__(self): self.head = None # First node in list self.tail = None # Last node in list def __str__(self): current = self.head nodes = [] while current is not None: nodes.append(current.get_data()) current = current.get_next() return " ".join(str(node) for node in nodes) def __contains__(self, value: int): current = self.head while current: if current.get_data() == value: return True current = current.get_next() return False def __iter__(self): return LinkedListIterator(self.head) def get_head_data(self): if self.head: return self.head.get_data() return None def get_tail_data(self): if self.tail: return self.tail.get_data() return None def set_head(self, node: Node) -> None: if self.head is None: self.head = node self.tail = node else: self.insert_before_node(self.head, node) def set_tail(self, node: Node) -> None: if self.head is None: self.set_head(node) else: self.insert_after_node(self.tail, node) def insert(self, value: int) -> None: node = Node(value) if self.head is None: self.set_head(node) else: self.set_tail(node) def insert_before_node(self, node: Node, node_to_insert: Node) -> None: node_to_insert.next = node node_to_insert.previous = node.previous if node.get_previous() is None: self.head = node_to_insert else: node.previous.next = node_to_insert node.previous = node_to_insert def insert_after_node(self, node: Node, node_to_insert: Node) -> None: node_to_insert.previous = node node_to_insert.next = node.next if node.get_next() is None: self.tail = node_to_insert else: node.next.previous = node_to_insert node.next = node_to_insert def insert_at_position(self, position: int, value: int) -> None: current_position = 1 new_node = Node(value) node = self.head while node: if current_position == position: self.insert_before_node(node, new_node) return None current_position += 1 node = node.next self.insert_after_node(self.tail, new_node) def get_node(self, item: int) -> Node: node = self.head while node: if node.get_data() == item: return node node = node.get_next() raise Exception("Node not found") def delete_value(self, value): if (node := self.get_node(value)) is not None: if node == self.head: self.head = self.head.get_next() if node == self.tail: self.tail = self.tail.get_previous() self.remove_node_pointers(node) @staticmethod def remove_node_pointers(node: Node) -> None: if node.get_next(): node.next.previous = node.previous if node.get_previous(): node.previous.next = node.next 
node.next = None node.previous = None def is_empty(self): return self.head is None def create_linked_list() -> None: """ >>> new_linked_list = LinkedList() >>> new_linked_list.get_head_data() is None True >>> new_linked_list.get_tail_data() is None True >>> new_linked_list.is_empty() True >>> new_linked_list.insert(10) >>> new_linked_list.get_head_data() 10 >>> new_linked_list.get_tail_data() 10 >>> new_linked_list.insert_at_position(position=3, value=20) >>> new_linked_list.get_head_data() 10 >>> new_linked_list.get_tail_data() 20 >>> new_linked_list.set_head(Node(1000)) >>> new_linked_list.get_head_data() 1000 >>> new_linked_list.get_tail_data() 20 >>> new_linked_list.set_tail(Node(2000)) >>> new_linked_list.get_head_data() 1000 >>> new_linked_list.get_tail_data() 2000 >>> for value in new_linked_list: ... print(value) 1000 10 20 2000 >>> new_linked_list.is_empty() False >>> for value in new_linked_list: ... print(value) 1000 10 20 2000 >>> 10 in new_linked_list True >>> new_linked_list.delete_value(value=10) >>> 10 in new_linked_list False >>> new_linked_list.delete_value(value=2000) >>> new_linked_list.get_tail_data() 20 >>> new_linked_list.delete_value(value=1000) >>> new_linked_list.get_tail_data() 20 >>> new_linked_list.get_head_data() 20 >>> for value in new_linked_list: ... print(value) 20 >>> new_linked_list.delete_value(value=20) >>> for value in new_linked_list: ... print(value) >>> for value in range(1,10): ... new_linked_list.insert(value=value) >>> for value in new_linked_list: ... print(value) 1 2 3 4 5 6 7 8 9 """ if __name__ == "__main__": import doctest doctest.testmod()
""" - A linked list is similar to an array, it holds values. However, links in a linked list do not have indexes. - This is an example of a double ended, doubly linked list. - Each link references the next link and the previous one. - A Doubly Linked List (DLL) contains an extra pointer, typically called previous pointer, together with next pointer and data which are there in singly linked list. - Advantages over SLL - It can be traversed in both forward and backward direction. Delete operation is more efficient """ class Node: def __init__(self, data: int, previous=None, next_node=None): self.data = data self.previous = previous self.next = next_node def __str__(self) -> str: return f"{self.data}" def get_data(self) -> int: return self.data def get_next(self): return self.next def get_previous(self): return self.previous class LinkedListIterator: def __init__(self, head): self.current = head def __iter__(self): return self def __next__(self): if not self.current: raise StopIteration else: value = self.current.get_data() self.current = self.current.get_next() return value class LinkedList: def __init__(self): self.head = None # First node in list self.tail = None # Last node in list def __str__(self): current = self.head nodes = [] while current is not None: nodes.append(current.get_data()) current = current.get_next() return " ".join(str(node) for node in nodes) def __contains__(self, value: int): current = self.head while current: if current.get_data() == value: return True current = current.get_next() return False def __iter__(self): return LinkedListIterator(self.head) def get_head_data(self): if self.head: return self.head.get_data() return None def get_tail_data(self): if self.tail: return self.tail.get_data() return None def set_head(self, node: Node) -> None: if self.head is None: self.head = node self.tail = node else: self.insert_before_node(self.head, node) def set_tail(self, node: Node) -> None: if self.head is None: self.set_head(node) else: self.insert_after_node(self.tail, node) def insert(self, value: int) -> None: node = Node(value) if self.head is None: self.set_head(node) else: self.set_tail(node) def insert_before_node(self, node: Node, node_to_insert: Node) -> None: node_to_insert.next = node node_to_insert.previous = node.previous if node.get_previous() is None: self.head = node_to_insert else: node.previous.next = node_to_insert node.previous = node_to_insert def insert_after_node(self, node: Node, node_to_insert: Node) -> None: node_to_insert.previous = node node_to_insert.next = node.next if node.get_next() is None: self.tail = node_to_insert else: node.next.previous = node_to_insert node.next = node_to_insert def insert_at_position(self, position: int, value: int) -> None: current_position = 1 new_node = Node(value) node = self.head while node: if current_position == position: self.insert_before_node(node, new_node) return None current_position += 1 node = node.next self.insert_after_node(self.tail, new_node) def get_node(self, item: int) -> Node: node = self.head while node: if node.get_data() == item: return node node = node.get_next() raise Exception("Node not found") def delete_value(self, value): if (node := self.get_node(value)) is not None: if node == self.head: self.head = self.head.get_next() if node == self.tail: self.tail = self.tail.get_previous() self.remove_node_pointers(node) @staticmethod def remove_node_pointers(node: Node) -> None: if node.get_next(): node.next.previous = node.previous if node.get_previous(): node.previous.next = node.next 
node.next = None node.previous = None def is_empty(self): return self.head is None def create_linked_list() -> None: """ >>> new_linked_list = LinkedList() >>> new_linked_list.get_head_data() is None True >>> new_linked_list.get_tail_data() is None True >>> new_linked_list.is_empty() True >>> new_linked_list.insert(10) >>> new_linked_list.get_head_data() 10 >>> new_linked_list.get_tail_data() 10 >>> new_linked_list.insert_at_position(position=3, value=20) >>> new_linked_list.get_head_data() 10 >>> new_linked_list.get_tail_data() 20 >>> new_linked_list.set_head(Node(1000)) >>> new_linked_list.get_head_data() 1000 >>> new_linked_list.get_tail_data() 20 >>> new_linked_list.set_tail(Node(2000)) >>> new_linked_list.get_head_data() 1000 >>> new_linked_list.get_tail_data() 2000 >>> for value in new_linked_list: ... print(value) 1000 10 20 2000 >>> new_linked_list.is_empty() False >>> for value in new_linked_list: ... print(value) 1000 10 20 2000 >>> 10 in new_linked_list True >>> new_linked_list.delete_value(value=10) >>> 10 in new_linked_list False >>> new_linked_list.delete_value(value=2000) >>> new_linked_list.get_tail_data() 20 >>> new_linked_list.delete_value(value=1000) >>> new_linked_list.get_tail_data() 20 >>> new_linked_list.get_head_data() 20 >>> for value in new_linked_list: ... print(value) 20 >>> new_linked_list.delete_value(value=20) >>> for value in new_linked_list: ... print(value) >>> for value in range(1,10): ... new_linked_list.insert(value=value) >>> for value in new_linked_list: ... print(value) 1 2 3 4 5 6 7 8 9 """ if __name__ == "__main__": import doctest doctest.testmod()
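A short usage sketch of the list above; the example values are my own and mirror the doctests:

dll = LinkedList()
for value in (3, 1, 2):
    dll.insert(value)                         # first insert sets the head, later ones append at the tail
print(dll)                                    # 3 1 2
dll.insert_at_position(position=2, value=99)  # insert before the current second node
print(dll)                                    # 3 99 1 2
dll.delete_value(99)                          # unlink the node holding 99
print(2 in dll)                               # True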
-1
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" In the Combination Sum problem, we are given a list consisting of distinct integers. We need to find all the combinations whose sum equals to target given. We can use an element more than one. Time complexity(Average Case): O(n!) Constraints: 1 <= candidates.length <= 30 2 <= candidates[i] <= 40 All elements of candidates are distinct. 1 <= target <= 40 """ def backtrack( candidates: list, path: list, answer: list, target: int, previous_index: int ) -> None: """ A recursive function that searches for possible combinations. Backtracks in case of a bigger current combination value than the target value. Parameters ---------- previous_index: Last index from the previous search target: The value we need to obtain by summing our integers in the path list. answer: A list of possible combinations path: Current combination candidates: A list of integers we can use. """ if target == 0: answer.append(path.copy()) else: for index in range(previous_index, len(candidates)): if target >= candidates[index]: path.append(candidates[index]) backtrack(candidates, path, answer, target - candidates[index], index) path.pop(len(path) - 1) def combination_sum(candidates: list, target: int) -> list: """ >>> combination_sum([2, 3, 5], 8) [[2, 2, 2, 2], [2, 3, 3], [3, 5]] >>> combination_sum([2, 3, 6, 7], 7) [[2, 2, 3], [7]] >>> combination_sum([-8, 2.3, 0], 1) Traceback (most recent call last): ... RecursionError: maximum recursion depth exceeded in comparison """ path = [] # type: list[int] answer = [] # type: list[int] backtrack(candidates, path, answer, target, 0) return answer def main() -> None: print(combination_sum([-8, 2.3, 0], 1)) if __name__ == "__main__": import doctest doctest.testmod() main()
""" In the Combination Sum problem, we are given a list consisting of distinct integers. We need to find all the combinations whose sum equals to target given. We can use an element more than one. Time complexity(Average Case): O(n!) Constraints: 1 <= candidates.length <= 30 2 <= candidates[i] <= 40 All elements of candidates are distinct. 1 <= target <= 40 """ def backtrack( candidates: list, path: list, answer: list, target: int, previous_index: int ) -> None: """ A recursive function that searches for possible combinations. Backtracks in case of a bigger current combination value than the target value. Parameters ---------- previous_index: Last index from the previous search target: The value we need to obtain by summing our integers in the path list. answer: A list of possible combinations path: Current combination candidates: A list of integers we can use. """ if target == 0: answer.append(path.copy()) else: for index in range(previous_index, len(candidates)): if target >= candidates[index]: path.append(candidates[index]) backtrack(candidates, path, answer, target - candidates[index], index) path.pop(len(path) - 1) def combination_sum(candidates: list, target: int) -> list: """ >>> combination_sum([2, 3, 5], 8) [[2, 2, 2, 2], [2, 3, 3], [3, 5]] >>> combination_sum([2, 3, 6, 7], 7) [[2, 2, 3], [7]] >>> combination_sum([-8, 2.3, 0], 1) Traceback (most recent call last): ... RecursionError: maximum recursion depth exceeded in comparison """ path = [] # type: list[int] answer = [] # type: list[int] backtrack(candidates, path, answer, target, 0) return answer def main() -> None: print(combination_sum([-8, 2.3, 0], 1)) if __name__ == "__main__": import doctest doctest.testmod() main()
-1
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" This is pure Python implementation of Tabu search algorithm for a Travelling Salesman Problem, that the distances between the cities are symmetric (the distance between city 'a' and city 'b' is the same between city 'b' and city 'a'). The TSP can be represented into a graph. The cities are represented by nodes and the distance between them is represented by the weight of the ark between the nodes. The .txt file with the graph has the form: node1 node2 distance_between_node1_and_node2 node1 node3 distance_between_node1_and_node3 ... Be careful node1, node2 and the distance between them, must exist only once. This means in the .txt file should not exist: node1 node2 distance_between_node1_and_node2 node2 node1 distance_between_node2_and_node1 For pytests run following command: pytest For manual testing run: python tabu_search.py -f your_file_name.txt -number_of_iterations_of_tabu_search \ -s size_of_tabu_search e.g. python tabu_search.py -f tabudata2.txt -i 4 -s 3 """ import argparse import copy def generate_neighbours(path): """ Pure implementation of generating a dictionary of neighbors and the cost with each neighbor, given a path file that includes a graph. :param path: The path to the .txt file that includes the graph (e.g.tabudata2.txt) :return dict_of_neighbours: Dictionary with key each node and value a list of lists with the neighbors of the node and the cost (distance) for each neighbor. Example of dict_of_neighbours: >>) dict_of_neighbours[a] [[b,20],[c,18],[d,22],[e,26]] This indicates the neighbors of node (city) 'a', which has neighbor the node 'b' with distance 20, the node 'c' with distance 18, the node 'd' with distance 22 and the node 'e' with distance 26. """ dict_of_neighbours = {} with open(path) as f: for line in f: if line.split()[0] not in dict_of_neighbours: _list = [] _list.append([line.split()[1], line.split()[2]]) dict_of_neighbours[line.split()[0]] = _list else: dict_of_neighbours[line.split()[0]].append( [line.split()[1], line.split()[2]] ) if line.split()[1] not in dict_of_neighbours: _list = [] _list.append([line.split()[0], line.split()[2]]) dict_of_neighbours[line.split()[1]] = _list else: dict_of_neighbours[line.split()[1]].append( [line.split()[0], line.split()[2]] ) return dict_of_neighbours def generate_first_solution(path, dict_of_neighbours): """ Pure implementation of generating the first solution for the Tabu search to start, with the redundant resolution strategy. That means that we start from the starting node (e.g. node 'a'), then we go to the city nearest (lowest distance) to this node (let's assume is node 'c'), then we go to the nearest city of the node 'c', etc. till we have visited all cities and return to the starting node. :param path: The path to the .txt file that includes the graph (e.g.tabudata2.txt) :param dict_of_neighbours: Dictionary with key each node and value a list of lists with the neighbors of the node and the cost (distance) for each neighbor. :return first_solution: The solution for the first iteration of Tabu search using the redundant resolution strategy in a list. :return distance_of_first_solution: The total distance that Travelling Salesman will travel, if he follows the path in first_solution. 
""" with open(path) as f: start_node = f.read(1) end_node = start_node first_solution = [] visiting = start_node distance_of_first_solution = 0 while visiting not in first_solution: minim = 10000 for k in dict_of_neighbours[visiting]: if int(k[1]) < int(minim) and k[0] not in first_solution: minim = k[1] best_node = k[0] first_solution.append(visiting) distance_of_first_solution = distance_of_first_solution + int(minim) visiting = best_node first_solution.append(end_node) position = 0 for k in dict_of_neighbours[first_solution[-2]]: if k[0] == start_node: break position += 1 distance_of_first_solution = ( distance_of_first_solution + int(dict_of_neighbours[first_solution[-2]][position][1]) - 10000 ) return first_solution, distance_of_first_solution def find_neighborhood(solution, dict_of_neighbours): """ Pure implementation of generating the neighborhood (sorted by total distance of each solution from lowest to highest) of a solution with 1-1 exchange method, that means we exchange each node in a solution with each other node and generating a number of solution named neighborhood. :param solution: The solution in which we want to find the neighborhood. :param dict_of_neighbours: Dictionary with key each node and value a list of lists with the neighbors of the node and the cost (distance) for each neighbor. :return neighborhood_of_solution: A list that includes the solutions and the total distance of each solution (in form of list) that are produced with 1-1 exchange from the solution that the method took as an input Example: >>> find_neighborhood(['a', 'c', 'b', 'd', 'e', 'a'], ... {'a': [['b', '20'], ['c', '18'], ['d', '22'], ['e', '26']], ... 'c': [['a', '18'], ['b', '10'], ['d', '23'], ['e', '24']], ... 'b': [['a', '20'], ['c', '10'], ['d', '11'], ['e', '12']], ... 'e': [['a', '26'], ['b', '12'], ['c', '24'], ['d', '40']], ... 'd': [['a', '22'], ['b', '11'], ['c', '23'], ['e', '40']]} ... ) # doctest: +NORMALIZE_WHITESPACE [['a', 'e', 'b', 'd', 'c', 'a', 90], ['a', 'c', 'd', 'b', 'e', 'a', 90], ['a', 'd', 'b', 'c', 'e', 'a', 93], ['a', 'c', 'b', 'e', 'd', 'a', 102], ['a', 'c', 'e', 'd', 'b', 'a', 113], ['a', 'b', 'c', 'd', 'e', 'a', 119]] """ neighborhood_of_solution = [] for n in solution[1:-1]: idx1 = solution.index(n) for kn in solution[1:-1]: idx2 = solution.index(kn) if n == kn: continue _tmp = copy.deepcopy(solution) _tmp[idx1] = kn _tmp[idx2] = n distance = 0 for k in _tmp[:-1]: next_node = _tmp[_tmp.index(k) + 1] for i in dict_of_neighbours[k]: if i[0] == next_node: distance = distance + int(i[1]) _tmp.append(distance) if _tmp not in neighborhood_of_solution: neighborhood_of_solution.append(_tmp) index_of_last_item_in_the_list = len(neighborhood_of_solution[0]) - 1 neighborhood_of_solution.sort(key=lambda x: x[index_of_last_item_in_the_list]) return neighborhood_of_solution def tabu_search( first_solution, distance_of_first_solution, dict_of_neighbours, iters, size ): """ Pure implementation of Tabu search algorithm for a Travelling Salesman Problem in Python. :param first_solution: The solution for the first iteration of Tabu search using the redundant resolution strategy in a list. :param distance_of_first_solution: The total distance that Travelling Salesman will travel, if he follows the path in first_solution. :param dict_of_neighbours: Dictionary with key each node and value a list of lists with the neighbors of the node and the cost (distance) for each neighbor. :param iters: The number of iterations that Tabu search will execute. :param size: The size of Tabu List. 
:return best_solution_ever: The solution with the lowest distance that occurred during the execution of Tabu search. :return best_cost: The total distance that Travelling Salesman will travel, if he follows the path in best_solution ever. """ count = 1 solution = first_solution tabu_list = [] best_cost = distance_of_first_solution best_solution_ever = solution while count <= iters: neighborhood = find_neighborhood(solution, dict_of_neighbours) index_of_best_solution = 0 best_solution = neighborhood[index_of_best_solution] best_cost_index = len(best_solution) - 1 found = False while not found: i = 0 while i < len(best_solution): if best_solution[i] != solution[i]: first_exchange_node = best_solution[i] second_exchange_node = solution[i] break i = i + 1 if [first_exchange_node, second_exchange_node] not in tabu_list and [ second_exchange_node, first_exchange_node, ] not in tabu_list: tabu_list.append([first_exchange_node, second_exchange_node]) found = True solution = best_solution[:-1] cost = neighborhood[index_of_best_solution][best_cost_index] if cost < best_cost: best_cost = cost best_solution_ever = solution else: index_of_best_solution = index_of_best_solution + 1 best_solution = neighborhood[index_of_best_solution] if len(tabu_list) >= size: tabu_list.pop(0) count = count + 1 return best_solution_ever, best_cost def main(args=None): dict_of_neighbours = generate_neighbours(args.File) first_solution, distance_of_first_solution = generate_first_solution( args.File, dict_of_neighbours ) best_sol, best_cost = tabu_search( first_solution, distance_of_first_solution, dict_of_neighbours, args.Iterations, args.Size, ) print(f"Best solution: {best_sol}, with total distance: {best_cost}.") if __name__ == "__main__": parser = argparse.ArgumentParser(description="Tabu Search") parser.add_argument( "-f", "--File", type=str, help="Path to the file containing the data", required=True, ) parser.add_argument( "-i", "--Iterations", type=int, help="How many iterations the algorithm should perform", required=True, ) parser.add_argument( "-s", "--Size", type=int, help="Size of the tabu list", required=True ) # Pass the arguments to main method main(parser.parse_args())
""" This is pure Python implementation of Tabu search algorithm for a Travelling Salesman Problem, that the distances between the cities are symmetric (the distance between city 'a' and city 'b' is the same between city 'b' and city 'a'). The TSP can be represented into a graph. The cities are represented by nodes and the distance between them is represented by the weight of the ark between the nodes. The .txt file with the graph has the form: node1 node2 distance_between_node1_and_node2 node1 node3 distance_between_node1_and_node3 ... Be careful node1, node2 and the distance between them, must exist only once. This means in the .txt file should not exist: node1 node2 distance_between_node1_and_node2 node2 node1 distance_between_node2_and_node1 For pytests run following command: pytest For manual testing run: python tabu_search.py -f your_file_name.txt -number_of_iterations_of_tabu_search \ -s size_of_tabu_search e.g. python tabu_search.py -f tabudata2.txt -i 4 -s 3 """ import argparse import copy def generate_neighbours(path): """ Pure implementation of generating a dictionary of neighbors and the cost with each neighbor, given a path file that includes a graph. :param path: The path to the .txt file that includes the graph (e.g.tabudata2.txt) :return dict_of_neighbours: Dictionary with key each node and value a list of lists with the neighbors of the node and the cost (distance) for each neighbor. Example of dict_of_neighbours: >>) dict_of_neighbours[a] [[b,20],[c,18],[d,22],[e,26]] This indicates the neighbors of node (city) 'a', which has neighbor the node 'b' with distance 20, the node 'c' with distance 18, the node 'd' with distance 22 and the node 'e' with distance 26. """ dict_of_neighbours = {} with open(path) as f: for line in f: if line.split()[0] not in dict_of_neighbours: _list = [] _list.append([line.split()[1], line.split()[2]]) dict_of_neighbours[line.split()[0]] = _list else: dict_of_neighbours[line.split()[0]].append( [line.split()[1], line.split()[2]] ) if line.split()[1] not in dict_of_neighbours: _list = [] _list.append([line.split()[0], line.split()[2]]) dict_of_neighbours[line.split()[1]] = _list else: dict_of_neighbours[line.split()[1]].append( [line.split()[0], line.split()[2]] ) return dict_of_neighbours def generate_first_solution(path, dict_of_neighbours): """ Pure implementation of generating the first solution for the Tabu search to start, with the redundant resolution strategy. That means that we start from the starting node (e.g. node 'a'), then we go to the city nearest (lowest distance) to this node (let's assume is node 'c'), then we go to the nearest city of the node 'c', etc. till we have visited all cities and return to the starting node. :param path: The path to the .txt file that includes the graph (e.g.tabudata2.txt) :param dict_of_neighbours: Dictionary with key each node and value a list of lists with the neighbors of the node and the cost (distance) for each neighbor. :return first_solution: The solution for the first iteration of Tabu search using the redundant resolution strategy in a list. :return distance_of_first_solution: The total distance that Travelling Salesman will travel, if he follows the path in first_solution. 
""" with open(path) as f: start_node = f.read(1) end_node = start_node first_solution = [] visiting = start_node distance_of_first_solution = 0 while visiting not in first_solution: minim = 10000 for k in dict_of_neighbours[visiting]: if int(k[1]) < int(minim) and k[0] not in first_solution: minim = k[1] best_node = k[0] first_solution.append(visiting) distance_of_first_solution = distance_of_first_solution + int(minim) visiting = best_node first_solution.append(end_node) position = 0 for k in dict_of_neighbours[first_solution[-2]]: if k[0] == start_node: break position += 1 distance_of_first_solution = ( distance_of_first_solution + int(dict_of_neighbours[first_solution[-2]][position][1]) - 10000 ) return first_solution, distance_of_first_solution def find_neighborhood(solution, dict_of_neighbours): """ Pure implementation of generating the neighborhood (sorted by total distance of each solution from lowest to highest) of a solution with 1-1 exchange method, that means we exchange each node in a solution with each other node and generating a number of solution named neighborhood. :param solution: The solution in which we want to find the neighborhood. :param dict_of_neighbours: Dictionary with key each node and value a list of lists with the neighbors of the node and the cost (distance) for each neighbor. :return neighborhood_of_solution: A list that includes the solutions and the total distance of each solution (in form of list) that are produced with 1-1 exchange from the solution that the method took as an input Example: >>> find_neighborhood(['a', 'c', 'b', 'd', 'e', 'a'], ... {'a': [['b', '20'], ['c', '18'], ['d', '22'], ['e', '26']], ... 'c': [['a', '18'], ['b', '10'], ['d', '23'], ['e', '24']], ... 'b': [['a', '20'], ['c', '10'], ['d', '11'], ['e', '12']], ... 'e': [['a', '26'], ['b', '12'], ['c', '24'], ['d', '40']], ... 'd': [['a', '22'], ['b', '11'], ['c', '23'], ['e', '40']]} ... ) # doctest: +NORMALIZE_WHITESPACE [['a', 'e', 'b', 'd', 'c', 'a', 90], ['a', 'c', 'd', 'b', 'e', 'a', 90], ['a', 'd', 'b', 'c', 'e', 'a', 93], ['a', 'c', 'b', 'e', 'd', 'a', 102], ['a', 'c', 'e', 'd', 'b', 'a', 113], ['a', 'b', 'c', 'd', 'e', 'a', 119]] """ neighborhood_of_solution = [] for n in solution[1:-1]: idx1 = solution.index(n) for kn in solution[1:-1]: idx2 = solution.index(kn) if n == kn: continue _tmp = copy.deepcopy(solution) _tmp[idx1] = kn _tmp[idx2] = n distance = 0 for k in _tmp[:-1]: next_node = _tmp[_tmp.index(k) + 1] for i in dict_of_neighbours[k]: if i[0] == next_node: distance = distance + int(i[1]) _tmp.append(distance) if _tmp not in neighborhood_of_solution: neighborhood_of_solution.append(_tmp) index_of_last_item_in_the_list = len(neighborhood_of_solution[0]) - 1 neighborhood_of_solution.sort(key=lambda x: x[index_of_last_item_in_the_list]) return neighborhood_of_solution def tabu_search( first_solution, distance_of_first_solution, dict_of_neighbours, iters, size ): """ Pure implementation of Tabu search algorithm for a Travelling Salesman Problem in Python. :param first_solution: The solution for the first iteration of Tabu search using the redundant resolution strategy in a list. :param distance_of_first_solution: The total distance that Travelling Salesman will travel, if he follows the path in first_solution. :param dict_of_neighbours: Dictionary with key each node and value a list of lists with the neighbors of the node and the cost (distance) for each neighbor. :param iters: The number of iterations that Tabu search will execute. :param size: The size of Tabu List. 
:return best_solution_ever: The solution with the lowest distance that occurred during the execution of Tabu search. :return best_cost: The total distance that Travelling Salesman will travel, if he follows the path in best_solution ever. """ count = 1 solution = first_solution tabu_list = [] best_cost = distance_of_first_solution best_solution_ever = solution while count <= iters: neighborhood = find_neighborhood(solution, dict_of_neighbours) index_of_best_solution = 0 best_solution = neighborhood[index_of_best_solution] best_cost_index = len(best_solution) - 1 found = False while not found: i = 0 while i < len(best_solution): if best_solution[i] != solution[i]: first_exchange_node = best_solution[i] second_exchange_node = solution[i] break i = i + 1 if [first_exchange_node, second_exchange_node] not in tabu_list and [ second_exchange_node, first_exchange_node, ] not in tabu_list: tabu_list.append([first_exchange_node, second_exchange_node]) found = True solution = best_solution[:-1] cost = neighborhood[index_of_best_solution][best_cost_index] if cost < best_cost: best_cost = cost best_solution_ever = solution else: index_of_best_solution = index_of_best_solution + 1 best_solution = neighborhood[index_of_best_solution] if len(tabu_list) >= size: tabu_list.pop(0) count = count + 1 return best_solution_ever, best_cost def main(args=None): dict_of_neighbours = generate_neighbours(args.File) first_solution, distance_of_first_solution = generate_first_solution( args.File, dict_of_neighbours ) best_sol, best_cost = tabu_search( first_solution, distance_of_first_solution, dict_of_neighbours, args.Iterations, args.Size, ) print(f"Best solution: {best_sol}, with total distance: {best_cost}.") if __name__ == "__main__": parser = argparse.ArgumentParser(description="Tabu Search") parser.add_argument( "-f", "--File", type=str, help="Path to the file containing the data", required=True, ) parser.add_argument( "-i", "--Iterations", type=int, help="How many iterations the algorithm should perform", required=True, ) parser.add_argument( "-s", "--Size", type=int, help="Size of the tabu list", required=True ) # Pass the arguments to main method main(parser.parse_args())
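Because generate_neighbours and generate_first_solution only produce plain dictionaries and lists, tabu_search itself can be exercised without any .txt file. A minimal sketch reusing the graph from the find_neighborhood doctest above; the starting cost of 105 is just the edge sum (18+10+11+40+26) of the hand-picked tour:

```python
# Graph copied verbatim from the find_neighborhood doctest.
dict_of_neighbours = {
    "a": [["b", "20"], ["c", "18"], ["d", "22"], ["e", "26"]],
    "c": [["a", "18"], ["b", "10"], ["d", "23"], ["e", "24"]],
    "b": [["a", "20"], ["c", "10"], ["d", "11"], ["e", "12"]],
    "e": [["a", "26"], ["b", "12"], ["c", "24"], ["d", "40"]],
    "d": [["a", "22"], ["b", "11"], ["c", "23"], ["e", "40"]],
}
first_solution = ["a", "c", "b", "d", "e", "a"]  # cost 18+10+11+40+26 = 105

best_solution, best_cost = tabu_search(
    first_solution, 105, dict_of_neighbours, iters=3, size=2
)
# The doctest above lists a neighbour of cost 90, so the search must reach
# at least that cost on its first iteration.
assert best_cost <= 90
print(best_solution, best_cost)
```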
-1
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" https://en.wikipedia.org/wiki/Rayleigh_quotient """ from typing import Any import numpy as np def is_hermitian(matrix: np.ndarray) -> bool: """ Checks if a matrix is Hermitian. >>> import numpy as np >>> A = np.array([ ... [2, 2+1j, 4], ... [2-1j, 3, 1j], ... [4, -1j, 1]]) >>> is_hermitian(A) True >>> A = np.array([ ... [2, 2+1j, 4+1j], ... [2-1j, 3, 1j], ... [4, -1j, 1]]) >>> is_hermitian(A) False """ return np.array_equal(matrix, matrix.conjugate().T) def rayleigh_quotient(a: np.ndarray, v: np.ndarray) -> Any: """ Returns the Rayleigh quotient of a Hermitian matrix A and vector v. >>> import numpy as np >>> A = np.array([ ... [1, 2, 4], ... [2, 3, -1], ... [4, -1, 1] ... ]) >>> v = np.array([ ... [1], ... [2], ... [3] ... ]) >>> rayleigh_quotient(A, v) array([[3.]]) """ v_star = v.conjugate().T v_star_dot = v_star.dot(a) assert isinstance(v_star_dot, np.ndarray) return (v_star_dot.dot(v)) / (v_star.dot(v)) def tests() -> None: a = np.array([[2, 2 + 1j, 4], [2 - 1j, 3, 1j], [4, -1j, 1]]) v = np.array([[1], [2], [3]]) assert is_hermitian(a), f"{a} is not hermitian." print(rayleigh_quotient(a, v)) a = np.array([[1, 2, 4], [2, 3, -1], [4, -1, 1]]) assert is_hermitian(a), f"{a} is not hermitian." assert rayleigh_quotient(a, v) == float(3) if __name__ == "__main__": import doctest doctest.testmod() tests()
""" https://en.wikipedia.org/wiki/Rayleigh_quotient """ from typing import Any import numpy as np def is_hermitian(matrix: np.ndarray) -> bool: """ Checks if a matrix is Hermitian. >>> import numpy as np >>> A = np.array([ ... [2, 2+1j, 4], ... [2-1j, 3, 1j], ... [4, -1j, 1]]) >>> is_hermitian(A) True >>> A = np.array([ ... [2, 2+1j, 4+1j], ... [2-1j, 3, 1j], ... [4, -1j, 1]]) >>> is_hermitian(A) False """ return np.array_equal(matrix, matrix.conjugate().T) def rayleigh_quotient(a: np.ndarray, v: np.ndarray) -> Any: """ Returns the Rayleigh quotient of a Hermitian matrix A and vector v. >>> import numpy as np >>> A = np.array([ ... [1, 2, 4], ... [2, 3, -1], ... [4, -1, 1] ... ]) >>> v = np.array([ ... [1], ... [2], ... [3] ... ]) >>> rayleigh_quotient(A, v) array([[3.]]) """ v_star = v.conjugate().T v_star_dot = v_star.dot(a) assert isinstance(v_star_dot, np.ndarray) return (v_star_dot.dot(v)) / (v_star.dot(v)) def tests() -> None: a = np.array([[2, 2 + 1j, 4], [2 - 1j, 3, 1j], [4, -1j, 1]]) v = np.array([[1], [2], [3]]) assert is_hermitian(a), f"{a} is not hermitian." print(rayleigh_quotient(a, v)) a = np.array([[1, 2, 4], [2, 3, -1], [4, -1, 1]]) assert is_hermitian(a), f"{a} is not hermitian." assert rayleigh_quotient(a, v) == float(3) if __name__ == "__main__": import doctest doctest.testmod() tests()
-1
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
# Project Euler

Problems are taken from https://projecteuler.net/, the Project Euler website. [Problems are licensed under CC BY-NC-SA 4.0](https://projecteuler.net/copyright).

Project Euler is a series of challenging mathematical/computer programming problems that require more than just mathematical insights to solve. Project Euler is ideal for mathematicians who are learning to code.

The solutions will be checked by our [automated testing on GitHub Actions](https://github.com/TheAlgorithms/Python/actions) with the help of [this script](https://github.com/TheAlgorithms/Python/blob/master/scripts/validate_solutions.py). The efficiency of your code is also checked. You can view the top 10 slowest solutions on GitHub Actions logs (under `slowest 10 durations`) and open a pull request to improve those solutions.

## Solution Guidelines

Welcome to [TheAlgorithms/Python](https://github.com/TheAlgorithms/Python)! Before reading the solution guidelines, make sure you read the whole [Contributing Guidelines](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md) as they won't be repeated here. If you have any doubts about the guidelines, please feel free to [state them clearly in an issue](https://github.com/TheAlgorithms/Python/issues/new) or ask the community on [Gitter](https://gitter.im/TheAlgorithms). You can use the [template](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#solution-template) we have provided below as your starting point but be sure to read the [Coding Style](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#coding-style) part first.

### Coding Style

* Please maintain consistency in project directory and solution file names. Keep the following points in mind:
  * Create a new directory only for the problems which do not exist yet.
  * If you create a new directory, please create an empty `__init__.py` file inside it as well.
  * Please name the project **directory** as `problem_<problem_number>` where `problem_number` should be padded with 0s so as to occupy 3 digits. Example: `problem_001`, `problem_002`, `problem_067`, `problem_145`, and so on.

* Please provide a link to the problem and other references, if used, in the **module-level docstring**.

* All imports should come ***after*** the module-level docstring.

* You can have as many helper functions as you want but there should be one main function called `solution` which should satisfy the conditions as stated below:
  * It should contain positional argument(s) whose default value is the question input. Example: Please take a look at [Problem 1](https://projecteuler.net/problem=1) where the question is to *Find the sum of all the multiples of 3 or 5 below 1000.* In this case the main solution function will be `solution(limit: int = 1000)`.
  * When the `solution` function is called without any arguments like so: `solution()`, it should return the answer to the problem.

* Every function, which includes all the helper functions, if any, and the main solution function, should have `doctest` in the function docstring along with a brief statement mentioning what the function is about.
  * There should not be a `doctest` for testing the answer as that is done by our GitHub Actions build using this [script](https://github.com/TheAlgorithms/Python/blob/master/scripts/validate_solutions.py). Keeping in mind the above example of [Problem 1](https://projecteuler.net/problem=1):

```python
def solution(limit: int = 1000):
    """
    A brief statement mentioning what the function is about.
    You can have a detailed explanation about the solution method in the
    module-level docstring.

    >>> solution(1)
    ...
    >>> solution(16)
    ...
    >>> solution(100)
    ...
    """
```

### Solution Template

You can use the below template as your starting point but please read the [Coding Style](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#coding-style) first to understand how the template works.

Please change the name of the helper functions accordingly, rename the parameters to something descriptive, and replace the content within `[square brackets]` (including the brackets) with the appropriate content.

```python
"""
Project Euler Problem [problem number]: [link to the original problem]

... [Entire problem statement] ...

... [Solution explanation - Optional] ...

References [Optional]:
- [Wikipedia link to the topic]
- [Stackoverflow link]
...
"""

import module1
import module2
...


def helper1(arg1: [type hint], arg2: [type hint], ...) -> [Return type hint]:
    """
    A brief statement explaining what the function is about.

    ... A more elaborate description ... [Optional]

    ...
    [Doctest]
    ...
    """
    ...
    # calculations
    ...
    return


# You can have multiple helper functions but the solution function should be
# after all the helper functions
...


def solution(arg1: [type hint], arg2: [type hint], ...) -> [Return type hint]:
    """
    A brief statement mentioning what the function is about.

    You can have a detailed explanation about the solution in the
    module-level docstring.

    ...
    [Doctest as mentioned above]
    ...
    """
    ...
    # calculations
    ...
    return answer


if __name__ == "__main__":
    print(f"{solution() = }")
```
# Project Euler

Problems are taken from https://projecteuler.net/, the Project Euler website. [Problems are licensed under CC BY-NC-SA 4.0](https://projecteuler.net/copyright).

Project Euler is a series of challenging mathematical/computer programming problems that require more than just mathematical insights to solve. Project Euler is ideal for mathematicians who are learning to code.

The solutions will be checked by our [automated testing on GitHub Actions](https://github.com/TheAlgorithms/Python/actions) with the help of [this script](https://github.com/TheAlgorithms/Python/blob/master/scripts/validate_solutions.py). The efficiency of your code is also checked. You can view the top 10 slowest solutions on GitHub Actions logs (under `slowest 10 durations`) and open a pull request to improve those solutions.

## Solution Guidelines

Welcome to [TheAlgorithms/Python](https://github.com/TheAlgorithms/Python)! Before reading the solution guidelines, make sure you read the whole [Contributing Guidelines](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md) as they won't be repeated here. If you have any doubts about the guidelines, please feel free to [state them clearly in an issue](https://github.com/TheAlgorithms/Python/issues/new) or ask the community on [Gitter](https://gitter.im/TheAlgorithms). You can use the [template](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#solution-template) we have provided below as your starting point but be sure to read the [Coding Style](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#coding-style) part first.

### Coding Style

* Please maintain consistency in project directory and solution file names. Keep the following points in mind:
  * Create a new directory only for the problems which do not exist yet.
  * If you create a new directory, please create an empty `__init__.py` file inside it as well.
  * Please name the project **directory** as `problem_<problem_number>` where `problem_number` should be padded with 0s so as to occupy 3 digits. Example: `problem_001`, `problem_002`, `problem_067`, `problem_145`, and so on.

* Please provide a link to the problem and other references, if used, in the **module-level docstring**.

* All imports should come ***after*** the module-level docstring.

* You can have as many helper functions as you want but there should be one main function called `solution` which should satisfy the conditions as stated below:
  * It should contain positional argument(s) whose default value is the question input. Example: Please take a look at [Problem 1](https://projecteuler.net/problem=1) where the question is to *Find the sum of all the multiples of 3 or 5 below 1000.* In this case the main solution function will be `solution(limit: int = 1000)`.
  * When the `solution` function is called without any arguments like so: `solution()`, it should return the answer to the problem.

* Every function, which includes all the helper functions, if any, and the main solution function, should have `doctest` in the function docstring along with a brief statement mentioning what the function is about.
  * There should not be a `doctest` for testing the answer as that is done by our GitHub Actions build using this [script](https://github.com/TheAlgorithms/Python/blob/master/scripts/validate_solutions.py). Keeping in mind the above example of [Problem 1](https://projecteuler.net/problem=1):

```python
def solution(limit: int = 1000):
    """
    A brief statement mentioning what the function is about.
    You can have a detailed explanation about the solution method in the
    module-level docstring.

    >>> solution(1)
    ...
    >>> solution(16)
    ...
    >>> solution(100)
    ...
    """
```

### Solution Template

You can use the below template as your starting point but please read the [Coding Style](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#coding-style) first to understand how the template works.

Please change the name of the helper functions accordingly, rename the parameters to something descriptive, and replace the content within `[square brackets]` (including the brackets) with the appropriate content.

```python
"""
Project Euler Problem [problem number]: [link to the original problem]

... [Entire problem statement] ...

... [Solution explanation - Optional] ...

References [Optional]:
- [Wikipedia link to the topic]
- [Stackoverflow link]
...
"""

import module1
import module2
...


def helper1(arg1: [type hint], arg2: [type hint], ...) -> [Return type hint]:
    """
    A brief statement explaining what the function is about.

    ... A more elaborate description ... [Optional]

    ...
    [Doctest]
    ...
    """
    ...
    # calculations
    ...
    return


# You can have multiple helper functions but the solution function should be
# after all the helper functions
...


def solution(arg1: [type hint], arg2: [type hint], ...) -> [Return type hint]:
    """
    A brief statement mentioning what the function is about.

    You can have a detailed explanation about the solution in the
    module-level docstring.

    ...
    [Doctest as mentioned above]
    ...
    """
    ...
    # calculations
    ...
    return answer


if __name__ == "__main__":
    print(f"{solution() = }")
```
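For concreteness, here is how the template might be filled in for the [Problem 1](https://projecteuler.net/problem=1) example used throughout this guide. This is only a sketch and is not necessarily identical to the repository's actual `project_euler/problem_001` solution:

```python
"""
Project Euler Problem 1: https://projecteuler.net/problem=1

Find the sum of all the multiples of 3 or 5 below 1000.
"""


def solution(limit: int = 1000) -> int:
    """
    Return the sum of all multiples of 3 or 5 below limit.

    >>> solution(10)
    23
    >>> solution(4)
    3
    """
    return sum(n for n in range(limit) if n % 3 == 0 or n % 5 == 0)


if __name__ == "__main__":
    print(f"{solution() = }")
```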
-1
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" The convex hull problem is problem of finding all the vertices of convex polygon, P of a set of points in a plane such that all the points are either on the vertices of P or inside P. TH convex hull problem has several applications in geometrical problems, computer graphics and game development. Two algorithms have been implemented for the convex hull problem here. 1. A brute-force algorithm which runs in O(n^3) 2. A divide-and-conquer algorithm which runs in O(n log(n)) There are other several other algorithms for the convex hull problem which have not been implemented here, yet. """ from __future__ import annotations from collections.abc import Iterable class Point: """ Defines a 2-d point for use by all convex-hull algorithms. Parameters ---------- x: an int or a float, the x-coordinate of the 2-d point y: an int or a float, the y-coordinate of the 2-d point Examples -------- >>> Point(1, 2) (1.0, 2.0) >>> Point("1", "2") (1.0, 2.0) >>> Point(1, 2) > Point(0, 1) True >>> Point(1, 1) == Point(1, 1) True >>> Point(-0.5, 1) == Point(0.5, 1) False >>> Point("pi", "e") Traceback (most recent call last): ... ValueError: could not convert string to float: 'pi' """ def __init__(self, x, y): self.x, self.y = float(x), float(y) def __eq__(self, other): return self.x == other.x and self.y == other.y def __ne__(self, other): return not self == other def __gt__(self, other): if self.x > other.x: return True elif self.x == other.x: return self.y > other.y return False def __lt__(self, other): return not self > other def __ge__(self, other): if self.x > other.x: return True elif self.x == other.x: return self.y >= other.y return False def __le__(self, other): if self.x < other.x: return True elif self.x == other.x: return self.y <= other.y return False def __repr__(self): return f"({self.x}, {self.y})" def __hash__(self): return hash(self.x) def _construct_points( list_of_tuples: list[Point] | list[list[float]] | Iterable[list[float]], ) -> list[Point]: """ constructs a list of points from an array-like object of numbers Arguments --------- list_of_tuples: array-like object of type numbers. Acceptable types so far are lists, tuples and sets. Returns -------- points: a list where each item is of type Point. This contains only objects which can be converted into a Point. Examples ------- >>> _construct_points([[1, 1], [2, -1], [0.3, 4]]) [(1.0, 1.0), (2.0, -1.0), (0.3, 4.0)] >>> _construct_points([1, 2]) Ignoring deformed point 1. All points must have at least 2 coordinates. Ignoring deformed point 2. All points must have at least 2 coordinates. [] >>> _construct_points([]) [] >>> _construct_points(None) [] """ points: list[Point] = [] if list_of_tuples: for p in list_of_tuples: if isinstance(p, Point): points.append(p) else: try: points.append(Point(p[0], p[1])) except (IndexError, TypeError): print( f"Ignoring deformed point {p}. All points" " must have at least 2 coordinates." ) return points def _validate_input(points: list[Point] | list[list[float]]) -> list[Point]: """ validates an input instance before a convex-hull algorithms uses it Parameters --------- points: array-like, the 2d points to validate before using with a convex-hull algorithm. The elements of points must be either lists, tuples or Points. Returns ------- points: array_like, an iterable of all well-defined Points constructed passed in. Exception --------- ValueError: if points is empty or None, or if a wrong data structure like a scalar is passed TypeError: if an iterable but non-indexable object (eg. dictionary) is passed. 
The exception to this a set which we'll convert to a list before using Examples ------- >>> _validate_input([[1, 2]]) [(1.0, 2.0)] >>> _validate_input([(1, 2)]) [(1.0, 2.0)] >>> _validate_input([Point(2, 1), Point(-1, 2)]) [(2.0, 1.0), (-1.0, 2.0)] >>> _validate_input([]) Traceback (most recent call last): ... ValueError: Expecting a list of points but got [] >>> _validate_input(1) Traceback (most recent call last): ... ValueError: Expecting an iterable object but got an non-iterable type 1 """ if not hasattr(points, "__iter__"): raise ValueError( f"Expecting an iterable object but got an non-iterable type {points}" ) if not points: raise ValueError(f"Expecting a list of points but got {points}") return _construct_points(points) def _det(a: Point, b: Point, c: Point) -> float: """ Computes the sign perpendicular distance of a 2d point c from a line segment ab. The sign indicates the direction of c relative to ab. A Positive value means c is above ab (to the left), while a negative value means c is below ab (to the right). 0 means all three points are on a straight line. As a side note, 0.5 * abs|det| is the area of triangle abc Parameters ---------- a: point, the point on the left end of line segment ab b: point, the point on the right end of line segment ab c: point, the point for which the direction and location is desired. Returns -------- det: float, abs(det) is the distance of c from ab. The sign indicates which side of line segment ab c is. det is computed as (a_xb_y + c_xa_y + b_xc_y) - (a_yb_x + c_ya_x + b_yc_x) Examples ---------- >>> _det(Point(1, 1), Point(1, 2), Point(1, 5)) 0.0 >>> _det(Point(0, 0), Point(10, 0), Point(0, 10)) 100.0 >>> _det(Point(0, 0), Point(10, 0), Point(0, -10)) -100.0 """ det = (a.x * b.y + b.x * c.y + c.x * a.y) - (a.y * b.x + b.y * c.x + c.y * a.x) return det def convex_hull_bf(points: list[Point]) -> list[Point]: """ Constructs the convex hull of a set of 2D points using a brute force algorithm. The algorithm basically considers all combinations of points (i, j) and uses the definition of convexity to determine whether (i, j) is part of the convex hull or not. (i, j) is part of the convex hull if and only iff there are no points on both sides of the line segment connecting the ij, and there is no point k such that k is on either end of the ij. Runtime: O(n^3) - definitely horrible Parameters --------- points: array-like of object of Points, lists or tuples. The set of 2d points for which the convex-hull is needed Returns ------ convex_set: list, the convex-hull of points sorted in non-decreasing order. See Also -------- convex_hull_recursive, Examples --------- >>> convex_hull_bf([[0, 0], [1, 0], [10, 1]]) [(0.0, 0.0), (1.0, 0.0), (10.0, 1.0)] >>> convex_hull_bf([[0, 0], [1, 0], [10, 0]]) [(0.0, 0.0), (10.0, 0.0)] >>> convex_hull_bf([[-1, 1],[-1, -1], [0, 0], [0.5, 0.5], [1, -1], [1, 1], ... [-0.75, 1]]) [(-1.0, -1.0), (-1.0, 1.0), (1.0, -1.0), (1.0, 1.0)] >>> convex_hull_bf([(0, 3), (2, 2), (1, 1), (2, 1), (3, 0), (0, 0), (3, 3), ... 
(2, -1), (2, -4), (1, -3)]) [(0.0, 0.0), (0.0, 3.0), (1.0, -3.0), (2.0, -4.0), (3.0, 0.0), (3.0, 3.0)] """ points = sorted(_validate_input(points)) n = len(points) convex_set = set() for i in range(n - 1): for j in range(i + 1, n): points_left_of_ij = points_right_of_ij = False ij_part_of_convex_hull = True for k in range(n): if k != i and k != j: det_k = _det(points[i], points[j], points[k]) if det_k > 0: points_left_of_ij = True elif det_k < 0: points_right_of_ij = True else: # point[i], point[j], point[k] all lie on a straight line # if point[k] is to the left of point[i] or it's to the # right of point[j], then point[i], point[j] cannot be # part of the convex hull of A if points[k] < points[i] or points[k] > points[j]: ij_part_of_convex_hull = False break if points_left_of_ij and points_right_of_ij: ij_part_of_convex_hull = False break if ij_part_of_convex_hull: convex_set.update([points[i], points[j]]) return sorted(convex_set) def convex_hull_recursive(points: list[Point]) -> list[Point]: """ Constructs the convex hull of a set of 2D points using a divide-and-conquer strategy The algorithm exploits the geometric properties of the problem by repeatedly partitioning the set of points into smaller hulls, and finding the convex hull of these smaller hulls. The union of the convex hull from smaller hulls is the solution to the convex hull of the larger problem. Parameter --------- points: array-like of object of Points, lists or tuples. The set of 2d points for which the convex-hull is needed Runtime: O(n log n) Returns ------- convex_set: list, the convex-hull of points sorted in non-decreasing order. Examples --------- >>> convex_hull_recursive([[0, 0], [1, 0], [10, 1]]) [(0.0, 0.0), (1.0, 0.0), (10.0, 1.0)] >>> convex_hull_recursive([[0, 0], [1, 0], [10, 0]]) [(0.0, 0.0), (10.0, 0.0)] >>> convex_hull_recursive([[-1, 1],[-1, -1], [0, 0], [0.5, 0.5], [1, -1], [1, 1], ... [-0.75, 1]]) [(-1.0, -1.0), (-1.0, 1.0), (1.0, -1.0), (1.0, 1.0)] >>> convex_hull_recursive([(0, 3), (2, 2), (1, 1), (2, 1), (3, 0), (0, 0), (3, 3), ... (2, -1), (2, -4), (1, -3)]) [(0.0, 0.0), (0.0, 3.0), (1.0, -3.0), (2.0, -4.0), (3.0, 0.0), (3.0, 3.0)] """ points = sorted(_validate_input(points)) n = len(points) # divide all the points into an upper hull and a lower hull # the left most point and the right most point are definitely # members of the convex hull by definition. # use these two anchors to divide all the points into two hulls, # an upper hull and a lower hull. 
# all points to the left (above) the line joining the extreme points belong to the # upper hull # all points to the right (below) the line joining the extreme points below to the # lower hull # ignore all points on the line joining the extreme points since they cannot be # part of the convex hull left_most_point = points[0] right_most_point = points[n - 1] convex_set = {left_most_point, right_most_point} upper_hull = [] lower_hull = [] for i in range(1, n - 1): det = _det(left_most_point, right_most_point, points[i]) if det > 0: upper_hull.append(points[i]) elif det < 0: lower_hull.append(points[i]) _construct_hull(upper_hull, left_most_point, right_most_point, convex_set) _construct_hull(lower_hull, right_most_point, left_most_point, convex_set) return sorted(convex_set) def _construct_hull( points: list[Point], left: Point, right: Point, convex_set: set[Point] ) -> None: """ Parameters --------- points: list or None, the hull of points from which to choose the next convex-hull point left: Point, the point to the left of line segment joining left and right right: The point to the right of the line segment joining left and right convex_set: set, the current convex-hull. The state of convex-set gets updated by this function Note ---- For the line segment 'ab', 'a' is on the left and 'b' on the right. but the reverse is true for the line segment 'ba'. Returns ------- Nothing, only updates the state of convex-set """ if points: extreme_point = None extreme_point_distance = float("-inf") candidate_points = [] for p in points: det = _det(left, right, p) if det > 0: candidate_points.append(p) if det > extreme_point_distance: extreme_point_distance = det extreme_point = p if extreme_point: _construct_hull(candidate_points, left, extreme_point, convex_set) convex_set.add(extreme_point) _construct_hull(candidate_points, extreme_point, right, convex_set) def convex_hull_melkman(points: list[Point]) -> list[Point]: """ Constructs the convex hull of a set of 2D points using the melkman algorithm. The algorithm works by iteratively inserting points of a simple polygonal chain (meaning that no line segments between two consecutive points cross each other). Sorting the points yields such a polygonal chain. For a detailed description, see http://cgm.cs.mcgill.ca/~athens/cs601/Melkman.html Runtime: O(n log n) - O(n) if points are already sorted in the input Parameters --------- points: array-like of object of Points, lists or tuples. The set of 2d points for which the convex-hull is needed Returns ------ convex_set: list, the convex-hull of points sorted in non-decreasing order. See Also -------- Examples --------- >>> convex_hull_melkman([[0, 0], [1, 0], [10, 1]]) [(0.0, 0.0), (1.0, 0.0), (10.0, 1.0)] >>> convex_hull_melkman([[0, 0], [1, 0], [10, 0]]) [(0.0, 0.0), (10.0, 0.0)] >>> convex_hull_melkman([[-1, 1],[-1, -1], [0, 0], [0.5, 0.5], [1, -1], [1, 1], ... [-0.75, 1]]) [(-1.0, -1.0), (-1.0, 1.0), (1.0, -1.0), (1.0, 1.0)] >>> convex_hull_melkman([(0, 3), (2, 2), (1, 1), (2, 1), (3, 0), (0, 0), (3, 3), ... 
(2, -1), (2, -4), (1, -3)]) [(0.0, 0.0), (0.0, 3.0), (1.0, -3.0), (2.0, -4.0), (3.0, 0.0), (3.0, 3.0)] """ points = sorted(_validate_input(points)) n = len(points) convex_hull = points[:2] for i in range(2, n): det = _det(convex_hull[1], convex_hull[0], points[i]) if det > 0: convex_hull.insert(0, points[i]) break elif det < 0: convex_hull.append(points[i]) break else: convex_hull[1] = points[i] i += 1 for j in range(i, n): if ( _det(convex_hull[0], convex_hull[-1], points[j]) > 0 and _det(convex_hull[-1], convex_hull[0], points[1]) < 0 ): # The point lies within the convex hull continue convex_hull.insert(0, points[j]) convex_hull.append(points[j]) while _det(convex_hull[0], convex_hull[1], convex_hull[2]) >= 0: del convex_hull[1] while _det(convex_hull[-1], convex_hull[-2], convex_hull[-3]) <= 0: del convex_hull[-2] # `convex_hull` is contains the convex hull in circular order return sorted(convex_hull[1:] if len(convex_hull) > 3 else convex_hull) def main(): points = [ (0, 3), (2, 2), (1, 1), (2, 1), (3, 0), (0, 0), (3, 3), (2, -1), (2, -4), (1, -3), ] # the convex set of points is # [(0, 0), (0, 3), (1, -3), (2, -4), (3, 0), (3, 3)] results_bf = convex_hull_bf(points) results_recursive = convex_hull_recursive(points) assert results_bf == results_recursive results_melkman = convex_hull_melkman(points) assert results_bf == results_melkman print(results_bf) if __name__ == "__main__": main()
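Beyond the fixed points exercised in main(), the three implementations can be cross-checked against each other on random input; if any assertion fires, it pinpoints which implementation disagrees. A minimal sketch, not part of the original module:

```python
import random

random.seed(42)  # reproducible input
points = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(25)]

hull = convex_hull_bf(points)
assert hull == convex_hull_recursive(points)
assert hull == convex_hull_melkman(points)
print(hull)
```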
""" The convex hull problem is problem of finding all the vertices of convex polygon, P of a set of points in a plane such that all the points are either on the vertices of P or inside P. TH convex hull problem has several applications in geometrical problems, computer graphics and game development. Two algorithms have been implemented for the convex hull problem here. 1. A brute-force algorithm which runs in O(n^3) 2. A divide-and-conquer algorithm which runs in O(n log(n)) There are other several other algorithms for the convex hull problem which have not been implemented here, yet. """ from __future__ import annotations from collections.abc import Iterable class Point: """ Defines a 2-d point for use by all convex-hull algorithms. Parameters ---------- x: an int or a float, the x-coordinate of the 2-d point y: an int or a float, the y-coordinate of the 2-d point Examples -------- >>> Point(1, 2) (1.0, 2.0) >>> Point("1", "2") (1.0, 2.0) >>> Point(1, 2) > Point(0, 1) True >>> Point(1, 1) == Point(1, 1) True >>> Point(-0.5, 1) == Point(0.5, 1) False >>> Point("pi", "e") Traceback (most recent call last): ... ValueError: could not convert string to float: 'pi' """ def __init__(self, x, y): self.x, self.y = float(x), float(y) def __eq__(self, other): return self.x == other.x and self.y == other.y def __ne__(self, other): return not self == other def __gt__(self, other): if self.x > other.x: return True elif self.x == other.x: return self.y > other.y return False def __lt__(self, other): return not self > other def __ge__(self, other): if self.x > other.x: return True elif self.x == other.x: return self.y >= other.y return False def __le__(self, other): if self.x < other.x: return True elif self.x == other.x: return self.y <= other.y return False def __repr__(self): return f"({self.x}, {self.y})" def __hash__(self): return hash(self.x) def _construct_points( list_of_tuples: list[Point] | list[list[float]] | Iterable[list[float]], ) -> list[Point]: """ constructs a list of points from an array-like object of numbers Arguments --------- list_of_tuples: array-like object of type numbers. Acceptable types so far are lists, tuples and sets. Returns -------- points: a list where each item is of type Point. This contains only objects which can be converted into a Point. Examples ------- >>> _construct_points([[1, 1], [2, -1], [0.3, 4]]) [(1.0, 1.0), (2.0, -1.0), (0.3, 4.0)] >>> _construct_points([1, 2]) Ignoring deformed point 1. All points must have at least 2 coordinates. Ignoring deformed point 2. All points must have at least 2 coordinates. [] >>> _construct_points([]) [] >>> _construct_points(None) [] """ points: list[Point] = [] if list_of_tuples: for p in list_of_tuples: if isinstance(p, Point): points.append(p) else: try: points.append(Point(p[0], p[1])) except (IndexError, TypeError): print( f"Ignoring deformed point {p}. All points" " must have at least 2 coordinates." ) return points def _validate_input(points: list[Point] | list[list[float]]) -> list[Point]: """ validates an input instance before a convex-hull algorithms uses it Parameters --------- points: array-like, the 2d points to validate before using with a convex-hull algorithm. The elements of points must be either lists, tuples or Points. Returns ------- points: array_like, an iterable of all well-defined Points constructed passed in. Exception --------- ValueError: if points is empty or None, or if a wrong data structure like a scalar is passed TypeError: if an iterable but non-indexable object (eg. dictionary) is passed. 
The exception to this is a set, which we'll convert to a list before using Examples ------- >>> _validate_input([[1, 2]]) [(1.0, 2.0)] >>> _validate_input([(1, 2)]) [(1.0, 2.0)] >>> _validate_input([Point(2, 1), Point(-1, 2)]) [(2.0, 1.0), (-1.0, 2.0)] >>> _validate_input([]) Traceback (most recent call last): ... ValueError: Expecting a list of points but got [] >>> _validate_input(1) Traceback (most recent call last): ... ValueError: Expecting an iterable object but got a non-iterable type 1 """ if not hasattr(points, "__iter__"): raise ValueError( f"Expecting an iterable object but got a non-iterable type {points}" ) if not points: raise ValueError(f"Expecting a list of points but got {points}") return _construct_points(points) def _det(a: Point, b: Point, c: Point) -> float: """ Computes the signed perpendicular distance of a 2d point c from a line segment ab. The sign indicates the direction of c relative to ab. A positive value means c is above ab (to the left), while a negative value means c is below ab (to the right). 0 means all three points are on a straight line. As a side note, 0.5 * abs(det) is the area of triangle abc Parameters ---------- a: point, the point on the left end of line segment ab b: point, the point on the right end of line segment ab c: point, the point for which the direction and location is desired. Returns -------- det: float, abs(det) is the distance of c from ab. The sign indicates which side of line segment ab c is. det is computed as (a_x * b_y + b_x * c_y + c_x * a_y) - (a_y * b_x + b_y * c_x + c_y * a_x) Examples ---------- >>> _det(Point(1, 1), Point(1, 2), Point(1, 5)) 0.0 >>> _det(Point(0, 0), Point(10, 0), Point(0, 10)) 100.0 >>> _det(Point(0, 0), Point(10, 0), Point(0, -10)) -100.0 """ det = (a.x * b.y + b.x * c.y + c.x * a.y) - (a.y * b.x + b.y * c.x + c.y * a.x) return det def convex_hull_bf(points: list[Point]) -> list[Point]: """ Constructs the convex hull of a set of 2D points using a brute force algorithm. The algorithm basically considers all combinations of points (i, j) and uses the definition of convexity to determine whether (i, j) is part of the convex hull or not. (i, j) is part of the convex hull if and only if there are no points on both sides of the line segment connecting i and j, and there is no point k collinear with i and j that lies outside the segment ij. Runtime: O(n^3) - definitely horrible Parameters --------- points: array-like of object of Points, lists or tuples. The set of 2d points for which the convex-hull is needed Returns ------ convex_set: list, the convex-hull of points sorted in non-decreasing order. See Also -------- convex_hull_recursive, Examples --------- >>> convex_hull_bf([[0, 0], [1, 0], [10, 1]]) [(0.0, 0.0), (1.0, 0.0), (10.0, 1.0)] >>> convex_hull_bf([[0, 0], [1, 0], [10, 0]]) [(0.0, 0.0), (10.0, 0.0)] >>> convex_hull_bf([[-1, 1],[-1, -1], [0, 0], [0.5, 0.5], [1, -1], [1, 1], ... [-0.75, 1]]) [(-1.0, -1.0), (-1.0, 1.0), (1.0, -1.0), (1.0, 1.0)] >>> convex_hull_bf([(0, 3), (2, 2), (1, 1), (2, 1), (3, 0), (0, 0), (3, 3), ... 
(2, -1), (2, -4), (1, -3)]) [(0.0, 0.0), (0.0, 3.0), (1.0, -3.0), (2.0, -4.0), (3.0, 0.0), (3.0, 3.0)] """ points = sorted(_validate_input(points)) n = len(points) convex_set = set() for i in range(n - 1): for j in range(i + 1, n): points_left_of_ij = points_right_of_ij = False ij_part_of_convex_hull = True for k in range(n): if k != i and k != j: det_k = _det(points[i], points[j], points[k]) if det_k > 0: points_left_of_ij = True elif det_k < 0: points_right_of_ij = True else: # point[i], point[j], point[k] all lie on a straight line # if point[k] is to the left of point[i] or it's to the # right of point[j], then point[i], point[j] cannot be # part of the convex hull of A if points[k] < points[i] or points[k] > points[j]: ij_part_of_convex_hull = False break if points_left_of_ij and points_right_of_ij: ij_part_of_convex_hull = False break if ij_part_of_convex_hull: convex_set.update([points[i], points[j]]) return sorted(convex_set) def convex_hull_recursive(points: list[Point]) -> list[Point]: """ Constructs the convex hull of a set of 2D points using a divide-and-conquer strategy. The algorithm exploits the geometric properties of the problem by repeatedly partitioning the set of points into smaller hulls, and finding the convex hull of these smaller hulls. The union of the convex hull from smaller hulls is the solution to the convex hull of the larger problem. Parameters --------- points: array-like of object of Points, lists or tuples. The set of 2d points for which the convex-hull is needed Runtime: O(n log n) Returns ------- convex_set: list, the convex-hull of points sorted in non-decreasing order. Examples --------- >>> convex_hull_recursive([[0, 0], [1, 0], [10, 1]]) [(0.0, 0.0), (1.0, 0.0), (10.0, 1.0)] >>> convex_hull_recursive([[0, 0], [1, 0], [10, 0]]) [(0.0, 0.0), (10.0, 0.0)] >>> convex_hull_recursive([[-1, 1],[-1, -1], [0, 0], [0.5, 0.5], [1, -1], [1, 1], ... [-0.75, 1]]) [(-1.0, -1.0), (-1.0, 1.0), (1.0, -1.0), (1.0, 1.0)] >>> convex_hull_recursive([(0, 3), (2, 2), (1, 1), (2, 1), (3, 0), (0, 0), (3, 3), ... (2, -1), (2, -4), (1, -3)]) [(0.0, 0.0), (0.0, 3.0), (1.0, -3.0), (2.0, -4.0), (3.0, 0.0), (3.0, 3.0)] """ points = sorted(_validate_input(points)) n = len(points) # divide all the points into an upper hull and a lower hull # the left most point and the right most point are definitely # members of the convex hull by definition. # use these two anchors to divide all the points into two hulls, # an upper hull and a lower hull. 
# all points to the left (above) the line joining the extreme points belong to the # upper hull # all points to the right (below) the line joining the extreme points belong to the # lower hull # ignore all points on the line joining the extreme points since they cannot be # part of the convex hull left_most_point = points[0] right_most_point = points[n - 1] convex_set = {left_most_point, right_most_point} upper_hull = [] lower_hull = [] for i in range(1, n - 1): det = _det(left_most_point, right_most_point, points[i]) if det > 0: upper_hull.append(points[i]) elif det < 0: lower_hull.append(points[i]) _construct_hull(upper_hull, left_most_point, right_most_point, convex_set) _construct_hull(lower_hull, right_most_point, left_most_point, convex_set) return sorted(convex_set) def _construct_hull( points: list[Point], left: Point, right: Point, convex_set: set[Point] ) -> None: """ Parameters --------- points: list or None, the hull of points from which to choose the next convex-hull point left: Point, the point to the left of line segment joining left and right right: The point to the right of the line segment joining left and right convex_set: set, the current convex-hull. The state of convex-set gets updated by this function Note ---- For the line segment 'ab', 'a' is on the left and 'b' on the right, but the reverse is true for the line segment 'ba'. Returns ------- Nothing, only updates the state of convex-set """ if points: extreme_point = None extreme_point_distance = float("-inf") candidate_points = [] for p in points: det = _det(left, right, p) if det > 0: candidate_points.append(p) if det > extreme_point_distance: extreme_point_distance = det extreme_point = p if extreme_point: _construct_hull(candidate_points, left, extreme_point, convex_set) convex_set.add(extreme_point) _construct_hull(candidate_points, extreme_point, right, convex_set) def convex_hull_melkman(points: list[Point]) -> list[Point]: """ Constructs the convex hull of a set of 2D points using the Melkman algorithm. The algorithm works by iteratively inserting points of a simple polygonal chain (meaning that no line segments between two consecutive points cross each other). Sorting the points yields such a polygonal chain. For a detailed description, see http://cgm.cs.mcgill.ca/~athens/cs601/Melkman.html Runtime: O(n log n) - O(n) if points are already sorted in the input Parameters --------- points: array-like of object of Points, lists or tuples. The set of 2d points for which the convex-hull is needed Returns ------ convex_set: list, the convex-hull of points sorted in non-decreasing order. See Also -------- Examples --------- >>> convex_hull_melkman([[0, 0], [1, 0], [10, 1]]) [(0.0, 0.0), (1.0, 0.0), (10.0, 1.0)] >>> convex_hull_melkman([[0, 0], [1, 0], [10, 0]]) [(0.0, 0.0), (10.0, 0.0)] >>> convex_hull_melkman([[-1, 1],[-1, -1], [0, 0], [0.5, 0.5], [1, -1], [1, 1], ... [-0.75, 1]]) [(-1.0, -1.0), (-1.0, 1.0), (1.0, -1.0), (1.0, 1.0)] >>> convex_hull_melkman([(0, 3), (2, 2), (1, 1), (2, 1), (3, 0), (0, 0), (3, 3), ... 
(2, -1), (2, -4), (1, -3)]) [(0.0, 0.0), (0.0, 3.0), (1.0, -3.0), (2.0, -4.0), (3.0, 0.0), (3.0, 3.0)] """ points = sorted(_validate_input(points)) n = len(points) convex_hull = points[:2] for i in range(2, n): det = _det(convex_hull[1], convex_hull[0], points[i]) if det > 0: convex_hull.insert(0, points[i]) break elif det < 0: convex_hull.append(points[i]) break else: convex_hull[1] = points[i] i += 1 for j in range(i, n): if ( _det(convex_hull[0], convex_hull[-1], points[j]) > 0 and _det(convex_hull[-1], convex_hull[0], points[1]) < 0 ): # The point lies within the convex hull continue convex_hull.insert(0, points[j]) convex_hull.append(points[j]) while _det(convex_hull[0], convex_hull[1], convex_hull[2]) >= 0: del convex_hull[1] while _det(convex_hull[-1], convex_hull[-2], convex_hull[-3]) <= 0: del convex_hull[-2] # `convex_hull` contains the convex hull in circular order return sorted(convex_hull[1:] if len(convex_hull) > 3 else convex_hull) def main(): points = [ (0, 3), (2, 2), (1, 1), (2, 1), (3, 0), (0, 0), (3, 3), (2, -1), (2, -4), (1, -3), ] # the convex set of points is # [(0, 0), (0, 3), (1, -3), (2, -4), (3, 0), (3, 3)] results_bf = convex_hull_bf(points) results_recursive = convex_hull_recursive(points) assert results_bf == results_recursive results_melkman = convex_hull_melkman(points) assert results_bf == results_melkman print(results_bf) if __name__ == "__main__": main()
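All three hull implementations above hinge on the same cross-product orientation test. Below is a minimal, self-contained sketch of that test; the standalone det function mirrors the module's _det and is written here purely for illustration.

# Twice the signed area of triangle abc: > 0 means c is left of ab,
# < 0 means right, and 0 means the three points are collinear.
def det(a, b, c):
    return (a[0] * b[1] + b[0] * c[1] + c[0] * a[1]) - (
        a[1] * b[0] + b[1] * c[0] + c[1] * a[0]
    )

assert det((0, 0), (10, 0), (0, 10)) > 0   # c left of ab
assert det((0, 0), (10, 0), (0, -10)) < 0  # c right of ab
assert det((1, 1), (1, 2), (1, 5)) == 0    # collinear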
-1
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" Approximates the area under the curve using the trapezoidal rule """ from __future__ import annotations from collections.abc import Callable def trapezoidal_area( fnc: Callable[[int | float], int | float], x_start: int | float, x_end: int | float, steps: int = 100, ) -> float: """ Treats curve as a collection of linear lines and sums the area of the trapezium shape they form :param fnc: a function which defines a curve :param x_start: left end point to indicate the start of line segment :param x_end: right end point to indicate end of line segment :param steps: an accuracy gauge; more steps increases the accuracy :return: a float representing the length of the curve >>> def f(x): ... return 5 >>> '%.3f' % trapezoidal_area(f, 12.0, 14.0, 1000) '10.000' >>> def f(x): ... return 9*x**2 >>> '%.4f' % trapezoidal_area(f, -4.0, 0, 10000) '192.0000' >>> '%.4f' % trapezoidal_area(f, -4.0, 4.0, 10000) '384.0000' """ x1 = x_start fx1 = fnc(x_start) area = 0.0 for _ in range(steps): # Approximates small segments of curve as linear and solve # for trapezoidal area x2 = (x_end - x_start) / steps + x1 fx2 = fnc(x2) area += abs(fx2 + fx1) * (x2 - x1) / 2 # Increment step x1 = x2 fx1 = fx2 return area if __name__ == "__main__": def f(x): return x**3 print("f(x) = x^3") print("The area between the curve, x = -10, x = 10 and the x axis is:") i = 10 while i <= 100000: area = trapezoidal_area(f, -5, 5, i) print(f"with {i} steps: {area}") i *= 10
""" Approximates the area under the curve using the trapezoidal rule """ from __future__ import annotations from collections.abc import Callable def trapezoidal_area( fnc: Callable[[int | float], int | float], x_start: int | float, x_end: int | float, steps: int = 100, ) -> float: """ Treats curve as a collection of linear lines and sums the area of the trapezium shape they form :param fnc: a function which defines a curve :param x_start: left end point to indicate the start of line segment :param x_end: right end point to indicate end of line segment :param steps: an accuracy gauge; more steps increases the accuracy :return: a float representing the length of the curve >>> def f(x): ... return 5 >>> '%.3f' % trapezoidal_area(f, 12.0, 14.0, 1000) '10.000' >>> def f(x): ... return 9*x**2 >>> '%.4f' % trapezoidal_area(f, -4.0, 0, 10000) '192.0000' >>> '%.4f' % trapezoidal_area(f, -4.0, 4.0, 10000) '384.0000' """ x1 = x_start fx1 = fnc(x_start) area = 0.0 for _ in range(steps): # Approximates small segments of curve as linear and solve # for trapezoidal area x2 = (x_end - x_start) / steps + x1 fx2 = fnc(x2) area += abs(fx2 + fx1) * (x2 - x1) / 2 # Increment step x1 = x2 fx1 = fx2 return area if __name__ == "__main__": def f(x): return x**3 print("f(x) = x^3") print("The area between the curve, x = -10, x = 10 and the x axis is:") i = 10 while i <= 100000: area = trapezoidal_area(f, -5, 5, i) print(f"with {i} steps: {area}") i *= 10
-1
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" Signum function -- https://en.wikipedia.org/wiki/Sign_function """ def signum(num: float) -> int: """ Applies signum function on the number >>> signum(-10) -1 >>> signum(10) 1 >>> signum(0) 0 """ if num < 0: return -1 return 1 if num else 0 def test_signum() -> None: """ Tests the signum function """ assert signum(5) == 1 assert signum(-5) == -1 assert signum(0) == 0 if __name__ == "__main__": print(signum(12)) print(signum(-12)) print(signum(0))
""" Signum function -- https://en.wikipedia.org/wiki/Sign_function """ def signum(num: float) -> int: """ Applies signum function on the number >>> signum(-10) -1 >>> signum(10) 1 >>> signum(0) 0 """ if num < 0: return -1 return 1 if num else 0 def test_signum() -> None: """ Tests the signum function """ assert signum(5) == 1 assert signum(-5) == -1 assert signum(0) == 0 if __name__ == "__main__": print(signum(12)) print(signum(-12)) print(signum(0))
-1
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
"""Queue represented by a pseudo stack (represented by a list with pop and append)""" from typing import Any class Queue: def __init__(self): self.stack = [] self.length = 0 def __str__(self): printed = "<" + str(self.stack)[1:-1] + ">" return printed """Enqueues {@code item} @param item item to enqueue""" def put(self, item: Any) -> None: self.stack.append(item) self.length = self.length + 1 """Dequeues {@code item} @requirement: |self.length| > 0 @return dequeued item that was dequeued""" def get(self) -> Any: self.rotate(1) dequeued = self.stack[self.length - 1] self.stack = self.stack[:-1] self.rotate(self.length - 1) self.length = self.length - 1 return dequeued """Rotates the queue {@code rotation} times @param rotation number of times to rotate queue""" def rotate(self, rotation: int) -> None: for _ in range(rotation): temp = self.stack[0] self.stack = self.stack[1:] self.put(temp) self.length = self.length - 1 """Reports item at the front of self @return item at front of self.stack""" def front(self) -> Any: front = self.get() self.put(front) self.rotate(self.length - 1) return front """Returns the length of this.stack""" def size(self) -> int: return self.length
"""Queue represented by a pseudo stack (represented by a list with pop and append)""" from typing import Any class Queue: def __init__(self): self.stack = [] self.length = 0 def __str__(self): printed = "<" + str(self.stack)[1:-1] + ">" return printed """Enqueues {@code item} @param item item to enqueue""" def put(self, item: Any) -> None: self.stack.append(item) self.length = self.length + 1 """Dequeues {@code item} @requirement: |self.length| > 0 @return dequeued item that was dequeued""" def get(self) -> Any: self.rotate(1) dequeued = self.stack[self.length - 1] self.stack = self.stack[:-1] self.rotate(self.length - 1) self.length = self.length - 1 return dequeued """Rotates the queue {@code rotation} times @param rotation number of times to rotate queue""" def rotate(self, rotation: int) -> None: for _ in range(rotation): temp = self.stack[0] self.stack = self.stack[1:] self.put(temp) self.length = self.length - 1 """Reports item at the front of self @return item at front of self.stack""" def front(self) -> Any: front = self.get() self.put(front) self.rotate(self.length - 1) return front """Returns the length of this.stack""" def size(self) -> int: return self.length
-1
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
def is_contains_unique_chars(input_str: str) -> bool: """ Check if all characters in the string are unique. >>> is_contains_unique_chars("I_love.py") True >>> is_contains_unique_chars("I don't love Python") False Time complexity: O(n) Space complexity: O(1) (at most 19320 bytes, since there are 144697 characters in Unicode) """ # Each bit will represent each unicode character # For example 65th bit representing 'A' # https://stackoverflow.com/a/12811293 bitmap = 0 for ch in input_str: ch_unicode = ord(ch) ch_bit_index_on = pow(2, ch_unicode) # If we have already turned on the bit for the current character's unicode if bitmap >> ch_unicode & 1 == 1: return False bitmap |= ch_bit_index_on return True if __name__ == "__main__": import doctest doctest.testmod()
def is_contains_unique_chars(input_str: str) -> bool: """ Check if all characters in the string are unique. >>> is_contains_unique_chars("I_love.py") True >>> is_contains_unique_chars("I don't love Python") False Time complexity: O(n) Space complexity: O(1) (at most 19320 bytes, since there are 144697 characters in Unicode) """ # Each bit will represent each unicode character # For example 65th bit representing 'A' # https://stackoverflow.com/a/12811293 bitmap = 0 for ch in input_str: ch_unicode = ord(ch) ch_bit_index_on = pow(2, ch_unicode) # If we have already turned on the bit for the current character's unicode if bitmap >> ch_unicode & 1 == 1: return False bitmap |= ch_bit_index_on return True if __name__ == "__main__": import doctest doctest.testmod()
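The same bitmap idea in a narrower setting: if the input is known to be lowercase ASCII, the whole mask fits in 26 bits. A sketch under that assumption (all_unique_lowercase is illustrative, not from the file above):

def all_unique_lowercase(s: str) -> bool:
    # One bit per letter 'a'-'z'; assumes the input is lowercase ASCII.
    mask = 0
    for ch in s:
        bit = 1 << (ord(ch) - ord("a"))
        if mask & bit:  # bit already set -> repeated character
            return False
        mask |= bit
    return True

assert all_unique_lowercase("python")
assert not all_unique_lowercase("letter")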
-1
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" wiki: https://en.wikipedia.org/wiki/Heterogram_(literature)#Isograms """ def is_isogram(string: str) -> bool: """ An isogram is a word in which no letter is repeated. Examples of isograms are uncopyrightable and ambidextrously. >>> is_isogram('Uncopyrightable') True >>> is_isogram('allowance') False >>> is_isogram('copy1') Traceback (most recent call last): ... ValueError: String must only contain alphabetic characters. """ if not all(x.isalpha() for x in string): raise ValueError("String must only contain alphabetic characters.") letters = sorted(string.lower()) return len(letters) == len(set(letters)) if __name__ == "__main__": input_str = input("Enter a string ").strip() isogram = is_isogram(input_str) print(f"{input_str} is {'an' if isogram else 'not an'} isogram.")
""" wiki: https://en.wikipedia.org/wiki/Heterogram_(literature)#Isograms """ def is_isogram(string: str) -> bool: """ An isogram is a word in which no letter is repeated. Examples of isograms are uncopyrightable and ambidextrously. >>> is_isogram('Uncopyrightable') True >>> is_isogram('allowance') False >>> is_isogram('copy1') Traceback (most recent call last): ... ValueError: String must only contain alphabetic characters. """ if not all(x.isalpha() for x in string): raise ValueError("String must only contain alphabetic characters.") letters = sorted(string.lower()) return len(letters) == len(set(letters)) if __name__ == "__main__": input_str = input("Enter a string ").strip() isogram = is_isogram(input_str) print(f"{input_str} is {'an' if isogram else 'not an'} isogram.")
-1
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" Linear regression is the most basic type of regression commonly used for predictive analysis. The idea is pretty simple: we have a dataset and we have features associated with it. Features should be chosen very cautiously as they determine how much our model will be able to make future predictions. We try to set the weight of these features, over many iterations, so that they best fit our dataset. In this particular code, I had used a CSGO dataset (ADR vs Rating). We try to best fit a line through dataset and estimate the parameters. """ import numpy as np import requests def collect_dataset(): """Collect dataset of CSGO The dataset contains ADR vs Rating of a Player :return : dataset obtained from the link, as matrix """ response = requests.get( "https://raw.githubusercontent.com/yashLadha/The_Math_of_Intelligence/" "master/Week1/ADRvsRating.csv" ) lines = response.text.splitlines() data = [] for item in lines: item = item.split(",") data.append(item) data.pop(0) # This is for removing the labels from the list dataset = np.matrix(data) return dataset def run_steep_gradient_descent(data_x, data_y, len_data, alpha, theta): """Run steep gradient descent and updates the Feature vector accordingly_ :param data_x : contains the dataset :param data_y : contains the output associated with each data-entry :param len_data : length of the data_ :param alpha : Learning rate of the model :param theta : Feature vector (weight's for our model) ;param return : Updated Feature's, using curr_features - alpha_ * gradient(w.r.t. feature) """ n = len_data prod = np.dot(theta, data_x.transpose()) prod -= data_y.transpose() sum_grad = np.dot(prod, data_x) theta = theta - (alpha / n) * sum_grad return theta def sum_of_square_error(data_x, data_y, len_data, theta): """Return sum of square error for error calculation :param data_x : contains our dataset :param data_y : contains the output (result vector) :param len_data : len of the dataset :param theta : contains the feature vector :return : sum of square error computed from given feature's """ prod = np.dot(theta, data_x.transpose()) prod -= data_y.transpose() sum_elem = np.sum(np.square(prod)) error = sum_elem / (2 * len_data) return error def run_linear_regression(data_x, data_y): """Implement Linear regression over the dataset :param data_x : contains our dataset :param data_y : contains the output (result vector) :return : feature for line of best fit (Feature vector) """ iterations = 100000 alpha = 0.0001550 no_features = data_x.shape[1] len_data = data_x.shape[0] - 1 theta = np.zeros((1, no_features)) for i in range(0, iterations): theta = run_steep_gradient_descent(data_x, data_y, len_data, alpha, theta) error = sum_of_square_error(data_x, data_y, len_data, theta) print(f"At Iteration {i + 1} - Error is {error:.5f}") return theta def mean_absolute_error(predicted_y, original_y): """Return sum of square error for error calculation :param predicted_y : contains the output of prediction (result vector) :param original_y : contains values of expected outcome :return : mean absolute error computed from given feature's """ total = sum(abs(y - predicted_y[i]) for i, y in enumerate(original_y)) return total / len(original_y) def main(): """Driver function""" data = collect_dataset() len_data = data.shape[0] data_x = np.c_[np.ones(len_data), data[:, :-1]].astype(float) data_y = data[:, -1].astype(float) theta = run_linear_regression(data_x, data_y) len_result = theta.shape[1] print("Resultant Feature vector : ") for i in range(0, len_result): print(f"{theta[0, 
i]:.5f}") if __name__ == "__main__": main()
""" Linear regression is the most basic type of regression commonly used for predictive analysis. The idea is pretty simple: we have a dataset and we have features associated with it. Features should be chosen very cautiously as they determine how much our model will be able to make future predictions. We try to set the weight of these features, over many iterations, so that they best fit our dataset. In this particular code, I had used a CSGO dataset (ADR vs Rating). We try to best fit a line through dataset and estimate the parameters. """ import numpy as np import requests def collect_dataset(): """Collect dataset of CSGO The dataset contains ADR vs Rating of a Player :return : dataset obtained from the link, as matrix """ response = requests.get( "https://raw.githubusercontent.com/yashLadha/The_Math_of_Intelligence/" "master/Week1/ADRvsRating.csv" ) lines = response.text.splitlines() data = [] for item in lines: item = item.split(",") data.append(item) data.pop(0) # This is for removing the labels from the list dataset = np.matrix(data) return dataset def run_steep_gradient_descent(data_x, data_y, len_data, alpha, theta): """Run steep gradient descent and updates the Feature vector accordingly_ :param data_x : contains the dataset :param data_y : contains the output associated with each data-entry :param len_data : length of the data_ :param alpha : Learning rate of the model :param theta : Feature vector (weight's for our model) ;param return : Updated Feature's, using curr_features - alpha_ * gradient(w.r.t. feature) """ n = len_data prod = np.dot(theta, data_x.transpose()) prod -= data_y.transpose() sum_grad = np.dot(prod, data_x) theta = theta - (alpha / n) * sum_grad return theta def sum_of_square_error(data_x, data_y, len_data, theta): """Return sum of square error for error calculation :param data_x : contains our dataset :param data_y : contains the output (result vector) :param len_data : len of the dataset :param theta : contains the feature vector :return : sum of square error computed from given feature's """ prod = np.dot(theta, data_x.transpose()) prod -= data_y.transpose() sum_elem = np.sum(np.square(prod)) error = sum_elem / (2 * len_data) return error def run_linear_regression(data_x, data_y): """Implement Linear regression over the dataset :param data_x : contains our dataset :param data_y : contains the output (result vector) :return : feature for line of best fit (Feature vector) """ iterations = 100000 alpha = 0.0001550 no_features = data_x.shape[1] len_data = data_x.shape[0] - 1 theta = np.zeros((1, no_features)) for i in range(0, iterations): theta = run_steep_gradient_descent(data_x, data_y, len_data, alpha, theta) error = sum_of_square_error(data_x, data_y, len_data, theta) print(f"At Iteration {i + 1} - Error is {error:.5f}") return theta def mean_absolute_error(predicted_y, original_y): """Return sum of square error for error calculation :param predicted_y : contains the output of prediction (result vector) :param original_y : contains values of expected outcome :return : mean absolute error computed from given feature's """ total = sum(abs(y - predicted_y[i]) for i, y in enumerate(original_y)) return total / len(original_y) def main(): """Driver function""" data = collect_dataset() len_data = data.shape[0] data_x = np.c_[np.ones(len_data), data[:, :-1]].astype(float) data_y = data[:, -1].astype(float) theta = run_linear_regression(data_x, data_y) len_result = theta.shape[1] print("Resultant Feature vector : ") for i in range(0, len_result): print(f"{theta[0, 
i]:.5f}") if __name__ == "__main__": main()
-1
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" Normalization. Wikipedia: https://en.wikipedia.org/wiki/Normalization Normalization is the process of converting numerical data to a standard range of values. This range is typically between [0, 1] or [-1, 1]. The equation for normalization is x_norm = (x - x_min)/(x_max - x_min) where x_norm is the normalized value, x is the value, x_min is the minimum value within the column or list of data, and x_max is the maximum value within the column or list of data. Normalization is used to speed up the training of data and put all of the data on a similar scale. This is useful because variance in the range of values of a dataset can heavily impact optimization (particularly Gradient Descent). Standardization Wikipedia: https://en.wikipedia.org/wiki/Standardization Standardization is the process of converting numerical data to a normally distributed range of values. This range will have a mean of 0 and standard deviation of 1. This is also known as z-score normalization. The equation for standardization is x_std = (x - mu)/(sigma) where mu is the mean of the column or list of values and sigma is the standard deviation of the column or list of values. Choosing between Normalization & Standardization is more of an art of a science, but it is often recommended to run experiments with both to see which performs better. Additionally, a few rules of thumb are: 1. gaussian (normal) distributions work better with standardization 2. non-gaussian (non-normal) distributions work better with normalization 3. If a column or list of values has extreme values / outliers, use standardization """ from statistics import mean, stdev def normalization(data: list, ndigits: int = 3) -> list: """ Return a normalized list of values. @params: data, a list of values to normalize @returns: a list of normalized values (rounded to ndigits decimal places) @examples: >>> normalization([2, 7, 10, 20, 30, 50]) [0.0, 0.104, 0.167, 0.375, 0.583, 1.0] >>> normalization([5, 10, 15, 20, 25]) [0.0, 0.25, 0.5, 0.75, 1.0] """ # variables for calculation x_min = min(data) x_max = max(data) # normalize data return [round((x - x_min) / (x_max - x_min), ndigits) for x in data] def standardization(data: list, ndigits: int = 3) -> list: """ Return a standardized list of values. @params: data, a list of values to standardize @returns: a list of standardized values (rounded to ndigits decimal places) @examples: >>> standardization([2, 7, 10, 20, 30, 50]) [-0.999, -0.719, -0.551, 0.009, 0.57, 1.69] >>> standardization([5, 10, 15, 20, 25]) [-1.265, -0.632, 0.0, 0.632, 1.265] """ # variables for calculation mu = mean(data) sigma = stdev(data) # standardize data return [round((x - mu) / (sigma), ndigits) for x in data]
""" Normalization. Wikipedia: https://en.wikipedia.org/wiki/Normalization Normalization is the process of converting numerical data to a standard range of values. This range is typically between [0, 1] or [-1, 1]. The equation for normalization is x_norm = (x - x_min)/(x_max - x_min) where x_norm is the normalized value, x is the value, x_min is the minimum value within the column or list of data, and x_max is the maximum value within the column or list of data. Normalization is used to speed up the training of data and put all of the data on a similar scale. This is useful because variance in the range of values of a dataset can heavily impact optimization (particularly Gradient Descent). Standardization Wikipedia: https://en.wikipedia.org/wiki/Standardization Standardization is the process of converting numerical data to a normally distributed range of values. This range will have a mean of 0 and standard deviation of 1. This is also known as z-score normalization. The equation for standardization is x_std = (x - mu)/(sigma) where mu is the mean of the column or list of values and sigma is the standard deviation of the column or list of values. Choosing between Normalization & Standardization is more of an art of a science, but it is often recommended to run experiments with both to see which performs better. Additionally, a few rules of thumb are: 1. gaussian (normal) distributions work better with standardization 2. non-gaussian (non-normal) distributions work better with normalization 3. If a column or list of values has extreme values / outliers, use standardization """ from statistics import mean, stdev def normalization(data: list, ndigits: int = 3) -> list: """ Return a normalized list of values. @params: data, a list of values to normalize @returns: a list of normalized values (rounded to ndigits decimal places) @examples: >>> normalization([2, 7, 10, 20, 30, 50]) [0.0, 0.104, 0.167, 0.375, 0.583, 1.0] >>> normalization([5, 10, 15, 20, 25]) [0.0, 0.25, 0.5, 0.75, 1.0] """ # variables for calculation x_min = min(data) x_max = max(data) # normalize data return [round((x - x_min) / (x_max - x_min), ndigits) for x in data] def standardization(data: list, ndigits: int = 3) -> list: """ Return a standardized list of values. @params: data, a list of values to standardize @returns: a list of standardized values (rounded to ndigits decimal places) @examples: >>> standardization([2, 7, 10, 20, 30, 50]) [-0.999, -0.719, -0.551, 0.009, 0.57, 1.69] >>> standardization([5, 10, 15, 20, 25]) [-1.265, -0.632, 0.0, 0.632, 1.265] """ # variables for calculation mu = mean(data) sigma = stdev(data) # standardize data return [round((x - mu) / (sigma), ndigits) for x in data]
-1
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" Implementation of an auto-balanced binary tree! For doctests run following command: python3 -m doctest -v avl_tree.py For testing run: python avl_tree.py """ from __future__ import annotations import math import random from typing import Any class MyQueue: def __init__(self) -> None: self.data: list[Any] = [] self.head: int = 0 self.tail: int = 0 def is_empty(self) -> bool: return self.head == self.tail def push(self, data: Any) -> None: self.data.append(data) self.tail = self.tail + 1 def pop(self) -> Any: ret = self.data[self.head] self.head = self.head + 1 return ret def count(self) -> int: return self.tail - self.head def print_queue(self) -> None: print(self.data) print("**************") print(self.data[self.head : self.tail]) class MyNode: def __init__(self, data: Any) -> None: self.data = data self.left: MyNode | None = None self.right: MyNode | None = None self.height: int = 1 def get_data(self) -> Any: return self.data def get_left(self) -> MyNode | None: return self.left def get_right(self) -> MyNode | None: return self.right def get_height(self) -> int: return self.height def set_data(self, data: Any) -> None: self.data = data return def set_left(self, node: MyNode | None) -> None: self.left = node return def set_right(self, node: MyNode | None) -> None: self.right = node return def set_height(self, height: int) -> None: self.height = height return def get_height(node: MyNode | None) -> int: if node is None: return 0 return node.get_height() def my_max(a: int, b: int) -> int: if a > b: return a return b def right_rotation(node: MyNode) -> MyNode: r""" A B / \ / \ B C Bl A / \ --> / / \ Bl Br UB Br C / UB UB = unbalanced node """ print("left rotation node:", node.get_data()) ret = node.get_left() assert ret is not None node.set_left(ret.get_right()) ret.set_right(node) h1 = my_max(get_height(node.get_right()), get_height(node.get_left())) + 1 node.set_height(h1) h2 = my_max(get_height(ret.get_right()), get_height(ret.get_left())) + 1 ret.set_height(h2) return ret def left_rotation(node: MyNode) -> MyNode: """ a mirror symmetry rotation of the left_rotation """ print("right rotation node:", node.get_data()) ret = node.get_right() assert ret is not None node.set_right(ret.get_left()) ret.set_left(node) h1 = my_max(get_height(node.get_right()), get_height(node.get_left())) + 1 node.set_height(h1) h2 = my_max(get_height(ret.get_right()), get_height(ret.get_left())) + 1 ret.set_height(h2) return ret def lr_rotation(node: MyNode) -> MyNode: r""" A A Br / \ / \ / \ B C LR Br C RR B A / \ --> / \ --> / / \ Bl Br B UB Bl UB C \ / UB Bl RR = right_rotation LR = left_rotation """ left_child = node.get_left() assert left_child is not None node.set_left(left_rotation(left_child)) return right_rotation(node) def rl_rotation(node: MyNode) -> MyNode: right_child = node.get_right() assert right_child is not None node.set_right(right_rotation(right_child)) return left_rotation(node) def insert_node(node: MyNode | None, data: Any) -> MyNode | None: if node is None: return MyNode(data) if data < node.get_data(): node.set_left(insert_node(node.get_left(), data)) if ( get_height(node.get_left()) - get_height(node.get_right()) == 2 ): # an unbalance detected left_child = node.get_left() assert left_child is not None if ( data < left_child.get_data() ): # new node is the left child of the left child node = right_rotation(node) else: node = lr_rotation(node) else: node.set_right(insert_node(node.get_right(), data)) if get_height(node.get_right()) - get_height(node.get_left()) == 2: right_child = 
node.get_right() assert right_child is not None if data < right_child.get_data(): node = rl_rotation(node) else: node = left_rotation(node) h1 = my_max(get_height(node.get_right()), get_height(node.get_left())) + 1 node.set_height(h1) return node def get_right_most(root: MyNode) -> Any: while True: right_child = root.get_right() if right_child is None: break root = right_child return root.get_data() def get_left_most(root: MyNode) -> Any: while True: left_child = root.get_left() if left_child is None: break root = left_child return root.get_data() def del_node(root: MyNode, data: Any) -> MyNode | None: left_child = root.get_left() right_child = root.get_right() if root.get_data() == data: if left_child is not None and right_child is not None: temp_data = get_left_most(right_child) root.set_data(temp_data) root.set_right(del_node(right_child, temp_data)) elif left_child is not None: root = left_child elif right_child is not None: root = right_child else: return None elif root.get_data() > data: if left_child is None: print("No such data") return root else: root.set_left(del_node(left_child, data)) else: # root.get_data() < data if right_child is None: return root else: root.set_right(del_node(right_child, data)) if get_height(right_child) - get_height(left_child) == 2: assert right_child is not None if get_height(right_child.get_right()) > get_height(right_child.get_left()): root = left_rotation(root) else: root = rl_rotation(root) elif get_height(right_child) - get_height(left_child) == -2: assert left_child is not None if get_height(left_child.get_left()) > get_height(left_child.get_right()): root = right_rotation(root) else: root = lr_rotation(root) height = my_max(get_height(root.get_right()), get_height(root.get_left())) + 1 root.set_height(height) return root class AVLtree: """ An AVL tree doctest Examples: >>> t = AVLtree() >>> t.insert(4) insert:4 >>> print(str(t).replace(" \\n","\\n")) 4 ************************************* >>> t.insert(2) insert:2 >>> print(str(t).replace(" \\n","\\n").replace(" \\n","\\n")) 4 2 * ************************************* >>> t.insert(3) insert:3 right rotation node: 2 left rotation node: 4 >>> print(str(t).replace(" \\n","\\n").replace(" \\n","\\n")) 3 2 4 ************************************* >>> t.get_height() 2 >>> t.del_node(3) delete:3 >>> print(str(t).replace(" \\n","\\n").replace(" \\n","\\n")) 4 2 * ************************************* """ def __init__(self) -> None: self.root: MyNode | None = None def get_height(self) -> int: return get_height(self.root) def insert(self, data: Any) -> None: print("insert:" + str(data)) self.root = insert_node(self.root, data) def del_node(self, data: Any) -> None: print("delete:" + str(data)) if self.root is None: print("Tree is empty!") return self.root = del_node(self.root, data) def __str__( self, ) -> str: # a level traversale, gives a more intuitive look on the tree output = "" q = MyQueue() q.push(self.root) layer = self.get_height() if layer == 0: return output cnt = 0 while not q.is_empty(): node = q.pop() space = " " * int(math.pow(2, layer - 1)) output += space if node is None: output += "*" q.push(None) q.push(None) else: output += str(node.get_data()) q.push(node.get_left()) q.push(node.get_right()) output += space cnt = cnt + 1 for i in range(100): if cnt == math.pow(2, i) - 1: layer = layer - 1 if layer == 0: output += "\n*************************************" return output output += "\n" break output += "\n*************************************" return output def _test() -> None: import 
doctest doctest.testmod() if __name__ == "__main__": _test() t = AVLtree() lst = list(range(10)) random.shuffle(lst) for i in lst: t.insert(i) print(str(t)) random.shuffle(lst) for i in lst: t.del_node(i) print(str(t))
""" Implementation of an auto-balanced binary tree! For doctests run following command: python3 -m doctest -v avl_tree.py For testing run: python avl_tree.py """ from __future__ import annotations import math import random from typing import Any class MyQueue: def __init__(self) -> None: self.data: list[Any] = [] self.head: int = 0 self.tail: int = 0 def is_empty(self) -> bool: return self.head == self.tail def push(self, data: Any) -> None: self.data.append(data) self.tail = self.tail + 1 def pop(self) -> Any: ret = self.data[self.head] self.head = self.head + 1 return ret def count(self) -> int: return self.tail - self.head def print_queue(self) -> None: print(self.data) print("**************") print(self.data[self.head : self.tail]) class MyNode: def __init__(self, data: Any) -> None: self.data = data self.left: MyNode | None = None self.right: MyNode | None = None self.height: int = 1 def get_data(self) -> Any: return self.data def get_left(self) -> MyNode | None: return self.left def get_right(self) -> MyNode | None: return self.right def get_height(self) -> int: return self.height def set_data(self, data: Any) -> None: self.data = data return def set_left(self, node: MyNode | None) -> None: self.left = node return def set_right(self, node: MyNode | None) -> None: self.right = node return def set_height(self, height: int) -> None: self.height = height return def get_height(node: MyNode | None) -> int: if node is None: return 0 return node.get_height() def my_max(a: int, b: int) -> int: if a > b: return a return b def right_rotation(node: MyNode) -> MyNode: r""" A B / \ / \ B C Bl A / \ --> / / \ Bl Br UB Br C / UB UB = unbalanced node """ print("left rotation node:", node.get_data()) ret = node.get_left() assert ret is not None node.set_left(ret.get_right()) ret.set_right(node) h1 = my_max(get_height(node.get_right()), get_height(node.get_left())) + 1 node.set_height(h1) h2 = my_max(get_height(ret.get_right()), get_height(ret.get_left())) + 1 ret.set_height(h2) return ret def left_rotation(node: MyNode) -> MyNode: """ a mirror symmetry rotation of the left_rotation """ print("right rotation node:", node.get_data()) ret = node.get_right() assert ret is not None node.set_right(ret.get_left()) ret.set_left(node) h1 = my_max(get_height(node.get_right()), get_height(node.get_left())) + 1 node.set_height(h1) h2 = my_max(get_height(ret.get_right()), get_height(ret.get_left())) + 1 ret.set_height(h2) return ret def lr_rotation(node: MyNode) -> MyNode: r""" A A Br / \ / \ / \ B C LR Br C RR B A / \ --> / \ --> / / \ Bl Br B UB Bl UB C \ / UB Bl RR = right_rotation LR = left_rotation """ left_child = node.get_left() assert left_child is not None node.set_left(left_rotation(left_child)) return right_rotation(node) def rl_rotation(node: MyNode) -> MyNode: right_child = node.get_right() assert right_child is not None node.set_right(right_rotation(right_child)) return left_rotation(node) def insert_node(node: MyNode | None, data: Any) -> MyNode | None: if node is None: return MyNode(data) if data < node.get_data(): node.set_left(insert_node(node.get_left(), data)) if ( get_height(node.get_left()) - get_height(node.get_right()) == 2 ): # an unbalance detected left_child = node.get_left() assert left_child is not None if ( data < left_child.get_data() ): # new node is the left child of the left child node = right_rotation(node) else: node = lr_rotation(node) else: node.set_right(insert_node(node.get_right(), data)) if get_height(node.get_right()) - get_height(node.get_left()) == 2: right_child = 
node.get_right() assert right_child is not None if data < right_child.get_data(): node = rl_rotation(node) else: node = left_rotation(node) h1 = my_max(get_height(node.get_right()), get_height(node.get_left())) + 1 node.set_height(h1) return node def get_right_most(root: MyNode) -> Any: while True: right_child = root.get_right() if right_child is None: break root = right_child return root.get_data() def get_left_most(root: MyNode) -> Any: while True: left_child = root.get_left() if left_child is None: break root = left_child return root.get_data() def del_node(root: MyNode, data: Any) -> MyNode | None: left_child = root.get_left() right_child = root.get_right() if root.get_data() == data: if left_child is not None and right_child is not None: temp_data = get_left_most(right_child) root.set_data(temp_data) root.set_right(del_node(right_child, temp_data)) elif left_child is not None: root = left_child elif right_child is not None: root = right_child else: return None elif root.get_data() > data: if left_child is None: print("No such data") return root else: root.set_left(del_node(left_child, data)) else: # root.get_data() < data if right_child is None: return root else: root.set_right(del_node(right_child, data)) if get_height(right_child) - get_height(left_child) == 2: assert right_child is not None if get_height(right_child.get_right()) > get_height(right_child.get_left()): root = left_rotation(root) else: root = rl_rotation(root) elif get_height(right_child) - get_height(left_child) == -2: assert left_child is not None if get_height(left_child.get_left()) > get_height(left_child.get_right()): root = right_rotation(root) else: root = lr_rotation(root) height = my_max(get_height(root.get_right()), get_height(root.get_left())) + 1 root.set_height(height) return root class AVLtree: """ An AVL tree doctest Examples: >>> t = AVLtree() >>> t.insert(4) insert:4 >>> print(str(t).replace(" \\n","\\n")) 4 ************************************* >>> t.insert(2) insert:2 >>> print(str(t).replace(" \\n","\\n").replace(" \\n","\\n")) 4 2 * ************************************* >>> t.insert(3) insert:3 right rotation node: 2 left rotation node: 4 >>> print(str(t).replace(" \\n","\\n").replace(" \\n","\\n")) 3 2 4 ************************************* >>> t.get_height() 2 >>> t.del_node(3) delete:3 >>> print(str(t).replace(" \\n","\\n").replace(" \\n","\\n")) 4 2 * ************************************* """ def __init__(self) -> None: self.root: MyNode | None = None def get_height(self) -> int: return get_height(self.root) def insert(self, data: Any) -> None: print("insert:" + str(data)) self.root = insert_node(self.root, data) def del_node(self, data: Any) -> None: print("delete:" + str(data)) if self.root is None: print("Tree is empty!") return self.root = del_node(self.root, data) def __str__( self, ) -> str: # a level traversale, gives a more intuitive look on the tree output = "" q = MyQueue() q.push(self.root) layer = self.get_height() if layer == 0: return output cnt = 0 while not q.is_empty(): node = q.pop() space = " " * int(math.pow(2, layer - 1)) output += space if node is None: output += "*" q.push(None) q.push(None) else: output += str(node.get_data()) q.push(node.get_left()) q.push(node.get_right()) output += space cnt = cnt + 1 for i in range(100): if cnt == math.pow(2, i) - 1: layer = layer - 1 if layer == 0: output += "\n*************************************" return output output += "\n" break output += "\n*************************************" return output def _test() -> None: import 
doctest doctest.testmod() if __name__ == "__main__": _test() t = AVLtree() lst = list(range(10)) random.shuffle(lst) for i in lst: t.insert(i) print(str(t)) random.shuffle(lst) for i in lst: t.del_node(i) print(str(t))
-1
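A minimal usage sketch for the AVL tree record above, assuming its code is saved as `avl_tree.py` (the filename its own docstring references): sorted input is the worst case for a plain binary search tree, so the height staying within the AVL bound shows the rebalancing rotations at work.

```python
import math

from avl_tree import AVLtree  # assumes the record's code is saved as avl_tree.py

tree = AVLtree()
n = 127
for value in range(n):  # sorted input: worst case for an unbalanced BST
    tree.insert(value)  # prints "insert:..." plus any rotations performed

# An AVL tree with n nodes has height at most roughly 1.44 * log2(n + 2).
bound = 1.44 * math.log2(n + 2)
print(f"height {tree.get_height()} within AVL bound {bound:.2f}:", tree.get_height() <= bound)
```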
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" This script demonstrates the implementation of the Sigmoid Linear Unit (SiLU) or swish function. * https://en.wikipedia.org/wiki/Rectifier_(neural_networks) * https://en.wikipedia.org/wiki/Swish_function The function takes a vector x of K real numbers as input and returns x * sigmoid(x). Swish is a smooth, non-monotonic function defined as f(x) = x * sigmoid(x). Extensive experiments shows that Swish consistently matches or outperforms ReLU on deep networks applied to a variety of challenging domains such as image classification and machine translation. This script is inspired by a corresponding research paper. * https://arxiv.org/abs/1710.05941 """ import numpy as np def sigmoid(vector: np.array) -> np.array: """ Mathematical function sigmoid takes a vector x of K real numbers as input and returns 1/ (1 + e^-x). https://en.wikipedia.org/wiki/Sigmoid_function >>> sigmoid(np.array([-1.0, 1.0, 2.0])) array([0.26894142, 0.73105858, 0.88079708]) """ return 1 / (1 + np.exp(-vector)) def sigmoid_linear_unit(vector: np.array) -> np.array: """ Implements the Sigmoid Linear Unit (SiLU) or swish function Parameters: vector (np.array): A numpy array consisting of real values. Returns: swish_vec (np.array): The input numpy array, after applying swish. Examples: >>> sigmoid_linear_unit(np.array([-1.0, 1.0, 2.0])) array([-0.26894142, 0.73105858, 1.76159416]) >>> sigmoid_linear_unit(np.array([-2])) array([-0.23840584]) """ return vector * sigmoid(vector) if __name__ == "__main__": import doctest doctest.testmod()
""" This script demonstrates the implementation of the Sigmoid Linear Unit (SiLU) or swish function. * https://en.wikipedia.org/wiki/Rectifier_(neural_networks) * https://en.wikipedia.org/wiki/Swish_function The function takes a vector x of K real numbers as input and returns x * sigmoid(x). Swish is a smooth, non-monotonic function defined as f(x) = x * sigmoid(x). Extensive experiments shows that Swish consistently matches or outperforms ReLU on deep networks applied to a variety of challenging domains such as image classification and machine translation. This script is inspired by a corresponding research paper. * https://arxiv.org/abs/1710.05941 """ import numpy as np def sigmoid(vector: np.array) -> np.array: """ Mathematical function sigmoid takes a vector x of K real numbers as input and returns 1/ (1 + e^-x). https://en.wikipedia.org/wiki/Sigmoid_function >>> sigmoid(np.array([-1.0, 1.0, 2.0])) array([0.26894142, 0.73105858, 0.88079708]) """ return 1 / (1 + np.exp(-vector)) def sigmoid_linear_unit(vector: np.array) -> np.array: """ Implements the Sigmoid Linear Unit (SiLU) or swish function Parameters: vector (np.array): A numpy array consisting of real values. Returns: swish_vec (np.array): The input numpy array, after applying swish. Examples: >>> sigmoid_linear_unit(np.array([-1.0, 1.0, 2.0])) array([-0.26894142, 0.73105858, 1.76159416]) >>> sigmoid_linear_unit(np.array([-2])) array([-0.23840584]) """ return vector * sigmoid(vector) if __name__ == "__main__": import doctest doctest.testmod()
-1
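As a quick cross-check of the SiLU record above, here is a self-contained sketch that computes the same x * sigmoid(x) values in a single expression; only numpy is assumed.

```python
import numpy as np

def swish(vector: np.ndarray) -> np.ndarray:
    # x * sigmoid(x) collapses to x / (1 + e^-x)
    return vector / (1.0 + np.exp(-vector))

print(swish(np.array([-1.0, 1.0, 2.0])))  # approx. [-0.26894142  0.73105858  1.76159416]
```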
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
-1
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" This is pure Python implementation of comb sort algorithm. Comb sort is a relatively simple sorting algorithm originally designed by Wlodzimierz Dobosiewicz in 1980. It was rediscovered by Stephen Lacey and Richard Box in 1991. Comb sort improves on bubble sort algorithm. In bubble sort, distance (or gap) between two compared elements is always one. Comb sort improvement is that gap can be much more than 1, in order to prevent slowing down by small values at the end of a list. More info on: https://en.wikipedia.org/wiki/Comb_sort For doctests run following command: python -m doctest -v comb_sort.py or python3 -m doctest -v comb_sort.py For manual testing run: python comb_sort.py """ def comb_sort(data: list) -> list: """Pure implementation of comb sort algorithm in Python :param data: mutable collection with comparable items :return: the same collection in ascending order Examples: >>> comb_sort([0, 5, 3, 2, 2]) [0, 2, 2, 3, 5] >>> comb_sort([]) [] >>> comb_sort([99, 45, -7, 8, 2, 0, -15, 3]) [-15, -7, 0, 2, 3, 8, 45, 99] """ shrink_factor = 1.3 gap = len(data) completed = False while not completed: # Update the gap value for a next comb gap = int(gap / shrink_factor) if gap <= 1: completed = True index = 0 while index + gap < len(data): if data[index] > data[index + gap]: # Swap values data[index], data[index + gap] = data[index + gap], data[index] completed = False index += 1 return data if __name__ == "__main__": import doctest doctest.testmod() user_input = input("Enter numbers separated by a comma:\n").strip() unsorted = [int(item) for item in user_input.split(",")] print(comb_sort(unsorted))
""" This is pure Python implementation of comb sort algorithm. Comb sort is a relatively simple sorting algorithm originally designed by Wlodzimierz Dobosiewicz in 1980. It was rediscovered by Stephen Lacey and Richard Box in 1991. Comb sort improves on bubble sort algorithm. In bubble sort, distance (or gap) between two compared elements is always one. Comb sort improvement is that gap can be much more than 1, in order to prevent slowing down by small values at the end of a list. More info on: https://en.wikipedia.org/wiki/Comb_sort For doctests run following command: python -m doctest -v comb_sort.py or python3 -m doctest -v comb_sort.py For manual testing run: python comb_sort.py """ def comb_sort(data: list) -> list: """Pure implementation of comb sort algorithm in Python :param data: mutable collection with comparable items :return: the same collection in ascending order Examples: >>> comb_sort([0, 5, 3, 2, 2]) [0, 2, 2, 3, 5] >>> comb_sort([]) [] >>> comb_sort([99, 45, -7, 8, 2, 0, -15, 3]) [-15, -7, 0, 2, 3, 8, 45, 99] """ shrink_factor = 1.3 gap = len(data) completed = False while not completed: # Update the gap value for a next comb gap = int(gap / shrink_factor) if gap <= 1: completed = True index = 0 while index + gap < len(data): if data[index] > data[index + gap]: # Swap values data[index], data[index + gap] = data[index + gap], data[index] completed = False index += 1 return data if __name__ == "__main__": import doctest doctest.testmod() user_input = input("Enter numbers separated by a comma:\n").strip() unsorted = [int(item) for item in user_input.split(",")] print(comb_sort(unsorted))
-1
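To make the comb sort record above concrete, this small sketch prints the gap sequence that the 1.3 shrink factor produces for a 100-element list; the comparisons at each gap then behave like a coarse-to-fine bubble sort.

```python
gap, gaps = 100, []
while gap > 1:
    gap = max(int(gap / 1.3), 1)  # same shrink rule as comb_sort above
    gaps.append(gap)
print(gaps)  # [76, 58, 44, 33, 25, 19, 14, 10, 7, 5, 3, 2, 1]
```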
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" Project Euler Problem 206: https://projecteuler.net/problem=206 Find the unique positive integer whose square has the form 1_2_3_4_5_6_7_8_9_0, where each β€œ_” is a single digit. ----- Instead of computing every single permutation of that number and going through a 10^9 search space, we can narrow it down considerably. If the square ends in a 0, then the square root must also end in a 0. Thus, the last missing digit must be 0 and the square root is a multiple of 10. We can narrow the search space down to the first 8 digits and multiply the result of that by 10 at the end. Now the last digit is a 9, which can only happen if the square root ends in a 3 or 7. From this point, we can try one of two different methods to find the answer: 1. Start at the lowest possible base number whose square would be in the format, and count up. The base we would start at is 101010103, whose square is the closest number to 10203040506070809. Alternate counting up by 4 and 6 so the last digit of the base is always a 3 or 7. 2. Start at the highest possible base number whose square would be in the format, and count down. That base would be 138902663, whose square is the closest number to 1929394959697989. Alternate counting down by 6 and 4 so the last digit of the base is always a 3 or 7. The solution does option 2 because the answer happens to be much closer to the starting point. """ def is_square_form(num: int) -> bool: """ Determines if num is in the form 1_2_3_4_5_6_7_8_9 >>> is_square_form(1) False >>> is_square_form(112233445566778899) True >>> is_square_form(123456789012345678) False """ digit = 9 while num > 0: if num % 10 != digit: return False num //= 100 digit -= 1 return True def solution() -> int: """ Returns the first integer whose square is of the form 1_2_3_4_5_6_7_8_9_0 """ num = 138902663 while not is_square_form(num * num): if num % 10 == 3: num -= 6 # (3 - 6) % 10 = 7 else: num -= 4 # (7 - 4) % 10 = 3 return num * 10 if __name__ == "__main__": print(f"{solution() = }")
""" Project Euler Problem 206: https://projecteuler.net/problem=206 Find the unique positive integer whose square has the form 1_2_3_4_5_6_7_8_9_0, where each β€œ_” is a single digit. ----- Instead of computing every single permutation of that number and going through a 10^9 search space, we can narrow it down considerably. If the square ends in a 0, then the square root must also end in a 0. Thus, the last missing digit must be 0 and the square root is a multiple of 10. We can narrow the search space down to the first 8 digits and multiply the result of that by 10 at the end. Now the last digit is a 9, which can only happen if the square root ends in a 3 or 7. From this point, we can try one of two different methods to find the answer: 1. Start at the lowest possible base number whose square would be in the format, and count up. The base we would start at is 101010103, whose square is the closest number to 10203040506070809. Alternate counting up by 4 and 6 so the last digit of the base is always a 3 or 7. 2. Start at the highest possible base number whose square would be in the format, and count down. That base would be 138902663, whose square is the closest number to 1929394959697989. Alternate counting down by 6 and 4 so the last digit of the base is always a 3 or 7. The solution does option 2 because the answer happens to be much closer to the starting point. """ def is_square_form(num: int) -> bool: """ Determines if num is in the form 1_2_3_4_5_6_7_8_9 >>> is_square_form(1) False >>> is_square_form(112233445566778899) True >>> is_square_form(123456789012345678) False """ digit = 9 while num > 0: if num % 10 != digit: return False num //= 100 digit -= 1 return True def solution() -> int: """ Returns the first integer whose square is of the form 1_2_3_4_5_6_7_8_9_0 """ num = 138902663 while not is_square_form(num * num): if num % 10 == 3: num -= 6 # (3 - 6) % 10 = 7 else: num -= 4 # (7 - 4) % 10 = 3 return num * 10 if __name__ == "__main__": print(f"{solution() = }")
-1
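A one-line check of the key observation in the Project Euler 206 record above: a square can only end in the digit 9 when its square root ends in 3 or 7.

```python
# Last digits d whose square ends in 9:
print(sorted(d for d in range(10) if (d * d) % 10 == 9))  # [3, 7]
```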
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
from __future__ import annotations import requests def get_hackernews_story(story_id: str) -> dict: url = f"https://hacker-news.firebaseio.com/v0/item/{story_id}.json?print=pretty" return requests.get(url).json() def hackernews_top_stories(max_stories: int = 10) -> list[dict]: """ Get the top max_stories posts from HackerNews - https://news.ycombinator.com/ """ url = "https://hacker-news.firebaseio.com/v0/topstories.json?print=pretty" story_ids = requests.get(url).json()[:max_stories] return [get_hackernews_story(story_id) for story_id in story_ids] def hackernews_top_stories_as_markdown(max_stories: int = 10) -> str: stories = hackernews_top_stories(max_stories) return "\n".join("* [{title}]({url})".format(**story) for story in stories) if __name__ == "__main__": print(hackernews_top_stories_as_markdown())
from __future__ import annotations import requests def get_hackernews_story(story_id: str) -> dict: url = f"https://hacker-news.firebaseio.com/v0/item/{story_id}.json?print=pretty" return requests.get(url).json() def hackernews_top_stories(max_stories: int = 10) -> list[dict]: """ Get the top max_stories posts from HackerNews - https://news.ycombinator.com/ """ url = "https://hacker-news.firebaseio.com/v0/topstories.json?print=pretty" story_ids = requests.get(url).json()[:max_stories] return [get_hackernews_story(story_id) for story_id in story_ids] def hackernews_top_stories_as_markdown(max_stories: int = 10) -> str: stories = hackernews_top_stories(max_stories) return "\n".join("* [{title}]({url})".format(**story) for story in stories) if __name__ == "__main__": print(hackernews_top_stories_as_markdown())
-1
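A self-contained variant of the Hacker News fetch above, shown mainly to illustrate the two-step API flow (top-story ids, then one item request per id). The explicit timeouts are an addition not present in the record's code, and running it requires network access.

```python
import requests

BASE_URL = "https://hacker-news.firebaseio.com/v0"
story_ids = requests.get(f"{BASE_URL}/topstories.json", timeout=10).json()[:3]
for story_id in story_ids:
    story = requests.get(f"{BASE_URL}/item/{story_id}.json", timeout=10).json()
    # Ask/job posts may lack a "url" field, so fall back to an empty string.
    print("* [{}]({})".format(story["title"], story.get("url", "")))
```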
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" The Reverse Polish Nation also known as Polish postfix notation or simply postfix notation. https://en.wikipedia.org/wiki/Reverse_Polish_notation Classic examples of simple stack implementations Valid operators are +, -, *, /. Each operand may be an integer or another expression. """ from __future__ import annotations from typing import Any def evaluate_postfix(postfix_notation: list) -> int: """ >>> evaluate_postfix(["2", "1", "+", "3", "*"]) 9 >>> evaluate_postfix(["4", "13", "5", "/", "+"]) 6 >>> evaluate_postfix([]) 0 """ if not postfix_notation: return 0 operations = {"+", "-", "*", "/"} stack: list[Any] = [] for token in postfix_notation: if token in operations: b, a = stack.pop(), stack.pop() if token == "+": stack.append(a + b) elif token == "-": stack.append(a - b) elif token == "*": stack.append(a * b) else: if a * b < 0 and a % b != 0: stack.append(a // b + 1) else: stack.append(a // b) else: stack.append(int(token)) return stack.pop() if __name__ == "__main__": import doctest doctest.testmod()
""" The Reverse Polish Nation also known as Polish postfix notation or simply postfix notation. https://en.wikipedia.org/wiki/Reverse_Polish_notation Classic examples of simple stack implementations Valid operators are +, -, *, /. Each operand may be an integer or another expression. """ from __future__ import annotations from typing import Any def evaluate_postfix(postfix_notation: list) -> int: """ >>> evaluate_postfix(["2", "1", "+", "3", "*"]) 9 >>> evaluate_postfix(["4", "13", "5", "/", "+"]) 6 >>> evaluate_postfix([]) 0 """ if not postfix_notation: return 0 operations = {"+", "-", "*", "/"} stack: list[Any] = [] for token in postfix_notation: if token in operations: b, a = stack.pop(), stack.pop() if token == "+": stack.append(a + b) elif token == "-": stack.append(a - b) elif token == "*": stack.append(a * b) else: if a * b < 0 and a % b != 0: stack.append(a // b + 1) else: stack.append(a // b) else: stack.append(int(token)) return stack.pop() if __name__ == "__main__": import doctest doctest.testmod()
-1
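To show how the postfix evaluator above consumes its stack, here is a sketch tracing the first doctest input; only the + and * branches are needed for this example, so the division handling is omitted.

```python
stack: list[int] = []
for token in ["2", "1", "+", "3", "*"]:
    if token == "+":
        b, a = stack.pop(), stack.pop()
        stack.append(a + b)
    elif token == "*":
        b, a = stack.pop(), stack.pop()
        stack.append(a * b)
    else:
        stack.append(int(token))
    print(f"after {token!r}: stack = {stack}")
# ends with stack = [9], matching evaluate_postfix(["2", "1", "+", "3", "*"])
```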
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
def points_to_polynomial(coordinates: list[list[int]]) -> str: """ coordinates is a two dimensional matrix: [[x, y], [x, y], ...] number of points you want to use >>> print(points_to_polynomial([])) Traceback (most recent call last): ... ValueError: The program cannot work out a fitting polynomial. >>> print(points_to_polynomial([[]])) Traceback (most recent call last): ... ValueError: The program cannot work out a fitting polynomial. >>> print(points_to_polynomial([[1, 0], [2, 0], [3, 0]])) f(x)=x^2*0.0+x^1*-0.0+x^0*0.0 >>> print(points_to_polynomial([[1, 1], [2, 1], [3, 1]])) f(x)=x^2*0.0+x^1*-0.0+x^0*1.0 >>> print(points_to_polynomial([[1, 3], [2, 3], [3, 3]])) f(x)=x^2*0.0+x^1*-0.0+x^0*3.0 >>> print(points_to_polynomial([[1, 1], [2, 2], [3, 3]])) f(x)=x^2*0.0+x^1*1.0+x^0*0.0 >>> print(points_to_polynomial([[1, 1], [2, 4], [3, 9]])) f(x)=x^2*1.0+x^1*-0.0+x^0*0.0 >>> print(points_to_polynomial([[1, 3], [2, 6], [3, 11]])) f(x)=x^2*1.0+x^1*-0.0+x^0*2.0 >>> print(points_to_polynomial([[1, -3], [2, -6], [3, -11]])) f(x)=x^2*-1.0+x^1*-0.0+x^0*-2.0 >>> print(points_to_polynomial([[1, 5], [2, 2], [3, 9]])) f(x)=x^2*5.0+x^1*-18.0+x^0*18.0 """ if len(coordinates) == 0 or not all(len(pair) == 2 for pair in coordinates): raise ValueError("The program cannot work out a fitting polynomial.") if len({tuple(pair) for pair in coordinates}) != len(coordinates): raise ValueError("The program cannot work out a fitting polynomial.") set_x = {x for x, _ in coordinates} if len(set_x) == 1: return f"x={coordinates[0][0]}" if len(set_x) != len(coordinates): raise ValueError("The program cannot work out a fitting polynomial.") x = len(coordinates) count_of_line = 0 matrix: list[list[float]] = [] # put the x and x to the power values in a matrix while count_of_line < x: count_in_line = 0 a = coordinates[count_of_line][0] count_line: list[float] = [] while count_in_line < x: count_line.append(a ** (x - (count_in_line + 1))) count_in_line += 1 matrix.append(count_line) count_of_line += 1 count_of_line = 0 # put the y values into a vector vector: list[float] = [] while count_of_line < x: vector.append(coordinates[count_of_line][1]) count_of_line += 1 count = 0 while count < x: zahlen = 0 while zahlen < x: if count == zahlen: zahlen += 1 if zahlen == x: break bruch = matrix[zahlen][count] / matrix[count][count] for counting_columns, item in enumerate(matrix[count]): # manipulating all the values in the matrix matrix[zahlen][counting_columns] -= item * bruch # manipulating the values in the vector vector[zahlen] -= vector[count] * bruch zahlen += 1 count += 1 count = 0 # make solutions solution: list[str] = [] while count < x: solution.append(str(vector[count] / matrix[count][count])) count += 1 count = 0 solved = "f(x)=" while count < x: remove_e: list[str] = solution[count].split("E") if len(remove_e) > 1: solution[count] = f"{remove_e[0]}*10^{remove_e[1]}" solved += f"x^{x - (count + 1)}*{solution[count]}" if count + 1 != x: solved += "+" count += 1 return solved if __name__ == "__main__": print(points_to_polynomial([])) print(points_to_polynomial([[]])) print(points_to_polynomial([[1, 0], [2, 0], [3, 0]])) print(points_to_polynomial([[1, 1], [2, 1], [3, 1]])) print(points_to_polynomial([[1, 3], [2, 3], [3, 3]])) print(points_to_polynomial([[1, 1], [2, 2], [3, 3]])) print(points_to_polynomial([[1, 1], [2, 4], [3, 9]])) print(points_to_polynomial([[1, 3], [2, 6], [3, 11]])) print(points_to_polynomial([[1, -3], [2, -6], [3, -11]])) print(points_to_polynomial([[1, 5], [2, 2], [3, 9]]))
def points_to_polynomial(coordinates: list[list[int]]) -> str: """ coordinates is a two dimensional matrix: [[x, y], [x, y], ...] number of points you want to use >>> print(points_to_polynomial([])) Traceback (most recent call last): ... ValueError: The program cannot work out a fitting polynomial. >>> print(points_to_polynomial([[]])) Traceback (most recent call last): ... ValueError: The program cannot work out a fitting polynomial. >>> print(points_to_polynomial([[1, 0], [2, 0], [3, 0]])) f(x)=x^2*0.0+x^1*-0.0+x^0*0.0 >>> print(points_to_polynomial([[1, 1], [2, 1], [3, 1]])) f(x)=x^2*0.0+x^1*-0.0+x^0*1.0 >>> print(points_to_polynomial([[1, 3], [2, 3], [3, 3]])) f(x)=x^2*0.0+x^1*-0.0+x^0*3.0 >>> print(points_to_polynomial([[1, 1], [2, 2], [3, 3]])) f(x)=x^2*0.0+x^1*1.0+x^0*0.0 >>> print(points_to_polynomial([[1, 1], [2, 4], [3, 9]])) f(x)=x^2*1.0+x^1*-0.0+x^0*0.0 >>> print(points_to_polynomial([[1, 3], [2, 6], [3, 11]])) f(x)=x^2*1.0+x^1*-0.0+x^0*2.0 >>> print(points_to_polynomial([[1, -3], [2, -6], [3, -11]])) f(x)=x^2*-1.0+x^1*-0.0+x^0*-2.0 >>> print(points_to_polynomial([[1, 5], [2, 2], [3, 9]])) f(x)=x^2*5.0+x^1*-18.0+x^0*18.0 """ if len(coordinates) == 0 or not all(len(pair) == 2 for pair in coordinates): raise ValueError("The program cannot work out a fitting polynomial.") if len({tuple(pair) for pair in coordinates}) != len(coordinates): raise ValueError("The program cannot work out a fitting polynomial.") set_x = {x for x, _ in coordinates} if len(set_x) == 1: return f"x={coordinates[0][0]}" if len(set_x) != len(coordinates): raise ValueError("The program cannot work out a fitting polynomial.") x = len(coordinates) count_of_line = 0 matrix: list[list[float]] = [] # put the x and x to the power values in a matrix while count_of_line < x: count_in_line = 0 a = coordinates[count_of_line][0] count_line: list[float] = [] while count_in_line < x: count_line.append(a ** (x - (count_in_line + 1))) count_in_line += 1 matrix.append(count_line) count_of_line += 1 count_of_line = 0 # put the y values into a vector vector: list[float] = [] while count_of_line < x: vector.append(coordinates[count_of_line][1]) count_of_line += 1 count = 0 while count < x: zahlen = 0 while zahlen < x: if count == zahlen: zahlen += 1 if zahlen == x: break bruch = matrix[zahlen][count] / matrix[count][count] for counting_columns, item in enumerate(matrix[count]): # manipulating all the values in the matrix matrix[zahlen][counting_columns] -= item * bruch # manipulating the values in the vector vector[zahlen] -= vector[count] * bruch zahlen += 1 count += 1 count = 0 # make solutions solution: list[str] = [] while count < x: solution.append(str(vector[count] / matrix[count][count])) count += 1 count = 0 solved = "f(x)=" while count < x: remove_e: list[str] = solution[count].split("E") if len(remove_e) > 1: solution[count] = f"{remove_e[0]}*10^{remove_e[1]}" solved += f"x^{x - (count + 1)}*{solution[count]}" if count + 1 != x: solved += "+" count += 1 return solved if __name__ == "__main__": print(points_to_polynomial([])) print(points_to_polynomial([[]])) print(points_to_polynomial([[1, 0], [2, 0], [3, 0]])) print(points_to_polynomial([[1, 1], [2, 1], [3, 1]])) print(points_to_polynomial([[1, 3], [2, 3], [3, 3]])) print(points_to_polynomial([[1, 1], [2, 2], [3, 3]])) print(points_to_polynomial([[1, 1], [2, 4], [3, 9]])) print(points_to_polynomial([[1, 3], [2, 6], [3, 11]])) print(points_to_polynomial([[1, -3], [2, -6], [3, -11]])) print(points_to_polynomial([[1, 5], [2, 2], [3, 9]]))
-1
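As a sanity check on the polynomial-fitting record above, numpy's polyfit should recover the same quadratic coefficients for one of its doctest inputs; numpy is the only assumption here.

```python
import numpy as np

points = [[1, 5], [2, 2], [3, 9]]
xs, ys = zip(*points)
# Coefficients are returned highest power first, matching the record's
# f(x)=x^2*5.0+x^1*-18.0+x^0*18.0 doctest output.
print(np.polyfit(xs, ys, deg=len(points) - 1))  # approx. [  5. -18.  18.]
```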
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" Project Euler Problem 301: https://projecteuler.net/problem=301 Problem Statement: Nim is a game played with heaps of stones, where two players take it in turn to remove any number of stones from any heap until no stones remain. We'll consider the three-heap normal-play version of Nim, which works as follows: - At the start of the game there are three heaps of stones. - On each player's turn, the player may remove any positive number of stones from any single heap. - The first player unable to move (because no stones remain) loses. If (n1, n2, n3) indicates a Nim position consisting of heaps of size n1, n2, and n3, then there is a simple function, which you may look up or attempt to deduce for yourself, X(n1, n2, n3) that returns: - zero if, with perfect strategy, the player about to move will eventually lose; or - non-zero if, with perfect strategy, the player about to move will eventually win. For example X(1,2,3) = 0 because, no matter what the current player does, the opponent can respond with a move that leaves two heaps of equal size, at which point every move by the current player can be mirrored by the opponent until no stones remain; so the current player loses. To illustrate: - current player moves to (1,2,1) - opponent moves to (1,0,1) - current player moves to (0,0,1) - opponent moves to (0,0,0), and so wins. For how many positive integers n <= 2^30 does X(n,2n,3n) = 0? """ def solution(exponent: int = 30) -> int: """ For any given exponent x >= 0, 1 <= n <= 2^x. This function returns how many Nim games are lost given that each Nim game has three heaps of the form (n, 2*n, 3*n). >>> solution(0) 1 >>> solution(2) 3 >>> solution(10) 144 """ # To find how many total games were lost for a given exponent x, # we need to find the Fibonacci number F(x+2). fibonacci_index = exponent + 2 phi = (1 + 5**0.5) / 2 fibonacci = (phi**fibonacci_index - (phi - 1) ** fibonacci_index) / 5**0.5 return int(fibonacci) if __name__ == "__main__": print(f"{solution() = }")
""" Project Euler Problem 301: https://projecteuler.net/problem=301 Problem Statement: Nim is a game played with heaps of stones, where two players take it in turn to remove any number of stones from any heap until no stones remain. We'll consider the three-heap normal-play version of Nim, which works as follows: - At the start of the game there are three heaps of stones. - On each player's turn, the player may remove any positive number of stones from any single heap. - The first player unable to move (because no stones remain) loses. If (n1, n2, n3) indicates a Nim position consisting of heaps of size n1, n2, and n3, then there is a simple function, which you may look up or attempt to deduce for yourself, X(n1, n2, n3) that returns: - zero if, with perfect strategy, the player about to move will eventually lose; or - non-zero if, with perfect strategy, the player about to move will eventually win. For example X(1,2,3) = 0 because, no matter what the current player does, the opponent can respond with a move that leaves two heaps of equal size, at which point every move by the current player can be mirrored by the opponent until no stones remain; so the current player loses. To illustrate: - current player moves to (1,2,1) - opponent moves to (1,0,1) - current player moves to (0,0,1) - opponent moves to (0,0,0), and so wins. For how many positive integers n <= 2^30 does X(n,2n,3n) = 0? """ def solution(exponent: int = 30) -> int: """ For any given exponent x >= 0, 1 <= n <= 2^x. This function returns how many Nim games are lost given that each Nim game has three heaps of the form (n, 2*n, 3*n). >>> solution(0) 1 >>> solution(2) 3 >>> solution(10) 144 """ # To find how many total games were lost for a given exponent x, # we need to find the Fibonacci number F(x+2). fibonacci_index = exponent + 2 phi = (1 + 5**0.5) / 2 fibonacci = (phi**fibonacci_index - (phi - 1) ** fibonacci_index) / 5**0.5 return int(fibonacci) if __name__ == "__main__": print(f"{solution() = }")
-1
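The count of lost positions follows from the standard Nim result that the player to move loses exactly when the nim-sum (XOR) of the heap sizes is zero, so X(n, 2n, 3n) = n ^ 2n ^ 3n. A minimal brute-force cross-check of the Fibonacci claim for small exponents, using an iterative Fibonacci so the check is independent of Binet's formula:

```python
def brute_force(exponent: int) -> int:
    """Count n in [1, 2**exponent] whose nim-sum n ^ (2 * n) ^ (3 * n) is zero."""
    return sum(1 for n in range(1, 2**exponent + 1) if n ^ (2 * n) ^ (3 * n) == 0)


a, b = 1, 1  # F(1), F(2); b tracks F(exponent + 2) as the loop advances
for exponent in range(11):
    assert brute_force(exponent) == b, (exponent, brute_force(exponent), b)
    a, b = b, a + b  # e.g. exponent=10 matches F(12) == 144
```

Note this check fails for odd exponents if the Binet line drops the sign of 1 - phi, which is why the formula above uses (1 - phi) rather than (phi - 1).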
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
from PIL import Image def change_brightness(img: Image, level: float) -> Image: """ Change the brightness of a PIL Image to a given level. """ def brightness(c: int) -> float: """ Fundamental Transformation/Operation that'll be performed on every pixel value. """ return 128 + level + (c - 128) if not -255.0 <= level <= 255.0: raise ValueError("level must be between -255.0 (black) and 255.0 (white)") return img.point(brightness) if __name__ == "__main__": # Load image with Image.open("image_data/lena.jpg") as img: # Change brightness to 100 bright_img = change_brightness(img, 100) bright_img.save("image_data/lena_brightness.png", format="png")
from PIL import Image def change_brightness(img: Image, level: float) -> Image: """ Change the brightness of a PIL Image to a given level. """ def brightness(c: int) -> float: """ Fundamental Transformation/Operation that'll be performed on every pixel value. """ return 128 + level + (c - 128) if not -255.0 <= level <= 255.0: raise ValueError("level must be between -255.0 (black) and 255.0 (white)") return img.point(brightness) if __name__ == "__main__": # Load image with Image.open("image_data/lena.jpg") as img: # Change brightness to 100 bright_img = change_brightness(img, 100) bright_img.save("image_data/lena_brightness.png", format="png")
-1
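Note that 128 + level + (c - 128) reduces algebraically to c + level; PIL applies it per channel through a lookup table, and for 8-bit images the results should be clipped to the 0..255 range. A self-contained usage sketch that avoids the repo-specific image_data/lena.jpg path by building a tiny image in memory:

```python
from PIL import Image


def change_brightness(img: Image.Image, level: float) -> Image.Image:
    # same logic as the file above: each channel value c maps to c + level
    return img.point(lambda c: 128 + level + (c - 128))


img = Image.new("RGB", (2, 2), color=(100, 150, 250))
out = change_brightness(img, 100)
print(out.getpixel((0, 0)))  # (200, 250, 255): 250 + 100 is clipped to 255
```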
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" Conversion of volume units. Available Units:- Cubic metre,Litre,KiloLitre,Gallon,Cubic yard,Cubic foot,cup USAGE : -> Import this file into their respective project. -> Use the function length_conversion() for conversion of volume units. -> Parameters : -> value : The number of from units you want to convert -> from_type : From which type you want to convert -> to_type : To which type you want to convert REFERENCES : -> Wikipedia reference: https://en.wikipedia.org/wiki/Cubic_metre -> Wikipedia reference: https://en.wikipedia.org/wiki/Litre -> Wikipedia reference: https://en.wiktionary.org/wiki/kilolitre -> Wikipedia reference: https://en.wikipedia.org/wiki/Gallon -> Wikipedia reference: https://en.wikipedia.org/wiki/Cubic_yard -> Wikipedia reference: https://en.wikipedia.org/wiki/Cubic_foot -> Wikipedia reference: https://en.wikipedia.org/wiki/Cup_(unit) """ from collections import namedtuple from_to = namedtuple("from_to", "from_ to") METRIC_CONVERSION = { "cubicmeter": from_to(1, 1), "litre": from_to(0.001, 1000), "kilolitre": from_to(1, 1), "gallon": from_to(0.00454, 264.172), "cubicyard": from_to(0.76455, 1.30795), "cubicfoot": from_to(0.028, 35.3147), "cup": from_to(0.000236588, 4226.75), } def volume_conversion(value: float, from_type: str, to_type: str) -> float: """ Conversion between volume units. >>> volume_conversion(4, "cubicmeter", "litre") 4000 >>> volume_conversion(1, "litre", "gallon") 0.264172 >>> volume_conversion(1, "kilolitre", "cubicmeter") 1 >>> volume_conversion(3, "gallon", "cubicyard") 0.017814279 >>> volume_conversion(2, "cubicyard", "litre") 1529.1 >>> volume_conversion(4, "cubicfoot", "cup") 473.396 >>> volume_conversion(1, "cup", "kilolitre") 0.000236588 >>> volume_conversion(4, "wrongUnit", "litre") Traceback (most recent call last): ... ValueError: Invalid 'from_type' value: 'wrongUnit' Supported values are: cubicmeter, litre, kilolitre, gallon, cubicyard, cubicfoot, cup """ if from_type not in METRIC_CONVERSION: raise ValueError( f"Invalid 'from_type' value: {from_type!r} Supported values are:\n" + ", ".join(METRIC_CONVERSION) ) if to_type not in METRIC_CONVERSION: raise ValueError( f"Invalid 'to_type' value: {to_type!r}. Supported values are:\n" + ", ".join(METRIC_CONVERSION) ) return value * METRIC_CONVERSION[from_type].from_ * METRIC_CONVERSION[to_type].to if __name__ == "__main__": import doctest doctest.testmod()
""" Conversion of volume units. Available Units:- Cubic metre,Litre,KiloLitre,Gallon,Cubic yard,Cubic foot,cup USAGE : -> Import this file into their respective project. -> Use the function length_conversion() for conversion of volume units. -> Parameters : -> value : The number of from units you want to convert -> from_type : From which type you want to convert -> to_type : To which type you want to convert REFERENCES : -> Wikipedia reference: https://en.wikipedia.org/wiki/Cubic_metre -> Wikipedia reference: https://en.wikipedia.org/wiki/Litre -> Wikipedia reference: https://en.wiktionary.org/wiki/kilolitre -> Wikipedia reference: https://en.wikipedia.org/wiki/Gallon -> Wikipedia reference: https://en.wikipedia.org/wiki/Cubic_yard -> Wikipedia reference: https://en.wikipedia.org/wiki/Cubic_foot -> Wikipedia reference: https://en.wikipedia.org/wiki/Cup_(unit) """ from collections import namedtuple from_to = namedtuple("from_to", "from_ to") METRIC_CONVERSION = { "cubicmeter": from_to(1, 1), "litre": from_to(0.001, 1000), "kilolitre": from_to(1, 1), "gallon": from_to(0.00454, 264.172), "cubicyard": from_to(0.76455, 1.30795), "cubicfoot": from_to(0.028, 35.3147), "cup": from_to(0.000236588, 4226.75), } def volume_conversion(value: float, from_type: str, to_type: str) -> float: """ Conversion between volume units. >>> volume_conversion(4, "cubicmeter", "litre") 4000 >>> volume_conversion(1, "litre", "gallon") 0.264172 >>> volume_conversion(1, "kilolitre", "cubicmeter") 1 >>> volume_conversion(3, "gallon", "cubicyard") 0.017814279 >>> volume_conversion(2, "cubicyard", "litre") 1529.1 >>> volume_conversion(4, "cubicfoot", "cup") 473.396 >>> volume_conversion(1, "cup", "kilolitre") 0.000236588 >>> volume_conversion(4, "wrongUnit", "litre") Traceback (most recent call last): ... ValueError: Invalid 'from_type' value: 'wrongUnit' Supported values are: cubicmeter, litre, kilolitre, gallon, cubicyard, cubicfoot, cup """ if from_type not in METRIC_CONVERSION: raise ValueError( f"Invalid 'from_type' value: {from_type!r} Supported values are:\n" + ", ".join(METRIC_CONVERSION) ) if to_type not in METRIC_CONVERSION: raise ValueError( f"Invalid 'to_type' value: {to_type!r}. Supported values are:\n" + ", ".join(METRIC_CONVERSION) ) return value * METRIC_CONVERSION[from_type].from_ * METRIC_CONVERSION[to_type].to if __name__ == "__main__": import doctest doctest.testmod()
-1
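A quick property check of the table above: from_ converts a unit to cubic metres and to converts cubic metres back, so a unit-to-itself conversion should come out near 1.0. Running it (assuming METRIC_CONVERSION and volume_conversion from the file above are in scope) shows the rounding in each row, and flags that the gallon row pairs an imperial-gallon from_ factor (0.00454 m3 per gallon) with a US-gallon to factor (264.172 gallons per m3), so its round trip is about 1.2 rather than 1.0:

```python
for unit in METRIC_CONVERSION:
    # identity conversion exposes how consistent each row's two factors are
    print(unit, volume_conversion(1.0, unit, unit))
# cubicmeter, litre, kilolitre: exactly 1.0
# gallon: ~1.1993 (imperial from_ factor vs US to factor)
# cubicyard: ~1.0000, cubicfoot: ~0.9888, cup: ~1.0000
```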
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" Project Euler Problem 3: https://projecteuler.net/problem=3 Largest prime factor The prime factors of 13195 are 5, 7, 13 and 29. What is the largest prime factor of the number 600851475143? References: - https://en.wikipedia.org/wiki/Prime_number#Unique_factorization """ def solution(n: int = 600851475143) -> int: """ Returns the largest prime factor of a given number n. >>> solution(13195) 29 >>> solution(10) 5 >>> solution(17) 17 >>> solution(3.4) 3 >>> solution(0) Traceback (most recent call last): ... ValueError: Parameter n must be greater than or equal to one. >>> solution(-17) Traceback (most recent call last): ... ValueError: Parameter n must be greater than or equal to one. >>> solution([]) Traceback (most recent call last): ... TypeError: Parameter n must be int or castable to int. >>> solution("asd") Traceback (most recent call last): ... TypeError: Parameter n must be int or castable to int. """ try: n = int(n) except (TypeError, ValueError): raise TypeError("Parameter n must be int or castable to int.") if n <= 0: raise ValueError("Parameter n must be greater than or equal to one.") prime = 1 i = 2 while i * i <= n: while n % i == 0: prime = i n //= i i += 1 if n > 1: prime = n return int(prime) if __name__ == "__main__": print(f"{solution() = }")
""" Project Euler Problem 3: https://projecteuler.net/problem=3 Largest prime factor The prime factors of 13195 are 5, 7, 13 and 29. What is the largest prime factor of the number 600851475143? References: - https://en.wikipedia.org/wiki/Prime_number#Unique_factorization """ def solution(n: int = 600851475143) -> int: """ Returns the largest prime factor of a given number n. >>> solution(13195) 29 >>> solution(10) 5 >>> solution(17) 17 >>> solution(3.4) 3 >>> solution(0) Traceback (most recent call last): ... ValueError: Parameter n must be greater than or equal to one. >>> solution(-17) Traceback (most recent call last): ... ValueError: Parameter n must be greater than or equal to one. >>> solution([]) Traceback (most recent call last): ... TypeError: Parameter n must be int or castable to int. >>> solution("asd") Traceback (most recent call last): ... TypeError: Parameter n must be int or castable to int. """ try: n = int(n) except (TypeError, ValueError): raise TypeError("Parameter n must be int or castable to int.") if n <= 0: raise ValueError("Parameter n must be greater than or equal to one.") prime = 1 i = 2 while i * i <= n: while n % i == 0: prime = i n //= i i += 1 if n > 1: prime = n return int(prime) if __name__ == "__main__": print(f"{solution() = }")
-1
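A worked trace of the trial-division loop for n = 13195: i = 5 divides, leaving 2639; i = 7 divides, leaving 377; i = 13 divides, leaving 29; then i * i exceeds 29, and the leftover 29 > 1 is itself the largest prime factor. A compact restatement of the same loop, stripped of the input validation:

```python
def largest_prime_factor(n: int) -> int:
    prime, i = 1, 2
    while i * i <= n:
        while n % i == 0:  # strip every power of i, remembering the last divisor
            prime, n = i, n // i
        i += 1
    return n if n > 1 else prime  # any leftover n > 1 is prime and largest


assert largest_prime_factor(13195) == 29
assert largest_prime_factor(600851475143) == 6857
```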
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" Similarity Search : https://en.wikipedia.org/wiki/Similarity_search Similarity search is a search algorithm for finding the nearest vector from vectors, used in natural language processing. In this algorithm, it calculates distance with euclidean distance and returns a list containing two data for each vector: 1. the nearest vector 2. distance between the vector and the nearest vector (float) """ from __future__ import annotations import math import numpy as np from numpy.linalg import norm def euclidean(input_a: np.ndarray, input_b: np.ndarray) -> float: """ Calculates euclidean distance between two data. :param input_a: ndarray of first vector. :param input_b: ndarray of second vector. :return: Euclidean distance of input_a and input_b. By using math.sqrt(), result will be float. >>> euclidean(np.array([0]), np.array([1])) 1.0 >>> euclidean(np.array([0, 1]), np.array([1, 1])) 1.0 >>> euclidean(np.array([0, 0, 0]), np.array([0, 0, 1])) 1.0 """ return math.sqrt(sum(pow(a - b, 2) for a, b in zip(input_a, input_b))) def similarity_search( dataset: np.ndarray, value_array: np.ndarray ) -> list[list[list[float] | float]]: """ :param dataset: Set containing the vectors. Should be ndarray. :param value_array: vector/vectors we want to know the nearest vector from dataset. :return: Result will be a list containing 1. the nearest vector 2. distance from the vector >>> dataset = np.array([[0], [1], [2]]) >>> value_array = np.array([[0]]) >>> similarity_search(dataset, value_array) [[[0], 0.0]] >>> dataset = np.array([[0, 0], [1, 1], [2, 2]]) >>> value_array = np.array([[0, 1]]) >>> similarity_search(dataset, value_array) [[[0, 0], 1.0]] >>> dataset = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2]]) >>> value_array = np.array([[0, 0, 1]]) >>> similarity_search(dataset, value_array) [[[0, 0, 0], 1.0]] >>> dataset = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2]]) >>> value_array = np.array([[0, 0, 0], [0, 0, 1]]) >>> similarity_search(dataset, value_array) [[[0, 0, 0], 0.0], [[0, 0, 0], 1.0]] These are the errors that might occur: 1. If dimensions are different. For example, dataset has 2d array and value_array has 1d array: >>> dataset = np.array([[1]]) >>> value_array = np.array([1]) >>> similarity_search(dataset, value_array) Traceback (most recent call last): ... ValueError: Wrong input data's dimensions... dataset : 2, value_array : 1 2. If data's shapes are different. For example, dataset has shape of (3, 2) and value_array has (2, 3). We are expecting same shapes of two arrays, so it is wrong. >>> dataset = np.array([[0, 0], [1, 1], [2, 2]]) >>> value_array = np.array([[0, 0, 0], [0, 0, 1]]) >>> similarity_search(dataset, value_array) Traceback (most recent call last): ... ValueError: Wrong input data's shape... dataset : 2, value_array : 3 3. If data types are different. When trying to compare, we are expecting same types so they should be same. If not, it'll come up with errors. >>> dataset = np.array([[0, 0], [1, 1], [2, 2]], dtype=np.float32) >>> value_array = np.array([[0, 0], [0, 1]], dtype=np.int32) >>> similarity_search(dataset, value_array) # doctest: +NORMALIZE_WHITESPACE Traceback (most recent call last): ... TypeError: Input data have different datatype... dataset : float32, value_array : int32 """ if dataset.ndim != value_array.ndim: raise ValueError( f"Wrong input data's dimensions... dataset : {dataset.ndim}, " f"value_array : {value_array.ndim}" ) try: if dataset.shape[1] != value_array.shape[1]: raise ValueError( f"Wrong input data's shape... 
dataset : {dataset.shape[1]}, " f"value_array : {value_array.shape[1]}" ) except IndexError: if dataset.ndim != value_array.ndim: raise TypeError("Wrong shape") if dataset.dtype != value_array.dtype: raise TypeError( f"Input data have different datatype... dataset : {dataset.dtype}, " f"value_array : {value_array.dtype}" ) answer = [] for value in value_array: dist = euclidean(value, dataset[0]) vector = dataset[0].tolist() for dataset_value in dataset[1:]: temp_dist = euclidean(value, dataset_value) if dist > temp_dist: dist = temp_dist vector = dataset_value.tolist() answer.append([vector, dist]) return answer def cosine_similarity(input_a: np.ndarray, input_b: np.ndarray) -> float: """ Calculates cosine similarity between two data. :param input_a: ndarray of first vector. :param input_b: ndarray of second vector. :return: Cosine similarity of input_a and input_b. By using math.sqrt(), result will be float. >>> cosine_similarity(np.array([1]), np.array([1])) 1.0 >>> cosine_similarity(np.array([1, 2]), np.array([6, 32])) 0.9615239476408232 """ return np.dot(input_a, input_b) / (norm(input_a) * norm(input_b)) if __name__ == "__main__": import doctest doctest.testmod()
""" Similarity Search : https://en.wikipedia.org/wiki/Similarity_search Similarity search is a search algorithm for finding the nearest vector from vectors, used in natural language processing. In this algorithm, it calculates distance with euclidean distance and returns a list containing two data for each vector: 1. the nearest vector 2. distance between the vector and the nearest vector (float) """ from __future__ import annotations import math import numpy as np from numpy.linalg import norm def euclidean(input_a: np.ndarray, input_b: np.ndarray) -> float: """ Calculates euclidean distance between two data. :param input_a: ndarray of first vector. :param input_b: ndarray of second vector. :return: Euclidean distance of input_a and input_b. By using math.sqrt(), result will be float. >>> euclidean(np.array([0]), np.array([1])) 1.0 >>> euclidean(np.array([0, 1]), np.array([1, 1])) 1.0 >>> euclidean(np.array([0, 0, 0]), np.array([0, 0, 1])) 1.0 """ return math.sqrt(sum(pow(a - b, 2) for a, b in zip(input_a, input_b))) def similarity_search( dataset: np.ndarray, value_array: np.ndarray ) -> list[list[list[float] | float]]: """ :param dataset: Set containing the vectors. Should be ndarray. :param value_array: vector/vectors we want to know the nearest vector from dataset. :return: Result will be a list containing 1. the nearest vector 2. distance from the vector >>> dataset = np.array([[0], [1], [2]]) >>> value_array = np.array([[0]]) >>> similarity_search(dataset, value_array) [[[0], 0.0]] >>> dataset = np.array([[0, 0], [1, 1], [2, 2]]) >>> value_array = np.array([[0, 1]]) >>> similarity_search(dataset, value_array) [[[0, 0], 1.0]] >>> dataset = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2]]) >>> value_array = np.array([[0, 0, 1]]) >>> similarity_search(dataset, value_array) [[[0, 0, 0], 1.0]] >>> dataset = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2]]) >>> value_array = np.array([[0, 0, 0], [0, 0, 1]]) >>> similarity_search(dataset, value_array) [[[0, 0, 0], 0.0], [[0, 0, 0], 1.0]] These are the errors that might occur: 1. If dimensions are different. For example, dataset has 2d array and value_array has 1d array: >>> dataset = np.array([[1]]) >>> value_array = np.array([1]) >>> similarity_search(dataset, value_array) Traceback (most recent call last): ... ValueError: Wrong input data's dimensions... dataset : 2, value_array : 1 2. If data's shapes are different. For example, dataset has shape of (3, 2) and value_array has (2, 3). We are expecting same shapes of two arrays, so it is wrong. >>> dataset = np.array([[0, 0], [1, 1], [2, 2]]) >>> value_array = np.array([[0, 0, 0], [0, 0, 1]]) >>> similarity_search(dataset, value_array) Traceback (most recent call last): ... ValueError: Wrong input data's shape... dataset : 2, value_array : 3 3. If data types are different. When trying to compare, we are expecting same types so they should be same. If not, it'll come up with errors. >>> dataset = np.array([[0, 0], [1, 1], [2, 2]], dtype=np.float32) >>> value_array = np.array([[0, 0], [0, 1]], dtype=np.int32) >>> similarity_search(dataset, value_array) # doctest: +NORMALIZE_WHITESPACE Traceback (most recent call last): ... TypeError: Input data have different datatype... dataset : float32, value_array : int32 """ if dataset.ndim != value_array.ndim: raise ValueError( f"Wrong input data's dimensions... dataset : {dataset.ndim}, " f"value_array : {value_array.ndim}" ) try: if dataset.shape[1] != value_array.shape[1]: raise ValueError( f"Wrong input data's shape... 
dataset : {dataset.shape[1]}, " f"value_array : {value_array.shape[1]}" ) except IndexError: if dataset.ndim != value_array.ndim: raise TypeError("Wrong shape") if dataset.dtype != value_array.dtype: raise TypeError( f"Input data have different datatype... dataset : {dataset.dtype}, " f"value_array : {value_array.dtype}" ) answer = [] for value in value_array: dist = euclidean(value, dataset[0]) vector = dataset[0].tolist() for dataset_value in dataset[1:]: temp_dist = euclidean(value, dataset_value) if dist > temp_dist: dist = temp_dist vector = dataset_value.tolist() answer.append([vector, dist]) return answer def cosine_similarity(input_a: np.ndarray, input_b: np.ndarray) -> float: """ Calculates cosine similarity between two data. :param input_a: ndarray of first vector. :param input_b: ndarray of second vector. :return: Cosine similarity of input_a and input_b. By using math.sqrt(), result will be float. >>> cosine_similarity(np.array([1]), np.array([1])) 1.0 >>> cosine_similarity(np.array([1, 2]), np.array([6, 32])) 0.9615239476408232 """ return np.dot(input_a, input_b) / (norm(input_a) * norm(input_b)) if __name__ == "__main__": import doctest doctest.testmod()
-1
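The nested loop above makes O(q * n) euclidean() calls in pure Python; since numpy is already a dependency, the same nearest-vector search can be done with one broadcasted distance matrix. A vectorized sketch that reproduces the doctest output (input validation omitted):

```python
import numpy as np


def similarity_search_vectorized(dataset: np.ndarray, value_array: np.ndarray) -> list:
    # (q, 1, d) - (1, n, d) broadcasts to (q, n, d); norms give shape (q, n)
    distances = np.linalg.norm(value_array[:, None, :] - dataset[None, :, :], axis=2)
    nearest = distances.argmin(axis=1)  # index of the closest dataset row per query
    return [[dataset[j].tolist(), float(distances[i, j])] for i, j in enumerate(nearest)]


dataset = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2]])
value_array = np.array([[0, 0, 0], [0, 0, 1]])
print(similarity_search_vectorized(dataset, value_array))
# [[[0, 0, 0], 0.0], [[0, 0, 0], 1.0]]
```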
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
"""Implementation of GradientBoostingRegressor in sklearn using the boston dataset which is very popular for regression problem to predict house price. """ import matplotlib.pyplot as plt import pandas as pd from sklearn.datasets import load_boston from sklearn.ensemble import GradientBoostingRegressor from sklearn.metrics import mean_squared_error, r2_score from sklearn.model_selection import train_test_split def main(): # loading the dataset from the sklearn df = load_boston() print(df.keys()) # now let construct a data frame df_boston = pd.DataFrame(df.data, columns=df.feature_names) # let add the target to the dataframe df_boston["Price"] = df.target # print the first five rows using the head function print(df_boston.head()) # Summary statistics print(df_boston.describe().T) # Feature selection x = df_boston.iloc[:, :-1] y = df_boston.iloc[:, -1] # target variable # split the data with 75% train and 25% test sets. x_train, x_test, y_train, y_test = train_test_split( x, y, random_state=0, test_size=0.25 ) model = GradientBoostingRegressor( n_estimators=500, max_depth=5, min_samples_split=4, learning_rate=0.01 ) # training the model model.fit(x_train, y_train) # to see how good the model fit the data training_score = model.score(x_train, y_train).round(3) test_score = model.score(x_test, y_test).round(3) print("Training score of GradientBoosting is :", training_score) print("The test score of GradientBoosting is :", test_score) # Let us evaluation the model by finding the errors y_pred = model.predict(x_test) # The mean squared error print(f"Mean squared error: {mean_squared_error(y_test, y_pred):.2f}") # Explained variance score: 1 is perfect prediction print(f"Test Variance score: {r2_score(y_test, y_pred):.2f}") # So let's run the model against the test data fig, ax = plt.subplots() ax.scatter(y_test, y_pred, edgecolors=(0, 0, 0)) ax.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], "k--", lw=4) ax.set_xlabel("Actual") ax.set_ylabel("Predicted") ax.set_title("Truth vs Predicted") # this show function will display the plotting plt.show() if __name__ == "__main__": main()
"""Implementation of GradientBoostingRegressor in sklearn using the boston dataset which is very popular for regression problem to predict house price. """ import matplotlib.pyplot as plt import pandas as pd from sklearn.datasets import load_boston from sklearn.ensemble import GradientBoostingRegressor from sklearn.metrics import mean_squared_error, r2_score from sklearn.model_selection import train_test_split def main(): # loading the dataset from the sklearn df = load_boston() print(df.keys()) # now let construct a data frame df_boston = pd.DataFrame(df.data, columns=df.feature_names) # let add the target to the dataframe df_boston["Price"] = df.target # print the first five rows using the head function print(df_boston.head()) # Summary statistics print(df_boston.describe().T) # Feature selection x = df_boston.iloc[:, :-1] y = df_boston.iloc[:, -1] # target variable # split the data with 75% train and 25% test sets. x_train, x_test, y_train, y_test = train_test_split( x, y, random_state=0, test_size=0.25 ) model = GradientBoostingRegressor( n_estimators=500, max_depth=5, min_samples_split=4, learning_rate=0.01 ) # training the model model.fit(x_train, y_train) # to see how good the model fit the data training_score = model.score(x_train, y_train).round(3) test_score = model.score(x_test, y_test).round(3) print("Training score of GradientBoosting is :", training_score) print("The test score of GradientBoosting is :", test_score) # Let us evaluation the model by finding the errors y_pred = model.predict(x_test) # The mean squared error print(f"Mean squared error: {mean_squared_error(y_test, y_pred):.2f}") # Explained variance score: 1 is perfect prediction print(f"Test Variance score: {r2_score(y_test, y_pred):.2f}") # So let's run the model against the test data fig, ax = plt.subplots() ax.scatter(y_test, y_pred, edgecolors=(0, 0, 0)) ax.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], "k--", lw=4) ax.set_xlabel("Actual") ax.set_ylabel("Predicted") ax.set_title("Truth vs Predicted") # this show function will display the plotting plt.show() if __name__ == "__main__": main()
-1
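load_boston was deprecated in scikit-learn 1.0 and removed in 1.2, so the script above fails on current releases. A minimal sketch of the same pipeline on the California housing dataset; swapping in fetch_california_housing is my substitution (any regression dataset works), and note it downloads the data on first use:

```python
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

data = fetch_california_housing()
x_train, x_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0, test_size=0.25
)
model = GradientBoostingRegressor(
    n_estimators=500, max_depth=5, min_samples_split=4, learning_rate=0.01
)
model.fit(x_train, y_train)  # same hyperparameters as the script above
print("The test score of GradientBoosting is :", round(model.score(x_test, y_test), 3))
```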
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
-1
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" Calculate the nth Catalan number Source: https://en.wikipedia.org/wiki/Catalan_number """ def catalan(number: int) -> int: """ :param number: nth catalan number to calculate :return: the nth catalan number Note: A catalan number is only defined for positive integers >>> catalan(5) 14 >>> catalan(0) Traceback (most recent call last): ... ValueError: Input value of [number=0] must be > 0 >>> catalan(-1) Traceback (most recent call last): ... ValueError: Input value of [number=-1] must be > 0 >>> catalan(5.0) Traceback (most recent call last): ... TypeError: Input value of [number=5.0] must be an integer """ if not isinstance(number, int): raise TypeError(f"Input value of [number={number}] must be an integer") if number < 1: raise ValueError(f"Input value of [number={number}] must be > 0") current_number = 1 for i in range(1, number): current_number *= 4 * i - 2 current_number //= i + 1 return current_number if __name__ == "__main__": import doctest doctest.testmod()
""" Calculate the nth Catalan number Source: https://en.wikipedia.org/wiki/Catalan_number """ def catalan(number: int) -> int: """ :param number: nth catalan number to calculate :return: the nth catalan number Note: A catalan number is only defined for positive integers >>> catalan(5) 14 >>> catalan(0) Traceback (most recent call last): ... ValueError: Input value of [number=0] must be > 0 >>> catalan(-1) Traceback (most recent call last): ... ValueError: Input value of [number=-1] must be > 0 >>> catalan(5.0) Traceback (most recent call last): ... TypeError: Input value of [number=5.0] must be an integer """ if not isinstance(number, int): raise TypeError(f"Input value of [number={number}] must be an integer") if number < 1: raise ValueError(f"Input value of [number={number}] must be > 0") current_number = 1 for i in range(1, number): current_number *= 4 * i - 2 current_number //= i + 1 return current_number if __name__ == "__main__": import doctest doctest.testmod()
-1
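The loop above is the product form of the Catalan recurrence; note that catalan(5) == 14, so catalan(number) is C_{number - 1} in the usual 0-indexed convention. A self-contained cross-check against the closed form C_k = comb(2k, k) // (k + 1), assuming Python 3.8+ for math.comb:

```python
from math import comb


def catalan_closed_form(number: int) -> int:
    # shift k by one to match the 1-indexed convention used above
    k = number - 1
    return comb(2 * k, k) // (k + 1)


print([catalan_closed_form(n) for n in range(1, 8)])  # [1, 1, 2, 5, 14, 42, 132]
```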
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
import os from string import ascii_letters LETTERS_AND_SPACE = ascii_letters + " \t\n" def load_dictionary() -> dict[str, None]: path = os.path.split(os.path.realpath(__file__)) english_words: dict[str, None] = {} with open(path[0] + "/dictionary.txt") as dictionary_file: for word in dictionary_file.read().split("\n"): english_words[word] = None return english_words ENGLISH_WORDS = load_dictionary() def get_english_count(message: str) -> float: message = message.upper() message = remove_non_letters(message) possible_words = message.split() matches = len([word for word in possible_words if word in ENGLISH_WORDS]) return float(matches) / len(possible_words) def remove_non_letters(message: str) -> str: return "".join(symbol for symbol in message if symbol in LETTERS_AND_SPACE) def is_english( message: str, word_percentage: int = 20, letter_percentage: int = 85 ) -> bool: """ >>> is_english('Hello World') True >>> is_english('llold HorWd') False """ words_match = get_english_count(message) * 100 >= word_percentage num_letters = len(remove_non_letters(message)) message_letters_percentage = (float(num_letters) / len(message)) * 100 letters_match = message_letters_percentage >= letter_percentage return words_match and letters_match if __name__ == "__main__": import doctest doctest.testmod()
import os from string import ascii_letters LETTERS_AND_SPACE = ascii_letters + " \t\n" def load_dictionary() -> dict[str, None]: path = os.path.split(os.path.realpath(__file__)) english_words: dict[str, None] = {} with open(path[0] + "/dictionary.txt") as dictionary_file: for word in dictionary_file.read().split("\n"): english_words[word] = None return english_words ENGLISH_WORDS = load_dictionary() def get_english_count(message: str) -> float: message = message.upper() message = remove_non_letters(message) possible_words = message.split() matches = len([word for word in possible_words if word in ENGLISH_WORDS]) return float(matches) / len(possible_words) def remove_non_letters(message: str) -> str: return "".join(symbol for symbol in message if symbol in LETTERS_AND_SPACE) def is_english( message: str, word_percentage: int = 20, letter_percentage: int = 85 ) -> bool: """ >>> is_english('Hello World') True >>> is_english('llold HorWd') False """ words_match = get_english_count(message) * 100 >= word_percentage num_letters = len(remove_non_letters(message)) message_letters_percentage = (float(num_letters) / len(message)) * 100 letters_match = message_letters_percentage >= letter_percentage return words_match and letters_match if __name__ == "__main__": import doctest doctest.testmod()
-1
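One edge case in the file above: get_english_count divides by len(possible_words) and is_english divides by len(message), so an empty string (or one containing no letters) raises ZeroDivisionError. A guarded variant of the counting helper, assuming the ENGLISH_WORDS and remove_non_letters defined above are in scope:

```python
def get_english_count_safe(message: str) -> float:
    possible_words = remove_non_letters(message.upper()).split()
    if not possible_words:  # avoid ZeroDivisionError on letter-free input
        return 0.0
    matches = sum(word in ENGLISH_WORDS for word in possible_words)
    return matches / len(possible_words)
```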
TheAlgorithms/Python
8,007
Upgrade to flake8 v6
### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
cclauss
"2022-11-29T14:16:34Z"
"2022-11-29T15:56:42Z"
f32d611689dc72bda67f1c4636ab1599c60d27a4
08c22457058207dc465b9ba9fd95659d33b3f1dd
Upgrade to flake8 v6. ### Describe your change: The [new version of flake8](https://pypi.org/project/flake8/) complains about misplaced comments in `.flake8` `flake8-broken-line` is not compatible with flake8 v6 -- wemake-services/flake8-broken-line#280 * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [ ] All new Python files are placed inside an existing directory. * [ ] All filenames are in all lowercase characters with no spaces or dashes. * [ ] All functions and variable names follow Python naming conventions. * [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
""" Project Euler Problem 2: https://projecteuler.net/problem=2 Even Fibonacci Numbers Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be: 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ... By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms. References: - https://en.wikipedia.org/wiki/Fibonacci_number """ def solution(n: int = 4000000) -> int: """ Returns the sum of all even fibonacci sequence elements that are lower or equal to n. >>> solution(10) 10 >>> solution(15) 10 >>> solution(2) 2 >>> solution(1) 0 >>> solution(34) 44 """ fib = [0, 1] i = 0 while fib[i] <= n: fib.append(fib[i] + fib[i + 1]) if fib[i + 2] > n: break i += 1 total = 0 for j in range(len(fib) - 1): if fib[j] % 2 == 0: total += fib[j] return total if __name__ == "__main__": print(f"{solution() = }")
""" Project Euler Problem 2: https://projecteuler.net/problem=2 Even Fibonacci Numbers Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be: 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ... By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms. References: - https://en.wikipedia.org/wiki/Fibonacci_number """ def solution(n: int = 4000000) -> int: """ Returns the sum of all even fibonacci sequence elements that are lower or equal to n. >>> solution(10) 10 >>> solution(15) 10 >>> solution(2) 2 >>> solution(1) 0 >>> solution(34) 44 """ fib = [0, 1] i = 0 while fib[i] <= n: fib.append(fib[i] + fib[i + 1]) if fib[i + 2] > n: break i += 1 total = 0 for j in range(len(fib) - 1): if fib[j] % 2 == 0: total += fib[j] return total if __name__ == "__main__": print(f"{solution() = }")
-1
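The list-building solution above can also be done in constant space, using the fact that every third Fibonacci number is even, so the even terms satisfy E_k = 4 * E_{k-1} + E_{k-2} with E_1 = 2 and E_2 = 8. A sketch:

```python
def solution_even_only(n: int = 4000000) -> int:
    total, a, b = 0, 2, 8  # a and b walk the even Fibonacci numbers 2, 8, 34, ...
    while a <= n:
        total += a
        a, b = b, 4 * b + a
    return total


assert solution_even_only(10) == 10  # 2 + 8
assert solution_even_only(34) == 44  # 2 + 8 + 34
assert solution_even_only(1) == 0
```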