task_url | task_name | task_description | language_url | language_name | code |
---|---|---|---|---|---|
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n log n) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log² n) sorts
Shell Sort
O(n²) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
Sort letters of a string |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n log n).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so it can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #Java | Java | public static void heapSort(int[] a){
int count = a.length;
//first place a in max-heap order
heapify(a, count);
int end = count - 1;
while(end > 0){
//swap the root(maximum value) of the heap with the
//last element of the heap
int tmp = a[end];
a[end] = a[0];
a[0] = tmp;
//put the heap back in max-heap order
siftDown(a, 0, end - 1);
//decrement the size of the heap so that the previous
//max value will stay in its proper place
end--;
}
}
public static void heapify(int[] a, int count){
//start is assigned the index in a of the last parent node
int start = (count - 2) / 2; //binary heap
while(start >= 0){
//sift down the node at index start to the proper place
//such that all nodes below the start index are in heap
//order
siftDown(a, start, count - 1);
start--;
}
//after sifting down the root all nodes/elements are in heap order
}
public static void siftDown(int[] a, int start, int end){
//end represents the limit of how far down the heap to sift
int root = start;
while((root * 2 + 1) <= end){ //While the root has at least one child
int child = root * 2 + 1; //root*2+1 points to the left child
//if the child has a sibling and the child's value is less than its sibling's...
if(child + 1 <= end && a[child] < a[child + 1])
child = child + 1; //... then point to the right child instead
if(a[root] < a[child]){ //out of max-heap order
int tmp = a[root];
a[root] = a[child];
a[child] = tmp;
root = child; //repeat to continue sifting down the child now
}else
return;
}
} |
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
The merge sort is a recursive sort of order n*log(n).
It is notable for having a worst case and average complexity of O(n*log(n)), and a best case complexity of O(n) (for pre-sorted input).
The basic idea is to split the collection into smaller groups by halving it until the groups only have one element or no elements (which are both entirely sorted groups).
Then merge the groups back together so that their elements are in order.
This is how the algorithm gets its divide and conquer description.
Task
Write a function to sort a collection of integers using the merge sort.
The merge sort algorithm comes in two parts:
a sort function and
a merge function
The functions in pseudocode look like this:
function mergesort(m)
var list left, right, result
if length(m) ≤ 1
return m
else
var middle = length(m) / 2
for each x in m up to middle - 1
add x to left
for each x in m at and after middle
add x to right
left = mergesort(left)
right = mergesort(right)
if last(left) ≤ first(right)
append right to left
return left
result = merge(left, right)
return result
function merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
append rest(left) to result
if length(right) > 0
append rest(right) to result
return result
See also
the Wikipedia entry: merge sort
Note: better performance can be expected if, rather than recursing until length(m) ≤ 1, an insertion sort is used for length(m) smaller than some threshold larger than 1. However, this complicates the example code, so it is not shown here.
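Since the Forth entry below is fairly terse, here is a minimal Python sketch that follows the pseudocode above directly (an illustrative addition, not one of the task page's listed solutions):

```python
def merge(left, right):
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    result.extend(left[i:])     # at most one of these two is non-empty
    result.extend(right[j:])
    return result

def mergesort(m):
    if len(m) <= 1:
        return m
    middle = len(m) // 2
    left = mergesort(m[:middle])
    right = mergesort(m[middle:])
    if left[-1] <= right[0]:    # halves already in order: just concatenate
        return left + right
    return merge(left, right)

print(mergesort([8, 1, 5, 3, 9, 0, 2, 7, 6, 4]))   # [0, 1, 2, ..., 9]
```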
| #Forth | Forth | : merge-step ( right mid left -- right mid+ left+ )
over @ over @ < if
over @ >r
2dup - over dup cell+ rot move
r> over !
>r cell+ 2dup = if rdrop dup else r> then
then cell+ ;
: merge ( right mid left -- right left )
dup >r begin 2dup > while merge-step repeat 2drop r> ;
: mid ( l r -- mid ) over - 2/ cell negate and + ;
: mergesort ( right left -- right left )
2dup cell+ <= if exit then
swap 2dup mid recurse rot recurse merge ;
: sort ( addr len -- ) cells over + swap mergesort 2drop ;
create test 8 , 1 , 5 , 3 , 9 , 0 , 2 , 7 , 6 , 4 ,
: .array ( addr len -- ) 0 do dup i cells + @ . loop drop ;
test 10 2dup sort .array \ 0 1 2 3 4 5 6 7 8 9 |
http://rosettacode.org/wiki/Sorting_algorithms/Selection_sort | Sorting algorithms/Selection sort |
Task
Sort an array (or list) of elements using the Selection sort algorithm.
It works as follows:
First find the smallest element in the array and exchange it with the element in the first position, then find the second smallest element and exchange it with the element in the second position, and continue in this way until the entire array is sorted.
Its asymptotic complexity is O(n²), making it inefficient on large arrays.
Its primary use is when writing data is very expensive (slow) compared to reading, e.g. writing to flash memory or EEPROM.
No other sorting algorithm has less data movement.
References
Rosetta Code: O (complexity).
Wikipedia: Selection sort.
Wikipedia: Big O notation.
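To make the low-write property concrete, here is a small in-place Python sketch (an illustration added for clarity; the Racket entry below builds a new list instead of swapping in place):

```python
def selection_sort(a):
    for i in range(len(a) - 1):
        # index of the smallest element in the unsorted tail a[i:]
        m = min(range(i, len(a)), key=a.__getitem__)
        if m != i:
            a[i], a[m] = a[m], a[i]   # at most one swap (two writes) per position
    return a

print(selection_sort([22, 7, 2, -5, 8, 4]))   # [-5, 2, 4, 7, 8, 22]
```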
| #Racket | Racket |
#lang racket
(define (selection-sort xs)
(cond [(empty? xs) '()]
[else (define x0 (apply min xs))
(cons x0 (selection-sort (remove x0 xs)))]))
|
http://rosettacode.org/wiki/Sorting_algorithms/Selection_sort | Sorting algorithms/Selection sort |
Task
Sort an array (or list) of elements using the Selection sort algorithm.
It works as follows:
First find the smallest element in the array and exchange it with the element in the first position, then find the second smallest element and exchange it with the element in the second position, and continue in this way until the entire array is sorted.
Its asymptotic complexity is O(n²), making it inefficient on large arrays.
Its primary use is when writing data is very expensive (slow) compared to reading, e.g. writing to flash memory or EEPROM.
No other sorting algorithm has less data movement.
References
Rosetta Code: O (complexity).
Wikipedia: Selection sort.
Wikipedia: Big O notation.
| #Raku | Raku | sub selection_sort ( @a is copy ) {
for 0 ..^ @a.end -> $i {
my $min = [ $i+1 .. @a.end ].min: { @a[$_] };
@a[$i, $min] = @a[$min, $i] if @a[$i] > @a[$min];
}
return @a;
}
my @data = 22, 7, 2, -5, 8, 4;
say 'input = ' ~ @data;
say 'output = ' ~ @data.&selection_sort;
|
http://rosettacode.org/wiki/Soundex | Soundex | Soundex is an algorithm for creating indices for words based on their pronunciation.
Task
The goal is for homophones to be encoded to the same representation so that they can be matched despite minor differences in spelling (from the soundex Wikipedia article).
Caution
There is a major issue in many of the implementations concerning the separation of two consonants that have the same soundex code (see the official rules [1]). So check, for instance, whether Ashcraft is coded to A-261.
If a vowel (A, E, I, O, U) separates two consonants that have the same soundex code, the consonant to the right of the vowel is coded. Tymczak is coded as T-522 (T, 5 for the M, 2 for the C, Z ignored (see "Side-by-Side" rule above), 2 for the K). Since the vowel "A" separates the Z and K, the K is coded.
If "H" or "W" separate two consonants that have the same soundex code, the consonant to the right of the vowel is not coded. Example: Ashcraft is coded A-261 (A, 2 for the S, C ignored, 6 for the R, 1 for the F). It is not coded A-226.
| #SenseTalk | SenseTalk | set names to ["Aschraft","Ashcroft","DiBenedetto","Euler","Gauss","Ghosh","Gutierrez",
"Heilbronn","Hilbert","Honeyman","Jackson","Lee","LeGrand","Lissajous","Lloyd",
"Moses","Pfister","Robert","Rupert","Rubin","Tymczak","VanDeusen","Van de Graaff","Wheaton"]
repeat with each name in names
put !"[[name]] --> [[name's soundex]]"
end repeat
to handle soundex of aName
delete space from aName -- remove spaces
put the first character of aName into soundex
replace every occurrence of <{letter:char},{:letter}> with "{:letter}" in aName -- collapse double letters
delete "H" from aName
delete "W" from aName
set prevCode to 0
repeat with each character ch in aName
if ch is in ...
... "BFPV" then set code to 1
... "CGJKQSXZ" then set code to 2
... "DT" then set code to 3
... "L" then set code to 4
... "MN" then set code to 5
... "R" then set code to 6
... else set code to 0
end if
if code isn't 0 and the counter > 1 and code isn't prevCode then put code after soundex
put code into prevCode
end repeat
set soundex to the first 4 chars of (soundex & "000") -- fill in with 0's as needed
set prefix to <("Van" or "Con" or "De" or "Di" or "La" or "Le") followed by a capital letter>
if aName begins with prefix then
put aName into nameWithoutPrefix
delete the first occurrence of prefix in nameWithoutPrefix
return [soundex, soundex of nameWithoutPrefix]
end if
return soundex
end soundex
|
http://rosettacode.org/wiki/Stack | Stack |
Data Structure
This illustrates a data structure, a means of storing data within a program.
You may see other such structures in the Data Structures category.
A stack is a container of elements with last in, first out access policy. Sometimes it is also called a LIFO.
The stack is accessed through its top.
The basic stack operations are:
push stores a new element onto the stack top;
pop returns the last pushed stack element, while removing it from the stack;
empty tests if the stack contains no elements.
Sometimes the last pushed stack element is made accessible for immutable access (for read) or mutable access (for write):
top (sometimes called peek to keep with the p theme) returns the topmost element without modifying the stack.
Stacks allow a very simple hardware implementation.
They are common in almost all processors.
In programming, stacks are also very popular for their way (LIFO) of resource management, usually memory.
Nested scopes of language objects are naturally implemented by a stack (sometimes by multiple stacks).
This is a classical way to implement local variables of a re-entrant or recursive subprogram. Stacks are also used to describe a formal computational framework.
See stack machine.
Many algorithms in pattern matching, compiler construction (e.g. recursive descent parsers), and machine learning (e.g. based on tree traversal) have a natural representation in terms of stacks.
Task
Create a stack supporting the basic operations: push, pop, empty.
See also
Array
Associative array: Creation, Iteration
Collections
Compound data type
Doubly-linked list: Definition, Element definition, Element insertion, List Traversal, Element Removal
Linked list
Queue: Definition, Usage
Set
Singly-linked list: Element definition, Element insertion, List Traversal, Element Removal
Stack
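For reference, the same three operations on a plain Python list (a minimal illustrative sketch added here, not one of the task's listed entries):

```python
stack = []            # a plain list serves as a stack
stack.append(42)      # push
top = stack.pop()     # pop: returns and removes the last pushed element
empty = not stack     # empty: True when the stack holds no elements
```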
| #Raku | Raku | my @stack; # just an array
@stack.push($elem); # add $elem to the end of @stack
$elem = @stack.pop; # get the last element back
@stack.elems == 0 # true, because the stack is empty
not @stack # also true because @stack is false |
http://rosettacode.org/wiki/Spiral_matrix | Spiral matrix | Task
Produce a spiral array.
A spiral array is a square arrangement of the first N² natural numbers, where the
numbers increase sequentially as you go around the edges of the array spiraling inwards.
For example, given 5, produce this array:
0 1 2 3 4
15 16 17 18 5
14 23 24 19 6
13 22 21 20 7
12 11 10 9 8
Related tasks
Zig-zag matrix
Identity_matrix
Ulam_spiral_(for_primes)
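The Z80 listing below is quite low level; for orientation, here is a short high-level Python sketch of the same spiral fill (an illustrative addition, using a simple "turn clockwise when blocked" rule, which is one common approach and not necessarily the method used by the assembly entry):

```python
def spiral(n):
    m = [[None] * n for _ in range(n)]
    r = c = 0
    dr, dc = 0, 1                          # start by moving right along the top row
    for i in range(n * n):
        m[r][c] = i
        nr, nc = r + dr, c + dc
        # turn clockwise when the next cell is off the grid or already filled
        if not (0 <= nr < n and 0 <= nc < n and m[nr][nc] is None):
            dr, dc = dc, -dr
            nr, nc = r + dr, c + dc
        r, c = nr, nc
    return m

for row in spiral(5):
    print(" ".join(f"{v:2d}" for v in row))
```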
| #Z80_Assembly | Z80 Assembly | ; Spiral matrix in Z80 assembly (for CP/M OS - you can use `tnylpo` or `z88dk-ticks` on PC)
OPT --syntax=abf : OUTPUT "spiralmt.com" ; asm syntax for z00m's variant of sjasmplus
ORG $100
spiral_matrix:
ld a,5 ; N matrix size (argument for the code) (valid range: 1..150)
; setup phase
push af
ld l,a
ld h,0
add hl,hl
ld (delta_d),hl ; down-direction address delta = +N*2
neg
ld l,a
ld h,$FF
add hl,hl
ld (delta_u),hl ; up-direction address delta = -N*2
neg
ld hl,matrix
ld de,2 ; delta_r value to move right in matrix
ld bc,0 ; starting value
dec a ; first sequences will be N-1 long
jr z,.finish ; 1x1 doesn't need any sequence, just set last element
call set_sequence ; initial entry sequence has N-1 elements (same as two more)
; main loop - do twice same length sequence, then decrement length, until zero
.loop:
call set_sequence_twice
dec a
jr nz,.loop
.finish: ; whole spiral is set except last element, set it now
ld (hl),c
inc hl
ld (hl),b
; print matrix - reading it by POP HL (destructive, plus some memory ahead of matrix too)
pop de ; d = N
ld (.oldsp+1),sp
ld sp,matrix ; set stack to beginning of matrix (call/push does damage memory ahead)
ld c,d ; c = N (lines counter)
.print_rows:
ld b,d ; b = N (value per row counter)
.print_row:
pop hl
push de
push bc
call print_hl
pop bc
pop de
djnz .print_row
push de
call print_crlf
pop de
dec c
jr nz,.print_rows
.oldsp:
ld sp,0
rst 0 ; return to CP/M
print_crlf:
ld e,10
call print_char
ld e,13
jr print_char
print_hl:
ld b,' '
ld e,b
call print_char
ld de,-10000
call extract_digit
ld de,-1000
call extract_digit
ld de,-100
call extract_digit
ld de,-10
call extract_digit
ld a,l
print_digit:
ld b,'0'
add a,b
ld e,a
print_char:
push bc
push hl
ld c,2
call 5
pop hl
pop bc
ret
extract_digit:
ld a,-1
.digit_loop:
inc a
add hl,de
jr c,.digit_loop
sbc hl,de
or a
jr nz,print_digit
ld e,b
jr print_char
set_sequence_twice:
call set_sequence
set_sequence:
; A = length, HL = next_to_matrix, DE = delta to advance hl, BC = next_value
push af
.set_loop:
ld (hl),c
inc hl
ld (hl),b
dec hl ; [HL] = BC
add hl,de ; HL += DE
inc bc ; ++BC
dec a
jr nz,.set_loop
push hl ; change DE for next direction (right->down->left->up->right->...)
.d: ld hl,delta_d ; self-modify-code: pointer to next delta
ld e,(hl)
inc hl
ld d,(hl) ; de = address delta for next sequence
inc hl
ld a,low delta_u+2 ; if hl == delta_u+2 => reset it back to delta_r
cp l
jr nz,.next_delta
ld hl,delta_r
.next_delta:
ld (.d+1),hl ; self modify code pointer for next delta value
pop hl
pop af
ret
delta_r: dw +2 ; value to add to move right in matrix
delta_d: dw 0 ; value to add to move down in matrix (set to +N*2)
delta_l: dw -2 ; value to add to move left in matrix
delta_u: dw 0 ; value to add to move up in matrix (set to -N*2)
matrix:
; following memory is used for NxN matrix of uint16_t values (150x150 needs 45000 bytes) |
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
This page uses content from Wikipedia. The original article was at Quicksort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Task
Sort an array (or list) elements using the quicksort algorithm.
The elements must have a strict weak order and the index of the array can be of any discrete type.
For languages where this is not possible, sort an array of integers.
Quicksort, also known as partition-exchange sort, uses these steps.
Choose any element of the array to be the pivot.
Divide all other elements (except the pivot) into two partitions.
All elements less than the pivot must be in the first partition.
All elements greater than the pivot must be in the second partition.
Use recursion to sort both partitions.
Join the first sorted partition, the pivot, and the second sorted partition.
The best pivot creates partitions of equal length (or lengths differing by 1).
The worst pivot creates an empty partition (for example, if the pivot is the first or last element of a sorted array).
The run-time of Quicksort ranges from O(n log n) with the best pivots, to O(n²) with the worst pivots, where n is the number of elements in the array.
This is a simple quicksort algorithm, adapted from Wikipedia.
function quicksort(array)
less, equal, greater := three empty arrays
if length(array) > 1
pivot := select any element of array
for each x in array
if x < pivot then add x to less
if x = pivot then add x to equal
if x > pivot then add x to greater
quicksort(less)
quicksort(greater)
array := concatenate(less, equal, greater)
A better quicksort algorithm works in place, by swapping elements within the array, to avoid the memory allocation of more arrays.
function quicksort(array)
if length(array) > 1
pivot := select any element of array
left := first index of array
right := last index of array
while left ≤ right
while array[left] < pivot
left := left + 1
while array[right] > pivot
right := right - 1
if left ≤ right
swap array[left] with array[right]
left := left + 1
right := right - 1
quicksort(array from first index to right)
quicksort(array from left to last index)
Quicksort has a reputation as the fastest sort. Optimized variants of quicksort are common features of many languages and libraries. One often contrasts quicksort with merge sort, because both sorts have an average time of O(n log n).
"On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times." — http://perldoc.perl.org/sort.html
Quicksort is at one end of the spectrum of divide-and-conquer algorithms, with merge sort at the opposite end.
Quicksort is a conquer-then-divide algorithm, which does most of the work during the partitioning and the recursive calls. The subsequent reassembly of the sorted partitions involves trivial effort.
Merge sort is a divide-then-conquer algorithm. The partitioning happens in a trivial way, by splitting the input array in half. Most of the work happens during the recursive calls and the merge phase.
With quicksort, every element in the first partition is less than or equal to every element in the second partition. Therefore, the merge phase of quicksort is so trivial that it needs no mention!
This task has not specified whether to allocate new arrays, or sort in place. This task also has not specified how to choose the pivot element. (Common ways are to choose the first element, the middle element, or the median of three elements.) Thus there is a variety among the following implementations.
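As a concrete rendering of the in-place pseudocode above, here is a short Python sketch that uses the middle element as the pivot (an illustrative addition; the task's own entries follow, starting with IDL):

```python
def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    pivot = a[(lo + hi) // 2]          # middle element as pivot
    left, right = lo, hi
    while left <= right:
        while a[left] < pivot:
            left += 1
        while a[right] > pivot:
            right -= 1
        if left <= right:
            a[left], a[right] = a[right], a[left]
            left += 1
            right -= 1
    quicksort(a, lo, right)            # sort the two partitions recursively
    quicksort(a, left, hi)
    return a

print(quicksort([4, 65, 2, -31, 0, 99, 2, 83, 782, 1]))
```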
| #IDL | IDL | function qs, arr
if (count = n_elements(arr)) lt 2 then return,arr
pivot = total(arr) / count ; use the average for want of a better choice
return,[qs(arr[where(arr le pivot)]),qs(arr[where(arr gt pivot)])]
end |
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
This page uses content from Wikipedia. The original article was at Insertion sort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
An O(n²) sorting algorithm which moves elements one at a time into the correct position.
The algorithm consists of inserting one element at a time into the previously sorted part of the array, moving higher ranked elements up as necessary.
To start off, the first (or smallest, or any arbitrary) element of the unsorted array is considered to be the sorted part.
Although insertion sort is an O(n²) algorithm, its simplicity, low overhead, good locality of reference and efficiency make it a good choice in two cases:
small n,
as the final finishing-off algorithm for O(n log n) algorithms such as mergesort and quicksort.
The algorithm is as follows (from wikipedia):
function insertionSort(array A)
for i from 1 to length[A]-1 do
value := A[i]
j := i-1
while j >= 0 and A[j] > value do
A[j+1] := A[j]
j := j-1
done
A[j+1] = value
done
Writing the algorithm for integers will suffice.
| #Haxe | Haxe | class InsertionSort {
@:generic
public static function sort<T>(arr:Array<T>) {
for (i in 1...arr.length) {
var value = arr[i];
var j = i - 1;
while (j >= 0 && Reflect.compare(arr[j], value) > 0) {
arr[j + 1] = arr[j--];
}
arr[j + 1] = value;
}
}
}
class Main {
static function main() {
var integerArray = [1, 10, 2, 5, -1, 5, -19, 4, 23, 0];
var floatArray = [1.0, -3.2, 5.2, 10.8, -5.7, 7.3,
3.5, 0.0, -4.1, -9.5];
var stringArray = ['We', 'hold', 'these', 'truths', 'to',
'be', 'self-evident', 'that', 'all',
'men', 'are', 'created', 'equal'];
Sys.println('Unsorted Integers: ' + integerArray);
InsertionSort.sort(integerArray);
Sys.println('Sorted Integers: ' + integerArray);
Sys.println('Unsorted Floats: ' + floatArray);
InsertionSort.sort(floatArray);
Sys.println('Sorted Floats: ' + floatArray);
Sys.println('Unsorted Strings: ' + stringArray);
InsertionSort.sort(stringArray);
Sys.println('Sorted Strings: ' + stringArray);
}
} |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n log n).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so it can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #JavaScript | JavaScript |
function heapSort(arr) {
heapify(arr)
let end = arr.length - 1
while (end > 0) {
[arr[end], arr[0]] = [arr[0], arr[end]]
end--
siftDown(arr, 0, end)
}
}
function heapify(arr) {
let start = Math.floor(arr.length/2) - 1
while (start >= 0) {
siftDown(arr, start, arr.length - 1)
start--
}
}
function siftDown(arr, startPos, endPos) {
let rootPos = startPos
while (rootPos * 2 + 1 <= endPos) {
let childPos = rootPos * 2 + 1
if (childPos + 1 <= endPos && arr[childPos] < arr[childPos + 1]) {
childPos++
}
if (arr[rootPos] < arr[childPos]) {
[arr[rootPos], arr[childPos]] = [arr[childPos], arr[rootPos]]
rootPos = childPos
} else {
return
}
}
}
test('rosettacode', () => {
const arr = [12, 11, 15, 10, 9, 1, 2, 3, 13, 14, 4, 5, 6, 7, 8]
heapSort(arr)
expect(arr).toStrictEqual([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15])
}) |
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
The merge sort is a recursive sort of order n*log(n).
It is notable for having a worst case and average complexity of O(n*log(n)), and a best case complexity of O(n) (for pre-sorted input).
The basic idea is to split the collection into smaller groups by halving it until the groups only have one element or no elements (which are both entirely sorted groups).
Then merge the groups back together so that their elements are in order.
This is how the algorithm gets its divide and conquer description.
Task
Write a function to sort a collection of integers using the merge sort.
The merge sort algorithm comes in two parts:
a sort function and
a merge function
The functions in pseudocode look like this:
function mergesort(m)
var list left, right, result
if length(m) ≤ 1
return m
else
var middle = length(m) / 2
for each x in m up to middle - 1
add x to left
for each x in m at and after middle
add x to right
left = mergesort(left)
right = mergesort(right)
if last(left) ≤ first(right)
append right to left
return left
result = merge(left, right)
return result
function merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
append rest(left) to result
if length(right) > 0
append rest(right) to result
return result
See also
the Wikipedia entry: merge sort
Note: better performance can be expected if, rather than recursing until length(m) ≤ 1, an insertion sort is used for length(m) smaller than some threshold larger than 1. However, this complicates the example code, so it is not shown here.
| #Fortran | Fortran | program TestMergeSort
implicit none
integer, parameter :: N = 8
integer :: A(N) = (/ 1, 5, 2, 7, 3, 9, 4, 6 /)
integer :: work((size(A) + 1) / 2)
write(*,'(A,/,10I3)')'Unsorted array :',A
call MergeSort(A, work)
write(*,'(A,/,10I3)')'Sorted array :',A
contains
subroutine merge(A, B, C)
implicit none
! The target attribute is necessary, because A or B might overlap with C.
integer, target, intent(in) :: A(:), B(:)
integer, target, intent(inout) :: C(:)
integer :: i, j, k
if (size(A) + size(B) > size(C)) stop(1)
i = 1; j = 1
do k = 1, size(C)
if (i <= size(A) .and. j <= size(B)) then
if (A(i) <= B(j)) then
C(k) = A(i)
i = i + 1
else
C(k) = B(j)
j = j + 1
end if
else if (i <= size(A)) then
C(k) = A(i)
i = i + 1
else if (j <= size(B)) then
C(k) = B(j)
j = j + 1
end if
end do
end subroutine merge
subroutine swap(x, y)
implicit none
integer, intent(inout) :: x, y
integer :: tmp
tmp = x; x = y; y = tmp
end subroutine
recursive subroutine MergeSort(A, work)
implicit none
integer, intent(inout) :: A(:)
integer, intent(inout) :: work(:)
integer :: half
half = (size(A) + 1) / 2
if (size(A) < 2) then
continue
else if (size(A) == 2) then
if (A(1) > A(2)) then
call swap(A(1), A(2))
end if
else
call MergeSort(A( : half), work)
call MergeSort(A(half + 1 :), work)
if (A(half) > A(half + 1)) then
work(1 : half) = A(1 : half)
call merge(work(1 : half), A(half + 1:), A)
endif
end if
end subroutine MergeSort
end program TestMergeSort
|
http://rosettacode.org/wiki/Sorting_algorithms/Selection_sort | Sorting algorithms/Selection sort |
Task
Sort an array (or list) of elements using the Selection sort algorithm.
It works as follows:
First find the smallest element in the array and exchange it with the element in the first position, then find the second smallest element and exchange it with the element in the second position, and continue in this way until the entire array is sorted.
Its asymptotic complexity is O(n²), making it inefficient on large arrays.
Its primary use is when writing data is very expensive (slow) compared to reading, e.g. writing to flash memory or EEPROM.
No other sorting algorithm has less data movement.
References
Rosetta Code: O (complexity).
Wikipedia: Selection sort.
Wikipedia: Big O notation.
| #REXX | REXX | /*REXX program sorts a stemmed array using the selection─sort algorithm. */
call init /*assign some values to an array: @. */
call show 'before sort' /*show the before array elements. */
say copies('▒', 65) /*show a nice separator line (fence). */
call selectionSort # /*invoke selection sort (and # items). */
call show ' after sort' /*show the after array elements. */
exit 0 /*stick a fork in it, we're all done. */
/*──────────────────────────────────────────────────────────────────────────────────────*/
init: @.=; @.1 = '---The seven hills of Rome:---'
@.2 = '=============================='; @.6 = 'Virminal'
@.3 = 'Caelian' ; @.7 = 'Esquiline'
@.4 = 'Palatine' ; @.8 = 'Quirinal'
@.5 = 'Capitoline' ; @.9 = 'Aventine'
do #=1 until @.#==''; end /*find the number of items in the array*/
#= #-1; return /*adjust # (because of DO index). */
/*──────────────────────────────────────────────────────────────────────────────────────*/
selectionSort: procedure expose @.; parse arg n
do j=1 for n-1; _= @.j; p= j
do k=j+1 to n; if @.k>=_ then iterate
_= @.k; p= k /*this item is out─of─order, swap later*/
end /*k*/
if p==j then iterate /*if the same, the order of items is OK*/
_= @.j; @.j= @.p; @.p= _ /*swap 2 items that're out─of─sequence.*/
end /*j*/
return
/*──────────────────────────────────────────────────────────────────────────────────────*/
show: do i=1 for #; say ' element' right(i,length(#)) arg(1)":" @.i; end; return |
http://rosettacode.org/wiki/Soundex | Soundex | Soundex is an algorithm for creating indices for words based on their pronunciation.
Task
The goal is for homophones to be encoded to the same representation so that they can be matched despite minor differences in spelling (from the soundex Wikipedia article).
Caution
There is a major issue in many of the implementations concerning the separation of two consonants that have the same soundex code (see the official rules [1]). So check, for instance, whether Ashcraft is coded to A-261.
If a vowel (A, E, I, O, U) separates two consonants that have the same soundex code, the consonant to the right of the vowel is coded. Tymczak is coded as T-522 (T, 5 for the M, 2 for the C, Z ignored (see "Side-by-Side" rule above), 2 for the K). Since the vowel "A" separates the Z and K, the K is coded.
If "H" or "W" separate two consonants that have the same soundex code, the consonant to the right of the vowel is not coded. Example: Ashcraft is coded A-261 (A, 2 for the S, C ignored, 6 for the R, 1 for the F). It is not coded A-226.
| #Sidef | Sidef | func soundex(word, length=3) {
# Uppercase the argument passed in to normalize it
# and drop any non-alphabetic characters
word.uc!.tr!('A-Z', '', 'cd')
# Return if word does not contain 'A-Z'
return(nil) if (word.is_empty)
var firstLetter = word.char(0)
# Replace letters with corresponding number values
word.tr!('BFPV', '1', 's')
word.tr!('CGJKQSXZ', '2', 's')
word.tr!('DT', '3', 's')
word.tr!('L', '4', 's')
word.tr!('MN', '5', 's')
word.tr!('R', '6', 's')
# Discard the first letter
word.ft!(1)
# Remove A, E, H, I, O, U, W, and Y
word.tr!('AEHIOUWY', '', 'd')
# Return the soundex code
firstLetter + (word.chars + length.of('0') -> first(length).join)
}
func testSoundex {
# Key-value pairs of names and corresponding Soundex codes
var sndx = Hash(
"Euler" => "E4600",
"Gauss" => "G2000",
"Hilbert" => "H4163",
"Knuth" => "K5300",
"Lloyd" => "L3000",
"Lukasieicz" => "L2220",
'fulkerson' => 'F4262',
'faulkersuhn' => 'F4262',
'fpfffffauhlkkersssin' => 'F4262',
'Aaeh' => 'A0000',
)
sndx.keys.sort.each { |name|
var findSdx = soundex(name, 4)
say "The soundex for #{name} should be #{sndx{name}} and is #{findSdx}"
if (findSdx != sndx{name}) {
say "\tHowever, that is incorrect!\n"
}
}
}
testSoundex() |
http://rosettacode.org/wiki/Stack | Stack |
Data Structure
This illustrates a data structure, a means of storing data within a program.
You may see other such structures in the Data Structures category.
A stack is a container of elements with last in, first out access policy. Sometimes it is also called a LIFO.
The stack is accessed through its top.
The basic stack operations are:
push stores a new element onto the stack top;
pop returns the last pushed stack element, while removing it from the stack;
empty tests if the stack contains no elements.
Sometimes the last pushed stack element is made accessible for immutable access (for read) or mutable access (for write):
top (sometimes called peek to keep with the p theme) returns the topmost element without modifying the stack.
Stacks allow a very simple hardware implementation.
They are common in almost all processors.
In programming, stacks are also very popular for their way (LIFO) of resource management, usually memory.
Nested scopes of language objects are naturally implemented by a stack (sometimes by multiple stacks).
This is a classical way to implement local variables of a re-entrant or recursive subprogram. Stacks are also used to describe a formal computational framework.
See stack machine.
Many algorithms in pattern matching, compiler construction (e.g. recursive descent parsers), and machine learning (e.g. based on tree traversal) have a natural representation in terms of stacks.
Task
Create a stack supporting the basic operations: push, pop, empty.
See also
Array
Associative array: Creation, Iteration
Collections
Compound data type
Doubly-linked list: Definition, Element definition, Element insertion, List Traversal, Element Removal
Linked list
Queue: Definition, Usage
Set
Singly-linked list: Element definition, Element insertion, List Traversal, Element Removal
Stack
| #Raven | Raven | new stack as s
1 s push
s pop |
http://rosettacode.org/wiki/Spiral_matrix | Spiral matrix | Task
Produce a spiral array.
A spiral array is a square arrangement of the first N² natural numbers, where the
numbers increase sequentially as you go around the edges of the array spiraling inwards.
For example, given 5, produce this array:
0 1 2 3 4
15 16 17 18 5
14 23 24 19 6
13 22 21 20 7
12 11 10 9 8
Related tasks
Zig-zag matrix
Identity_matrix
Ulam_spiral_(for_primes)
| #zkl | zkl | fcn spiralMatrix(n){
sm:=(0).pump(n,List,(0).pump(n,List,False).copy); //L(L(False,False..), L(F,F,..) ...)
drc:=Walker.cycle(T(0,1,0), T(1,0,1), T(0,-1,0), T(-1,0,1)); // deltas
len:=n; r:=0; c:=-1; z:=-1; while(len>0){ //or do(2*n-1){
dr,dc,dl:=drc.next();
do(len-=dl){ sm[r+=dr][c+=dc]=(z+=1); }
}
sm
} |
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
This page uses content from Wikipedia. The original article was at Quicksort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Task
Sort an array (or list) elements using the quicksort algorithm.
The elements must have a strict weak order and the index of the array can be of any discrete type.
For languages where this is not possible, sort an array of integers.
Quicksort, also known as partition-exchange sort, uses these steps.
Choose any element of the array to be the pivot.
Divide all other elements (except the pivot) into two partitions.
All elements less than the pivot must be in the first partition.
All elements greater than the pivot must be in the second partition.
Use recursion to sort both partitions.
Join the first sorted partition, the pivot, and the second sorted partition.
The best pivot creates partitions of equal length (or lengths differing by 1).
The worst pivot creates an empty partition (for example, if the pivot is the first or last element of a sorted array).
The run-time of Quicksort ranges from O(n log n) with the best pivots, to O(n²) with the worst pivots, where n is the number of elements in the array.
This is a simple quicksort algorithm, adapted from Wikipedia.
function quicksort(array)
less, equal, greater := three empty arrays
if length(array) > 1
pivot := select any element of array
for each x in array
if x < pivot then add x to less
if x = pivot then add x to equal
if x > pivot then add x to greater
quicksort(less)
quicksort(greater)
array := concatenate(less, equal, greater)
A better quicksort algorithm works in place, by swapping elements within the array, to avoid the memory allocation of more arrays.
function quicksort(array)
if length(array) > 1
pivot := select any element of array
left := first index of array
right := last index of array
while left ≤ right
while array[left] < pivot
left := left + 1
while array[right] > pivot
right := right - 1
if left ≤ right
swap array[left] with array[right]
left := left + 1
right := right - 1
quicksort(array from first index to right)
quicksort(array from left to last index)
Quicksort has a reputation as the fastest sort. Optimized variants of quicksort are common features of many languages and libraries. One often contrasts quicksort with merge sort, because both sorts have an average time of O(n log n).
"On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times." — http://perldoc.perl.org/sort.html
Quicksort is at one end of the spectrum of divide-and-conquer algorithms, with merge sort at the opposite end.
Quicksort is a conquer-then-divide algorithm, which does most of the work during the partitioning and the recursive calls. The subsequent reassembly of the sorted partitions involves trivial effort.
Merge sort is a divide-then-conquer algorithm. The partitioning happens in a trivial way, by splitting the input array in half. Most of the work happens during the recursive calls and the merge phase.
With quicksort, every element in the first partition is less than or equal to every element in the second partition. Therefore, the merge phase of quicksort is so trivial that it needs no mention!
This task has not specified whether to allocate new arrays, or sort in place. This task also has not specified how to choose the pivot element. (Common ways are to choose the first element, the middle element, or the median of three elements.) Thus there is a variety among the following implementations.
| #Idris | Idris | quicksort : Ord elem => List elem -> List elem
quicksort [] = []
quicksort (x :: xs) =
let lesser = filter (< x) xs
greater = filter(>= x) xs in
(quicksort lesser) ++ [x] ++ (quicksort greater) |
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
This page uses content from Wikipedia. The original article was at Insertion sort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
An O(n²) sorting algorithm which moves elements one at a time into the correct position.
The algorithm consists of inserting one element at a time into the previously sorted part of the array, moving higher ranked elements up as necessary.
To start off, the first (or smallest, or any arbitrary) element of the unsorted array is considered to be the sorted part.
Although insertion sort is an O(n²) algorithm, its simplicity, low overhead, good locality of reference and efficiency make it a good choice in two cases:
small n,
as the final finishing-off algorithm for O(n log n) algorithms such as mergesort and quicksort.
The algorithm is as follows (from wikipedia):
function insertionSort(array A)
for i from 1 to length[A]-1 do
value := A[i]
j := i-1
while j >= 0 and A[j] > value do
A[j+1] := A[j]
j := j-1
done
A[j+1] = value
done
Writing the algorithm for integers will suffice.
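The HicEst entry below emulates the while loop with a GOTO; for comparison, here is the same algorithm as a minimal Python sketch (an illustrative addition, not one of the task page's entries):

```python
def insertion_sort(a):
    for i in range(1, len(a)):
        value = a[i]
        j = i - 1
        while j >= 0 and a[j] > value:   # shift larger elements one slot up
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = value                 # drop the saved value into its place
    return a

print(insertion_sort([1, 10, 2, 5, -1, 5, -19, 4, 23, 0]))
```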
| #HicEst | HicEst | DO i = 2, LEN(A)
value = A(i)
j = i - 1
1 IF( j > 0 ) THEN
IF( A(j) > value ) THEN
A(j+1) = A(j)
j = j - 1
GOTO 1 ! no WHILE in HicEst
ENDIF
ENDIF
A(j+1) = value
ENDDO |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n log n).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so it can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #jq | jq | def swap($a; $i; $j):
$a
| .[$i] as $t
| .[$i] = .[$j]
| .[$j] = $t ;
def siftDown($a; $start; $iend):
{ $a, root: $start }
| until( .stop or (.root*2 + 1 > $iend);
.child = .root*2 + 1
| if .child + 1 <= $iend and .a[.child] < .a[.child+1]
then .child += 1
else .
end
| if .a[.root] < .a[.child]
then
.a = swap(.a; .root; .child)
| .root = .child
else .stop = true
end)
| .a ;
def heapify:
length as $count
| {a: ., start: ((($count - 2)/2)|floor)}
| until(.start < 0;
.a = siftDown(.a; .start; $count - 1)
| .start += -1 )
| .a ;
def heapSort:
{ a: heapify,
iend: (length - 1) }
| until( .iend <= 0;
.a = swap(.a; 0; .iend)
| .iend += -1
| .a = siftDown(.a; 0; .iend) )
| .a ;
[4, 65, 2, -31, 0, 99, 2, 83, 782, 1], [7, 5, 2, 6, 1, 4, 2, 6, 3]
|
"Before: \(.)",
"After : \(heapSort)\n"
|
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #Julia | Julia | function swap(a, i, j)
a[i], a[j] = a[j], a[i]
end
function pd!(a, first, last) # sift a[first] down within the max-heap region a[1:last] (1-based indexing)
while (c = 2 * first) <= last # index of the left child of node `first`
if c < last && a[c] < a[c + 1]
c += 1
end
if a[first] < a[c]
swap(a, c, first)
first = c
else
break
end
end
end
function heapify!(a, n)
f = div(n, 2)
while f >= 1
pd!(a, f, n)
f -= 1
end
end
function heapsort!(a)
n = length(a)
heapify!(a, n)
l = n
while l > 1
swap(a, 1, l)
l -= 1
pd!(a, 1, l)
end
return a
end
using Random: shuffle
a = shuffle(collect(1:12))
println("Unsorted: $a")
println("Heap sorted: ", heapsort!(a))
|
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
The merge sort is a recursive sort of order n*log(n).
It is notable for having a worst case and average complexity of O(n*log(n)), and a best case complexity of O(n) (for pre-sorted input).
The basic idea is to split the collection into smaller groups by halving it until the groups only have one element or no elements (which are both entirely sorted groups).
Then merge the groups back together so that their elements are in order.
This is how the algorithm gets its divide and conquer description.
Task
Write a function to sort a collection of integers using the merge sort.
The merge sort algorithm comes in two parts:
a sort function and
a merge function
The functions in pseudocode look like this:
function mergesort(m)
var list left, right, result
if length(m) ≤ 1
return m
else
var middle = length(m) / 2
for each x in m up to middle - 1
add x to left
for each x in m at and after middle
add x to right
left = mergesort(left)
right = mergesort(right)
if last(left) ≤ first(right)
append right to left
return left
result = merge(left, right)
return result
function merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
append left to result
if length(right) > 0
append right to result
return result
See also
the Wikipedia entry: merge sort
Note: better performance can be expected if, rather than recursing until length(m) ≤ 1, an insertion sort is used for length(m) smaller than some threshold larger than 1. However, this complicates the example code, so it is not shown here.
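For illustration, here is a minimal Python sketch of the two pseudocode functions above (mergesort and merge are the pseudocode's own names, not a library API):
def merge(left, right):
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])                  # at most one of these two is non-empty
    result.extend(right[j:])
    return result

def mergesort(m):
    if len(m) <= 1:
        return m
    middle = len(m) // 2
    return merge(mergesort(m[:middle]), mergesort(m[middle:]))

print(mergesort([4, 65, 2, -31, 0, 99, 2, 83, 782, 1]))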
| #FreeBASIC | FreeBASIC | ' FB 1.05.0 Win64
Sub copyArray(a() As Integer, iBegin As Integer, iEnd As Integer, b() As Integer)
Redim b(iBegin To iEnd - 1) As Integer
For k As Integer = iBegin To iEnd - 1
b(k) = a(k)
Next
End Sub
' Left source half is a(iBegin To iMiddle-1).
' Right source half is a(iMiddle To iEnd-1).
' Result is b(iBegin To iEnd-1).
Sub topDownMerge(a() As Integer, iBegin As Integer, iMiddle As Integer, iEnd As Integer, b() As Integer)
Dim i As Integer = iBegin
Dim j As Integer = iMiddle
' While there are elements in the left or right runs...
For k As Integer = iBegin To iEnd - 1
' If left run head exists and is <= existing right run head.
If i < iMiddle AndAlso (j >= iEnd OrElse a(i) <= a(j)) Then
b(k) = a(i)
i += 1
Else
b(k) = a(j)
j += 1
End If
Next
End Sub
' Sort the given run of array a() using array b() as a source.
' iBegin is inclusive; iEnd is exclusive (a(iEnd) is not in the set).
Sub topDownSplitMerge(b() As Integer, iBegin As Integer, iEnd As Integer, a() As Integer)
If (iEnd - iBegin) < 2 Then Return '' If run size = 1, consider it sorted
' split the run longer than 1 item into halves
Dim iMiddle As Integer = (iEnd + iBegin) \ 2 '' iMiddle = mid point
' recursively sort both runs from array a() into b()
topDownSplitMerge(a(), iBegin, iMiddle, b()) '' sort the left run
topDownSplitMerge(a(), iMiddle, iEnd, b()) '' sort the right run
' merge the resulting runs from array b() into a()
topDownMerge(b(), iBegin, iMiddle, iEnd, a())
End Sub
' Array a() has the items to sort; array b() is a work array (empty initially).
Sub topDownMergeSort(a() As Integer, b() As Integer, n As Integer)
copyArray(a(), 0, n, b()) '' duplicate array a() into b()
topDownSplitMerge(b(), 0, n, a()) '' sort data from b() into a()
End Sub
Sub printArray(a() As Integer)
For i As Integer = LBound(a) To UBound(a)
Print Using "####"; a(i);
Next
Print
End Sub
Dim a(0 To 9) As Integer = {4, 65, 2, -31, 0, 99, 2, 83, 782, 1}
Dim b() As Integer
Print "Unsorted : ";
printArray(a())
topDownMergeSort a(), b(), 10
Print "Sorted : ";
printArray(a())
Print
Dim a2(0 To 8) As Integer = {7, 5, 2, 6, 1, 4, 2, 6, 3}
Erase b
Print "Unsorted : ";
printArray(a2())
topDownMergeSort a2(), b(), 9
Print "Sorted : ";
printArray(a2())
Print
Print "Press any key to quit"
Sleep |
http://rosettacode.org/wiki/Sorting_algorithms/Selection_sort | Sorting algorithms/Selection sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
Task
Sort an array (or list) of elements using the Selection sort algorithm.
It works as follows:
First find the smallest element in the array and exchange it with the element in the first position, then find the second smallest element and exchange it with the element in the second position, and continue in this way until the entire array is sorted.
Its asymptotic complexity is O(n2), making it inefficient on large arrays.
Its primary purpose is for when writing data is very expensive (slow) compared to reading, e.g. writing to flash memory or EEPROM.
No other sorting algorithm has less data movement.
References
Rosetta Code: O (complexity).
Wikipedia: Selection sort.
Wikipedia: [Big O notation].
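The procedure described above amounts to the following Python sketch (selection_sort is just an illustrative name, not a library function):
def selection_sort(a):
    for i in range(len(a) - 1):
        smallest = i
        for j in range(i + 1, len(a)):            # find the smallest remaining element
            if a[j] < a[smallest]:
                smallest = j
        if smallest != i:
            a[i], a[smallest] = a[smallest], a[i] # one swap per position keeps data movement minimal
    return a

print(selection_sort([7, 6, 5, 9, 8, 4, 3, 1, 2, 0]))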
| #Ring | Ring |
aList = [7,6,5,9,8,4,3,1,2,0]
see sortList(aList)
func sortList list
count = len(list) + 1
last = count - 1
for i = 1 to last
minCandidate = i
j = i + 1
while j < count
if list[j] < list[minCandidate] minCandidate = j ok
j = j + 1
end
temp = list[i]
list[i] = list[minCandidate]
list[minCandidate] = temp
next
return list
|
http://rosettacode.org/wiki/Sorting_algorithms/Selection_sort | Sorting algorithms/Selection sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
Task
Sort an array (or list) of elements using the Selection sort algorithm.
It works as follows:
First find the smallest element in the array and exchange it with the element in the first position, then find the second smallest element and exchange it with the element in the second position, and continue in this way until the entire array is sorted.
Its asymptotic complexity is O(n2), making it inefficient on large arrays.
Its primary purpose is for when writing data is very expensive (slow) compared to reading, e.g. writing to flash memory or EEPROM.
No other sorting algorithm has less data movement.
References
Rosetta Code: O (complexity).
Wikipedia: Selection sort.
Wikipedia: [Big O notation].
| #Ruby | Ruby |
# a relatively readable version - creates a distinct array
def sequential_sort(array)
sorted = []
while array.any?
index_of_smallest_element = find_smallest_index(array) # defined below
sorted << array.delete_at(index_of_smallest_element)
end
sorted
end
def find_smallest_index(array)
smallest_element = array[0]
smallest_index = 0
array.each_with_index do |ele, idx|
if ele < smallest_element
smallest_element = ele
smallest_index = idx
end
end
smallest_index
end
puts "sequential_sort([9, 6, 8, 7, 5]): #{sequential_sort([9, 6, 8, 7, 5])}"
# prints: sequential_sort([9, 6, 8, 7, 5]): [5, 6, 7, 8, 9]
# more efficient version - swaps the array's elements in place
def sequential_sort_with_swapping(array)
array.each_with_index do |element, index|
smallest_unsorted_element_so_far = element
smallest_unsorted_index_so_far = index
(index+1...array.length).each do |index_value|
if array[index_value] < smallest_unsorted_element_so_far
smallest_unsorted_element_so_far = array[index_value]
smallest_unsorted_index_so_far = index_value
end
end
# swap the smallest remaining element into position `index`
array[index], array[smallest_unsorted_index_so_far] = array[smallest_unsorted_index_so_far], array[index]
end
array
end
puts "sequential_sort_with_swapping([7,6,5,9,8,4,3,1,2,0]): #{sequential_sort_with_swapping([7,6,5,9,8,4,3,1,2,0])}"
# prints: sequential_sort_with_swapping([7,6,5,9,8,4,3,1,2,0]): [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
|
http://rosettacode.org/wiki/Soundex | Soundex | Soundex is an algorithm for creating indices for words based on their pronunciation.
Task
The goal is for homophones to be encoded to the same representation so that they can be matched despite minor differences in spelling (from the soundex Wikipedia article).
Caution
There is a major issue in many of the implementations concerning the separation of two consonants that have the same soundex code; the official rules [[1]] cover these cases as described below. So check, for instance, that Ashcraft is coded to A-261.
If a vowel (A, E, I, O, U) separates two consonants that have the same soundex code, the consonant to the right of the vowel is coded. Tymczak is coded as T-522 (T, 5 for the M, 2 for the C, Z ignored (see "Side-by-Side" rule above), 2 for the K). Since the vowel "A" separates the Z and K, the K is coded.
If "H" or "W" separate two consonants that have the same soundex code, the consonant to the right of the vowel is not coded. Example: Ashcraft is coded A-261 (A, 2 for the S, C ignored, 6 for the R, 1 for the F). It is not coded A-226.
| #Smalltalk | Smalltalk | PhoneticStringUtilities soundexCodeOf: 'Soundex' "-> S532" |
http://rosettacode.org/wiki/Stack | Stack |
Data Structure
This illustrates a data structure, a means of storing data within a program.
You may see other such structures in the Data Structures category.
A stack is a container of elements with a last in, first out access policy. Sometimes it is also called a LIFO.
The stack is accessed through its top.
The basic stack operations are:
push stores a new element onto the stack top;
pop returns the last pushed stack element, while removing it from the stack;
empty tests if the stack contains no elements.
Sometimes the last pushed stack element is made accessible for immutable access (for read) or mutable access (for write):
top (sometimes called peek to keep with the p theme) returns the topmost element without modifying the stack.
Stacks allow a very simple hardware implementation.
They are common in almost all processors.
In programming, stacks are also very popular for their way (LIFO) of resource management, usually memory.
Nested scopes of language objects are naturally implemented by a stack (sometimes by multiple stacks).
This is a classical way to implement local variables of a re-entrant or recursive subprogram. Stacks are also used to describe a formal computational framework.
See stack machine.
Many algorithms in pattern matching, compiler construction (e.g. recursive descent parsers), and machine learning (e.g. based on tree traversal) have a natural representation in terms of stacks.
Task
Create a stack supporting the basic operations: push, pop, empty.
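To illustrate the three required operations, here is a minimal Python sketch backed by a built-in list (the class name Stack is ours):
class Stack:
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)        # store a new element on the top
    def pop(self):
        return self._items.pop()     # remove and return the last pushed element
    def empty(self):
        return not self._items       # True when the stack holds no elements
    def top(self):
        return self._items[-1]       # peek at the top without removing it

s = Stack()
s.push(1)
s.push(2)
print(s.pop(), s.pop(), s.empty())   # 2 1 True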
See also
Array
Associative array: Creation, Iteration
Collections
Compound data type
Doubly-linked list: Definition, Element definition, Element insertion, List Traversal, Element Removal
Linked list
Queue: Definition, Usage
Set
Singly-linked list: Element definition, Element insertion, List Traversal, Element Removal
Stack
| #REBOL | REBOL | rebol [
Title: "Stack"
URL: http://rosettacode.org/wiki/Stack
]
stack: make object! [
data: copy []
push: func [x][append data x]
pop: func [/local x][x: last data remove back tail data x]
empty: does [empty? data]
peek: does [last data]
]
; Teeny Tiny Test Suite
assert: func [code][print [either do code [" ok"]["FAIL"] mold code]]
print "Simple integers:"
s: make stack [] s/push 1 s/push 2 ; Initialize.
assert [2 = s/peek]
assert [2 = s/pop]
assert [1 = s/pop]
assert [s/empty]
print [lf "Symbolic data on stack:"]
v: make stack [data: [this is a test]] ; Initialize on instance.
assert ['test = v/peek]
assert ['test = v/pop]
assert ['a = v/pop]
assert [not v/empty] |
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Quicksort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Task
Sort an array (or list) elements using the quicksort algorithm.
The elements must have a strict weak order and the index of the array can be of any discrete type.
For languages where this is not possible, sort an array of integers.
Quicksort, also known as partition-exchange sort, uses these steps.
Choose any element of the array to be the pivot.
Divide all other elements (except the pivot) into two partitions.
All elements less than the pivot must be in the first partition.
All elements greater than the pivot must be in the second partition.
Use recursion to sort both partitions.
Join the first sorted partition, the pivot, and the second sorted partition.
The best pivot creates partitions of equal length (or lengths differing by 1).
The worst pivot creates an empty partition (for example, if the pivot is the first or last element of a sorted array).
The run-time of Quicksort ranges from O(n log n) with the best pivots, to O(n2) with the worst pivots, where n is the number of elements in the array.
This is a simple quicksort algorithm, adapted from Wikipedia.
function quicksort(array)
less, equal, greater := three empty arrays
if length(array) > 1
pivot := select any element of array
for each x in array
if x < pivot then add x to less
if x = pivot then add x to equal
if x > pivot then add x to greater
quicksort(less)
quicksort(greater)
array := concatenate(less, equal, greater)
A better quicksort algorithm works in place, by swapping elements within the array, to avoid the memory allocation of more arrays.
function quicksort(array)
if length(array) > 1
pivot := select any element of array
left := first index of array
right := last index of array
while left ≤ right
while array[left] < pivot
left := left + 1
while array[right] > pivot
right := right - 1
if left ≤ right
swap array[left] with array[right]
left := left + 1
right := right - 1
quicksort(array from first index to right)
quicksort(array from left to last index)
Quicksort has a reputation as the fastest sort. Optimized variants of quicksort are common features of many languages and libraries. One often contrasts quicksort with merge sort, because both sorts have an average time of O(n log n).
"On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times." — http://perldoc.perl.org/sort.html
Quicksort is at one end of the spectrum of divide-and-conquer algorithms, with merge sort at the opposite end.
Quicksort is a conquer-then-divide algorithm, which does most of the work during the partitioning and the recursive calls. The subsequent reassembly of the sorted partitions involves trivial effort.
Merge sort is a divide-then-conquer algorithm. The partitioning happens in a trivial way, by splitting the input array in half. Most of the work happens during the recursive calls and the merge phase.
With quicksort, every element in the first partition is less than or equal to every element in the second partition. Therefore, the merge phase of quicksort is so trivial that it needs no mention!
This task has not specified whether to allocate new arrays, or sort in place. This task also has not specified how to choose the pivot element. (Common ways are to choose the first element, the middle element, or the median of three elements.) Thus there is a variety among the following implementations.
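To make the in-place pseudocode concrete, here is a Python sketch that picks the middle element as pivot (one of several reasonable choices; quicksort is simply the pseudocode's name):
def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    pivot = a[(lo + hi) // 2]                 # middle element as pivot
    left, right = lo, hi
    while left <= right:
        while a[left] < pivot:
            left += 1
        while a[right] > pivot:
            right -= 1
        if left <= right:
            a[left], a[right] = a[right], a[left]
            left += 1
            right -= 1
    quicksort(a, lo, right)                   # recurse into the two partitions
    quicksort(a, left, hi)
    return a

print(quicksort([4, 65, 2, -31, 0, 99, 2, 83, 782, 1]))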
| #Io | Io | List do(
quickSort := method(
if(size > 1) then(
pivot := at(size / 2 floor)
return select(x, x < pivot) quickSort appendSeq(
select(x, x == pivot) appendSeq(select(x, x > pivot) quickSort)
)
) else(return self)
)
quickSortInPlace := method(
copy(quickSort)
)
)
lst := list(5, -1, -4, 2, 9)
lst quickSort println # ==> list(-4, -1, 2, 5, 9)
lst quickSortInPlace println # ==> list(-4, -1, 2, 5, 9) |
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Insertion sort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
An O(n2) sorting algorithm which moves elements one at a time into the correct position.
The algorithm consists of inserting one element at a time into the previously sorted part of the array, moving higher ranked elements up as necessary.
To start off, the first (or smallest, or any arbitrary) element of the unsorted array is considered to be the sorted part.
Although insertion sort is an O(n2) algorithm, its simplicity, low overhead, good locality of reference and efficiency make it a good choice in two cases:
small n,
as the final finishing-off algorithm for O(n logn) algorithms such as mergesort and quicksort.
The algorithm is as follows (from wikipedia):
function insertionSort(array A)
for i from 1 to length[A]-1 do
value := A[i]
j := i-1
while j >= 0 and A[j] > value do
A[j+1] := A[j]
j := j-1
done
A[j+1] := value
done
Writing the algorithm for integers will suffice.
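A direct Python transcription of the pseudocode above (illustrative only; insertion_sort is not a library function):
def insertion_sort(A):
    for i in range(1, len(A)):
        value = A[i]
        j = i - 1
        while j >= 0 and A[j] > value:   # shift larger elements one slot to the right
            A[j + 1] = A[j]
            j -= 1
        A[j + 1] = value                 # drop the saved value into the gap
    return A

print(insertion_sort([7, 6, 5, 9, 8, 4, 3, 1, 2, 0]))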
| #Icon_and_Unicon | Icon and Unicon | procedure main() #: demonstrate various ways to sort a list and string
demosort(insertionsort,[3, 14, 1, 5, 9, 2, 6, 3],"qwerty")
end
procedure insertionsort(X,op) #: return sorted X
local i,temp
op := sortop(op,X) # select how and what we sort
every i := 2 to *X do {
temp := X[j := i]
while op(temp,X[1 <= (j -:= 1)]) do
X[j+1] := X[j]
X[j+1] := temp
}
return X
end |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #Kotlin | Kotlin | // version 1.1.0
fun heapSort(a: IntArray) {
heapify(a)
var end = a.size - 1
while (end > 0) {
val temp = a[end]
a[end] = a[0]
a[0] = temp
end--
siftDown(a, 0, end)
}
}
fun heapify(a: IntArray) {
var start = (a.size - 2) / 2
while (start >= 0) {
siftDown(a, start, a.size - 1)
start--
}
}
fun siftDown(a: IntArray, start: Int, end: Int) {
var root = start
while (root * 2 + 1 <= end) {
var child = root * 2 + 1
if (child + 1 <= end && a[child] < a[child + 1]) child++
if (a[root] < a[child]) {
val temp = a[root]
a[root] = a[child]
a[child] = temp
root = child
}
else return
}
}
fun main(args: Array<String>) {
val aa = arrayOf(
intArrayOf(100, 2, 56, 200, -52, 3, 99, 33, 177, -199),
intArrayOf(4, 65, 2, -31, 0, 99, 2, 83, 782, 1),
intArrayOf(12, 11, 15, 10, 9, 1, 2, 3, 13, 14, 4, 5, 6, 7, 8)
)
for (a in aa) {
heapSort(a)
println(a.joinToString(", "))
}
} |
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
The merge sort is a recursive sort of order n*log(n).
It is notable for having a worst case and average complexity of O(n*log(n)), and a best case complexity of O(n) (for pre-sorted input).
The basic idea is to split the collection into smaller groups by halving it until the groups only have one element or no elements (which are both entirely sorted groups).
Then merge the groups back together so that their elements are in order.
This is how the algorithm gets its divide and conquer description.
Task
Write a function to sort a collection of integers using the merge sort.
The merge sort algorithm comes in two parts:
a sort function and
a merge function
The functions in pseudocode look like this:
function mergesort(m)
var list left, right, result
if length(m) ≤ 1
return m
else
var middle = length(m) / 2
for each x in m up to middle - 1
add x to left
for each x in m at and after middle
add x to right
left = mergesort(left)
right = mergesort(right)
if last(left) ≤ first(right)
append right to left
return left
result = merge(left, right)
return result
function merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
append left to result
if length(right) > 0
append right to result
return result
See also
the Wikipedia entry: merge sort
Note: better performance can be expected if, rather than recursing until length(m) ≤ 1, an insertion sort is used for length(m) smaller than some threshold larger than 1. However, this complicates the example code, so it is not shown here.
| #FunL | FunL | def
sort( [] ) = []
sort( [x] ) = [x]
sort( xs ) =
val (l, r) = xs.splitAt( xs.length()\2 )
merge( sort(l), sort(r) )
merge( [], xs ) = xs
merge( xs, [] ) = xs
merge( x:xs, y:ys )
| x <= y = x : merge( xs, y:ys )
| otherwise = y : merge( x:xs, ys )
println( sort([94, 37, 16, 56, 72, 48, 17, 27, 58, 67]) )
println( sort(['Sofía', 'Alysha', 'Sophia', 'Maya', 'Emma', 'Olivia', 'Emily']) ) |
http://rosettacode.org/wiki/Sorting_algorithms/Selection_sort | Sorting algorithms/Selection sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
Task
Sort an array (or list) of elements using the Selection sort algorithm.
It works as follows:
First find the smallest element in the array and exchange it with the element in the first position, then find the second smallest element and exchange it with the element in the second position, and continue in this way until the entire array is sorted.
Its asymptotic complexity is O(n2), making it inefficient on large arrays.
Its primary purpose is for when writing data is very expensive (slow) compared to reading, e.g. writing to flash memory or EEPROM.
No other sorting algorithm has less data movement.
References
Rosetta Code: O (complexity).
Wikipedia: Selection sort.
Wikipedia: [Big O notation].
| #Run_BASIC | Run BASIC | siz = 10
dim srtData(siz)
for i = 1 to siz
srtData(i) = rnd(0) * 100
next i
FOR i = 1 TO siz-1
lo = i
FOR j = (i + 1) TO siz
IF srtData(j) < srtData(lo) then lo = j
NEXT j
if i <> lo then
temp = srtData(i)
srtData(i) = srtData(lo)
srtData(lo) = temp
end if
NEXT i
for i = 1 to siz
print i;chr$(9);srtData(i)
next i |
http://rosettacode.org/wiki/Sorting_algorithms/Selection_sort | Sorting algorithms/Selection sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
Task
Sort an array (or list) of elements using the Selection sort algorithm.
It works as follows:
First find the smallest element in the array and exchange it with the element in the first position, then find the second smallest element and exchange it with the element in the second position, and continue in this way until the entire array is sorted.
Its asymptotic complexity is O(n2), making it inefficient on large arrays.
Its primary purpose is for when writing data is very expensive (slow) compared to reading, e.g. writing to flash memory or EEPROM.
No other sorting algorithm has less data movement.
References
Rosetta Code: O (complexity).
Wikipedia: Selection sort.
Wikipedia: [Big O notation].
| #Rust | Rust |
fn selection_sort(array: &mut [i32]) {
let mut min;
for i in 0..array.len() {
min = i;
for j in (i+1)..array.len() {
if array[j] < array[min] {
min = j;
}
}
let tmp = array[i];
array[i] = array[min];
array[min] = tmp;
}
}
fn main() {
let mut array = [ 9, 4, 8, 3, -5, 2, 1, 6 ];
println!("The initial array is {:?}", array);
selection_sort(&mut array);
println!(" The sorted array is {:?}", array);
}
|
http://rosettacode.org/wiki/Soundex | Soundex | Soundex is an algorithm for creating indices for words based on their pronunciation.
Task
The goal is for homophones to be encoded to the same representation so that they can be matched despite minor differences in spelling (from the soundex Wikipedia article).
Caution
There is a major issue in many of the implementations concerning the separation of two consonants that have the same soundex code; the official rules [[1]] cover these cases as described below. So check, for instance, that Ashcraft is coded to A-261.
If a vowel (A, E, I, O, U) separates two consonants that have the same soundex code, the consonant to the right of the vowel is coded. Tymczak is coded as T-522 (T, 5 for the M, 2 for the C, Z ignored (see "Side-by-Side" rule above), 2 for the K). Since the vowel "A" separates the Z and K, the K is coded.
If "H" or "W" separate two consonants that have the same soundex code, the consonant to the right of the vowel is not coded. Example: Ashcraft is coded A-261 (A, 2 for the S, C ignored, 6 for the R, 1 for the F). It is not coded A-226.
| #SNOBOL4 | SNOBOL4 | * # Soundex coding
* # ABCDEFGHIJKLMNOPQRSTUVWXYZ
* # 01230127022455012623017202
define('soundex(str)init,ch') :(soundex_end)
soundex sdxmap = '01230127022455012623017202'
str = replace(str,&lcase,&ucase)
sdx1 str notany(&ucase) = :s(sdx1)
init = substr(str,1,1)
str = replace(str,&ucase,sdxmap)
sdx2 str len(1) $ ch span(*ch) = ch :s(sdx2)
* # Omit next line for Knuth's simple Soundex
sdx3 str len(1) $ ch ('7' *ch) = ch :s(sdx3)
str len(1) = init
sdx4 str any('07') = :s(sdx4)
str = substr(str,1,4)
str = lt(size(str),4) str dupl('0',4 - size(str))
soundex = str :(return)
soundex_end
* # Test and display
test = " Washington Lee Gutierrez Pfister Jackson Tymczak"
+ " Ashcroft Swhgler O'Connor Rhys-Davies"
loop test span(' ') break(' ') . name = :f(end)
output = soundex(name) ' ' name :(loop)
end |
http://rosettacode.org/wiki/Stack | Stack |
Data Structure
This illustrates a data structure, a means of storing data within a program.
You may see other such structures in the Data Structures category.
A stack is a container of elements with a last in, first out access policy. Sometimes it is also called a LIFO.
The stack is accessed through its top.
The basic stack operations are:
push stores a new element onto the stack top;
pop returns the last pushed stack element, while removing it from the stack;
empty tests if the stack contains no elements.
Sometimes the last pushed stack element is made accessible for immutable access (for read) or mutable access (for write):
top (sometimes called peek to keep with the p theme) returns the topmost element without modifying the stack.
Stacks allow a very simple hardware implementation.
They are common in almost all processors.
In programming, stacks are also very popular for their way (LIFO) of resource management, usually memory.
Nested scopes of language objects are naturally implemented by a stack (sometimes by multiple stacks).
This is a classical way to implement local variables of a re-entrant or recursive subprogram. Stacks are also used to describe a formal computational framework.
See stack machine.
Many algorithms in pattern matching, compiler construction (e.g. recursive descent parsers), and machine learning (e.g. based on tree traversal) have a natural representation in terms of stacks.
Task
Create a stack supporting the basic operations: push, pop, empty.
See also
Array
Associative array: Creation, Iteration
Collections
Compound data type
Doubly-linked list: Definition, Element definition, Element insertion, List Traversal, Element Removal
Linked list
Queue: Definition, Usage
Set
Singly-linked list: Element definition, Element insertion, List Traversal, Element Removal
Stack
| #Retro | Retro | : stack ( n"- ) create 0 , allot ;
: push ( na- ) dup ++ dup @ + ! ;
: pop ( a-n ) dup @ over -- + @ ;
: top ( a-n ) dup @ + @ ;
: empty? ( a-f ) @ 0 = ;
10 stack st
1 st push
2 st push
3 st push
st empty? putn
st top putn
st pop putn st pop putn st pop putn
st empty? putn |
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Quicksort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Task
Sort an array (or list) elements using the quicksort algorithm.
The elements must have a strict weak order and the index of the array can be of any discrete type.
For languages where this is not possible, sort an array of integers.
Quicksort, also known as partition-exchange sort, uses these steps.
Choose any element of the array to be the pivot.
Divide all other elements (except the pivot) into two partitions.
All elements less than the pivot must be in the first partition.
All elements greater than the pivot must be in the second partition.
Use recursion to sort both partitions.
Join the first sorted partition, the pivot, and the second sorted partition.
The best pivot creates partitions of equal length (or lengths differing by 1).
The worst pivot creates an empty partition (for example, if the pivot is the first or last element of a sorted array).
The run-time of Quicksort ranges from O(n log n) with the best pivots, to O(n2) with the worst pivots, where n is the number of elements in the array.
This is a simple quicksort algorithm, adapted from Wikipedia.
function quicksort(array)
less, equal, greater := three empty arrays
if length(array) > 1
pivot := select any element of array
for each x in array
if x < pivot then add x to less
if x = pivot then add x to equal
if x > pivot then add x to greater
quicksort(less)
quicksort(greater)
array := concatenate(less, equal, greater)
A better quicksort algorithm works in place, by swapping elements within the array, to avoid the memory allocation of more arrays.
function quicksort(array)
if length(array) > 1
pivot := select any element of array
left := first index of array
right := last index of array
while left ≤ right
while array[left] < pivot
left := left + 1
while array[right] > pivot
right := right - 1
if left ≤ right
swap array[left] with array[right]
left := left + 1
right := right - 1
quicksort(array from first index to right)
quicksort(array from left to last index)
Quicksort has a reputation as the fastest sort. Optimized variants of quicksort are common features of many languages and libraries. One often contrasts quicksort with merge sort, because both sorts have an average time of O(n log n).
"On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times." — http://perldoc.perl.org/sort.html
Quicksort is at one end of the spectrum of divide-and-conquer algorithms, with merge sort at the opposite end.
Quicksort is a conquer-then-divide algorithm, which does most of the work during the partitioning and the recursive calls. The subsequent reassembly of the sorted partitions involves trivial effort.
Merge sort is a divide-then-conquer algorithm. The partitioning happens in a trivial way, by splitting the input array in half. Most of the work happens during the recursive calls and the merge phase.
With quicksort, every element in the first partition is less than or equal to every element in the second partition. Therefore, the merge phase of quicksort is so trivial that it needs no mention!
This task has not specified whether to allocate new arrays, or sort in place. This task also has not specified how to choose the pivot element. (Common ways are to choose the first element, the middle element, or the median of three elements.) Thus there is a variety among the following implementations.
| #Isabelle | Isabelle | theory Quicksort
imports Main
begin
fun quicksort :: "('a :: linorder) list ⇒ 'a list" where
"quicksort [] = []"
| "quicksort (x#xs) = (quicksort [y←xs. y<x]) @ [x] @ (quicksort [y←xs. y>x])"
lemma "quicksort [4::int, 2, 7, 1] = [1, 2, 4, 7]"
by(code_simp)
lemma set_first_second_partition:
fixes x :: "'a :: linorder"
shows "{y ∈ ys. y < x} ∪ {x} ∪ {y ∈ ys. x < y} =
insert x ys"
by fastforce
lemma set_quicksort: "set (quicksort xs) = set xs"
by(induction xs rule: quicksort.induct)
(simp add: set_first_second_partition[simplified])+
theorem "sorted (quicksort xs)"
proof(induction xs rule: quicksort.induct)
case 1
show "sorted (quicksort [])" by simp
next
case (2 x xs)
assume IH_less: "sorted (quicksort [y←xs. y<x])"
assume IH_greater: "sorted (quicksort [y←xs. y>x])"
have pivot_geq_first_partition:
"∀z∈set (quicksort [y←xs. y<x]). z ≤ x"
by (simp add: set_quicksort less_imp_le)
have pivot_leq_second_partition:
"∀z ∈ (set (quicksort [y←xs. y>x])). (x ≤ z)"
by (simp add: set_quicksort less_imp_le)
have first_partition_leq_second_partition:
"∀p∈set (quicksort [y←xs. y<x]).
∀z ∈ (set (quicksort [y←xs. y>x])). (p ≤ z)"
by (auto simp add: set_quicksort)
from IH_less IH_greater
pivot_geq_first_partition pivot_leq_second_partition
first_partition_leq_second_partition
show "sorted (quicksort (x # xs))" by(simp add: sorted_append)
qed
text‹
The specification on rosettacode says
▪ All elements less than the pivot must be in the first partition.
▪ All elements greater than the pivot must be in the second partition.
Since this specification neither says "less than or equal" nor
"greater or equal", this quicksort implementation removes duplicate elements.
›
lemma "quicksort [1::int, 1, 1, 2, 2, 3] = [1, 2, 3]"
by(code_simp)
text‹If we try the following, we automatically get a counterexample›
lemma "length (quicksort xs) = length xs"
(*
Auto Quickcheck found a counterexample:
xs = [a⇩1, a⇩1]
Evaluated terms:
length (quicksort xs) = 1
length xs = 2
*)
oops
end
|
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Insertion sort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
An O(n2) sorting algorithm which moves elements one at a time into the correct position.
The algorithm consists of inserting one element at a time into the previously sorted part of the array, moving higher ranked elements up as necessary.
To start off, the first (or smallest, or any arbitrary) element of the unsorted array is considered to be the sorted part.
Although insertion sort is an O(n2) algorithm, its simplicity, low overhead, good locality of reference and efficiency make it a good choice in two cases:
small n,
as the final finishing-off algorithm for O(n logn) algorithms such as mergesort and quicksort.
The algorithm is as follows (from wikipedia):
function insertionSort(array A)
for i from 1 to length[A]-1 do
value := A[i]
j := i-1
while j >= 0 and A[j] > value do
A[j+1] := A[j]
j := j-1
done
A[j+1] := value
done
Writing the algorithm for integers will suffice.
| #Io | Io |
List do(
insertionSortInPlace := method(
for(j, 1, size - 1,
key := at(j)
i := j - 1
while(i >= 0 and at(i) > key,
atPut(i + 1, at(i))
i = i - 1
)
atPut(i + 1, key)
)
)
)
lst := list(7, 6, 5, 9, 8, 4, 3, 1, 2, 0)
lst insertionSortInPlace println # ==> list(0, 1, 2, 3, 4, 5, 6, 7, 8, 9) |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #Liberty_BASIC | Liberty BASIC | wikiSample=1 'comment out for random array
data 6, 5, 3, 1, 8, 7, 2, 4
itemCount = 20
if wikiSample then itemCount = 8
dim A(itemCount)
for i = 1 to itemCount
A(i) = int(rnd(1) * 100)
if wikiSample then read tmp: A(i)=tmp
next i
print "Before Sort"
call printArray itemCount
call heapSort itemCount
print "After Sort"
call printArray itemCount
end
'------------------------------------------
sub heapSort count
call heapify count
print "the heap"
call printArray count
theEnd = count
while theEnd > 1
call swap theEnd, 1
call siftDown 1, theEnd-1
theEnd = theEnd - 1
wend
end sub
sub heapify count
start = int(count / 2)
while start >= 1
call siftDown start, count
start = start - 1
wend
end sub
sub siftDown start, theEnd
root = start
while root * 2 <= theEnd
child = root * 2
swap = root
if A(swap) < A(child) then
swap = child
end if
if child+1 <= theEnd then
if A(swap) < A(child+1) then
swap = child + 1
end if
end if
if swap <> root then
call swap root, swap
root = swap
else
exit sub
end if
wend
end sub
sub swap a,b
tmp = A(a)
A(a) = A(b)
A(b) = tmp
end sub
'===========================================
sub printArray itemCount
for i = 1 to itemCount
print using("###", A(i));
next i
print
end sub |
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
The merge sort is a recursive sort of order n*log(n).
It is notable for having a worst case and average complexity of O(n*log(n)), and a best case complexity of O(n) (for pre-sorted input).
The basic idea is to split the collection into smaller groups by halving it until the groups only have one element or no elements (which are both entirely sorted groups).
Then merge the groups back together so that their elements are in order.
This is how the algorithm gets its divide and conquer description.
Task
Write a function to sort a collection of integers using the merge sort.
The merge sort algorithm comes in two parts:
a sort function and
a merge function
The functions in pseudocode look like this:
function mergesort(m)
var list left, right, result
if length(m) ≤ 1
return m
else
var middle = length(m) / 2
for each x in m up to middle - 1
add x to left
for each x in m at and after middle
add x to right
left = mergesort(left)
right = mergesort(right)
if last(left) ≤ first(right)
append right to left
return left
result = merge(left, right)
return result
function merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
append rest(left) to result
if length(right) > 0
append rest(right) to result
return result
See also
the Wikipedia entry: merge sort
Note: better performance can be expected if, rather than recursing until length(m) ≤ 1, an insertion sort is used for length(m) smaller than some threshold larger than 1. However, this complicates the example code, so it is not shown here.
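Before the individual language entries, here is a minimal Python sketch of the two-part structure described above (an added illustration, not part of the original task page); it includes the last(left) ≤ first(right) shortcut from the pseudocode:

def merge(left, right):
    result = []
    i = j = 0
    # Repeatedly take the smaller head element.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # One of the two lists is exhausted; append the rest of the other.
    result.extend(left[i:])
    result.extend(right[j:])
    return result

def mergesort(m):
    if len(m) <= 1:
        return m
    middle = len(m) // 2
    left = mergesort(m[:middle])
    right = mergesort(m[middle:])
    if left[-1] <= right[0]:        # halves already in order: just concatenate
        return left + right
    return merge(left, right)

print(mergesort([170, 45, 75, -90, -802, 24, 2, 66]))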
| #Go | Go | package main
import "fmt"
var a = []int{170, 45, 75, -90, -802, 24, 2, 66}
var s = make([]int, len(a)/2+1) // scratch space for merge step
func main() {
fmt.Println("before:", a)
mergeSort(a)
fmt.Println("after: ", a)
}
func mergeSort(a []int) {
if len(a) < 2 {
return
}
mid := len(a) / 2
mergeSort(a[:mid])
mergeSort(a[mid:])
if a[mid-1] <= a[mid] {
return
}
// merge step, with the copy-half optimization
copy(s, a[:mid])
l, r := 0, mid
for i := 0; ; i++ {
if s[l] <= a[r] {
a[i] = s[l]
l++
if l == mid {
break
}
} else {
a[i] = a[r]
r++
if r == len(a) {
copy(a[i+1:], s[l:mid])
break
}
}
}
return
} |
http://rosettacode.org/wiki/Sorting_algorithms/Selection_sort | Sorting algorithms/Selection sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
Task
Sort an array (or list) of elements using the Selection sort algorithm.
It works as follows:
First find the smallest element in the array and exchange it with the element in the first position, then find the second smallest element and exchange it with the element in the second position, and continue in this way until the entire array is sorted.
Its asymptotic complexity is O(n2) making it inefficient on large arrays.
It is primarily useful when writing data is very expensive (slow) compared to reading, e.g. writing to flash memory or EEPROM.
No other sorting algorithm has less data movement.
References
Rosetta Code: O (complexity).
Wikipedia: Selection sort.
Wikipedia: [Big O notation].
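A minimal Python sketch of the procedure described above, added here for illustration (the sample data is arbitrary):

def selection_sort(a):
    for i in range(len(a) - 1):
        # Find the smallest element in the unsorted tail ...
        min_idx = i
        for j in range(i + 1, len(a)):
            if a[j] < a[min_idx]:
                min_idx = j
        # ... and exchange it into position i (at most one swap per pass).
        if min_idx != i:
            a[i], a[min_idx] = a[min_idx], a[i]
    return a

print(selection_sort([7, 6, 5, 9, 8, 4, 3, 1, 2, 0]))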
| #Scala | Scala | def swap(a: Array[Int], i1: Int, i2: Int) = { val tmp = a(i1); a(i1) = a(i2); a(i2) = tmp }
def selectionSort(a: Array[Int]) =
for (i <- 0 until a.size - 1)
swap(a, i, (i + 1 until a.size).foldLeft(i)((currMin, index) =>
if (a(index) < a(currMin)) index else currMin)) |
http://rosettacode.org/wiki/Soundex | Soundex | Soundex is an algorithm for creating indices for words based on their pronunciation.
Task
The goal is for homophones to be encoded to the same representation so that they can be matched despite minor differences in spelling (from the soundex Wikipedia article).
Caution
There is a major issue in many of the implementations concerning the separation of two consonants that have the same soundex code, according to the official rules [1]. So check, for instance, that Ashcraft is coded to A-261.
If a vowel (A, E, I, O, U) separates two consonants that have the same soundex code, the consonant to the right of the vowel is coded. Tymczak is coded as T-522 (T, 5 for the M, 2 for the C, Z ignored (see "Side-by-Side" rule above), 2 for the K). Since the vowel "A" separates the Z and K, the K is coded.
If "H" or "W" separate two consonants that have the same soundex code, the consonant to the right of the vowel is not coded. Example: Ashcraft is coded A-261 (A, 2 for the S, C ignored, 6 for the R, 1 for the F). It is not coded A-226.
| #Standard_ML | Standard ML | (* There are 3 kinds of letters:
* h and w are ignored completely (letters separated by h or w are considered
* adjacent, or merged together)
* vowels are ignored, but letters separated by a vowel are split apart.
* All consonants but h and w map to a digit *)
datatype code =
Merge
| Split
| Digit of char
(* Encodes which characters map to which codes *)
val codeTable =
[([#"H", #"W"], Merge),
([#"A",#"E",#"I", #"O",#"U",#"Y"], Split),
([#"B",#"F",#"P",#"V"], Digit #"1"),
([#"C",#"G",#"J",#"K",#"Q",#"S",#"X",#"Z"], Digit #"2"),
([#"D",#"T"], Digit #"3"),
([#"L"], Digit #"4"),
([#"M",#"N"], Digit #"5"),
([#"R"], Digit #"6")]
(* Find the code that matches a given character *)
fun codeOf (c : char) =
#2 (valOf (List.find (fn (L,_) => isSome(List.find (fn c' => c = c') L)) codeTable))
(* Remove all the non-digit codes, combining like digits when appropriate. *)
fun collapse (c :: Merge :: cs) = collapse (c :: cs)
| collapse (Digit d :: Split :: cs) = Digit d :: collapse cs
| collapse (Digit d :: (cs' as Digit d' :: cs)) =
if d = d' then collapse (Digit d :: cs)
else Digit d :: collapse cs'
| collapse [Digit d] = [Digit d]
| collapse (c::cs) = collapse cs
| collapse _ = []
(* dropWhile f L removes the initial elements of L that satisfy f and returns
* the rest *)
fun dropWhile f [] = []
| dropWhile f (x::xs) =
if f x then dropWhile f xs
else x::xs
fun soundex (s : string) =
let
(* Normalize the string to uppercase letters only *)
val c::cs = map (Char.toUpper) (filter Char.isAlpha(String.explode s))
fun first3 L = map (fn Digit c => c) (List.take(L,3))
val padding = [Digit #"0", Digit #"0", Digit #"0"]
(* Remove any initial section that has the same code as the first character.
* This comes up in the "Pfister" test case. *)
val codes = dropWhile (fn Merge => true | Digit d => Digit d = codeOf c | Split => false)
(map codeOf (c::cs))
in
String.implode(c::first3(collapse codes@padding))
end
(* Some test cases from Wikipedia *)
fun test input output =
if soundex input = output then ()
else raise Fail ("Soundex of " ^ input ^ " should be " ^ output ^ ", not " ^ soundex input)
val () = test "Rupert" "R163"
val () = test "Robert" "R163"
val () = test "Rubin" "R150"
val () = test "Tymczak" "T522"
val () = test "Pfister" "P236" |
http://rosettacode.org/wiki/Stack | Stack |
Data Structure
This illustrates a data structure, a means of storing data within a program.
You may see other such structures in the Data Structures category.
A stack is a container of elements with last in, first out access policy. Sometimes it is also called LIFO.
The stack is accessed through its top.
The basic stack operations are:
push stores a new element onto the stack top;
pop returns the last pushed stack element, while removing it from the stack;
empty tests if the stack contains no elements.
Sometimes the last pushed stack element is made accessible for immutable access (for read) or mutable access (for write):
top (sometimes called peek to keep with the p theme) returns the topmost element without modifying the stack.
Stacks allow a very simple hardware implementation.
They are common in almost all processors.
In programming, stacks are also very popular for their way (LIFO) of resource management, usually memory.
Nested scopes of language objects are naturally implemented by a stack (sometimes by multiple stacks).
This is a classical way to implement local variables of a re-entrant or recursive subprogram. Stacks are also used to describe a formal computational framework.
See stack machine.
Many algorithms in pattern matching, compiler construction (e.g. recursive descent parsers), and machine learning (e.g. based on tree traversal) have a natural representation in terms of stacks.
Task
Create a stack supporting the basic operations: push, pop, empty.
See also
Array
Associative array: Creation, Iteration
Collections
Compound data type
Doubly-linked list: Definition, Element definition, Element insertion, List Traversal, Element Removal
Linked list
Queue: Definition, Usage
Set
Singly-linked list: Element definition, Element insertion, List Traversal, Element Removal
Stack
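A minimal Python sketch of the three basic operations (push, pop, empty), plus peek, added here for illustration:

class Stack:
    """A minimal LIFO stack backed by a Python list."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)   # store a new element on the top
    def pop(self):
        return self._items.pop()   # remove and return the last pushed element
    def peek(self):
        return self._items[-1]     # read the top element without removing it
    def empty(self):
        return not self._items     # True when the stack contains no elements

s = Stack()
s.push(5)
s.push(6)
print(s.pop())     # 6
print(s.empty())   # False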
| #REXX | REXX | y=123 /*define a REXX variable, value is 123 */
push y /*pushes 123 onto the stack. */
pull g /*pops last value stacked & removes it. */
q=empty() /*invokes the EMPTY subroutine (below)*/
exit /*stick a fork in it, we're done. */
empty: return queued() /*subroutine returns # of stacked items.*/ |
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Quicksort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Task
Sort an array (or list) elements using the quicksort algorithm.
The elements must have a strict weak order and the index of the array can be of any discrete type.
For languages where this is not possible, sort an array of integers.
Quicksort, also known as partition-exchange sort, uses these steps.
Choose any element of the array to be the pivot.
Divide all other elements (except the pivot) into two partitions.
All elements less than the pivot must be in the first partition.
All elements greater than the pivot must be in the second partition.
Use recursion to sort both partitions.
Join the first sorted partition, the pivot, and the second sorted partition.
The best pivot creates partitions of equal length (or lengths differing by 1).
The worst pivot creates an empty partition (for example, if the pivot is the first or last element of a sorted array).
The run-time of Quicksort ranges from O(n log n) with the best pivots, to O(n2) with the worst pivots, where n is the number of elements in the array.
This is a simple quicksort algorithm, adapted from Wikipedia.
function quicksort(array)
less, equal, greater := three empty arrays
if length(array) > 1
pivot := select any element of array
for each x in array
if x < pivot then add x to less
if x = pivot then add x to equal
if x > pivot then add x to greater
quicksort(less)
quicksort(greater)
array := concatenate(less, equal, greater)
A better quicksort algorithm works in place, by swapping elements within the array, to avoid the memory allocation of more arrays.
function quicksort(array)
if length(array) > 1
pivot := select any element of array
left := first index of array
right := last index of array
while left ≤ right
while array[left] < pivot
left := left + 1
while array[right] > pivot
right := right - 1
if left ≤ right
swap array[left] with array[right]
left := left + 1
right := right - 1
quicksort(array from first index to right)
quicksort(array from left to last index)
Quicksort has a reputation as the fastest sort. Optimized variants of quicksort are common features of many languages and libraries. One often contrasts quicksort with merge sort, because both sorts have an average time of O(n log n).
"On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times." — http://perldoc.perl.org/sort.html
Quicksort is at one end of the spectrum of divide-and-conquer algorithms, with merge sort at the opposite end.
Quicksort is a conquer-then-divide algorithm, which does most of the work during the partitioning and the recursive calls. The subsequent reassembly of the sorted partitions involves trivial effort.
Merge sort is a divide-then-conquer algorithm. The partitioning happens in a trivial way, by splitting the input array in half. Most of the work happens during the recursive calls and the merge phase.
With quicksort, every element in the first partition is less than or equal to every element in the second partition. Therefore, the merge phase of quicksort is so trivial that it needs no mention!
This task has not specified whether to allocate new arrays, or sort in place. This task also has not specified how to choose the pivot element. (Common ways are to choose the first element, the middle element, or the median of three elements.) Thus there is a variety among the following implementations.
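For illustration (not one of the original entries), a minimal Python sketch of the first, allocation-based variant of the pseudocode; it picks the middle element as pivot, which is only one of the common choices mentioned above:

def quicksort(array):
    if len(array) <= 1:
        return array
    pivot = array[len(array) // 2]                 # middle element as pivot
    less = [x for x in array if x < pivot]
    equal = [x for x in array if x == pivot]
    greater = [x for x in array if x > pivot]
    # Sort the partitions recursively and join them around the pivot values.
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]))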
| #J | J | sel=: 1 : 'u # ['
quicksort=: 3 : 0
if.
1 >: #y
do.
y
else.
e=. y{~?#y
(quicksort y <sel e),(y =sel e),quicksort y >sel e
end.
) |
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Insertion sort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
An O(n2) sorting algorithm which moves elements one at a time into the correct position.
The algorithm consists of inserting one element at a time into the previously sorted part of the array, moving higher ranked elements up as necessary.
To start off, the first (or smallest, or any arbitrary) element of the unsorted array is considered to be the sorted part.
Although insertion sort is an O(n2) algorithm, its simplicity, low overhead, good locality of reference and efficiency make it a good choice in two cases:
small n,
as the final finishing-off algorithm for O(n logn) algorithms such as mergesort and quicksort.
The algorithm is as follows (from wikipedia):
function insertionSort(array A)
for i from 1 to length[A]-1 do
value := A[i]
j := i-1
while j >= 0 and A[j] > value do
A[j+1] := A[j]
j := j-1
done
A[j+1] = value
done
Writing the algorithm for integers will suffice.
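A minimal Python sketch following the pseudocode above, added here for illustration:

def insertion_sort(A):
    for i in range(1, len(A)):
        value = A[i]
        j = i - 1
        # Shift larger elements of the sorted part one slot to the right ...
        while j >= 0 and A[j] > value:
            A[j + 1] = A[j]
            j -= 1
        # ... and place the saved value into the gap.
        A[j + 1] = value
    return A

print(insertion_sort([4, 2, 6, 1, 8, 1]))          # [1, 1, 2, 4, 6, 8]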
| #Isabelle | Isabelle | theory Insertionsort
imports Main
begin
fun insert :: "int ⇒ int list ⇒ int list" where
"insert x [] = [x]"
| "insert x (y#ys) = (if x ≤ y then (x#y#ys) else y#(insert x ys))"
text‹Example:›
lemma "insert 4 [1, 2, 3, 5, 6] = [1, 2, 3, 4, 5, 6]" by(code_simp)
fun insertionsort :: "int list ⇒ int list" where
"insertionsort [] = []"
| "insertionsort (x#xs) = insert x (insertionsort xs)"
lemma "insertionsort [4, 2, 6, 1, 8, 1] = [1, 1, 2, 4, 6, 8]" by(code_simp)
text‹
Our function behaves the same as the \<^term>‹sort› function of the standard library.
›
lemma insertionsort: "insertionsort xs = sort xs"
proof(induction xs)
case Nil
show "insertionsort [] = sort []" by simp
next
case (Cons x xs)
text‹Our \<^const>‹insert› behaves the same as the std libs \<^const>‹insort›.›
have "insert a as = insort a as" for a as by(induction as) simp+
with Cons show "insertionsort (x # xs) = sort (x # xs)" by simp
qed
text‹
Given that we behave the same as the std libs sorting algorithm,
we get the correctness properties for free.
›
corollary insertionsort_correctness:
"sorted (insertionsort xs)" and
"set (insertionsort xs) = set xs"
using insertionsort by(simp)+
text‹
The Haskell implementation from
🌐‹https://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort#Haskell›
also behaves the same. Ultimately, they all return a sorted list.
One exception to the Haskell implementation is that the type signature of
\<^const>‹foldr› in Isabelle is slightly different:
The initial value of the accumulator goes last.
›
definition rosettacode_haskell_insertionsort :: "int list ⇒ int list" where
"rosettacode_haskell_insertionsort ≡ λxs. foldr insert xs []"
lemma "rosettacode_haskell_insertionsort [4, 2, 6, 1, 8, 1] =
[1, 1, 2, 4, 6, 8]" by(code_simp)
lemma "rosettacode_haskell_insertionsort xs = insertionsort xs"
unfolding rosettacode_haskell_insertionsort_def by(induction xs) simp+
end |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #Lobster | Lobster | def siftDown(a, start, end):
// (end represents the limit of how far down the heap to sift)
var root = start
while root * 2 + 1 <= end: // (While the root has at least one child)
var child = root * 2 + 1 // (root*2+1 points to the left child)
// (If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 <= end and a[child] < a[child + 1]:
child += 1 // (... then point to the right child instead)
if a[root] < a[child]: // (out of max-heap order)
let r = a[root] // swap(a[root], a[child])
a[root] = a[child]
a[child] = r
root = child // (repeat to continue sifting down the child now)
else:
return
def heapify(a, count):
//(start is assigned the index in a of the last parent node)
var start = (count - 2) >> 1
while start >= 0:
// (sift down the node at index start to the proper place
// such that all nodes below the start index are in heap order)
siftDown(a, start, count-1)
start -= 1
// (after sifting down the root all nodes/elements are in heap order)
def heapSort(a):
// input: an unordered array a of length count
let count = a.length
// (first place a in max-heap order)
heapify(a, count)
var end = count - 1
while end > 0:
//(swap the root(maximum value) of the heap with the last element of the heap)
let z = a[0]
a[0] = a[end]
a[end] = z
//(decrement the size of the heap so that the previous max value will stay in its proper place)
end -= 1
// (put the heap back in max-heap order)
siftDown(a, 0, end)
let inputi = [1,10,2,5,-1,5,-19,4,23,0]
print ("input: " + inputi)
heapSort(inputi)
print ("sorted: " + inputi)
let inputf = [1,-3.2,5.2,10.8,-5.7,7.3,3.5,0,-4.1,-9.5]
print ("input: " + inputf)
heapSort(inputf)
print ("sorted: " + inputf)
let inputs = ["We","hold","these","truths","to","be","self-evident","that","all","men","are","created","equal"]
print ("input: " + inputs)
heapSort(inputs)
print ("sorted: " + inputs)
|
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #LotusScript | LotusScript |
Public Sub heapsort(pavIn As Variant)
Dim liCount As Integer, liEnd As Integer
Dim lvTemp As Variant
liCount = UBound(pavIn) + 1
heapify pavIn, liCount
liEnd = liCount - 1
While liEnd > 0
lvTemp = pavIn(liEnd)
pavIn(liEnd) = pavIn(0)
pavIn(0) = lvTemp
liEnd = liEnd -1
siftDown pavIn,0, liEnd
Wend
End Sub
Private Sub heapify(pavIn As Variant,piCount As Integer)
Dim liStart As Integer
liStart = (piCount - 2) / 2
While liStart >=0
siftDown pavIn, liStart, piCount -1
liStart = liStart - 1
Wend
End Sub
Private Sub siftDown(pavIn As Variant, piStart As Integer, piEnd As Integer)
Dim liRoot As Integer, liChild As Integer
Dim lvTemp As Variant
liRoot = piStart
While liRoot *2 + 1 <= piEnd
liChild = liRoot *2 + 1
If liChild +1 <= piEnd And pavIn(liChild) < pavIn(liChild + 1) Then
liChild = liChild + 1
End If
If pavIn(liRoot) < pavIn(liChild) Then
lvTemp = pavIn(liRoot)
pavIn(liRoot) = pavIn(liChild)
pavIn(liChild) = lvTemp
liRoot = liChild
Else
Exit sub
End if
wend
End Sub
|
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
The merge sort is a recursive sort of order n*log(n).
It is notable for having a worst case and average complexity of O(n*log(n)), and a best case complexity of O(n) (for pre-sorted input).
The basic idea is to split the collection into smaller groups by halving it until the groups only have one element or no elements (which are both entirely sorted groups).
Then merge the groups back together so that their elements are in order.
This is how the algorithm gets its divide and conquer description.
Task
Write a function to sort a collection of integers using the merge sort.
The merge sort algorithm comes in two parts:
a sort function and
a merge function
The functions in pseudocode look like this:
function mergesort(m)
var list left, right, result
if length(m) ≤ 1
return m
else
var middle = length(m) / 2
for each x in m up to middle - 1
add x to left
for each x in m at and after middle
add x to right
left = mergesort(left)
right = mergesort(right)
if last(left) ≤ first(right)
append right to left
return left
result = merge(left, right)
return result
function merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
append rest(left) to result
if length(right) > 0
append rest(right) to result
return result
See also
the Wikipedia entry: merge sort
Note: better performance can be expected if, rather than recursing until length(m) ≤ 1, an insertion sort is used for length(m) smaller than some threshold larger than 1. However, this complicates the example code, so it is not shown here.
| #Groovy | Groovy | def merge = { List left, List right ->
List mergeList = []
while (left && right) {
print "."
mergeList << ((left[-1] > right[-1]) ? left.pop() : right.pop())
}
mergeList = mergeList.reverse()
mergeList = left + right + mergeList
}
def mergeSort;
mergeSort = { List list ->
def n = list.size()
if (n < 2) return list
def middle = n.intdiv(2)
def left = [] + list[0..<middle]
def right = [] + list[middle..<n]
left = mergeSort(left)
right = mergeSort(right)
if (left[-1] <= right[0]) return left + right
merge(left, right)
} |
http://rosettacode.org/wiki/Sorting_algorithms/Selection_sort | Sorting algorithms/Selection sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
Task
Sort an array (or list) of elements using the Selection sort algorithm.
It works as follows:
First find the smallest element in the array and exchange it with the element in the first position, then find the second smallest element and exchange it with the element in the second position, and continue in this way until the entire array is sorted.
Its asymptotic complexity is O(n2) making it inefficient on large arrays.
It is primarily useful when writing data is very expensive (slow) compared to reading, e.g. writing to flash memory or EEPROM.
No other sorting algorithm has less data movement.
References
Rosetta Code: O (complexity).
Wikipedia: Selection sort.
Wikipedia: [Big O notation].
| #Seed7 | Seed7 | const proc: selectionSort (inout array elemType: arr) is func
local
var integer: i is 0;
var integer: j is 0;
var integer: min is 0;
var elemType: help is elemType.value;
begin
for i range 1 to length(arr) - 1 do
min := i;
for j range i + 1 to length(arr) do
if arr[j] < arr[min] then
min := j;
end if;
end for;
help := arr[min];
arr[min] := arr[i];
arr[i] := help;
end for;
end func; |
http://rosettacode.org/wiki/Sorting_algorithms/Selection_sort | Sorting algorithms/Selection sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
Task
Sort an array (or list) of elements using the Selection sort algorithm.
It works as follows:
First find the smallest element in the array and exchange it with the element in the first position, then find the second smallest element and exchange it with the element in the second position, and continue in this way until the entire array is sorted.
Its asymptotic complexity is O(n2) making it inefficient on large arrays.
It is primarily useful when writing data is very expensive (slow) compared to reading, e.g. writing to flash memory or EEPROM.
No other sorting algorithm has less data movement.
References
Rosetta Code: O (complexity).
Wikipedia: Selection sort.
Wikipedia: [Big O notation].
| #Sidef | Sidef | class Array {
method selectionsort {
for i in ^(self.end) {
var min_idx = i
for j in (i+1 .. self.end) {
if (self[j] < self[min_idx]) {
min_idx = j
}
}
self.swap(i, min_idx)
}
return self
}
}
var nums = [7,6,5,9,8,4,3,1,2,0];
say nums.selectionsort;
var strs = ["John", "Kate", "Zerg", "Alice", "Joe", "Jane"];
say strs.selectionsort; |
http://rosettacode.org/wiki/Soundex | Soundex | Soundex is an algorithm for creating indices for words based on their pronunciation.
Task
The goal is for homophones to be encoded to the same representation so that they can be matched despite minor differences in spelling (from the soundex Wikipedia article).
Caution
There is a major issue in many of the implementations concerning the separation of two consonants that have the same soundex code, according to the official rules [1]. So check, for instance, that Ashcraft is coded to A-261.
If a vowel (A, E, I, O, U) separates two consonants that have the same soundex code, the consonant to the right of the vowel is coded. Tymczak is coded as T-522 (T, 5 for the M, 2 for the C, Z ignored (see "Side-by-Side" rule above), 2 for the K). Since the vowel "A" separates the Z and K, the K is coded.
If "H" or "W" separate two consonants that have the same soundex code, the consonant to the right of the vowel is not coded. Example: Ashcraft is coded A-261 (A, 2 for the S, C ignored, 6 for the R, 1 for the F). It is not coded A-226.
| #Stata | Stata | . display soundex_nara("Ashcraft")
A261
. display soundex_nara("Tymczak")
T522 |
http://rosettacode.org/wiki/Stack | Stack |
Data Structure
This illustrates a data structure, a means of storing data within a program.
You may see other such structures in the Data Structures category.
A stack is a container of elements with last in, first out access policy. Sometimes it is also called LIFO.
The stack is accessed through its top.
The basic stack operations are:
push stores a new element onto the stack top;
pop returns the last pushed stack element, while removing it from the stack;
empty tests if the stack contains no elements.
Sometimes the last pushed stack element is made accessible for immutable access (for read) or mutable access (for write):
top (sometimes called peek to keep with the p theme) returns the topmost element without modifying the stack.
Stacks allow a very simple hardware implementation.
They are common in almost all processors.
In programming, stacks are also very popular for their way (LIFO) of resource management, usually memory.
Nested scopes of language objects are naturally implemented by a stack (sometimes by multiple stacks).
This is a classical way to implement local variables of a re-entrant or recursive subprogram. Stacks are also used to describe a formal computational framework.
See stack machine.
Many algorithms in pattern matching, compiler construction (e.g. recursive descent parsers), and machine learning (e.g. based on tree traversal) have a natural representation in terms of stacks.
Task
Create a stack supporting the basic operations: push, pop, empty.
See also
Array
Associative array: Creation, Iteration
Collections
Compound data type
Doubly-linked list: Definition, Element definition, Element insertion, List Traversal, Element Removal
Linked list
Queue: Definition, Usage
Set
Singly-linked list: Element definition, Element insertion, List Traversal, Element Removal
Stack
| #Ring | Ring |
# Project : Stack
load "stdlib.ring"
ostack = new stack
for n = 5 to 7
see "Push: " + n + nl
ostack.push(n)
next
see "Pop:" + ostack.pop() + nl
see "Push: " + "8" + nl
ostack.push(8)
while len(ostack) > 0
see "Pop:" + ostack.pop() + nl
end
if len(ostack) = 0
see "Pop: stack is empty" + nl
ok
|
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Quicksort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Task
Sort an array (or list) elements using the quicksort algorithm.
The elements must have a strict weak order and the index of the array can be of any discrete type.
For languages where this is not possible, sort an array of integers.
Quicksort, also known as partition-exchange sort, uses these steps.
Choose any element of the array to be the pivot.
Divide all other elements (except the pivot) into two partitions.
All elements less than the pivot must be in the first partition.
All elements greater than the pivot must be in the second partition.
Use recursion to sort both partitions.
Join the first sorted partition, the pivot, and the second sorted partition.
The best pivot creates partitions of equal length (or lengths differing by 1).
The worst pivot creates an empty partition (for example, if the pivot is the first or last element of a sorted array).
The run-time of Quicksort ranges from O(n log n) with the best pivots, to O(n2) with the worst pivots, where n is the number of elements in the array.
This is a simple quicksort algorithm, adapted from Wikipedia.
function quicksort(array)
less, equal, greater := three empty arrays
if length(array) > 1
pivot := select any element of array
for each x in array
if x < pivot then add x to less
if x = pivot then add x to equal
if x > pivot then add x to greater
quicksort(less)
quicksort(greater)
array := concatenate(less, equal, greater)
A better quicksort algorithm works in place, by swapping elements within the array, to avoid the memory allocation of more arrays.
function quicksort(array)
if length(array) > 1
pivot := select any element of array
left := first index of array
right := last index of array
while left ≤ right
while array[left] < pivot
left := left + 1
while array[right] > pivot
right := right - 1
if left ≤ right
swap array[left] with array[right]
left := left + 1
right := right - 1
quicksort(array from first index to right)
quicksort(array from left to last index)
Quicksort has a reputation as the fastest sort. Optimized variants of quicksort are common features of many languages and libraries. One often contrasts quicksort with merge sort, because both sorts have an average time of O(n log n).
"On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times." — http://perldoc.perl.org/sort.html
Quicksort is at one end of the spectrum of divide-and-conquer algorithms, with merge sort at the opposite end.
Quicksort is a conquer-then-divide algorithm, which does most of the work during the partitioning and the recursive calls. The subsequent reassembly of the sorted partitions involves trivial effort.
Merge sort is a divide-then-conquer algorithm. The partitioning happens in a trivial way, by splitting the input array in half. Most of the work happens during the recursive calls and the merge phase.
With quicksort, every element in the first partition is less than or equal to every element in the second partition. Therefore, the merge phase of quicksort is so trivial that it needs no mention!
This task has not specified whether to allocate new arrays, or sort in place. This task also has not specified how to choose the pivot element. (Common ways are to choose the first element, the middle element, or the median of three elements.) Thus there is a variety among the following implementations.
| #Java | Java | public static <E extends Comparable<? super E>> List<E> quickSort(List<E> arr) {
if (arr.isEmpty())
return arr;
else {
E pivot = arr.get(0);
List<E> less = new LinkedList<E>();
List<E> pivotList = new LinkedList<E>();
List<E> more = new LinkedList<E>();
// Partition
for (E i: arr) {
if (i.compareTo(pivot) < 0)
less.add(i);
else if (i.compareTo(pivot) > 0)
more.add(i);
else
pivotList.add(i);
}
// Recursively sort sublists
less = quickSort(less);
more = quickSort(more);
// Concatenate results
less.addAll(pivotList);
less.addAll(more);
return less;
}
}
|
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Insertion sort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
An O(n2) sorting algorithm which moves elements one at a time into the correct position.
The algorithm consists of inserting one element at a time into the previously sorted part of the array, moving higher ranked elements up as necessary.
To start off, the first (or smallest, or any arbitrary) element of the unsorted array is considered to be the sorted part.
Although insertion sort is an O(n2) algorithm, its simplicity, low overhead, good locality of reference and efficiency make it a good choice in two cases:
small n,
as the final finishing-off algorithm for O(n logn) algorithms such as mergesort and quicksort.
The algorithm is as follows (from wikipedia):
function insertionSort(array A)
for i from 1 to length[A]-1 do
value := A[i]
j := i-1
while j >= 0 and A[j] > value do
A[j+1] := A[j]
j := j-1
done
A[j+1] = value
done
Writing the algorithm for integers will suffice.
| #J | J | isort=:((>: # ]) , [ , < #])/ |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #M4 | M4 | divert(-1)
define(`randSeed',141592653)
define(`setRand',
`define(`randSeed',ifelse(eval($1<10000),1,`eval(20000-$1)',`$1'))')
define(`rand_t',`eval(randSeed^(randSeed>>13))')
define(`random',
`define(`randSeed',eval((rand_t^(rand_t<<18))&0x7fffffff))randSeed')
define(`set',`define(`$1[$2]',`$3')')
define(`get',`defn(`$1[$2]')')
define(`new',`set($1,size,0)')
dnl for the heap calculations, it's easier if origin is 0, so set value first
define(`append',
`set($1,get($1,size),$2)`'set($1,size,incr(get($1,size)))')
dnl swap(<name>,<j>,<name>[<j>],<k>) using arg stack for the temporary
define(`swap',`set($1,$2,get($1,$4))`'set($1,$4,$3)')
define(`deck',
`new($1)for(`x',1,$2,
`append(`$1',eval(random%100))')')
define(`show',
`for(`x',0,decr(get($1,size)),`get($1,x) ')')
define(`for',
`ifelse($#,0,``$0'',
`ifelse(eval($2<=$3),1,
`pushdef(`$1',$2)$4`'popdef(`$1')$0(`$1',incr($2),$3,`$4')')')')
define(`ifywork',
`ifelse(eval($2>=0),1,
`siftdown($1,$2,$3)`'ifywork($1,decr($2),$3)')')
define(`heapify',
`define(`start',eval((get($1,size)-2)/2))`'ifywork($1,start,
decr(get($1,size)))')
define(`siftdown',
`define(`child',eval($2*2+1))`'ifelse(eval(child<=$3),1,
`ifelse(eval(child+1<=$3),1,
`ifelse(eval(get($1,child)<get($1,incr(child))),1,
`define(`child',
incr(child))')')`'ifelse(eval(get($1,$2)<get($1,child)),1,
`swap($1,$2,get($1,$2),child)`'siftdown($1,child,$3)')')')
define(`sortwork',
`ifelse($2,0,
`',
`swap($1,0,get($1,0),$2)`'siftdown($1,0,decr($2))`'sortwork($1,
decr($2))')')
define(`heapsort',
`heapify($1)`'sortwork($1,decr(get($1,size)))')
divert
deck(`a',10)
show(`a')
heapsort(`a')
show(`a') |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #Maple | Maple | swap := proc(arr, a, b)
local temp:
temp := arr[a]:
arr[a] := arr[b]:
arr[b] := temp:
end proc:
heapify := proc(toSort, n, i)
local largest, l, r, holder:
largest := i:
l := 2*i:
r := 2*i+1:
if (l <= n and toSort[l] > toSort[largest]) then
largest := l:
end if:
if (r <= n and toSort[r] > toSort[largest]) then
largest := r:
end if:
if (not largest = i) then
swap(toSort, i, largest);
heapify(toSort, n, largest):
end if:
end proc:
heapsort := proc(arr)
local n,i:
n := numelems(arr):
for i from trunc(n/2) to 1 by -1 do
heapify(arr, n, i):
end do:
for i from n to 2 by -1 do
swap(arr, 1, i):
heapify(arr, i-1, 1):
end do:
end proc:
arr := Array([17,3,72,0,36,2,3,8,40,0]);
heapsort(arr);
arr; |
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
The merge sort is a recursive sort of order n*log(n).
It is notable for having a worst case and average complexity of O(n*log(n)), and a best case complexity of O(n) (for pre-sorted input).
The basic idea is to split the collection into smaller groups by halving it until the groups only have one element or no elements (which are both entirely sorted groups).
Then merge the groups back together so that their elements are in order.
This is how the algorithm gets its divide and conquer description.
Task
Write a function to sort a collection of integers using the merge sort.
The merge sort algorithm comes in two parts:
a sort function and
a merge function
The functions in pseudocode look like this:
function mergesort(m)
var list left, right, result
if length(m) ≤ 1
return m
else
var middle = length(m) / 2
for each x in m up to middle - 1
add x to left
for each x in m at and after middle
add x to right
left = mergesort(left)
right = mergesort(right)
if last(left) ≤ first(right)
append right to left
return left
result = merge(left, right)
return result
function merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
    append left to result
if length(right) > 0
    append right to result
return result
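As a reference point, a fairly direct Python transcription of this pseudocode could look as follows (a sketch only; the names and the sample list are illustrative):
def merge(left, right):
    result = []
    while left and right:
        if left[0] <= right[0]:
            result.append(left.pop(0))
        else:
            result.append(right.pop(0))
    result.extend(left)            # at most one of the two lists is still non-empty
    result.extend(right)
    return result
def merge_sort(m):
    if len(m) <= 1:
        return m
    middle = len(m) // 2
    left = merge_sort(m[:middle])
    right = merge_sort(m[middle:])
    if left[-1] <= right[0]:       # the halves are already in order: just concatenate
        return left + right
    return merge(left, right)
print(merge_sort([9, 5, 3, -1, 15, -2]))   # [-2, -1, 3, 5, 9, 15]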
See also
the Wikipedia entry: merge sort
Note: better performance can be expected if, rather than recursing until length(m) ≤ 1, an insertion sort is used for length(m) smaller than some threshold larger than 1. However, this complicates the example code, so it is not shown here.
| #Haskell | Haskell | merge [] ys = ys
merge xs [] = xs
merge xs@(x:xt) ys@(y:yt) | x <= y = x : merge xt ys
| otherwise = y : merge xs yt
split (x:y:zs) = let (xs,ys) = split zs in (x:xs,y:ys)
split [x] = ([x],[])
split [] = ([],[])
mergeSort [] = []
mergeSort [x] = [x]
mergeSort xs = let (as,bs) = split xs
in merge (mergeSort as) (mergeSort bs) |
http://rosettacode.org/wiki/Sorting_algorithms/Selection_sort | Sorting algorithms/Selection sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
Task
Sort an array (or list) of elements using the Selection sort algorithm.
It works as follows:
First find the smallest element in the array and exchange it with the element in the first position, then find the second smallest element and exchange it with the element in the second position, and continue in this way until the entire array is sorted.
Its asymptotic complexity is O(n2), making it inefficient on large arrays.
Its primary purpose is for use when writing data is very expensive (slow) compared to reading, e.g. writing to flash memory or EEPROM.
No other sorting algorithm has less data movement.
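A minimal in-place sketch of this procedure in Python (illustrative names and data; see the references below for background):
def selection_sort(a):
    for i in range(len(a) - 1):
        m = i                              # index of the smallest element seen so far
        for j in range(i + 1, len(a)):
            if a[j] < a[m]:
                m = j
        if m != i:
            a[i], a[m] = a[m], a[i]        # at most one swap (write) per position
    return a
print(selection_sort([8, 6, 4, 2, 1, 3, 5, 7, 9]))   # [1, 2, 3, 4, 5, 6, 7, 8, 9]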
References
Rosetta Code: O (complexity).
Wikipedia: Selection sort.
Wikipedia: Big O notation.
| #Standard_ML | Standard ML | fun selection_sort [] = []
| selection_sort (first::lst) =
let
val (small, output) = foldl
(fn (x, (small, output)) =>
if x < small then
(x, small::output)
else
(small, x::output)
) (first, []) lst
in
small :: selection_sort output
end |
http://rosettacode.org/wiki/Sorting_algorithms/Selection_sort | Sorting algorithms/Selection sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
Task
Sort an array (or list) of elements using the Selection sort algorithm.
It works as follows:
First find the smallest element in the array and exchange it with the element in the first position, then find the second smallest element and exchange it with the element in the second position, and continue in this way until the entire array is sorted.
Its asymptotic complexity is O(n2), making it inefficient on large arrays.
Its primary purpose is for use when writing data is very expensive (slow) compared to reading, e.g. writing to flash memory or EEPROM.
No other sorting algorithm has less data movement.
References
Rosetta Code: O (complexity).
Wikipedia: Selection sort.
Wikipedia: Big O notation.
| #Stata | Stata | mata
function selection_sort(real vector a) {
real scalar i, j, k, n
n = length(a)
for (i = 1; i < n; i++) {
k = i
for (j = i+1; j <= n; j++) {
if (a[j] < a[k]) k = j
}
if (k != i) a[(i, k)] = a[(k, i)]
}
}
end |
http://rosettacode.org/wiki/Soundex | Soundex | Soundex is an algorithm for creating indices for words based on their pronunciation.
Task
The goal is for homophones to be encoded to the same representation so that they can be matched despite minor differences in spelling (from the soundex Wikipedia article).
Caution
There is a major issue in many of the implementations concerning the separation of two consonants that have the same soundex code; the official rules [[1]] cover exactly this case. So check, for instance, that Ashcraft is coded to A-261.
If a vowel (A, E, I, O, U) separates two consonants that have the same soundex code, the consonant to the right of the vowel is coded. Tymczak is coded as T-522 (T, 5 for the M, 2 for the C, Z ignored (see "Side-by-Side" rule above), 2 for the K). Since the vowel "A" separates the Z and K, the K is coded.
If "H" or "W" separate two consonants that have the same soundex code, the consonant to the right of the vowel is not coded. Example: Ashcraft is coded A-261 (A, 2 for the S, C ignored, 6 for the R, 1 for the F). It is not coded A-226.
| #Tcl | Tcl | package require soundex
foreach string {"Soundex" "Example" "Sownteks" "Ekzampul"} {
set soundexCode [soundex::knuth $string]
puts "\"$string\" has code $soundexCode"
} |
http://rosettacode.org/wiki/Stack | Stack |
Data Structure
This illustrates a data structure, a means of storing data within a program.
You may see other such structures in the Data Structures category.
A stack is a container of elements with last in, first out access policy. Sometimes it is also called a LIFO.
The stack is accessed through its top.
The basic stack operations are:
push stores a new element onto the stack top;
pop returns the last pushed stack element, while removing it from the stack;
empty tests if the stack contains no elements.
Sometimes the last pushed stack element is made accessible for immutable access (for read) or mutable access (for write):
top (sometimes called peek to keep with the p theme) returns the topmost element without modifying the stack.
Stacks allow a very simple hardware implementation.
They are common in almost all processors.
In programming, stacks are also very popular for their way (LIFO) of resource management, usually memory.
Nested scopes of language objects are naturally implemented by a stack (sometimes by multiple stacks).
This is a classical way to implement local variables of a re-entrant or recursive subprogram. Stacks are also used to describe a formal computational framework.
See stack machine.
Many algorithms in pattern matching, compiler construction (e.g. recursive descent parsers), and machine learning (e.g. based on tree traversal) have a natural representation in terms of stacks.
Task
Create a stack supporting the basic operations: push, pop, empty.
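As an illustration only (not tied to any particular entry below), a Python list already provides the three operations; a thin wrapper might look like this:
class Stack:
    def __init__(self):
        self._items = []
    def push(self, value):
        self._items.append(value)     # store a new element on top
    def pop(self):
        return self._items.pop()      # return and remove the most recently pushed element
    def empty(self):
        return not self._items        # True when the stack holds no elements
    def peek(self):
        return self._items[-1]        # read the top element without removing it
s = Stack()
s.push(1); s.push(2)
print(s.peek())   # 2
print(s.pop())    # 2
print(s.empty())  # False, 1 is still on the stack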
See also
Array
Associative array: Creation, Iteration
Collections
Compound data type
Doubly-linked list: Definition, Element definition, Element insertion, List Traversal, Element Removal
Linked list
Queue: Definition, Usage
Set
Singly-linked list: Element definition, Element insertion, List Traversal, Element Removal
Stack
| #Ruby | Ruby | stack = []
stack.push(value) # pushing
value = stack.pop # popping
stack.empty? # is empty? |
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Quicksort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Task
Sort an array (or list) of elements using the quicksort algorithm.
The elements must have a strict weak order and the index of the array can be of any discrete type.
For languages where this is not possible, sort an array of integers.
Quicksort, also known as partition-exchange sort, uses these steps.
Choose any element of the array to be the pivot.
Divide all other elements (except the pivot) into two partitions.
All elements less than the pivot must be in the first partition.
All elements greater than the pivot must be in the second partition.
Use recursion to sort both partitions.
Join the first sorted partition, the pivot, and the second sorted partition.
The best pivot creates partitions of equal length (or lengths differing by 1).
The worst pivot creates an empty partition (for example, if the pivot is the first or last element of a sorted array).
The run-time of Quicksort ranges from O(n log n) with the best pivots, to O(n2) with the worst pivots, where n is the number of elements in the array.
This is a simple quicksort algorithm, adapted from Wikipedia.
function quicksort(array)
less, equal, greater := three empty arrays
if length(array) > 1
pivot := select any element of array
for each x in array
if x < pivot then add x to less
if x = pivot then add x to equal
if x > pivot then add x to greater
quicksort(less)
quicksort(greater)
array := concatenate(less, equal, greater)
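Read literally, this simple (allocating) variant corresponds to something like the following Python sketch; the choice of the middle element as pivot is just one of the common options:
def quicksort(array):
    if len(array) <= 1:
        return array
    pivot = array[len(array) // 2]
    less    = [x for x in array if x < pivot]
    equal   = [x for x in array if x == pivot]
    greater = [x for x in array if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
print(quicksort([4, 65, 2, -31, 0, 99, 83, 782, 1]))   # [-31, 0, 1, 2, 4, 65, 83, 99, 782]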
A better quicksort algorithm works in place, by swapping elements within the array, to avoid the memory allocation of more arrays.
function quicksort(array)
if length(array) > 1
pivot := select any element of array
left := first index of array
right := last index of array
while left ≤ right
while array[left] < pivot
left := left + 1
while array[right] > pivot
right := right - 1
if left ≤ right
swap array[left] with array[right]
left := left + 1
right := right - 1
quicksort(array from first index to right)
quicksort(array from left to last index)
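The in-place variant, again as a hedged Python sketch that mirrors the pseudocode above (middle-element pivot, swap-based partitioning):
def quicksort_in_place(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    pivot = a[(lo + hi) // 2]
    left, right = lo, hi
    while left <= right:
        while a[left] < pivot:
            left += 1
        while a[right] > pivot:
            right -= 1
        if left <= right:
            a[left], a[right] = a[right], a[left]   # move misplaced elements across the pivot
            left += 1
            right -= 1
    quicksort_in_place(a, lo, right)                # sort the two partitions recursively
    quicksort_in_place(a, left, hi)
    return a
print(quicksort_in_place([4, 65, 2, -31, 0, 99, 83, 782, 1]))   # [-31, 0, 1, 2, 4, 65, 83, 99, 782]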
Quicksort has a reputation as the fastest sort. Optimized variants of quicksort are common features of many languages and libraries. One often contrasts quicksort with merge sort, because both sorts have an average time of O(n log n).
"On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times." — http://perldoc.perl.org/sort.html
Quicksort is at one end of the spectrum of divide-and-conquer algorithms, with merge sort at the opposite end.
Quicksort is a conquer-then-divide algorithm, which does most of the work during the partitioning and the recursive calls. The subsequent reassembly of the sorted partitions involves trivial effort.
Merge sort is a divide-then-conquer algorithm. The partitioning happens in a trivial way, by splitting the input array in half. Most of the work happens during the recursive calls and the merge phase.
With quicksort, every element in the first partition is less than or equal to every element in the second partition. Therefore, the merge phase of quicksort is so trivial that it needs no mention!
This task has not specified whether to allocate new arrays or sort in place. It also has not specified how to choose the pivot element. (Common ways are to choose the first element, the middle element, or the median of three elements.) Thus there is some variety among the following implementations.
| #JavaScript | JavaScript | function sort(array, less) {
function swap(i, j) {
var t = array[i];
array[i] = array[j];
array[j] = t;
}
function quicksort(left, right) {
if (left < right) {
var pivot = array[left + Math.floor((right - left) / 2)],
left_new = left,
right_new = right;
do {
while (less(array[left_new], pivot)) {
left_new += 1;
}
while (less(pivot, array[right_new])) {
right_new -= 1;
}
if (left_new <= right_new) {
swap(left_new, right_new);
left_new += 1;
right_new -= 1;
}
} while (left_new <= right_new);
quicksort(left, right_new);
quicksort(left_new, right);
}
}
quicksort(0, array.length - 1);
return array;
} |
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Insertion sort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
An O(n2) sorting algorithm which moves elements one at a time into the correct position.
The algorithm consists of inserting one element at a time into the previously sorted part of the array, moving higher ranked elements up as necessary.
To start off, the first (or smallest, or any arbitrary) element of the unsorted array is considered to be the sorted part.
Although insertion sort is an O(n2) algorithm, its simplicity, low overhead, good locality of reference and efficiency make it a good choice in two cases:
small n,
as the final finishing-off algorithm for O(n logn) algorithms such as mergesort and quicksort.
The algorithm is as follows (from Wikipedia):
function insertionSort(array A)
for i from 1 to length[A]-1 do
value := A[i]
j := i-1
while j >= 0 and A[j] > value do
A[j+1] := A[j]
j := j-1
done
A[j+1] = value
done
Writing the algorithm for integers will suffice.
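A direct Python transcription of the pseudocode above (a sketch with illustrative data):
def insertion_sort(a):
    for i in range(1, len(a)):
        value = a[i]
        j = i - 1
        while j >= 0 and a[j] > value:
            a[j + 1] = a[j]          # shift larger elements one position up
            j -= 1
        a[j + 1] = value             # insert into the gap that was opened
    return a
print(insertion_sort([4, 65, 2, -31, 0, 99, 83, 782, 1]))   # [-31, 0, 1, 2, 4, 65, 83, 99, 782]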
| #Java | Java | public static void insertSort(int[] A){
for(int i = 1; i < A.length; i++){
int value = A[i];
int j = i - 1;
while(j >= 0 && A[j] > value){
A[j + 1] = A[j];
j = j - 1;
}
A[j + 1] = value;
}
} |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #Mathematica.2FWolfram_Language | Mathematica/Wolfram Language | siftDown[list_,root_,theEnd_]:=
While[(root*2) <= theEnd,
child = root*2;
If[(child+1 <= theEnd)&&(list[[child]] < list[[child+1]]), child++;];
If[list[[root]] < list[[child]],
list[[{root,child}]] = list[[{child,root}]]; root = child;,
Break[];
]
]
heapSort[list_] := Module[{ count, start},
count = Length[list]; start = Floor[count/2];
While[start >= 1,list = siftDown[list,start,count];
start--;
]
While[count > 1, list[[{count,1}]] = list[[{1,count}]];
count--; list = siftDown[list,1,count];
]
] |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #MATLAB_.2F_Octave | MATLAB / Octave | function list = heapSort(list)
function list = siftDown(list,root,theEnd)
while (root * 2) <= theEnd
child = root * 2;
if (child + 1 <= theEnd) && (list(child) < list(child+1))
child = child + 1;
end
if list(root) < list(child)
list([root child]) = list([child root]); %Swap
root = child;
else
return
end
end %while
end %siftDown
count = numel(list);
%Because heapify is called once in pseudo-code, it is inline here
start = floor(count/2);
while start >= 1
list = siftDown(list, start, count);
start = start - 1;
end
%End Heapify
while count > 1
list([count 1]) = list([1 count]); %Swap
count = count - 1;
list = siftDown(list,1,count);
end
end |
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
The merge sort is a recursive sort of order n*log(n).
It is notable for having a worst case and average complexity of O(n*log(n)), and a best case complexity of O(n) (for pre-sorted input).
The basic idea is to split the collection into smaller groups by halving it until the groups only have one element or no elements (which are both entirely sorted groups).
Then merge the groups back together so that their elements are in order.
This is how the algorithm gets its divide and conquer description.
Task
Write a function to sort a collection of integers using the merge sort.
The merge sort algorithm comes in two parts:
a sort function and
a merge function
The functions in pseudocode look like this:
function mergesort(m)
var list left, right, result
if length(m) ≤ 1
return m
else
var middle = length(m) / 2
for each x in m up to middle - 1
add x to left
for each x in m at and after middle
add x to right
left = mergesort(left)
right = mergesort(right)
if last(left) ≤ first(right)
append right to left
return left
result = merge(left, right)
return result
function merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
    append left to result
if length(right) > 0
    append right to result
return result
See also
the Wikipedia entry: merge sort
Note: better performance can be expected if, rather than recursing until length(m) ≤ 1, an insertion sort is used for length(m) smaller than some threshold larger than 1. However, this complicates the example code, so it is not shown here.
| #Icon_and_Unicon | Icon and Unicon | procedure main() #: demonstrate various ways to sort a list and string
demosort(mergesort,[3, 14, 1, 5, 9, 2, 6, 3],"qwerty")
end
procedure mergesort(X,op,lower,upper) #: return sorted list ascending(or descending)
local middle
if /lower := 1 then { # top level call setup
upper := *X
op := sortop(op,X) # select how and what we sort
}
if upper ~= lower then { # sort all sections with 2 or more elements
X := mergesort(X,op,lower,middle := lower + (upper - lower) / 2)
X := mergesort(X,op,middle+1,upper)
if op(X[middle+1],X[middle]) then # @middle+1 < @middle merge if halves reversed
X := merge(X,op,lower,middle,upper)
}
return X
end
procedure merge(X,op,lower,middle,upper) # merge two list sections within a larger list
local p1,p2,add
p1 := lower
p2 := middle + 1
add := if type(X) ~== "string" then put else "||" # extend X, strings require X := add (until ||:= is invocable)
while p1 <= middle & p2 <= upper do
if op(X[p1],X[p2]) then { # @p1 < @p2
X := add(X,X[p1]) # extend X temporarily (rather than use a separate temporary list)
p1 +:= 1
}
else {
X := add(X,X[p2]) # extend X temporarily
p2 +:= 1
}
while X := add(X,X[middle >= p1]) do p1 +:= 1 # and rest of lower or ...
while X := add(X,X[upper >= p2]) do p2 +:= 1 # ... upper trailers if any
if type(X) ~== "string" then # pull section's sorted elements from extension
every X[upper to lower by -1] := pull(X)
else
(X[lower+:(upper-lower+1)] := X[0-:(upper-lower+1)])[0-:(upper-lower+1)] := ""
return X
end |
http://rosettacode.org/wiki/Sorting_algorithms/Selection_sort | Sorting algorithms/Selection sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
Task
Sort an array (or list) of elements using the Selection sort algorithm.
It works as follows:
First find the smallest element in the array and exchange it with the element in the first position, then find the second smallest element and exchange it with the element in the second position, and continue in this way until the entire array is sorted.
Its asymptotic complexity is O(n2), making it inefficient on large arrays.
Its primary purpose is for use when writing data is very expensive (slow) compared to reading, e.g. writing to flash memory or EEPROM.
No other sorting algorithm has less data movement.
References
Rosetta Code: O (complexity).
Wikipedia: Selection sort.
Wikipedia: Big O notation.
| #Swift | Swift | func selectionSort(inout arr:[Int]) {
var min:Int
for n in 0..<arr.count {
min = n
for x in n+1..<arr.count {
if (arr[x] < arr[min]) {
min = x
}
}
if min != n {
let temp = arr[min]
arr[min] = arr[n]
arr[n] = temp
}
}
} |
http://rosettacode.org/wiki/Sorting_algorithms/Selection_sort | Sorting algorithms/Selection sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
Task
Sort an array (or list) of elements using the Selection sort algorithm.
It works as follows:
First find the smallest element in the array and exchange it with the element in the first position, then find the second smallest element and exchange it with the element in the second position, and continue in this way until the entire array is sorted.
Its asymptotic complexity is O(n2), making it inefficient on large arrays.
Its primary purpose is for use when writing data is very expensive (slow) compared to reading, e.g. writing to flash memory or EEPROM.
No other sorting algorithm has less data movement.
References
Rosetta Code: O (complexity).
Wikipedia: Selection sort.
Wikipedia: Big O notation.
| #Tcl | Tcl | package require Tcl 8.5
package require struct::list
proc selectionsort {A} {
set len [llength $A]
for {set i 0} {$i < $len - 1} {incr i} {
set min_idx [expr {$i + 1}]
for {set j $min_idx} {$j < $len} {incr j} {
if {[lindex $A $j] < [lindex $A $min_idx]} {
set min_idx $j
}
}
if {[lindex $A $i] > [lindex $A $min_idx]} {
struct::list swap A $i $min_idx
}
}
return $A
}
puts [selectionsort {8 6 4 2 1 3 5 7 9}] ;# => 1 2 3 4 5 6 7 8 9 |
http://rosettacode.org/wiki/Soundex | Soundex | Soundex is an algorithm for creating indices for words based on their pronunciation.
Task
The goal is for homophones to be encoded to the same representation so that they can be matched despite minor differences in spelling (from the soundex Wikipedia article).
Caution
There is a major issue in many of the implementations concerning the separation of two consonants that have the same soundex code; the official rules [[1]] cover exactly this case. So check, for instance, that Ashcraft is coded to A-261.
If a vowel (A, E, I, O, U) separates two consonants that have the same soundex code, the consonant to the right of the vowel is coded. Tymczak is coded as T-522 (T, 5 for the M, 2 for the C, Z ignored (see "Side-by-Side" rule above), 2 for the K). Since the vowel "A" separates the Z and K, the K is coded.
If "H" or "W" separate two consonants that have the same soundex code, the consonant to the right of the vowel is not coded. Example: Ashcraft is coded A-261 (A, 2 for the S, C ignored, 6 for the R, 1 for the F). It is not coded A-226.
| #TMG | TMG | prog: ignore(spaces)
let: peek/done
[ch = ch>140 ? ch-40 : ch ]
( [ch<110?] ( [ch==101?] vow
| [ch==102?] r1
| [ch==103?] r2
| [ch==104?] r3
| [ch==105?] vow
| [ch==106?] r1
| [ch==107?] r2 )
| [ch<120?] ( [ch==110?] hw
| [ch==111?] vow
| [ch==112?] r2
| [ch==113?] r2
| [ch==114?] r4
| [ch==115?] r5
| [ch==116?] r5
| [ch==117?] vow )
| [ch<130?] ( [ch==120?] r1
| [ch==121?] r2
| [ch==122?] r6
| [ch==123?] r2
| [ch==124?] r3
| [ch==125?] vow
| [ch==126?] r1
| [ch==127?] hw )
| [ch<140?] ( [ch==130?] r2
| [ch==131?] vow
| [ch==132?] r2 ))
[n>0?]\let done;
vow: [ch=0] out;
r1: [ch=1] out;
r2: [ch=2] out;
r3: [ch=3] out;
r4: [ch=4] out;
r5: [ch=5] out;
r6: [ch=6] out;
hw: [ch=7] out;
out: [n==4?] [--n] parse(( scopy ))
| ( [(l1!=10) & ((ch==l1) | (ch==7) | (!ch)) ?]
| [(l1==7) & (ch==l2) ?]
| [--n] parse(num) );
num: octal(ch);
done: [l1=10] [ch=0]
loop: [n>0?] out loop | parse((={*}));
peek: adv ord/read;
ord: char(ch) fail;
read: smark any(!<<>>);
adv: [l2=l1] [l1=ch];
spaces: <<
>>;
n: 4;
ch: 0;
l1: 0;
l2: 0; |
http://rosettacode.org/wiki/Stack | Stack |
Data Structure
This illustrates a data structure, a means of storing data within a program.
You may see other such structures in the Data Structures category.
A stack is a container of elements with last in, first out access policy. Sometimes it is also called a LIFO.
The stack is accessed through its top.
The basic stack operations are:
push stores a new element onto the stack top;
pop returns the last pushed stack element, while removing it from the stack;
empty tests if the stack contains no elements.
Sometimes the last pushed stack element is made accessible for immutable access (for read) or mutable access (for write):
top (sometimes called peek to keep with the p theme) returns the topmost element without modifying the stack.
Stacks allow a very simple hardware implementation.
They are common in almost all processors.
In programming, stacks are also very popular for their way (LIFO) of resource management, usually memory.
Nested scopes of language objects are naturally implemented by a stack (sometimes by multiple stacks).
This is a classical way to implement local variables of a re-entrant or recursive subprogram. Stacks are also used to describe a formal computational framework.
See stack machine.
Many algorithms in pattern matching, compiler construction (e.g. recursive descent parsers), and machine learning (e.g. based on tree traversal) have a natural representation in terms of stacks.
Task
Create a stack supporting the basic operations: push, pop, empty.
See also
Array
Associative array: Creation, Iteration
Collections
Compound data type
Doubly-linked list: Definition, Element definition, Element insertion, List Traversal, Element Removal
Linked list
Queue: Definition, Usage
Set
Singly-linked list: Element definition, Element insertion, List Traversal, Element Removal
Stack
| #Run_BASIC | Run BASIC | dim stack$(10) ' stack of ten
global stack$
global stackEnd
for i = 1 to 5 ' push 5 values to the stack
a$ = push$(chr$(i + 64))
print "Pushed ";chr$(i + 64);" stack has ";stackEnd
next i
print "Pop Value:";pop$();" stack has ";stackEnd ' pop last in
print "Pop Value:";pop$();" stack has ";stackEnd ' pop last in
e$ = mt$() ' MT the stack
print "Empty stack. stack has ";stackEnd
' ------ PUSH the stack
FUNCTION push$(val$)
stackEnd = stackEnd + 1 ' if more than 10 then lose the oldest
if stackEnd > 10 then
for i = 0 to 9
stack$(i) = stack$(i+1)
next i
stackEnd = 10
end if
stack$(stackEnd) = val$
END FUNCTION
' ------ POP the stack -----
FUNCTION pop$()
if stackEnd = 0 then
pop$ = "Stack is MT"
else
pop$ = stack$(stackEnd) ' pop last in
stackEnd = max(stackEnd - 1,0)
end if
END FUNCTION
' ------ MT the stack ------
FUNCTION mt$()
stackEnd = 0
END FUNCTION |
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Quicksort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Task
Sort an array (or list) of elements using the quicksort algorithm.
The elements must have a strict weak order and the index of the array can be of any discrete type.
For languages where this is not possible, sort an array of integers.
Quicksort, also known as partition-exchange sort, uses these steps.
Choose any element of the array to be the pivot.
Divide all other elements (except the pivot) into two partitions.
All elements less than the pivot must be in the first partition.
All elements greater than the pivot must be in the second partition.
Use recursion to sort both partitions.
Join the first sorted partition, the pivot, and the second sorted partition.
The best pivot creates partitions of equal length (or lengths differing by 1).
The worst pivot creates an empty partition (for example, if the pivot is the first or last element of a sorted array).
The run-time of Quicksort ranges from O(n log n) with the best pivots, to O(n2) with the worst pivots, where n is the number of elements in the array.
This is a simple quicksort algorithm, adapted from Wikipedia.
function quicksort(array)
less, equal, greater := three empty arrays
if length(array) > 1
pivot := select any element of array
for each x in array
if x < pivot then add x to less
if x = pivot then add x to equal
if x > pivot then add x to greater
quicksort(less)
quicksort(greater)
array := concatenate(less, equal, greater)
A better quicksort algorithm works in place, by swapping elements within the array, to avoid the memory allocation of more arrays.
function quicksort(array)
if length(array) > 1
pivot := select any element of array
left := first index of array
right := last index of array
while left ≤ right
while array[left] < pivot
left := left + 1
while array[right] > pivot
right := right - 1
if left ≤ right
swap array[left] with array[right]
left := left + 1
right := right - 1
quicksort(array from first index to right)
quicksort(array from left to last index)
Quicksort has a reputation as the fastest sort. Optimized variants of quicksort are common features of many languages and libraries. One often contrasts quicksort with merge sort, because both sorts have an average time of O(n log n).
"On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times." — http://perldoc.perl.org/sort.html
Quicksort is at one end of the spectrum of divide-and-conquer algorithms, with merge sort at the opposite end.
Quicksort is a conquer-then-divide algorithm, which does most of the work during the partitioning and the recursive calls. The subsequent reassembly of the sorted partitions involves trivial effort.
Merge sort is a divide-then-conquer algorithm. The partitioning happens in a trivial way, by splitting the input array in half. Most of the work happens during the recursive calls and the merge phase.
With quicksort, every element in the first partition is less than or equal to every element in the second partition. Therefore, the merge phase of quicksort is so trivial that it needs no mention!
This task has not specified whether to allocate new arrays or sort in place. It also has not specified how to choose the pivot element. (Common ways are to choose the first element, the middle element, or the median of three elements.) Thus there is some variety among the following implementations.
| #Joy | Joy |
DEFINE qsort ==
[small] # termination condition: 0 or 1 element
[] # do nothing
[uncons [>] split] # pivot and two lists
[enconcat] # insert the pivot after the recursion
binrec. # recursion on the two lists
|
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Insertion sort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
An O(n2) sorting algorithm which moves elements one at a time into the correct position.
The algorithm consists of inserting one element at a time into the previously sorted part of the array, moving higher ranked elements up as necessary.
To start off, the first (or smallest, or any arbitrary) element of the unsorted array is considered to be the sorted part.
Although insertion sort is an O(n2) algorithm, its simplicity, low overhead, good locality of reference and efficiency make it a good choice in two cases:
small n,
as the final finishing-off algorithm for O(n logn) algorithms such as mergesort and quicksort.
The algorithm is as follows (from Wikipedia):
function insertionSort(array A)
for i from 1 to length[A]-1 do
value := A[i]
j := i-1
while j >= 0 and A[j] > value do
A[j+1] := A[j]
j := j-1
done
A[j+1] = value
done
Writing the algorithm for integers will suffice.
| #JavaScript | JavaScript |
function insertionSort (a) {
for (var i = 0; i < a.length; i++) {
var k = a[i];
for (var j = i; j > 0 && k < a[j - 1]; j--)
a[j] = a[j - 1];
a[j] = k;
}
return a;
}
var a = [4, 65, 2, -31, 0, 99, 83, 782, 1];
insertionSort(a);
document.write(a.join(" ")); |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #MAXScript | MAXScript | fn heapify arr count =
(
local s = count /2
while s > 0 do
(
arr = siftDown arr s count
s -= 1
)
return arr
)
fn siftDown arr s end =
(
local root = s
while root * 2 <= end do
(
local child = root * 2
if child < end and arr[child] < arr[child+1] do
(
child += 1
)
if arr[root] < arr[child] then
(
swap arr[root] arr[child]
root = child
)
else return arr
)
return arr
)
fn heapSort arr =
(
local count = arr.count
arr = heapify arr count
local end = count
while end >= 1 do
(
swap arr[1] arr[end]
end -= 1
arr = siftDown arr 1 end
)
) |
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
The merge sort is a recursive sort of order n*log(n).
It is notable for having a worst case and average complexity of O(n*log(n)), and a best case complexity of O(n) (for pre-sorted input).
The basic idea is to split the collection into smaller groups by halving it until the groups only have one element or no elements (which are both entirely sorted groups).
Then merge the groups back together so that their elements are in order.
This is how the algorithm gets its divide and conquer description.
Task
Write a function to sort a collection of integers using the merge sort.
The merge sort algorithm comes in two parts:
a sort function and
a merge function
The functions in pseudocode look like this:
function mergesort(m)
var list left, right, result
if length(m) ≤ 1
return m
else
var middle = length(m) / 2
for each x in m up to middle - 1
add x to left
for each x in m at and after middle
add x to right
left = mergesort(left)
right = mergesort(right)
if last(left) ≤ first(right)
append right to left
return left
result = merge(left, right)
return result
function merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
    append left to result
if length(right) > 0
    append right to result
return result
See also
the Wikipedia entry: merge sort
Note: better performance can be expected if, rather than recursing until length(m) ≤ 1, an insertion sort is used for length(m) smaller than some threshold larger than 1. However, this complicates the example code, so it is not shown here.
| #Io | Io | List do (
merge := method(lst1, lst2,
result := list()
while(lst1 isNotEmpty or lst2 isNotEmpty,
if(lst1 first <= lst2 first) then(
result append(lst1 removeFirst)
) else (
result append(lst2 removeFirst)
)
)
result)
mergeSort := method(
if (size > 1) then(
half_size := (size / 2) ceil
return merge(slice(0, half_size) mergeSort,
slice(half_size, size) mergeSort)
) else (return self)
)
mergeSortInPlace := method(
copy(mergeSort)
)
)
lst := list(9, 5, 3, -1, 15, -2)
lst mergeSort println # ==> list(-2, -1, 3, 5, 9, 15)
lst mergeSortInPlace println # ==> list(-2, -1, 3, 5, 9, 15) |
http://rosettacode.org/wiki/Sorting_algorithms/Selection_sort | Sorting algorithms/Selection sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
Task
Sort an array (or list) of elements using the Selection sort algorithm.
It works as follows:
First find the smallest element in the array and exchange it with the element in the first position, then find the second smallest element and exchange it with the element in the second position, and continue in this way until the entire array is sorted.
Its asymptotic complexity is O(n2), making it inefficient on large arrays.
Its primary purpose is for use when writing data is very expensive (slow) compared to reading, e.g. writing to flash memory or EEPROM.
No other sorting algorithm has less data movement.
References
Rosetta Code: O (complexity).
Wikipedia: Selection sort.
Wikipedia: Big O notation.
| #TI-83_BASIC | TI-83 BASIC | :L1→L2
:dim(L2)→I
:For(A,1,I)
:A→C
:0→X
:For(B,A,I)
:If L2(B)<L2(C)
:Then
:B→C
:1→X
:End
:End
:If X=1
:Then
:L2(C)→B
:L2(A)→L2(C)
:B→L2(A)
:End
:End
:DelVar A
:DelVar B
:DelVar C
:DelVar I
:DelVar X
:Return
|
http://rosettacode.org/wiki/Sorting_algorithms/Selection_sort | Sorting algorithms/Selection sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
Task
Sort an array (or list) of elements using the Selection sort algorithm.
It works as follows:
First find the smallest element in the array and exchange it with the element in the first position, then find the second smallest element and exchange it with the element in the second position, and continue in this way until the entire array is sorted.
Its asymptotic complexity is O(n2) making it inefficient on large arrays.
It is primarily useful when writing data is very expensive (slow) compared to reading, e.g. when writing to flash memory or EEPROM.
No other sorting algorithm has less data movement.
References
Rosetta Code: O (complexity).
Wikipedia: Selection sort.
Wikipedia: [Big O notation].
| #uBasic.2F4tH | uBasic/4tH | PRINT "Selection sort:"
n = FUNC (_InitArray)
PROC _ShowArray (n)
PROC _Selectionsort (n)
PROC _ShowArray (n)
PRINT
END
_Selectionsort PARAM (1) ' Selection sort
LOCAL (3)
FOR b@ = 0 TO a@-1
c@ = b@
FOR d@ = b@ TO a@-1
IF @(d@) < @(c@) THEN c@ = d@
NEXT
IF b@ # c@ THEN PROC _Swap (b@, c@)
NEXT
RETURN
_Swap PARAM(2) ' Swap two array elements
PUSH @(a@)
@(a@) = @(b@)
@(b@) = POP()
RETURN
_InitArray ' Init example array
PUSH 4, 65, 2, -31, 0, 99, 2, 83, 782, 1
FOR i = 0 TO 9
@(i) = POP()
NEXT
RETURN (i)
_ShowArray PARAM (1) ' Show array subroutine
FOR i = 0 TO a@-1
PRINT @(i),
NEXT
PRINT
RETURN |
http://rosettacode.org/wiki/Soundex | Soundex | Soundex is an algorithm for creating indices for words based on their pronunciation.
Task
The goal is for homophones to be encoded to the same representation so that they can be matched despite minor differences in spelling (from the soundex Wikipedia article).
Caution
There is a major issue in many of the implementations concerning the separation of two consonants that have the same soundex code. According to the official rules [[1]], Ashcraft should be coded to A-261, so check that your implementation handles this case.
If a vowel (A, E, I, O, U) separates two consonants that have the same soundex code, the consonant to the right of the vowel is coded. Tymczak is coded as T-522 (T, 5 for the M, 2 for the C, Z ignored (see "Side-by-Side" rule above), 2 for the K). Since the vowel "A" separates the Z and K, the K is coded.
If "H" or "W" separates two consonants that have the same soundex code, the consonant to the right of the "H" or "W" is not coded. Example: Ashcraft is coded A-261 (A, 2 for the S, C ignored, 6 for the R, 1 for the F). It is not coded A-226.
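The following Python sketch illustrates the rules above, including the H/W case; it is a simplified illustration rather than the official implementation, non-alphabetic characters simply behave like vowels, and the function name is arbitrary.

def soundex(name):
    # letter -> digit table from the description above
    codes = {}
    for letters, digit in (("BFPV", "1"), ("CGJKQSXZ", "2"), ("DT", "3"),
                           ("L", "4"), ("MN", "5"), ("R", "6")):
        for ch in letters:
            codes[ch] = digit
    name = name.upper()
    result = name[0]                       # the first letter is kept as-is
    prev = codes.get(name[0], "")
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code:
            if code != prev:               # side-by-side duplicates collapse
                result += code
            prev = code
        elif ch not in "HW":
            prev = ""                      # vowels separate; H and W do not
        if len(result) == 4:
            break
    return (result + "000")[:4]

print(soundex("Tymczak"), soundex("Ashcraft"))    # T522 A261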
| #TSE_SAL | TSE SAL |
// library: string: get: soundex <description></description> <version>1.0.0.0.35</version> <version control></version control> (filenamemacro=getstgso.s) [kn, ri, sa, 15-10-2011 18:23:04]
STRING PROC FNStringGetSoundexS( STRING inS )
// Except the first character, you replace each character in the string with its corresponding mapping number
// Idea is that you give characters with the same sound the same mapping number (e.g. 'c' is replaced by '2'. And 'k' which might sound the same as a 'c' is also replaced by the same '2'
STRING map1S[255] = "AEHIOUWYBFPVCGJKQSXZDTLMNR"
STRING map2S[255] = "00000000111122222222334556"
STRING s[255] = Upper( inS )
STRING soundexS[255] = ""
STRING characterCurrentS[255] = ""
STRING characterPreviousS[255] = "?"
STRING characterMapS[255] = ""
INTEGER mapPositionI = 0
INTEGER minI = 1
INTEGER I = minI
INTEGER maxI = Length( s )
I = minI
characterCurrentS = SubStr( s, I, 1 )
mapPositionI = Pos( characterCurrentS, map1S )
WHILE ( ( I <= maxI ) AND ( Length( soundexS ) < 4 ) AND ( NOT ( mapPositionI == 0 ) ) )
// Skip double letters, like CC, KK, PP, ...
IF ( NOT ( mapPositionI == 0 ) ) AND ( NOT ( characterCurrentS == characterPreviousS ) )
characterPreviousS = characterCurrentS
// First character is extracted unchanged, for sorting purposes.
IF ( I == minI )
soundexS = Format( soundexS, characterCurrentS )
ELSE
mapPositionI = Pos( characterCurrentS, map1S )
IF ( NOT ( mapPositionI == 0 ) )
characterMapS = SubStr( map2S, mapPositionI, 1 )
// skip vowels A, E, I, O, U, further also H, W and Y. In general all characters which have a mapping value of "0"
IF ( NOT ( characterMapS == "0" ) )
soundexS = Format( soundexS, characterMapS )
ENDIF
ENDIF
ENDIF
ENDIF
I = I + 1
characterCurrentS = SubStr( s, I, 1 )
ENDWHILE
IF ( NOT ( soundexS == "" ) )
WHILE ( Length( soundexS ) < 4 )
soundexS = Format( soundexS, "0" )
ENDWHILE
ENDIF
RETURN( soundexS )
END
PROC Main()
STRING s1[255] = "John Doe"
// Warn( Format( FNStringGetSoundexS( "Ashcraft" ) ) ) // gives e.g. "A226" // using another rule the value might be "A261" (see Wikipedia, soundex)
// Warn( Format( FNStringGetSoundexS( "Ashcroft" ) ) ) // gives e.g. "A226" // using another rule the value might be "A261" (see Wikipedia, soundex)
// Warn( Format( FNStringGetSoundexS( "Davidson, Greg" ) ) ) // gives e.g. "D132"
// Warn( Format( FNStringGetSoundexS( "Dracula" ) ) ) // gives e.g. "D624"
// Warn( Format( FNStringGetSoundexS( "Drakula" ) ) ) // gives e.g. "D624"
// Warn( Format( FNStringGetSoundexS( "Darwin" ) ) ) // gives e.g. "D650"
// Warn( Format( FNStringGetSoundexS( "Darwin, Daemon" ) ) ) // gives e.g. "D650"
// Warn( Format( FNStringGetSoundexS( "Darwin, Ian" ) ) ) // gives e.g. "D650"
// Warn( Format( FNStringGetSoundexS( "Derwin" ) ) ) // gives e.g. "D650"
// Warn( Format( FNStringGetSoundexS( "Darwent, William" ) ) ) // gives e.g. "D653"
// Warn( Format( FNStringGetSoundexS( "Ellery" ) ) ) // gives e.g. "E460"
// Warn( Format( FNStringGetSoundexS( "Euler" ) ) ) // gives e.g. "E460"
// Warn( Format( FNStringGetSoundexS( "Ghosh" ) ) ) // gives e.g. "G200"
// Warn( Format( FNStringGetSoundexS( "Gauss" ) ) ) // gives e.g. "G200"
// Warn( Format( FNStringGetSoundexS( "Heilbronn" ) ) ) // gives e.g. "H416"
// Warn( Format( FNStringGetSoundexS( "Hilbert" ) ) ) // gives e.g. "H416"
// Warn( Format( FNStringGetSoundexS( "Johnny" ) ) ) // gives e.g. "J500"
// Warn( Format( FNStringGetSoundexS( "Jonny" ) ) ) // gives e.g. "J500"
// Warn( Format( FNStringGetSoundexS( "Kant" ) ) ) // gives e.g. "K530"
// Warn( Format( FNStringGetSoundexS( "Knuth" ) ) ) // gives e.g. "K530"
// Warn( Format( FNStringGetSoundexS( "Lissajous" ) ) ) // gives e.g. "L222"
// Warn( Format( FNStringGetSoundexS( "Lukasiewicz" ) ) ) // gives e.g. "L222"
// Warn( Format( FNStringGetSoundexS( "Ladd" ) ) ) // gives e.g. "L300"
// Warn( Format( FNStringGetSoundexS( "Lloyd" ) ) ) // gives e.g. "L300"
// Warn( Format( FNStringGetSoundexS( "Rubin" ) ) ) // gives e.g. "R150"
// Warn( Format( FNStringGetSoundexS( "Robert" ) ) ) // gives e.g. "R163"
// Warn( Format( FNStringGetSoundexS( "Rupert" ) ) ) // gives e.g. "R163"
REPEAT
IF ( NOT ( Ask( "string: get: soundex = ", s1, _EDIT_HISTORY_ ) ) AND ( Length( s1 ) > 0 ) ) RETURN() ENDIF
Warn( Format( FNStringGetSoundexS( s1 ) ) )
UNTIL FALSE
END
|
http://rosettacode.org/wiki/Stack | Stack |
Data Structure
This illustrates a data structure, a means of storing data within a program.
You may see other such structures in the Data Structures category.
A stack is a container of elements with a last-in, first-out access policy; it is sometimes also called a LIFO.
The stack is accessed through its top.
The basic stack operations are:
push stores a new element onto the stack top;
pop returns the last pushed stack element, while removing it from the stack;
empty tests if the stack contains no elements.
Sometimes the last pushed stack element is made accessible for immutable access (for read) or mutable access (for write):
top (sometimes called peek to keep with the p theme) returns the topmost element without modifying the stack.
Stacks allow a very simple hardware implementation.
They are common in almost all processors.
In programming, stacks are also very popular for their way (LIFO) of resource management, usually memory.
Nested scopes of language objects are naturally implemented by a stack (sometimes by multiple stacks).
This is a classical way to implement local variables of a re-entrant or recursive subprogram. Stacks are also used to describe a formal computational framework.
See stack machine.
Many algorithms in pattern matching, compiler construction (e.g. recursive descent parsers), and machine learning (e.g. based on tree traversal) have a natural representation in terms of stacks.
Task
Create a stack supporting the basic operations: push, pop, empty.
See also
Array
Associative array: Creation, Iteration
Collections
Compound data type
Doubly-linked list: Definition, Element definition, Element insertion, List Traversal, Element Removal
Linked list
Queue: Definition, Usage
Set
Singly-linked list: Element definition, Element insertion, List Traversal, Element Removal
Stack
| #Rust | Rust | fn main() {
let mut stack = Vec::new();
stack.push("Element1");
stack.push("Element2");
stack.push("Element3");
assert_eq!(Some(&"Element3"), stack.last());
assert_eq!(Some("Element3"), stack.pop());
assert_eq!(Some("Element2"), stack.pop());
assert_eq!(Some("Element1"), stack.pop());
assert_eq!(None, stack.pop());
} |
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Quicksort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Task
Sort an array (or list) elements using the quicksort algorithm.
The elements must have a strict weak order and the index of the array can be of any discrete type.
For languages where this is not possible, sort an array of integers.
Quicksort, also known as partition-exchange sort, uses these steps.
Choose any element of the array to be the pivot.
Divide all other elements (except the pivot) into two partitions.
All elements less than the pivot must be in the first partition.
All elements greater than the pivot must be in the second partition.
Use recursion to sort both partitions.
Join the first sorted partition, the pivot, and the second sorted partition.
The best pivot creates partitions of equal length (or lengths differing by 1).
The worst pivot creates an empty partition (for example, if the pivot is the first or last element of a sorted array).
The run-time of Quicksort ranges from O(n log n) with the best pivots, to O(n2) with the worst pivots, where n is the number of elements in the array.
This is a simple quicksort algorithm, adapted from Wikipedia.
function quicksort(array)
less, equal, greater := three empty arrays
if length(array) > 1
pivot := select any element of array
for each x in array
if x < pivot then add x to less
if x = pivot then add x to equal
if x > pivot then add x to greater
quicksort(less)
quicksort(greater)
array := concatenate(less, equal, greater)
A better quicksort algorithm works in place, by swapping elements within the array, to avoid the memory allocation of more arrays.
function quicksort(array)
if length(array) > 1
pivot := select any element of array
left := first index of array
right := last index of array
while left ≤ right
while array[left] < pivot
left := left + 1
while array[right] > pivot
right := right - 1
if left ≤ right
swap array[left] with array[right]
left := left + 1
right := right - 1
quicksort(array from first index to right)
quicksort(array from left to last index)
Quicksort has a reputation as the fastest sort. Optimized variants of quicksort are common features of many languages and libraries. One often contrasts quicksort with merge sort, because both sorts have an average time of O(n log n).
"On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times." — http://perldoc.perl.org/sort.html
Quicksort is at one end of the spectrum of divide-and-conquer algorithms, with merge sort at the opposite end.
Quicksort is a conquer-then-divide algorithm, which does most of the work during the partitioning and the recursive calls. The subsequent reassembly of the sorted partitions involves trivial effort.
Merge sort is a divide-then-conquer algorithm. The partitioning happens in a trivial way, by splitting the input array in half. Most of the work happens during the recursive calls and the merge phase.
With quicksort, every element in the first partition is less than or equal to every element in the second partition. Therefore, the merge phase of quicksort is so trivial that it needs no mention!
This task does not specify whether to allocate new arrays or to sort in place, nor how to choose the pivot element. (Common ways are to choose the first element, the middle element, or the median of three elements.) Thus there is some variety among the following implementations.
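A direct Python transcription of the first (allocation-based) pseudocode block above may be useful as a reference point; it is a readability-oriented sketch, not an optimized implementation, and the pivot choice (middle element) is just one of the options mentioned.

def quicksort(array):
    # allocation-based quicksort following the less/equal/greater pseudocode
    if len(array) <= 1:
        return array
    pivot = array[len(array) // 2]
    less    = [x for x in array if x < pivot]
    equal   = [x for x in array if x == pivot]
    greater = [x for x in array if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([3, 9, 1, 3, 7, 0, -2]))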
| #jq | jq | [1, 1.1, [1,2], true, false, null, {"a":1}, null] | sort |
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Insertion sort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
An O(n2) sorting algorithm which moves elements one at a time into the correct position.
The algorithm consists of inserting one element at a time into the previously sorted part of the array, moving higher ranked elements up as necessary.
To start off, the first (or smallest, or any arbitrary) element of the unsorted array is considered to be the sorted part.
Although insertion sort is an O(n2) algorithm, its simplicity, low overhead, good locality of reference and efficiency make it a good choice in two cases:
small n,
as the final finishing-off algorithm for O(n logn) algorithms such as mergesort and quicksort.
The algorithm is as follows (from wikipedia):
function insertionSort(array A)
for i from 1 to length[A]-1 do
value := A[i]
j := i-1
while j >= 0 and A[j] > value do
A[j+1] := A[j]
j := j-1
done
A[j+1] := value
done
Writing the algorithm for integers will suffice.
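A small Python sketch that mirrors the pseudocode above: larger elements are shifted one slot to the right until the saved value can be dropped into place. The function name and the sample input are illustrative only.

def insertion_sort(A):
    # in-place insertion sort following the pseudocode
    for i in range(1, len(A)):
        value = A[i]
        j = i - 1
        while j >= 0 and A[j] > value:
            A[j + 1] = A[j]                # shift the larger element up
            j -= 1
        A[j + 1] = value
    return A

print(insertion_sort([31, 41, 59, 26, 41, 58]))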
| #jq | jq | def insert($x): map(select(. < $x)) + [$x] + map(select(. >= $x));   # helper assumed by the snippet: place $x into an already-sorted array
def insertion_sort:
  reduce .[] as $x ([]; insert($x)); |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
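The pseudocode above translates almost line for line into Python; the sketch below keeps the same 0-based indexing and the same inclusive end-index convention, and is meant only as an illustration alongside the entries that follow.

def heapsort(a):
    # in-place heapsort: build a max-heap, then repeatedly move the root to the end
    def sift_down(start, end):
        root = start
        while root * 2 + 1 <= end:
            child = root * 2 + 1
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1                 # right sibling is larger
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return

    count = len(a)
    for start in range((count - 2) // 2, -1, -1):   # heapify
        sift_down(start, count - 1)
    for end in range(count - 1, 0, -1):
        a[end], a[0] = a[0], a[end]
        sift_down(0, end - 1)
    return a

print(heapsort([4, 65, 2, -31, 0, 99, 2, 83, 782]))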
| #Mercury | Mercury | %%%-------------------------------------------------------------------
:- module heapsort_task.
:- interface.
:- import_module io.
:- pred main(io::di, io::uo) is det.
:- implementation.
:- import_module array.
:- import_module int.
:- import_module list.
:- import_module random.
:- import_module random.sfc16.
%%%-------------------------------------------------------------------
%%%
%%% heapsort/3 --
%%%
%%% A generic heapsort predicate. It takes a "Less_than" predicate to
%%% determine the order of the sort.
%%%
%%% That I call the predicate "Less_than" does not, by any means,
%%% preclude a descending order. This "Less_than" refers to the
%%% ordinals of the sequence. In other words, it means "comes before".
%%%
%%% The implementation closely follows the task pseudocode--although,
%%% of course, loops have been turned into tail recursions and arrays
%%% are treated as state variables.
%%%
:- pred heapsort(pred(T, T)::pred(in, in) is semidet,
array(T)::array_di, array(T)::array_uo) is det.
heapsort(Less_than, !Arr) :-
heapsort(Less_than, size(!.Arr), !Arr).
:- pred heapsort(pred(T, T)::pred(in, in) is semidet, int::in,
array(T)::array_di, array(T)::array_uo) is det.
heapsort(Less_than, Count, !Arr) :-
heapify(Less_than, Count, !Arr),
heapsort_loop(Less_than, Count, Count - 1, !Arr).
:- pred heapsort_loop(pred(T, T)::pred(in, in) is semidet,
int::in, int::in,
array(T)::array_di, array(T)::array_uo) is det.
heapsort_loop(Less_than, Count, End, !Arr) :-
if (End = 0) then true
else (swap(End, 0, !Arr),
sift_down(Less_than, 0, End - 1, !Arr),
heapsort_loop(Less_than, Count, End - 1, !Arr)).
:- pred heapify(pred(T, T)::pred(in, in) is semidet, int::in,
array(T)::array_di, array(T)::array_uo) is det.
heapify(Less_than, Count, !Arr) :-
heapify(Less_than, Count, (Count - 2) // 2, !Arr).
:- pred heapify(pred(T, T)::pred(in, in) is semidet,
int::in, int::in,
array(T)::array_di, array(T)::array_uo) is det.
heapify(Less_than, Count, Start, !Arr) :-
if (Start = -1) then true
else (sift_down(Less_than, Start, Count - 1, !Arr),
heapify(Less_than, Count, Start - 1, !Arr)).
:- pred sift_down(pred(T, T)::pred(in, in) is semidet,
int::in, int::in,
array(T)::array_di, array(T)::array_uo) is det.
sift_down(Less_than, Root, End, !Arr) :-
if (End < (Root * 2) + 1) then true
else (locate_child(Less_than, Root, End, !.Arr, Child),
(if not Less_than(!.Arr^elem(Root), !.Arr^elem(Child))
then true
else (swap(Root, Child, !Arr),
sift_down(Less_than, Child, End, !Arr)))).
:- pred locate_child(pred(T, T)::pred(in, in) is semidet,
int::in, int::in,
array(T)::in, int::out) is det.
locate_child(Less_than, Root, End, Arr, Child) :-
Child0 = (Root * 2) + 1,
(if (End < Child0 + 1)  % compare the right child only when it lies within the heap
then (Child = Child0)
else if not Less_than(Arr^elem(Child0), Arr^elem(Child0 + 1))
then (Child = Child0)
else (Child = Child0 + 1)).
%%%-------------------------------------------------------------------
main(!IO) :-
R = (sfc16.init),
make_io_random(R, M, !IO),
Generate = (pred(Index::in, Number::out, IO1::di, IO::uo) is det :-
uniform_int_in_range(M, min(0, Index), 10, Number,
IO1, IO)),
generate_foldl(30, Generate, Arr0, !IO),
print_line(Arr0, !IO),
heapsort(<, Arr0, Arr1),
print_line(Arr1, !IO),
heapsort(>=, Arr1, Arr2),
print_line(Arr2, !IO).
%%%-------------------------------------------------------------------
%%% local variables:
%%% mode: mercury
%%% prolog-indent-width: 2
%%% end: |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #NetRexx | NetRexx | /* NetRexx */
options replace format comments java crossref savelog symbols binary
import java.util.List
placesList = [String -
"UK London", "US New York", "US Boston", "US Washington" -
, "UK Washington", "US Birmingham", "UK Birmingham", "UK Boston" -
]
lists = [ -
placesList -
, heapSort(String[] Arrays.copyOf(placesList, placesList.length)) -
]
loop ln = 0 to lists.length - 1
cl = lists[ln]
loop ct = 0 to cl.length - 1
say cl[ct]
end ct
say
end ln
return
method heapSort(a = String[], count = a.length) public constant binary returns String[]
rl = String[a.length]
al = List heapSort(Arrays.asList(a), count)
al.toArray(rl)
return rl
method heapSort(a = List, count = a.size) public constant binary returns ArrayList
a = heapify(a, count)
iend = count - 1
loop label iend while iend > 0
swap = a.get(0)
a.set(0, a.get(iend))
a.set(iend, swap)
a = siftDown(a, 0, iend - 1)
iend = iend - 1
end iend
return ArrayList(a)
method heapify(a = List, count = int) public constant binary returns List
start = (count - 2) % 2
loop label strt while start >= 0
a = siftDown(a, start, count - 1)
start = start - 1
end strt
return a
method siftDown(a = List, istart = int, iend = int) public constant binary returns List
root = istart
loop label root while root * 2 + 1 <= iend
child = root * 2 + 1
if child + 1 <= iend then do
if (Comparable a.get(child)).compareTo(Comparable a.get(child + 1)) < 0 then do
child = child + 1
end
end
if (Comparable a.get(root)).compareTo(Comparable a.get(child)) < 0 then do
swap = a.get(root)
a.set(root, a.get(child))
a.set(child, swap)
root = child
end
else do
leave root
end
end root
return a |
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
The merge sort is a recursive sort of order n*log(n).
It is notable for having a worst case and average complexity of O(n*log(n)), and a best case complexity of O(n) (for pre-sorted input).
The basic idea is to split the collection into smaller groups by halving it until the groups only have one element or no elements (which are both entirely sorted groups).
Then merge the groups back together so that their elements are in order.
This is how the algorithm gets its divide and conquer description.
Task
Write a function to sort a collection of integers using the merge sort.
The merge sort algorithm comes in two parts:
a sort function and
a merge function
The functions in pseudocode look like this:
function mergesort(m)
var list left, right, result
if length(m) ≤ 1
return m
else
var middle = length(m) / 2
for each x in m up to middle - 1
add x to left
for each x in m at and after middle
add x to right
left = mergesort(left)
right = mergesort(right)
if last(left) ≤ first(right)
append right to left
return left
result = merge(left, right)
return result
function merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
append left to result
if length(right) > 0
append right to result
return result
See also
the Wikipedia entry: merge sort
Note: better performance can be expected if, rather than recursing until length(m) ≤ 1, an insertion sort is used for length(m) smaller than some threshold larger than 1. However, this complicates the example code, so it is not shown here.
| #Isabelle | Isabelle | theory Mergesort
imports Main
begin
fun merge :: "int list ⇒ int list ⇒ int list" where
"merge [] ys = ys"
| "merge xs [] = xs"
| "merge (x#xs) (y#ys) = (if x ≤ y
then x # merge xs (y#ys)
else y # merge (x # xs) ys)"
text‹example:›
lemma "merge [1,3,6] [1,2,5,8] = [1,1,2,3,5,6,8]" by simp
lemma merge_set: "set (merge xs ys) = set xs ∪ set ys"
by(induction xs ys rule: merge.induct) auto
lemma merge_sorted:
"sorted xs ⟹ sorted ys ⟹ sorted (merge xs ys)"
proof(induction xs ys rule: merge.induct)
case (1 ys)
then show "sorted (merge [] ys)" by simp
next
case (2 x xs)
then show "sorted (merge (x # xs) [])" by simp
next
case (3 x xs y ys)
assume premx: "sorted (x # xs)"
and premy: "sorted (y # ys)"
and IHx: "x ≤ y ⟹ sorted xs ⟹ sorted (y # ys) ⟹
sorted (merge xs (y # ys))"
and IHy: "¬ x ≤ y ⟹ sorted (x # xs) ⟹ sorted ys ⟹
sorted (merge (x # xs) ys)"
then show "sorted (merge (x # xs) (y # ys))"
proof(cases "x ≤ y")
case True
with premx IHx premy have IH: "sorted (merge xs (y # ys))" by simp
from ‹x ≤ y› premx premy merge_set have
"∀z ∈ set (merge xs (y # ys)). x ≤ z" by fastforce
with ‹x ≤ y› IH show "sorted (merge (x # xs) (y # ys))" by(simp)
next
case False
with premy IHy premx have IH: "sorted (merge (x # xs) ys)" by simp
from ‹¬x ≤ y› premx premy merge_set have
"∀z ∈ set (merge (x # xs) ys). y ≤ z" by fastforce
with ‹¬x ≤ y› IH show "sorted (merge (x # xs) (y # ys))" by(simp)
qed
qed
fun mergesort :: "int list ⇒ int list" where
"mergesort [] = []"
| "mergesort [x] = [x]"
| "mergesort xs = merge (mergesort (take (length xs div 2) xs))
(mergesort (drop (length xs div 2) xs))"
theorem mergesort_set: "set xs = set (mergesort xs)"
proof(induction xs rule: mergesort.induct)
case 1
show "set [] = set (mergesort [])" by simp
next
case (2 x)
show "set [x] = set (mergesort [x])" by simp
next
case (3 x1 x2 xs)
from 3 have IH_simplified_take:
"set (mergesort (x1 # take (length xs div 2) (x2 # xs))) =
insert x1 (set (take (length xs div 2) (x2 # xs)))"
and IH_simplified_drop:
"set (mergesort (drop (length xs div 2) (x2 # xs))) =
set (drop (length xs div 2) (x2 # xs))" by simp+
have "(set (take n as) ∪ set (drop n as)) = set as"
for n and as::"int list"
proof -
from set_append[of "take n as" "drop n as"] have
"(set (take n as) ∪ set (drop n as)) =
set (take n as @ drop n as)" by simp
moreover have
"set (take n as @ drop n as) =
set as" using append_take_drop_id by simp
ultimately show ?thesis by simp
qed
hence "(set (take (length xs div 2) (x2 # xs)) ∪
set (drop (length xs div 2) (x2 # xs))) =
set (x2 # xs)"by(simp)
with IH_simplified_take IH_simplified_drop show
"set (x1 # x2 # xs) = set (mergesort (x1 # x2 # xs))"
by(simp add: merge_set)
qed
theorem mergesort_sorted: "sorted (mergesort xs)"
by(induction xs rule: mergesort.induct) (simp add: merge_sorted)+
text‹example:›
lemma "mergesort [42, 5, 1, 3, 67, 3, 9, 0, 33, 32] =
[0, 1, 3, 3, 5, 9, 32, 33, 42, 67]" by simp
end
|
http://rosettacode.org/wiki/Sorting_algorithms/Selection_sort | Sorting algorithms/Selection sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
Task
Sort an array (or list) of elements using the Selection sort algorithm.
It works as follows:
First find the smallest element in the array and exchange it with the element in the first position, then find the second smallest element and exchange it with the element in the second position, and continue in this way until the entire array is sorted.
Its asymptotic complexity is O(n2) making it inefficient on large arrays.
It is primarily useful when writing data is very expensive (slow) compared to reading, e.g. when writing to flash memory or EEPROM.
No other sorting algorithm has less data movement.
References
Rosetta Code: O (complexity).
Wikipedia: Selection sort.
Wikipedia: [Big O notation].
| #Ursala | Ursala | #import std
selection_sort "p" = @iNX ~&l->rx ^(gldif ==,~&r)^/~&l ^|C/"p"$- ~& |
http://rosettacode.org/wiki/Sorting_algorithms/Selection_sort | Sorting algorithms/Selection sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
Task
Sort an array (or list) of elements using the Selection sort algorithm.
It works as follows:
First find the smallest element in the array and exchange it with the element in the first position, then find the second smallest element and exchange it with the element in the second position, and continue in this way until the entire array is sorted.
Its asymptotic complexity is O(n2) making it inefficient on large arrays.
It is primarily useful when writing data is very expensive (slow) compared to reading, e.g. when writing to flash memory or EEPROM.
No other sorting algorithm has less data movement.
References
Rosetta Code: O (complexity).
Wikipedia: Selection sort.
Wikipedia: [Big O notation].
| #VBA | VBA |
sub swap( byref a, byref b)
dim tmp
tmp = a
a = b
b = tmp
end sub
function selectionSort (a)
    for i = 0 to ubound(a)
        k = i                              ' index of the smallest remaining element
        for j = i + 1 to ubound(a)
            if a(j) < a(k) then k = j
        next
        if k <> i then swap a(i), a(k)     ' at most one swap per position
    next
    selectionSort = a
end function
|
http://rosettacode.org/wiki/Soundex | Soundex | Soundex is an algorithm for creating indices for words based on their pronunciation.
Task
The goal is for homophones to be encoded to the same representation so that they can be matched despite minor differences in spelling (from the soundex Wikipedia article).
Caution
There is a major issue in many of the implementations concerning the separation of two consonants that have the same soundex code. According to the official rules [[1]], Ashcraft should be coded to A-261, so check that your implementation handles this case.
If a vowel (A, E, I, O, U) separates two consonants that have the same soundex code, the consonant to the right of the vowel is coded. Tymczak is coded as T-522 (T, 5 for the M, 2 for the C, Z ignored (see "Side-by-Side" rule above), 2 for the K). Since the vowel "A" separates the Z and K, the K is coded.
If "H" or "W" separates two consonants that have the same soundex code, the consonant to the right of the "H" or "W" is not coded. Example: Ashcraft is coded A-261 (A, 2 for the S, C ignored, 6 for the R, 1 for the F). It is not coded A-226.
| #TUSCRIPT | TUSCRIPT |
$$ MODE DATA
$$ BUILD X_TABLE soundex = *
:b:1:f:1:p:1:v:1:
:c:2:g:2:j:2:k:2:q:2:s:2:x:2:z:2:
:d:3:t:3:
:l:4:
:m:5:n:5:
:r:6:
$$ names="Christiansen'Kris Jenson'soundex'Lloyd'Woolcock'Donnell'Baragwanath'Williams'Ashcroft'Euler'Ellery'Gauss'Ghosh'Hilbert'Heilbronn'Knuth'Kant'Ladd'Lukasiewicz'Lissajous'Wheaton'Burroughs'Burrows"
$$ MODE TUSCRIPT,{}
LOOP/CLEAR n=names
first=EXTRACT (n,1,2),second=EXTRACT (n,2,3)
IF (first==second) THEN
rest=EXTRACT (n,3,0)
ELSE
rest=EXTRACT (n,2,0)
ENDIF
soundex=EXCHANGE (rest,soundex)
soundex=STRINGS (soundex,":{\0}:a:e:i:o:u:")
soundex=REDUCE (soundex)
soundex=STRINGS (soundex,":{\0}:",0,0,1,0,"")
soundex=CONCAT (soundex,"000")
soundex=EXTRACT (soundex,0,4)
PRINT first,soundex,"=",n
ENDLOOP
|
http://rosettacode.org/wiki/Stack | Stack |
Data Structure
This illustrates a data structure, a means of storing data within a program.
You may see other such structures in the Data Structures category.
A stack is a container of elements with a last-in, first-out access policy; it is sometimes also called a LIFO.
The stack is accessed through its top.
The basic stack operations are:
push stores a new element onto the stack top;
pop returns the last pushed stack element, while removing it from the stack;
empty tests if the stack contains no elements.
Sometimes the last pushed stack element is made accessible for immutable access (for read) or mutable access (for write):
top (sometimes called peek to keep with the p theme) returns the topmost element without modifying the stack.
Stacks allow a very simple hardware implementation.
They are common in almost all processors.
In programming, stacks are also very popular for their way (LIFO) of resource management, usually memory.
Nested scopes of language objects are naturally implemented by a stack (sometimes by multiple stacks).
This is a classical way to implement local variables of a re-entrant or recursive subprogram. Stacks are also used to describe a formal computational framework.
See stack machine.
Many algorithms in pattern matching, compiler construction (e.g. recursive descent parsers), and machine learning (e.g. based on tree traversal) have a natural representation in terms of stacks.
Task
Create a stack supporting the basic operations: push, pop, empty.
See also
Array
Associative array: Creation, Iteration
Collections
Compound data type
Doubly-linked list: Definition, Element definition, Element insertion, List Traversal, Element Removal
Linked list
Queue: Definition, Usage
Set
Singly-linked list: Element definition, Element insertion, List Traversal, Element Removal
Stack
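As a compact illustration of the push/pop/empty interface described above, here is a minimal Python wrapper around a list; it is a sketch only, the class and method names are arbitrary, and it is not tied to any particular entry below.

class Stack:
    # minimal LIFO stack built on a Python list
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()           # raises IndexError when empty

    def top(self):
        return self._items[-1]             # peek without removing

    def empty(self):
        return not self._items

s = Stack()
s.push(1); s.push(2)
print(s.pop(), s.pop(), s.empty())         # 2 1 True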
| #Sather | Sather | class STACK{T} is
private attr stack :LLIST{T};
create:SAME is
res ::= new;
res.stack := #LLIST{T};
return res;
end;
push(elt: T) is
stack.insert_front(elt);
end;
pop: T is
if ~stack.is_empty then
stack.rewind;
r ::= stack.current;
stack.delete;
return r;
else
raise "stack empty!\n";
end;
end;
top: T is
stack.rewind;
return stack.current;
end;
is_empty: BOOL is
return stack.is_empty;
end;
end; |
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Quicksort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Task
Sort an array (or list) elements using the quicksort algorithm.
The elements must have a strict weak order and the index of the array can be of any discrete type.
For languages where this is not possible, sort an array of integers.
Quicksort, also known as partition-exchange sort, uses these steps.
Choose any element of the array to be the pivot.
Divide all other elements (except the pivot) into two partitions.
All elements less than the pivot must be in the first partition.
All elements greater than the pivot must be in the second partition.
Use recursion to sort both partitions.
Join the first sorted partition, the pivot, and the second sorted partition.
The best pivot creates partitions of equal length (or lengths differing by 1).
The worst pivot creates an empty partition (for example, if the pivot is the first or last element of a sorted array).
The run-time of Quicksort ranges from O(n log n) with the best pivots, to O(n2) with the worst pivots, where n is the number of elements in the array.
This is a simple quicksort algorithm, adapted from Wikipedia.
function quicksort(array)
less, equal, greater := three empty arrays
if length(array) > 1
pivot := select any element of array
for each x in array
if x < pivot then add x to less
if x = pivot then add x to equal
if x > pivot then add x to greater
quicksort(less)
quicksort(greater)
array := concatenate(less, equal, greater)
A better quicksort algorithm works in place, by swapping elements within the array, to avoid the memory allocation of more arrays.
function quicksort(array)
if length(array) > 1
pivot := select any element of array
left := first index of array
right := last index of array
while left ≤ right
while array[left] < pivot
left := left + 1
while array[right] > pivot
right := right - 1
if left ≤ right
swap array[left] with array[right]
left := left + 1
right := right - 1
quicksort(array from first index to right)
quicksort(array from left to last index)
Quicksort has a reputation as the fastest sort. Optimized variants of quicksort are common features of many languages and libraries. One often contrasts quicksort with merge sort, because both sorts have an average time of O(n log n).
"On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times." — http://perldoc.perl.org/sort.html
Quicksort is at one end of the spectrum of divide-and-conquer algorithms, with merge sort at the opposite end.
Quicksort is a conquer-then-divide algorithm, which does most of the work during the partitioning and the recursive calls. The subsequent reassembly of the sorted partitions involves trivial effort.
Merge sort is a divide-then-conquer algorithm. The partitioning happens in a trivial way, by splitting the input array in half. Most of the work happens during the recursive calls and the merge phase.
With quicksort, every element in the first partition is less than or equal to every element in the second partition. Therefore, the merge phase of quicksort is so trivial that it needs no mention!
This task does not specify whether to allocate new arrays or to sort in place, nor how to choose the pivot element. (Common ways are to choose the first element, the middle element, or the median of three elements.) Thus there is some variety among the following implementations.
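To complement the allocation-based sketch given at the earlier occurrence of this task, here is a Python rendering of the second, in-place pseudocode block above (left and right indices that cross around a middle-element pivot); it is an illustrative sketch, and the helper name and default arguments are arbitrary.

def quicksort_inplace(a, first=0, last=None):
    # in-place quicksort following the second pseudocode block
    if last is None:
        last = len(a) - 1
    if first >= last:
        return a
    pivot = a[(first + last) // 2]
    left, right = first, last
    while left <= right:
        while a[left] < pivot:
            left += 1
        while a[right] > pivot:
            right -= 1
        if left <= right:
            a[left], a[right] = a[right], a[left]
            left += 1
            right -= 1
    quicksort_inplace(a, first, right)
    quicksort_inplace(a, left, last)
    return a

print(quicksort_inplace([3, 9, 1, 3, 7, 0, -2]))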
| #Julia | Julia | sort!(A, alg=QuickSort) |
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Insertion sort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
An O(n2) sorting algorithm which moves elements one at a time into the correct position.
The algorithm consists of inserting one element at a time into the previously sorted part of the array, moving higher ranked elements up as necessary.
To start off, the first (or smallest, or any arbitrary) element of the unsorted array is considered to be the sorted part.
Although insertion sort is an O(n2) algorithm, its simplicity, low overhead, good locality of reference and efficiency make it a good choice in two cases:
small n,
as the final finishing-off algorithm for O(n logn) algorithms such as mergesort and quicksort.
The algorithm is as follows (from wikipedia):
function insertionSort(array A)
for i from 1 to length[A]-1 do
value := A[i]
j := i-1
while j >= 0 and A[j] > value do
A[j+1] := A[j]
j := j-1
done
A[j+1] := value
done
Writing the algorithm for integers will suffice.
| #Julia | Julia | # v0.6
function insertionsort!(A::Array{T}) where T <: Number
for i in 1:length(A)-1
value = A[i+1]
j = i
while j > 0 && A[j] > value
A[j+1] = A[j]
j -= 1
end
A[j+1] = value
end
return A
end
x = randn(5)
@show x insertionsort!(x) |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #Nim | Nim | proc siftDown[T](a: var openarray[T]; start, ending: int) =
var root = start
while root * 2 + 1 < ending:
var child = 2 * root + 1
if child + 1 < ending and a[child] < a[child+1]:
inc child
if a[root] < a[child]:
swap a[child], a[root]
root = child
else:
return
proc heapSort[T](a: var openarray[T]) =
let count = a.len
for start in countdown((count - 2) div 2, 0):
siftDown(a, start, count)
for ending in countdown(count - 1, 1):
swap a[ending], a[0]
siftDown(a, 0, ending)
var a = @[4, 65, 2, -31, 0, 99, 2, 83, 782]
heapSort a
echo a |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
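As a cross-check of the pseudocode above, here is a minimal Python sketch of the same max-heap approach; the function names, comments and sample list are illustrative and not taken from any of the entries below.
def sift_down(a, start, end):                 # illustrative sketch, not from any entry
    root = start
    while root * 2 + 1 <= end:                # while the root has at least one child
        child = root * 2 + 1                  # left child
        if child + 1 <= end and a[child] < a[child + 1]:
            child += 1                        # right sibling is larger, use it
        if a[root] < a[child]:
            a[root], a[child] = a[child], a[root]
            root = child
        else:
            return
def heap_sort(a):
    count = len(a)
    for start in range((count - 2) // 2, -1, -1):   # heapify: last parent down to the root
        sift_down(a, start, count - 1)
    for end in range(count - 1, 0, -1):             # move the current maximum to the back
        a[end], a[0] = a[0], a[end]
        sift_down(a, 0, end - 1)
v = [4, 65, 2, -31, 0, 99, 2, 83, 782]
heap_sort(v)
print(v)                                      # [-31, 0, 2, 2, 4, 65, 83, 99, 782]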
| #Objeck | Objeck | bundle Default {
class HeapSort {
function : Main(args : String[]) ~ Nil {
values := [4, 3, 1, 5, 6, 2];
HeapSort(values);
each(i : values) {
values[i]->PrintLine();
};
}
function : HeapSort(a : Int[]) ~ Nil {
count := a->Size();
Heapify(a, count);
end := count - 1;
while(end > 0) {
tmp := a[end];
a[end] := a[0];
a[0] := tmp;
SiftDown(a, 0, end - 1);
end -= 1;
};
}
function : Heapify(a : Int[], count : Int) ~ Nil {
start := (count - 2) / 2;
while(start >= 0) {
SiftDown(a, start, count - 1);
start -= 1;
};
}
function : SiftDown(a : Int[], start : Int, end : Int) ~ Nil {
root := start;
while((root * 2 + 1) <= end) {
child := root * 2 + 1;
if(child + 1 <= end & a[child] < a[child + 1]) {
child := child + 1;
};
if(a[root] < a[child]) {
tmp := a[root];
a[root] := a[child];
a[child] := tmp;
root := child;
}
else {
return;
};
};
}
}
} |
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
The merge sort is a recursive sort of order n*log(n).
It is notable for having a worst case and average complexity of O(n*log(n)), and a best case complexity of O(n) (for pre-sorted input).
The basic idea is to split the collection into smaller groups by halving it until the groups only have one element or no elements (which are both entirely sorted groups).
Then merge the groups back together so that their elements are in order.
This is how the algorithm gets its divide and conquer description.
Task
Write a function to sort a collection of integers using the merge sort.
The merge sort algorithm comes in two parts:
a sort function and
a merge function
The functions in pseudocode look like this:
function mergesort(m)
var list left, right, result
if length(m) ≤ 1
return m
else
var middle = length(m) / 2
for each x in m up to middle - 1
add x to left
for each x in m at and after middle
add x to right
left = mergesort(left)
right = mergesort(right)
if last(left) ≤ first(right)
append right to left
return left
result = merge(left, right)
return result
function merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
append rest(left) to result
if length(right) > 0
append rest(right) to result
return result
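The following Python sketch is one possible reading of this pseudocode; the names, the early-exit check and the sample list are illustrative only.
def merge(left, right):                    # illustrative sketch, not from any entry
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    result.extend(left[i:])                # at most one of these two is non-empty
    result.extend(right[j:])
    return result
def mergesort(m):
    if len(m) <= 1:
        return m
    middle = len(m) // 2
    left, right = mergesort(m[:middle]), mergesort(m[middle:])
    if left[-1] <= right[0]:               # halves already in order: just concatenate
        return left + right
    return merge(left, right)
print(mergesort([4, 65, 2, -31, 0, 99, 2, 83, 782]))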
See also
the Wikipedia entry: merge sort
Note: better performance can be expected if, rather than recursing until length(m) ≤ 1, an insertion sort is used for length(m) smaller than some threshold larger than 1. However, this complicates the example code, so it is not shown here.
| #J | J | mergesort=: {{
if. 2>#y do. y return.end.
middle=. <.-:#y
X=. mergesort middle{.y
Y=. mergesort middle}.y
X merge Y
}}
merge=: {{ r=. y#~ i=. j=. 0
while. (i<#x)*(j<#y) do. a=. i{x [b=. j{y
if. a<b do. r=. r,a [i=. i+1
else. r=. r,b [j=. j+1 end.
end.
if. i<#x do. r=. r, i}.x end.
if. j<#y do. r=. r, j}.y end.
}} |
http://rosettacode.org/wiki/Sorting_algorithms/Selection_sort | Sorting algorithms/Selection sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
Task
Sort an array (or list) of elements using the Selection sort algorithm.
It works as follows:
First find the smallest element in the array and exchange it with the element in the first position, then find the second smallest element and exchange it with the element in the second position, and continue in this way until the entire array is sorted.
Its asymptotic complexity is O(n2) making it inefficient on large arrays.
Its primary purpose is for situations where writing data is very expensive (slow) compared to reading, e.g. writing to flash memory or EEPROM.
No other sorting algorithm has less data movement.
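A minimal Python sketch of the find-the-minimum-then-swap loop just described, performing at most one swap per position (the names and sample list are illustrative):
def selection_sort(a):                 # illustrative sketch, not from any entry
    for i in range(len(a) - 1):
        m = i                          # index of the smallest remaining element
        for j in range(i + 1, len(a)):
            if a[j] < a[m]:
                m = j
        if m != i:                     # a single swap per position keeps writes minimal
            a[i], a[m] = a[m], a[i]
v = [3, 2, 5, 4, 1]
selection_sort(v)
print(v)                               # [1, 2, 3, 4, 5]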
References
Rosetta Code: O (complexity).
Wikipedia: Selection sort.
Wikipedia: [Big O notation].
| #VBScript | VBScript | Function Selection_Sort(s)
arr = Split(s,",")
For i = 0 To UBound(arr)
For j = i To UBound(arr)
temp = arr(i)
If arr(j) < arr(i) Then
arr(i) = arr(j)
arr(j) = temp
End If
Next
Next
Selection_Sort = (Join(arr,","))
End Function
WScript.StdOut.Write "Pre-Sort" & vbTab & "Sorted"
WScript.StdOut.WriteLine
WScript.StdOut.Write "3,2,5,4,1" & vbTab & Selection_Sort("3,2,5,4,1")
WScript.StdOut.WriteLine
WScript.StdOut.Write "c,e,b,a,d" & vbTab & Selection_Sort("c,e,b,a,d") |
http://rosettacode.org/wiki/Sorting_algorithms/Selection_sort | Sorting algorithms/Selection sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
Task
Sort an array (or list) of elements using the Selection sort algorithm.
It works as follows:
First find the smallest element in the array and exchange it with the element in the first position, then find the second smallest element and exchange it with the element in the second position, and continue in this way until the entire array is sorted.
Its asymptotic complexity is O(n2) making it inefficient on large arrays.
Its primary purpose is for situations where writing data is very expensive (slow) compared to reading, e.g. writing to flash memory or EEPROM.
No other sorting algorithm has less data movement.
References
Rosetta Code: O (complexity).
Wikipedia: Selection sort.
Wikipedia: [Big O notation].
| #Wren | Wren | var selectionSort = Fn.new { |a|
var last = a.count - 1
for (i in 0...last) {
var aMin = a[i]
var iMin = i
for (j in i+1..last) {
if (a[j] < aMin) {
aMin = a[j]
iMin = j
}
}
var t = a[i]
a[i] = aMin
a[iMin] = t
}
}
var as = [ [4, 65, 2, -31, 0, 99, 2, 83, 782, 1], [7, 5, 2, 6, 1, 4, 2, 6, 3] ]
for (a in as) {
System.print("Before: %(a)")
selectionSort.call(a)
System.print("After : %(a)")
System.print()
} |
http://rosettacode.org/wiki/Soundex | Soundex | Soundex is an algorithm for creating indices for words based on their pronunciation.
Task
The goal is for homophones to be encoded to the same representation so that they can be matched despite minor differences in spelling (from the soundex Wikipedia article).
Caution
There is a major issue in many of the implementations concerning the separation of two consonants that have the same soundex code. The official rules [[1]] cover this case, so check, for instance, that Ashcraft is coded to A-261.
If a vowel (A, E, I, O, U) separates two consonants that have the same soundex code, the consonant to the right of the vowel is coded. Tymczak is coded as T-522 (T, 5 for the M, 2 for the C, Z ignored (see "Side-by-Side" rule above), 2 for the K). Since the vowel "A" separates the Z and K, the K is coded.
If "H" or "W" separate two consonants that have the same soundex code, the consonant to the right of the "H" or "W" is not coded. Example: Ashcraft is coded A-261 (A, 2 for the S, C ignored, 6 for the R, 1 for the F). It is not coded A-226.
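To make those rules concrete, here is one possible Python sketch that implements them, including the vowel and "H"/"W" behaviour from the caution above; the table layout, names and test words are illustrative and not taken from any entry below.
CODES = {}                                    # illustrative sketch, not from any entry
for letters, digit in (("BFPV", "1"), ("CGJKQSXZ", "2"), ("DT", "3"),
                       ("L", "4"), ("MN", "5"), ("R", "6")):
    for c in letters:
        CODES[c] = digit
def soundex(word):
    word = word.upper()
    result = word[0]                          # the first letter is kept as-is
    prev = CODES.get(word[0], "")
    for ch in word[1:]:
        if ch in "HW":                        # H/W: skip without resetting prev,
            continue                          # so equal codes around them collapse
        code = CODES.get(ch)
        if code is None:                      # vowel: reset prev, so the next
            prev = ""                         # equal-coded consonant is still coded
            continue
        if code != prev:
            result += code
        prev = code
    return (result + "000")[:4]
print(soundex("Ashcraft"), soundex("Tymczak"))   # A261 T522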
| #TXR | TXR | @(next :args)
@###
@# soundex-related filters
@###
@(deffilter remdbl ("AA" "A") ("BB" "B") ("CC" "C") ("DD" "D") ("EE" "E")
("FF" "F") ("GG" "G") ("HH" "H") ("II" "I") ("JJ" "J")
("KK" "K") ("LL" "L") ("MM" "M") ("NN" "N") ("OO" "O")
("PP" "P") ("QQ" "Q") ("RR" "R") ("SS" "S") ("TT" "T")
("UU" "U") ("VV" "V") ("WW" "W") ("XX" "X") ("YY" "Y")
("ZZ" "Z"))
@(deffilter code ("B" "F" "P" "V" "1")
("C" "G" "J" "K" "Q" "S" "X" "Z" "2")
("D" "T" "3") ("L" "4") ("M" "N" "5")
("R" "6") ("A" "E" "I" "O" "U" "Y" "0") ("H" "W" ""))
@(deffilter squeeze ("11" "111" "1111" "11111" "1")
("22" "222" "2222" "22222" "2")
("33" "333" "3333" "33333" "3")
("44" "444" "4444" "44444" "4")
("55" "555" "5555" "55555" "5")
("66" "666" "6666" "66666" "6"))
@(bind prefix ("VAN" "CON" "DE" "DI" "LA" "LE"))
@(deffilter remzero ("0" ""))
@###
@# soundex function
@###
@(define soundex (in out))
@ (local nodouble letters remainder first rest coded)
@ (next :string in)
@ (coll)@{letters /[A-Za-z]+/}@(end)
@ (cat letters "")
@ (output :into nodouble :filter (:upcase remdbl))
@letters
@ (end)
@ (next :list nodouble)
@ (maybe)
@prefix@remainder
@ (output :into nodouble)
@nodouble
@remainder
@ (end)
@ (end)
@ (next :list nodouble)
@ (collect)
@{first 1}@rest
@ (output :filter (code squeeze remzero) :into coded)
@{rest}000
@ (end)
@ (next :list coded)
@{digits 3}@(skip)
@ (end)
@ (output :into out)
@ (rep):@first@digits@(first)@first@digits@(end)
@ (end)
@ (cat out)
@(end)
@###
@# process arguments and list soundex codes
@###
@(collect :vars ())
@input
@ (output :filter (:fun soundex))
@input
@ (end)
@(end)
@###
@# compare first and second argument under soundex
@###
@(bind (first_arg second_arg . rest_args) input)
@(cases)
@ (bind first_arg second_arg :filter (:fun soundex))
@ (output)
"@first_arg" and "@second_arg" match under soundex
@ (end)
@(end) |
http://rosettacode.org/wiki/Stack | Stack |
Data Structure
This illustrates a data structure, a means of storing data within a program.
You may see other such structures in the Data Structures category.
A stack is a container of elements with last in, first out access policy. Sometimes it is also called LIFO.
The stack is accessed through its top.
The basic stack operations are:
push stores a new element onto the stack top;
pop returns the last pushed stack element, while removing it from the stack;
empty tests if the stack contains no elements.
Sometimes the last pushed stack element is made accessible for immutable access (for read) or mutable access (for write):
top (sometimes called peek to keep with the p theme) returns the topmost element without modifying the stack.
Stacks allow a very simple hardware implementation.
They are common in almost all processors.
In programming, stacks are also very popular for their way (LIFO) of resource management, usually memory.
Nested scopes of language objects are naturally implemented by a stack (sometimes by multiple stacks).
This is a classical way to implement local variables of a re-entrant or recursive subprogram. Stacks are also used to describe a formal computational framework.
See stack machine.
Many algorithms in pattern matching, compiler construction (e.g. recursive descent parsers), and machine learning (e.g. based on tree traversal) have a natural representation in terms of stacks.
Task
Create a stack supporting the basic operations: push, pop, empty.
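A minimal Python sketch of such a stack, for orientation only (the class and method names are illustrative; the entries below give complete implementations in each language):
class Stack:                             # illustrative sketch, not from any entry
    def __init__(self):
        self._items = []
    def push(self, value):
        self._items.append(value)        # store a new element on top
    def pop(self):
        return self._items.pop()         # remove and return the top element
    def peek(self):
        return self._items[-1]           # read the top element without removing it
    def empty(self):
        return not self._items
s = Stack()
s.push(1); s.push(2)
print(s.pop(), s.peek(), s.empty())      # 2 1 False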
See also
Array
Associative array: Creation, Iteration
Collections
Compound data type
Doubly-linked list: Definition, Element definition, Element insertion, List Traversal, Element Removal
Linked list
Queue: Definition, Usage
Set
Singly-linked list: Element definition, Element insertion, List Traversal, Element Removal
Stack
| #Scala | Scala | class Stack[T] {
private var items = List[T]()
def isEmpty = items.isEmpty
def peek = items match {
    case List() => sys.error("Stack empty")
case head :: rest => head
}
def pop = items match {
    case List() => sys.error("Stack empty")
case head :: rest => items = rest; head
}
def push(value: T) = items = value +: items
} |
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Quicksort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Task
Sort an array (or list) elements using the quicksort algorithm.
The elements must have a strict weak order and the index of the array can be of any discrete type.
For languages where this is not possible, sort an array of integers.
Quicksort, also known as partition-exchange sort, uses these steps.
Choose any element of the array to be the pivot.
Divide all other elements (except the pivot) into two partitions.
All elements less than the pivot must be in the first partition.
All elements greater than the pivot must be in the second partition.
Use recursion to sort both partitions.
Join the first sorted partition, the pivot, and the second sorted partition.
The best pivot creates partitions of equal length (or lengths differing by 1).
The worst pivot creates an empty partition (for example, if the pivot is the first or last element of a sorted array).
The run-time of Quicksort ranges from O(n log n) with the best pivots, to O(n2) with the worst pivots, where n is the number of elements in the array.
This is a simple quicksort algorithm, adapted from Wikipedia.
function quicksort(array)
less, equal, greater := three empty arrays
if length(array) > 1
pivot := select any element of array
for each x in array
if x < pivot then add x to less
if x = pivot then add x to equal
if x > pivot then add x to greater
quicksort(less)
quicksort(greater)
array := concatenate(less, equal, greater)
A better quicksort algorithm works in place, by swapping elements within the array, to avoid the memory allocation of more arrays.
function quicksort(array)
if length(array) > 1
pivot := select any element of array
left := first index of array
right := last index of array
while left ≤ right
while array[left] < pivot
left := left + 1
while array[right] > pivot
right := right - 1
if left ≤ right
swap array[left] with array[right]
left := left + 1
right := right - 1
quicksort(array from first index to right)
quicksort(array from left to last index)
Quicksort has a reputation as the fastest sort. Optimized variants of quicksort are common features of many languages and libraries. One often contrasts quicksort with merge sort, because both sorts have an average time of O(n log n).
"On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times." — http://perldoc.perl.org/sort.html
Quicksort is at one end of the spectrum of divide-and-conquer algorithms, with merge sort at the opposite end.
Quicksort is a conquer-then-divide algorithm, which does most of the work during the partitioning and the recursive calls. The subsequent reassembly of the sorted partitions involves trivial effort.
Merge sort is a divide-then-conquer algorithm. The partitioning happens in a trivial way, by splitting the input array in half. Most of the work happens during the recursive calls and the merge phase.
With quicksort, every element in the first partition is less than or equal to every element in the second partition. Therefore, the merge phase of quicksort is so trivial that it needs no mention!
This task has not specified whether to allocate new arrays, or sort in place. This task also has not specified how to choose the pivot element. (Common ways to are to choose the first element, the middle element, or the median of three elements.) Thus there is a variety among the following implementations.
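For orientation, the first (allocate-new-lists) pseudocode above maps almost line for line onto the following Python sketch; the middle-element pivot, names and sample list are illustrative choices, not part of any entry below.
def quicksort(array):                          # illustrative sketch, not from any entry
    if len(array) <= 1:
        return array
    pivot = array[len(array) // 2]             # any element may serve as the pivot
    less    = [x for x in array if x < pivot]
    equal   = [x for x in array if x == pivot]
    greater = [x for x in array if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
print(quicksort([4, 65, 2, -31, 0, 99, 2, 83, 782]))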
| #K | K | quicksort:{f:*x@1?#x;:[0=#x;x;,/(_f x@&x<f;x@&x=f;_f x@&x>f)]} |
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Insertion sort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
An O(n2) sorting algorithm which moves elements one at a time into the correct position.
The algorithm consists of inserting one element at a time into the previously sorted part of the array, moving higher ranked elements up as necessary.
To start off, the first (or smallest, or any arbitrary) element of the unsorted array is considered to be the sorted part.
Although insertion sort is an O(n2) algorithm, its simplicity, low overhead, good locality of reference and efficiency make it a good choice in two cases:
small n,
as the final finishing-off algorithm for O(n logn) algorithms such as mergesort and quicksort.
The algorithm is as follows (from wikipedia):
function insertionSort(array A)
for i from 1 to length[A]-1 do
value := A[i]
j := i-1
while j >= 0 and A[j] > value do
A[j+1] := A[j]
j := j-1
done
A[j+1] = value
done
Writing the algorithm for integers will suffice.
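For reference, a direct Python transcription of the pseudocode above (the names and sample list are illustrative):
def insertion_sort(a):                   # illustrative sketch, not from any entry
    for i in range(1, len(a)):
        value = a[i]
        j = i - 1
        while j >= 0 and a[j] > value:   # shift larger elements one slot up
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = value                 # drop the value into the gap
v = [5, 2, 3, 17, 12, 1, 8, 3, 4, 9, 7]
insertion_sort(v)
print(v)                                 # [1, 2, 3, 3, 4, 5, 7, 8, 9, 12, 17]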
| #Kotlin | Kotlin | fun insertionSort(array: IntArray) {
for (index in 1 until array.size) {
val value = array[index]
var subIndex = index - 1
while (subIndex >= 0 && array[subIndex] > value) {
array[subIndex + 1] = array[subIndex]
subIndex--
}
array[subIndex + 1] = value
}
}
fun main(args: Array<String>) {
val numbers = intArrayOf(5, 2, 3, 17, 12, 1, 8, 3, 4, 9, 7)
fun printArray(message: String, array: IntArray) = with(array) {
print("$message [")
forEachIndexed { index, number ->
print(if (index == lastIndex) number else "$number, ")
}
println("]")
}
printArray("Unsorted:", numbers)
insertionSort(numbers)
printArray("Sorted:", numbers)
} |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #OCaml | OCaml | let heapsort a =
let swap i j =
let t = a.(i) in a.(i) <- a.(j); a.(j) <- t in
let sift k l =
let rec check x y =
if 2*x+1 < l then
let ch =
if y < l-1 && a.(y) < a.(y+1) then y+1 else y in
if a.(x) < a.(ch) then (swap x ch; check ch (2*ch+1)) in
check k (2*k+1) in
let len = Array.length a in
for start = (len/2)-1 downto 0 do
sift start len;
done;
for term = len-1 downto 1 do
swap term 0;
sift 0 term;
done;; |
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
The merge sort is a recursive sort of order n*log(n).
It is notable for having a worst case and average complexity of O(n*log(n)), and a best case complexity of O(n) (for pre-sorted input).
The basic idea is to split the collection into smaller groups by halving it until the groups only have one element or no elements (which are both entirely sorted groups).
Then merge the groups back together so that their elements are in order.
This is how the algorithm gets its divide and conquer description.
Task
Write a function to sort a collection of integers using the merge sort.
The merge sort algorithm comes in two parts:
a sort function and
a merge function
The functions in pseudocode look like this:
function mergesort(m)
var list left, right, result
if length(m) ≤ 1
return m
else
var middle = length(m) / 2
for each x in m up to middle - 1
add x to left
for each x in m at and after middle
add x to right
left = mergesort(left)
right = mergesort(right)
if last(left) ≤ first(right)
append right to left
return left
result = merge(left, right)
return result
function merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
append rest(left) to result
if length(right) > 0
append rest(right) to result
return result
See also
the Wikipedia entry: merge sort
Note: better performance can be expected if, rather than recursing until length(m) ≤ 1, an insertion sort is used for length(m) smaller than some threshold larger than 1. However, this complicates the example code, so it is not shown here.
| #Java | Java | import java.util.List;
import java.util.ArrayList;
import java.util.Iterator;
public class Merge{
public static <E extends Comparable<? super E>> List<E> mergeSort(List<E> m){
if(m.size() <= 1) return m;
int middle = m.size() / 2;
List<E> left = m.subList(0, middle);
List<E> right = m.subList(middle, m.size());
right = mergeSort(right);
left = mergeSort(left);
List<E> result = merge(left, right);
return result;
}
public static <E extends Comparable<? super E>> List<E> merge(List<E> left, List<E> right){
List<E> result = new ArrayList<E>();
Iterator<E> it1 = left.iterator();
Iterator<E> it2 = right.iterator();
E x = it1.next();
E y = it2.next();
while (true){
//change the direction of this comparison to change the direction of the sort
if(x.compareTo(y) <= 0){
result.add(x);
if(it1.hasNext()){
x = it1.next();
}else{
result.add(y);
while(it2.hasNext()){
result.add(it2.next());
}
break;
}
}else{
result.add(y);
if(it2.hasNext()){
y = it2.next();
}else{
result.add(x);
while (it1.hasNext()){
result.add(it1.next());
}
break;
}
}
}
return result;
}
} |
http://rosettacode.org/wiki/Sorting_algorithms/Selection_sort | Sorting algorithms/Selection sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
Task
Sort an array (or list) of elements using the Selection sort algorithm.
It works as follows:
First find the smallest element in the array and exchange it with the element in the first position, then find the second smallest element and exchange it with the element in the second position, and continue in this way until the entire array is sorted.
Its asymptotic complexity is O(n2) making it inefficient on large arrays.
Its primary purpose is for situations where writing data is very expensive (slow) compared to reading, e.g. writing to flash memory or EEPROM.
No other sorting algorithm has less data movement.
References
Rosetta Code: O (complexity).
Wikipedia: Selection sort.
Wikipedia: [Big O notation].
| #XPL0 | XPL0 | include c:\cxpl\codes; \intrinsic 'code' declarations
string 0; \use zero-terminated strings
proc SelSort(A, N); \Selection sort
char A; \address of array
int N; \number of elements in array (size)
int I, J, S, JS, T;
[for I:= 0 to N-2 do
[S:= (~0)>>1;
for J:= I to N-1 do \find smallest element
if A(J) < S then [S:= A(J); JS:= J];
T:= A(I); A(I):= A(JS); A(JS):= T;
];
];
func StrLen(Str); \Return number of characters in an ASCIIZ string
char Str;
int I;
for I:= 0 to -1>>1-1 do
if Str(I) = 0 then return I;
char Str;
[Str:= "Pack my box with five dozen liquor jugs.";
SelSort(Str, StrLen(Str));
Text(0, Str); CrLf(0);
] |
http://rosettacode.org/wiki/Sorting_algorithms/Selection_sort | Sorting algorithms/Selection sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
Task
Sort an array (or list) of elements using the Selection sort algorithm.
It works as follows:
First find the smallest element in the array and exchange it with the element in the first position, then find the second smallest element and exchange it with the element in the second position, and continue in this way until the entire array is sorted.
Its asymptotic complexity is O(n2) making it inefficient on large arrays.
Its primary purpose is for situations where writing data is very expensive (slow) compared to reading, e.g. writing to flash memory or EEPROM.
No other sorting algorithm has less data movement.
References
Rosetta Code: O (complexity).
Wikipedia: Selection sort.
Wikipedia: [Big O notation].
| #zkl | zkl | fcn selectionSort(list){ // sort a list of ints
copy,r:=list.copy(),List();
while(copy){
min,idx:=(0).min(copy), copy.find(min);
r.append(min);
copy.del(idx);
}
r
} |
http://rosettacode.org/wiki/Soundex | Soundex | Soundex is an algorithm for creating indices for words based on their pronunciation.
Task
The goal is for homophones to be encoded to the same representation so that they can be matched despite minor differences in spelling (from the soundex Wikipedia article).
Caution
There is a major issue in many of the implementations concerning the separation of two consonants that have the same soundex code. The official rules [[1]] cover this case, so check, for instance, that Ashcraft is coded to A-261.
If a vowel (A, E, I, O, U) separates two consonants that have the same soundex code, the consonant to the right of the vowel is coded. Tymczak is coded as T-522 (T, 5 for the M, 2 for the C, Z ignored (see "Side-by-Side" rule above), 2 for the K). Since the vowel "A" separates the Z and K, the K is coded.
If "H" or "W" separate two consonants that have the same soundex code, the consonant to the right of the "H" or "W" is not coded. Example: Ashcraft is coded A-261 (A, 2 for the S, C ignored, 6 for the R, 1 for the F). It is not coded A-226.
| #UNIX_Shell | UNIX Shell | declare -A value=(
[B]=1 [F]=1 [P]=1 [V]=1
[C]=2 [G]=2 [J]=2 [K]=2 [Q]=2 [S]=2 [X]=2 [Z]=2
[D]=3 [T]=3
[L]=4
[M]=5 [N]=5
[R]=6
) |
http://rosettacode.org/wiki/Stack | Stack |
Data Structure
This illustrates a data structure, a means of storing data within a program.
You may see other such structures in the Data Structures category.
A stack is a container of elements with last in, first out access policy. Sometimes it is also called LIFO.
The stack is accessed through its top.
The basic stack operations are:
push stores a new element onto the stack top;
pop returns the last pushed stack element, while removing it from the stack;
empty tests if the stack contains no elements.
Sometimes the last pushed stack element is made accessible for immutable access (for read) or mutable access (for write):
top (sometimes called peek to keep with the p theme) returns the topmost element without modifying the stack.
Stacks allow a very simple hardware implementation.
They are common in almost all processors.
In programming, stacks are also very popular for their way (LIFO) of resource management, usually memory.
Nested scopes of language objects are naturally implemented by a stack (sometimes by multiple stacks).
This is a classical way to implement local variables of a re-entrant or recursive subprogram. Stacks are also used to describe a formal computational framework.
See stack machine.
Many algorithms in pattern matching, compiler construction (e.g. recursive descent parsers), and machine learning (e.g. based on tree traversal) have a natural representation in terms of stacks.
Task
Create a stack supporting the basic operations: push, pop, empty.
See also
Array
Associative array: Creation, Iteration
Collections
Compound data type
Doubly-linked list: Definition, Element definition, Element insertion, List Traversal, Element Removal
Linked list
Queue: Definition, Usage
Set
Singly-linked list: Element definition, Element insertion, List Traversal, Element Removal
Stack
| #Scheme | Scheme | (define (make-stack)
(let ((st '()))
(lambda (message . args)
(case message
((empty?) (null? st))
((top) (if (null? st)
'empty
(car st)))
((push) (set! st (cons (car args) st)))
((pop) (if (null? st)
'empty
(let ((result (car st)))
(set! st (cdr st))
result)))
(else 'badmsg))))) |