task_url (string, 30–116 chars) | task_name (string, 2–86 chars) | task_description (string, 0–14.4k chars) | language_url (string, 2–53 chars) | language_name (string, 1–52 chars) | code (string, 0–61.9k chars) |
---|---|---|---|---|---|
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Quicksort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Task
Sort an array (or list) elements using the quicksort algorithm.
The elements must have a strict weak order and the index of the array can be of any discrete type.
For languages where this is not possible, sort an array of integers.
Quicksort, also known as partition-exchange sort, uses these steps.
Choose any element of the array to be the pivot.
Divide all other elements (except the pivot) into two partitions.
All elements less than the pivot must be in the first partition.
All elements greater than the pivot must be in the second partition.
Use recursion to sort both partitions.
Join the first sorted partition, the pivot, and the second sorted partition.
The best pivot creates partitions of equal length (or lengths differing by 1).
The worst pivot creates an empty partition (for example, if the pivot is the first or last element of a sorted array).
The run-time of Quicksort ranges from O(n log n) with the best pivots, to O(n²) with the worst pivots, where n is the number of elements in the array.
This is a simple quicksort algorithm, adapted from Wikipedia.
function quicksort(array)
less, equal, greater := three empty arrays
if length(array) > 1
pivot := select any element of array
for each x in array
if x < pivot then add x to less
if x = pivot then add x to equal
if x > pivot then add x to greater
quicksort(less)
quicksort(greater)
array := concatenate(less, equal, greater)
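As a concrete rendering of this pseudocode, here is a minimal Python sketch (illustrative only, not one of the catalogued entries; it returns a new list rather than rebinding array, and the middle element is an arbitrary pivot choice):
def quicksort(array):
    # Arrays of zero or one element are already sorted.
    if len(array) <= 1:
        return array
    pivot = array[len(array) // 2]              # "select any element of array"
    less    = [x for x in array if x < pivot]
    equal   = [x for x in array if x == pivot]
    greater = [x for x in array if x > pivot]
    # Sort both partitions recursively and join them around the pivot group.
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))      # [1, 1, 2, 3, 4, 5, 6, 9]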
A better quicksort algorithm works in place, by swapping elements within the array, to avoid the memory allocation of more arrays.
function quicksort(array)
if length(array) > 1
pivot := select any element of array
left := first index of array
right := last index of array
while left ≤ right
while array[left] < pivot
left := left + 1
while array[right] > pivot
right := right - 1
if left ≤ right
swap array[left] with array[right]
left := left + 1
right := right - 1
quicksort(array from first index to right)
quicksort(array from left to last index)
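A direct Python transcription of this in-place variant might look as follows (a sketch under the same assumptions as above: the helper name, the default bounds, and the middle-element pivot are illustrative choices):
def quicksort_in_place(array, first=0, last=None):
    # Sorts array[first..last] in place by swapping elements.
    if last is None:
        last = len(array) - 1
    if first >= last:
        return
    pivot = array[(first + last) // 2]          # "select any element of array"
    left, right = first, last
    while left <= right:
        while array[left] < pivot:
            left += 1
        while array[right] > pivot:
            right -= 1
        if left <= right:
            array[left], array[right] = array[right], array[left]
            left += 1
            right -= 1
    quicksort_in_place(array, first, right)     # first partition
    quicksort_in_place(array, left, last)       # second partition

data = [9, 2, 7, 1, 8, 3]
quicksort_in_place(data)
print(data)                                     # [1, 2, 3, 7, 8, 9]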
Quicksort has a reputation as the fastest sort. Optimized variants of quicksort are common features of many languages and libraries. One often contrasts quicksort with merge sort, because both sorts have an average time of O(n log n).
"On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times." — http://perldoc.perl.org/sort.html
Quicksort is at one end of the spectrum of divide-and-conquer algorithms, with merge sort at the opposite end.
Quicksort is a conquer-then-divide algorithm, which does most of the work during the partitioning and the recursive calls. The subsequent reassembly of the sorted partitions involves trivial effort.
Merge sort is a divide-then-conquer algorithm. The partitioning happens in a trivial way, by splitting the input array in half. Most of the work happens during the recursive calls and the merge phase.
With quicksort, every element in the first partition is less than or equal to every element in the second partition. Therefore, the merge phase of quicksort is so trivial that it needs no mention!
This task does not specify whether to allocate new arrays or to sort in place, nor how to choose the pivot element. (Common ways are to choose the first element, the middle element, or the median of three elements.) Thus there is some variety among the implementations that follow.
| #Koka | Koka | fun qsort( xs : list<int> ) : div list<int> {
match(xs) {
Cons(x,xx) -> {
val ys = xx.filter fn(el) { el < x }
val zs = xx.filter fn(el) { el >= x }
qsort(ys) + [x] + qsort(zs)
}
Nil -> Nil
}
} |
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Insertion sort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
An O(n²) sorting algorithm which moves elements one at a time into the correct position.
The algorithm consists of inserting one element at a time into the previously sorted part of the array, moving higher ranked elements up as necessary.
To start off, the first (or smallest, or any arbitrary) element of the unsorted array is considered to be the sorted part.
Although insertion sort is an O(n²) algorithm, its simplicity, low overhead, good locality of reference and efficiency make it a good choice in two cases:
small n,
as the final finishing-off algorithm for O(n log n) algorithms such as mergesort and quicksort.
The algorithm is as follows (from Wikipedia):
function insertionSort(array A)
for i from 1 to length[A]-1 do
value := A[i]
j := i-1
while j >= 0 and A[j] > value do
A[j+1] := A[j]
j := j-1
done
A[j+1] = value
done
Writing the algorithm for integers will suffice.
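Before the catalogued entries, here is a plain Python sketch of that pseudocode (the function name is illustrative; the list is sorted in place and also returned for convenience):
def insertion_sort(a):
    # Grow the sorted prefix a[0..i-1] by inserting a[i] into it.
    for i in range(1, len(a)):
        value = a[i]
        j = i - 1
        # Shift elements larger than value one slot to the right.
        while j >= 0 and a[j] > value:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = value
    return a

print(insertion_sort([4, 65, 2, -31, 0, 99, 2, 83, 782, 1]))
# [-31, 0, 1, 2, 2, 4, 65, 83, 99, 782]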
| #Ksh | Ksh | #!/bin/ksh
# An insertion sort in ksh
# # Variables:
#
typeset -a arr=( 4 65 2 -31 0 99 2 83 782 1 )
# # Functions:
#
# # Function _insertionSort(array) - Insertion sort of an array of integers
#
function _insertionSort {
typeset _arr ; nameref _arr="$1"
typeset _i _j _val ; integer _i _j _val
for (( _i=1; _i<${#_arr[*]}; _i++ )); do
_val=${_arr[_i]}
(( _j = _i - 1 ))
while (( _j>=0 && _arr[_j]>_val )); do
_arr[_j+1]=${_arr[_j]}
(( _j-- ))
done
_arr[_j+1]=${_val}
done
}
######
# main #
######
_insertionSort arr
printf "%s" "( "
for (( i=0; i<${#arr[*]}; i++ )); do
printf "%d " ${arr[i]}
done
printf "%s\n" " )" |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst-case and average complexity of O(n log n).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
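As a compact reference alongside the entries below, a Python sketch that follows the pseudocode above (function names are illustrative, not prescribed by the task) could be:
def sift_down(a, start, end):
    # Sift a[start] down until the subtree rooted there is in max-heap order.
    root = start
    while 2 * root + 1 <= end:                  # while the root has a left child
        child = 2 * root + 1
        if child + 1 <= end and a[child] < a[child + 1]:
            child += 1                          # right child is the larger one
        if a[root] < a[child]:
            a[root], a[child] = a[child], a[root]
            root = child
        else:
            return

def heap_sort(a):
    count = len(a)
    # heapify: sift down every parent node, last parent first
    for start in range((count - 2) // 2, -1, -1):
        sift_down(a, start, count - 1)
    # repeatedly swap the maximum to the end and shrink the heap
    for end in range(count - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(a, 0, end - 1)
    return a

print(heap_sort([6, 1, 9, 3, 7, 2]))            # [1, 2, 3, 6, 7, 9]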
| #Oz | Oz | declare
proc {HeapSort A}
Low = {Array.low A}
High = {Array.high A}
Count = High-Low+1
%% heapify
LastParent = Low + (Count-2) div 2
in
for Start in LastParent..Low;~1 do
{Siftdown A Start High}
end
%% repeatedly put the maximum element to the end
%% and re-heapify the rest
for End in High..Low+1;~1 do
{Swap A End Low}
{Siftdown A Low End-1}
end
end
proc {Siftdown A Start End}
Low = {Array.low A}
fun {FirstChildOf I} Low+(I-Low)*2+1 end
Root = {NewCell Start}
in
for while:{FirstChildOf @Root} =< End
break:Break
do
Child = {NewCell {FirstChildOf @Root}}
in
if @Child + 1 =< End andthen A.@Child < A.(@Child + 1) then
Child := @Child + 1
end
if A.@Root < A.@Child then
{Swap A @Root @Child}
Root := @Child
else
{Break}
end
end
end
proc {Swap A I J}
A.J := (A.I := A.J)
end
%% create array with indices ~1..7 and fill it
Arr = {Array.new ~1 7 0}
{Record.forAllInd unit(~1:3 0:1 4 1 5 9 2 6 5)
proc {$ I V}
Arr.I := V
end}
in
{HeapSort Arr}
{Show {Array.toRecord unit Arr}} |
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
The merge sort is a recursive sort of order O(n log n).
It is notable for having a worst-case and average complexity of O(n log n), and a best-case complexity of O(n) (for pre-sorted input).
The basic idea is to split the collection into smaller groups by halving it until the groups only have one element or no elements (which are both entirely sorted groups).
Then merge the groups back together so that their elements are in order.
This is how the algorithm gets its divide and conquer description.
Task
Write a function to sort a collection of integers using the merge sort.
The merge sort algorithm comes in two parts:
a sort function and
a merge function
The functions in pseudocode look like this:
function mergesort(m)
var list left, right, result
if length(m) ≤ 1
return m
else
var middle = length(m) / 2
for each x in m up to middle - 1
add x to left
for each x in m at and after middle
add x to right
left = mergesort(left)
right = mergesort(right)
if last(left) ≤ first(right)
append right to left
return left
result = merge(left, right)
return result
function merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
append rest(left) to result
if length(right) > 0
append rest(right) to result
return result
See also
the Wikipedia entry: merge sort
Note: better performance can be expected if, rather than recursing until length(m) ≤ 1, an insertion sort is used for length(m) smaller than some threshold larger than 1. However, this complicates the example code, so it is not shown here.
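To make the pseudocode concrete, here is a short Python sketch (illustrative only; it omits the insertion-sort threshold mentioned in the note above):
def merge(left, right):
    # Repeatedly take the smaller head element, then append any leftovers.
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result

def merge_sort(m):
    if len(m) <= 1:
        return m
    middle = len(m) // 2
    left = merge_sort(m[:middle])
    right = merge_sort(m[middle:])
    # Shortcut from the pseudocode: the halves may already be in order.
    if left[-1] <= right[0]:
        return left + right
    return merge(left, right)

print(merge_sort([7, 3, 9, 1, 5, 2, 8]))        # [1, 2, 3, 5, 7, 8, 9]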
| #JavaScript | JavaScript | function merge(left, right, arr) {
var a = 0;
while (left.length && right.length) {
arr[a++] = (right[0] < left[0]) ? right.shift() : left.shift();
}
while (left.length) {
arr[a++] = left.shift();
}
while (right.length) {
arr[a++] = right.shift();
}
}
function mergeSort(arr) {
var len = arr.length;
if (len <= 1) { return; }
var mid = Math.floor(len / 2),
left = arr.slice(0, mid),
right = arr.slice(mid);
mergeSort(left);
mergeSort(right);
merge(left, right, arr);
}
var arr = [1, 5, 2, 7, 3, 9, 4, 6, 8];
mergeSort(arr); // arr is now: 1, 2, 3, 4, 5, 6, 7, 8, 9
// here is an improved, faster version, which is also often faster than QuickSort
function mergeSort2(a) {
if (a.length <= 1) return
const mid = Math.floor(a.length / 2), left = a.slice(0, mid), right = a.slice(mid)
mergeSort2(left)
mergeSort2(right)
let ia = 0, il = 0, ir = 0
while (il < left.length && ir < right.length)
a[ia++] = left[il] < right[ir] ? left[il++] : right[ir++]
while (il < left.length)
a[ia++] = left[il++]
while (ir < right.length)
a[ia++] = right[ir++]
}
|
http://rosettacode.org/wiki/Soundex | Soundex | Soundex is an algorithm for creating indices for words based on their pronunciation.
Task
The goal is for homophones to be encoded to the same representation so that they can be matched despite minor differences in spelling (from the soundex Wikipedia article).
Caution
There is a major issue in many of the implementations concerning the separation of two consonants that have the same soundex code, according to the official rules [1]. Check, for instance, that Ashcraft is coded to A-261.
If a vowel (A, E, I, O, U) separates two consonants that have the same soundex code, the consonant to the right of the vowel is coded. Tymczak is coded as T-522 (T, 5 for the M, 2 for the C, Z ignored (see "Side-by-Side" rule above), 2 for the K). Since the vowel "A" separates the Z and K, the K is coded.
If "H" or "W" separate two consonants that have the same soundex code, the consonant to the right of the vowel is not coded. Example: Ashcraft is coded A-261 (A, 2 for the S, C ignored, 6 for the R, 1 for the F). It is not coded A-226.
| #VBScript | VBScript | ' Soundex
tt=array( _
"Ashcraft","Ashcroft","Gauss","Ghosh","Hilbert","Heilbronn","Lee","Lloyd", _
"Moses","Pfister","Robert","Rupert","Rubin","Tymczak","Soundex","Example")
tv=array( _
"A261","A261","G200","G200","H416","H416","L000","L300", _
"M220","P236","R163","R163","R150","T522","S532","E251")
For i=lbound(tt) To ubound(tt)
ts=soundex(tt(i))
If ts<>tv(i) Then ok=" KO "& tv(i) Else ok=""
Wscript.echo right(" "& i ,2) & " " & left( tt(i) &space(12),12) & " " & ts & ok
Next 'i
Function getCode(c)
Select Case c
Case "B", "F", "P", "V"
getCode = "1"
Case "C", "G", "J", "K", "Q", "S", "X", "Z"
getCode = "2"
Case "D", "T"
getCode = "3"
Case "L"
getCode = "4"
Case "M", "N"
getCode = "5"
Case "R"
getCode = "6"
Case "W","H"
getCode = "-"
End Select
End Function 'getCode
Function soundex(s)
Dim code, previous, i
code = UCase(Mid(s, 1, 1))
previous = getCode(UCase(Mid(s, 1, 1)))
For i = 2 To Len(s)
current = getCode(UCase(Mid(s, i, 1)))
If current <> "" And current <> "-" And current <> previous Then code = code & current
If current <> "-" Then previous = current
Next 'i
soundex = Mid(code & "000", 1, 4)
End Function 'soundex
|
http://rosettacode.org/wiki/Stack | Stack |
Data Structure
This illustrates a data structure, a means of storing data within a program.
You may see other such structures in the Data Structures category.
A stack is a container of elements with a last-in, first-out access policy. It is sometimes also called a LIFO.
The stack is accessed through its top.
The basic stack operations are:
push stores a new element onto the stack top;
pop returns the last pushed stack element, while removing it from the stack;
empty tests if the stack contains no elements.
Sometimes the last pushed stack element is made accessible for immutable access (for read) or mutable access (for write):
top (sometimes called peek to keep with the p theme) returns the topmost element without modifying the stack.
Stacks allow a very simple hardware implementation.
They are common in almost all processors.
In programming, stacks are also very popular for their LIFO style of resource management, usually of memory.
Nested scopes of language objects are naturally implemented by a stack (sometimes by multiple stacks).
This is a classical way to implement local variables of a re-entrant or recursive subprogram. Stacks are also used to describe a formal computational framework.
See stack machine.
Many algorithms in pattern matching, compiler construction (e.g. recursive descent parsers), and machine learning (e.g. based on tree traversal) have a natural representation in terms of stacks.
Task
Create a stack supporting the basic operations: push, pop, empty.
See also
Array
Associative array: Creation, Iteration
Collections
Compound data type
Doubly-linked list: Definition, Element definition, Element insertion, List Traversal, Element Removal
Linked list
Queue: Definition, Usage
Set
Singly-linked list: Element definition, Element insertion, List Traversal, Element Removal
Stack
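As a minimal point of comparison for the entries below, a Python sketch of such a stack (class and method names are illustrative) might be:
class Stack:
    """A last-in, first-out container backed by a Python list."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)        # store a new element on the top

    def pop(self):
        if self.empty():
            raise IndexError("pop from empty stack")
        return self._items.pop()        # return and remove the top element

    def peek(self):
        return self._items[-1]          # read the top without removing it

    def empty(self):
        return not self._items

s = Stack()
s.push(10); s.push(20)
print(s.pop(), s.pop(), s.empty())      # 20 10 True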
| #Seed7 | Seed7 | $ include "seed7_05.s7i";
const func type: stack (in type: baseType) is func
result
var type: stackType is void;
begin
stackType := array baseType;
const proc: push (inout stackType: aStack, in baseType: top) is func
begin
aStack := [] (top) & aStack;
end func;
const func baseType: pop (inout stackType: aStack) is func
result
var baseType: top is baseType.value;
begin
if length(aStack) = 0 then
raise RANGE_ERROR;
else
top := aStack[1];
aStack := aStack[2 ..];
end if;
end func;
const func boolean: empty (in stackType: aStack) is
return length(aStack) = 0;
end func;
const type: intStack is stack(integer);
const proc: main is func
local
var intStack: s is intStack.value;
begin
push(s, 10);
push(s, 20);
writeln(pop(s) = 20);
writeln(pop(s) = 10);
writeln(empty(s));
end func; |
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Quicksort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Task
Sort an array (or list) elements using the quicksort algorithm.
The elements must have a strict weak order and the index of the array can be of any discrete type.
For languages where this is not possible, sort an array of integers.
Quicksort, also known as partition-exchange sort, uses these steps.
Choose any element of the array to be the pivot.
Divide all other elements (except the pivot) into two partitions.
All elements less than the pivot must be in the first partition.
All elements greater than the pivot must be in the second partition.
Use recursion to sort both partitions.
Join the first sorted partition, the pivot, and the second sorted partition.
The best pivot creates partitions of equal length (or lengths differing by 1).
The worst pivot creates an empty partition (for example, if the pivot is the first or last element of a sorted array).
The run-time of Quicksort ranges from O(n log n) with the best pivots, to O(n²) with the worst pivots, where n is the number of elements in the array.
This is a simple quicksort algorithm, adapted from Wikipedia.
function quicksort(array)
less, equal, greater := three empty arrays
if length(array) > 1
pivot := select any element of array
for each x in array
if x < pivot then add x to less
if x = pivot then add x to equal
if x > pivot then add x to greater
quicksort(less)
quicksort(greater)
array := concatenate(less, equal, greater)
A better quicksort algorithm works in place, by swapping elements within the array, to avoid the memory allocation of more arrays.
function quicksort(array)
if length(array) > 1
pivot := select any element of array
left := first index of array
right := last index of array
while left ≤ right
while array[left] < pivot
left := left + 1
while array[right] > pivot
right := right - 1
if left ≤ right
swap array[left] with array[right]
left := left + 1
right := right - 1
quicksort(array from first index to right)
quicksort(array from left to last index)
Quicksort has a reputation as the fastest sort. Optimized variants of quicksort are common features of many languages and libraries. One often contrasts quicksort with merge sort, because both sorts have an average time of O(n log n).
"On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times." — http://perldoc.perl.org/sort.html
Quicksort is at one end of the spectrum of divide-and-conquer algorithms, with merge sort at the opposite end.
Quicksort is a conquer-then-divide algorithm, which does most of the work during the partitioning and the recursive calls. The subsequent reassembly of the sorted partitions involves trivial effort.
Merge sort is a divide-then-conquer algorithm. The partitioning happens in a trivial way, by splitting the input array in half. Most of the work happens during the recursive calls and the merge phase.
With quicksort, every element in the first partition is less than or equal to every element in the second partition. Therefore, the merge phase of quicksort is so trivial that it needs no mention!
This task does not specify whether to allocate new arrays or to sort in place, nor how to choose the pivot element. (Common ways are to choose the first element, the middle element, or the median of three elements.) Thus there is some variety among the implementations that follow.
| #Kotlin | Kotlin | fun <E : Comparable<E>> List<E>.qsort(): List<E> =
if (size < 2) this
else filter { it < first() }.qsort() +
filter { it == first() } +
filter { it > first() }.qsort()
|
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Insertion sort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
An O(n²) sorting algorithm which moves elements one at a time into the correct position.
The algorithm consists of inserting one element at a time into the previously sorted part of the array, moving higher ranked elements up as necessary.
To start off, the first (or smallest, or any arbitrary) element of the unsorted array is considered to be the sorted part.
Although insertion sort is an O(n²) algorithm, its simplicity, low overhead, good locality of reference and efficiency make it a good choice in two cases:
small n,
as the final finishing-off algorithm for O(n log n) algorithms such as mergesort and quicksort.
The algorithm is as follows (from Wikipedia):
function insertionSort(array A)
for i from 1 to length[A]-1 do
value := A[i]
j := i-1
while j >= 0 and A[j] > value do
A[j+1] := A[j]
j := j-1
done
A[j+1] = value
done
Writing the algorithm for integers will suffice.
| #Lambdatalk | Lambdatalk |
{def sort
{def sort.i
{lambda {:x :a}
{if {A.empty? :a}
then {A.new :x}
else {if {<= :x {A.first :a}}
then {A.addfirst! :x :a}
else {A.addfirst! {A.first :a} {sort.i :x {A.rest :a}}} }}}}
{def sort.r
{lambda {:a1 :a2}
{if {A.empty? :a1}
then :a2
else {sort.r {A.rest :a1} {sort.i {A.first :a1} :a2}} }}}
{lambda {:a}
{sort.r :a {A.new}} }}
-> sort
{def A {A.new 4 65 2 -31 0 99 83 782 1}}
-> A
{sort {A}}
-> [-31,0,1,2,4,65,83,99,782]
|
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst-case and average complexity of O(n log n).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #Pascal | Pascal | program HeapSortDemo;
type
TIntArray = array[4..15] of integer;
var
data: TIntArray;
i: integer;
procedure siftDown(var a: TIntArray; start, ende: integer);
var
root, child, swap: integer;
begin
root := start;
while root * 2 - start + 1 <= ende do
begin
child := root * 2 - start + 1;
if (child + 1 <= ende) and (a[child] < a[child + 1]) then
inc(child);
if a[root] < a[child] then
begin
swap := a[root];
a[root] := a[child];
a[child] := swap;
root := child;
end
else
exit;
end;
end;
procedure heapify(var a: TIntArray);
var
start, count: integer;
begin
count := length(a);
start := low(a) + count div 2 - 1;
while start >= low(a) do
begin
siftdown(a, start, high(a));
dec(start);
end;
end;
procedure heapSort(var a: TIntArray);
var
ende, swap: integer;
begin
heapify(a);
ende := high(a);
while ende > low(a) do
begin
swap := a[low(a)];
a[low(a)] := a[ende];
a[ende] := swap;
dec(ende);
siftdown(a, low(a), ende);
end;
end;
begin
Randomize;
writeln('The data before sorting:');
for i := low(data) to high(data) do
begin
data[i] := Random(high(data));
write(data[i]:4);
end;
writeln;
heapSort(data);
writeln('The data after sorting:');
for i := low(data) to high(data) do
begin
write(data[i]:4);
end;
writeln;
end. |
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
The merge sort is a recursive sort of order O(n log n).
It is notable for having a worst-case and average complexity of O(n log n), and a best-case complexity of O(n) (for pre-sorted input).
The basic idea is to split the collection into smaller groups by halving it until the groups only have one element or no elements (which are both entirely sorted groups).
Then merge the groups back together so that their elements are in order.
This is how the algorithm gets its divide and conquer description.
Task
Write a function to sort a collection of integers using the merge sort.
The merge sort algorithm comes in two parts:
a sort function and
a merge function
The functions in pseudocode look like this:
function mergesort(m)
var list left, right, result
if length(m) ≤ 1
return m
else
var middle = length(m) / 2
for each x in m up to middle - 1
add x to left
for each x in m at and after middle
add x to right
left = mergesort(left)
right = mergesort(right)
if last(left) ≤ first(right)
append right to left
return left
result = merge(left, right)
return result
function merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
append rest(left) to result
if length(right) > 0
append rest(right) to result
return result
See also
the Wikipedia entry: merge sort
Note: better performance can be expected if, rather than recursing until length(m) ≤ 1, an insertion sort is used for length(m) smaller than some threshold larger than 1. However, this complicates the example code, so it is not shown here.
| #jq | jq | # Input: [x,y] -- the two arrays to be merged
# If x and y are sorted as by "sort", then the result will also be sorted:
def merge:
def m: # state: [x, y, array] (array being the answer)
.[0] as $x
| .[1] as $y
| if 0 == ($x|length) then .[2] + $y
elif 0 == ($y|length) then .[2] + $x
else
(if $x[0] <= $y[0] then [$x[1:], $y, .[2] + [$x[0] ]]
else [$x, $y[1:], .[2] + [$y[0] ]]
end) | m
end;
[.[0], .[1], []] | m;
def merge_sort:
if length <= 1 then .
else
(length/2 |floor) as $len
| . as $in
| [ ($in[0:$len] | merge_sort), ($in[$len:] | merge_sort) ] | merge
end; |
http://rosettacode.org/wiki/Soundex | Soundex | Soundex is an algorithm for creating indices for words based on their pronunciation.
Task
The goal is for homophones to be encoded to the same representation so that they can be matched despite minor differences in spelling (from the soundex Wikipedia article).
Caution
There is a major issue in many of the implementations concerning the separation of two consonants that have the same soundex code, according to the official rules [1]. Check, for instance, that Ashcraft is coded to A-261.
If a vowel (A, E, I, O, U) separates two consonants that have the same soundex code, the consonant to the right of the vowel is coded. Tymczak is coded as T-522 (T, 5 for the M, 2 for the C, Z ignored (see "Side-by-Side" rule above), 2 for the K). Since the vowel "A" separates the Z and K, the K is coded.
If "H" or "W" separate two consonants that have the same soundex code, the consonant to the right of the vowel is not coded. Example: Ashcraft is coded A-261 (A, 2 for the S, C ignored, 6 for the R, 1 for the F). It is not coded A-226.
| #Wren | Wren | import "/str" for Char
import "/fmt" for Fmt
var getCode = Fn.new { |c|
return "BFPV".contains(c) ? "1" :
"CGJKQSXZ".contains(c) ? "2" :
c == "D" || c == "T" ? "3" :
c == "L" ? "4" :
c == "M" || c == "N" ? "5" :
c == "R" ? "6" :
c == "H" || c == "W" ? "-" : ""
}
var soundex = Fn.new { |s|
if (s == "") return ""
var sb = Char.upper(s[0])
var prev = getCode.call(sb[0])
for (c in s.skip(1)) {
var curr = getCode.call(Char.upper(c))
if (curr != "" && curr != "-" && curr != prev) sb = sb + curr
if (curr != "-") prev = curr
}
return Fmt.ljust(4, sb, "0")[0..3]
}
var pairs = [
["Ashcraft", "A261"],
["Ashcroft", "A261"],
["Gauss", "G200"],
["Ghosh", "G200"],
["Hilbert", "H416"],
["Heilbronn", "H416"],
["Lee", "L000"],
["Lloyd", "L300"],
["Moses", "M220"],
["Pfister", "P236"],
["Robert", "R163"],
["Rupert", "R163"],
["Rubin", "R150"],
["Tymczak", "T522"],
["Soundex", "S532"],
["Example", "E251"]
]
for (pair in pairs) {
Fmt.print("$-9s -> $s -> $s", pair[0], pair[1], soundex.call(pair[0]) == pair[1])
} |
http://rosettacode.org/wiki/Stack | Stack |
Data Structure
This illustrates a data structure, a means of storing data within a program.
You may see other such structures in the Data Structures category.
A stack is a container of elements with a last-in, first-out access policy. It is sometimes also called a LIFO.
The stack is accessed through its top.
The basic stack operations are:
push stores a new element onto the stack top;
pop returns the last pushed stack element, while removing it from the stack;
empty tests if the stack contains no elements.
Sometimes the last pushed stack element is made accessible for immutable access (for read) or mutable access (for write):
top (sometimes called peek to keep with the p theme) returns the topmost element without modifying the stack.
Stacks allow a very simple hardware implementation.
They are common in almost all processors.
In programming, stacks are also very popular for their LIFO style of resource management, usually of memory.
Nested scopes of language objects are naturally implemented by a stack (sometimes by multiple stacks).
This is a classical way to implement local variables of a re-entrant or recursive subprogram. Stacks are also used to describe a formal computational framework.
See stack machine.
Many algorithms in pattern matching, compiler construction (e.g. recursive descent parsers), and machine learning (e.g. based on tree traversal) have a natural representation in terms of stacks.
Task
Create a stack supporting the basic operations: push, pop, empty.
See also
Array
Associative array: Creation, Iteration
Collections
Compound data type
Doubly-linked list: Definition, Element definition, Element insertion, List Traversal, Element Removal
Linked list
Queue: Definition, Usage
Set
Singly-linked list: Element definition, Element insertion, List Traversal, Element Removal
Stack
| #SenseTalk | SenseTalk | put () into stack
repeat with each item of 1 .. 10
push it into stack
end repeat
repeat while stack is not empty
pop stack
put it
end repeat |
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Quicksort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Task
Sort an array (or list) elements using the quicksort algorithm.
The elements must have a strict weak order and the index of the array can be of any discrete type.
For languages where this is not possible, sort an array of integers.
Quicksort, also known as partition-exchange sort, uses these steps.
Choose any element of the array to be the pivot.
Divide all other elements (except the pivot) into two partitions.
All elements less than the pivot must be in the first partition.
All elements greater than the pivot must be in the second partition.
Use recursion to sort both partitions.
Join the first sorted partition, the pivot, and the second sorted partition.
The best pivot creates partitions of equal length (or lengths differing by 1).
The worst pivot creates an empty partition (for example, if the pivot is the first or last element of a sorted array).
The run-time of Quicksort ranges from O(n log n) with the best pivots, to O(n²) with the worst pivots, where n is the number of elements in the array.
This is a simple quicksort algorithm, adapted from Wikipedia.
function quicksort(array)
less, equal, greater := three empty arrays
if length(array) > 1
pivot := select any element of array
for each x in array
if x < pivot then add x to less
if x = pivot then add x to equal
if x > pivot then add x to greater
quicksort(less)
quicksort(greater)
array := concatenate(less, equal, greater)
A better quicksort algorithm works in place, by swapping elements within the array, to avoid the memory allocation of more arrays.
function quicksort(array)
if length(array) > 1
pivot := select any element of array
left := first index of array
right := last index of array
while left ≤ right
while array[left] < pivot
left := left + 1
while array[right] > pivot
right := right - 1
if left ≤ right
swap array[left] with array[right]
left := left + 1
right := right - 1
quicksort(array from first index to right)
quicksort(array from left to last index)
Quicksort has a reputation as the fastest sort. Optimized variants of quicksort are common features of many languages and libraries. One often contrasts quicksort with merge sort, because both sorts have an average time of O(n log n).
"On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times." — http://perldoc.perl.org/sort.html
Quicksort is at one end of the spectrum of divide-and-conquer algorithms, with merge sort at the opposite end.
Quicksort is a conquer-then-divide algorithm, which does most of the work during the partitioning and the recursive calls. The subsequent reassembly of the sorted partitions involves trivial effort.
Merge sort is a divide-then-conquer algorithm. The partitioning happens in a trivial way, by splitting the input array in half. Most of the work happens during the recursive calls and the merge phase.
With quicksort, every element in the first partition is less than or equal to every element in the second partition. Therefore, the merge phase of quicksort is so trivial that it needs no mention!
This task does not specify whether to allocate new arrays or to sort in place, nor how to choose the pivot element. (Common ways are to choose the first element, the middle element, or the median of three elements.) Thus there is some variety among the implementations that follow.
| #Lambdatalk | Lambdatalk |
We create a binary tree from a random array, then we walk the canopy.
1) three functions for readability:
{def BT.data {lambda {:t} {A.get 0 :t}}} -> BT.data
{def BT.left {lambda {:t} {A.get 1 :t}}} -> BT.left
{def BT.right {lambda {:t} {A.get 2 :t}}} -> BT.right
2) adding a leaf to the tree:
{def BT.add {lambda {:x :t}
{if {A.empty? :t}
then {A.new :x {A.new} {A.new}}
else {if {= :x {BT.data :t}}
then :t
else {if {< :x {BT.data :t}}
then {A.new {BT.data :t}
{BT.add :x {BT.left :t}}
{BT.right :t}}
else {A.new {BT.data :t}
{BT.left :t}
{BT.add :x {BT.right :t}} }}}}}}
-> BT.add
3) creating the tree from an array of numbers:
{def BT.create
{def BT.create.rec
{lambda {:l :t}
{if {A.empty? :l}
then :t
else {BT.create.rec {A.rest :l}
{BT.add {A.first :l} :t}} }}}
{lambda {:l}
{BT.create.rec :l {A.new}} }}
-> BT.create
4) walking the canopy -> sorting in increasing order:
{def BT.sort
{lambda {:t}
{if {A.empty? :t}
then else {BT.sort {BT.left :t}}
{BT.data :t}
{BT.sort {BT.right :t}} }}}
-> BT.sort
Testing
1) generating random numbers:
{def L {A.new
{S.map {lambda {:n} {floor {* {random} 100000}}} {S.serie 1 100}}}}
-> L = [1850,7963,50540,92667,72892,47361,19018,40640,10126,80235,48407,51623,63597,71675,27814,63478,18985,88032,46585,85209,
74053,95005,27592,9575,22162,35904,70467,38527,89715,36594,54309,39950,89345,72224,7772,65756,68766,43942,52422,85144,
66010,38961,21647,53194,72166,33545,49037,23218,27969,83566,19382,53120,55291,77374,27502,66648,99637,37322,9815,432,90565,
37831,26503,99232,87024,65625,75155,55382,30120,58117,70031,13011,81375,10490,39786,1926,71311,4213,55183,2583,22075,90411,
92928,61120,94259,433,93332,88423,64119,40850,94318,27816,84818,90632,5094,36696,94705,50602,45818,61365]
2) creating the tree is the main work:
{def T {BT.create {L}}}
-> T = [1850,[432,],[433,],]]],[7963,[7772,[1926,],[4213,[2583,],]],[5094,],]]]],]],[50540,[47361,[19018,[10126,[9575,],
[9815,],]]],[18985,[13011,[10490,],]],]],]]],[40640,[27814,[27592,[22162,[21647,[19382,],]],[22075,],]]],[23218,],
[27502,[26503,],]],]]]],]],[35904,[33545,[27969,[27816,],]],[30120,],]]],]],[38527,[36594,],[37322,[36696,],]],[37831,],]]]],
[39950,[38961,],[39786,],]]],]]]]],[46585,[43942,[40850,],]],[45818,],]]],]]]],[48407,],[49037,],]]]],[92667,[72892,
[51623,[50602,],]],[63597,[63478,[54309,[52422,],[53194,[53120,],]],]]],[55291,[55183,],]],[55382,],[58117,],[61120,],[61365,],]]]]]]],]],[71675,[70467,[65756,[65625,[64119,],]],]],[68766,[66010,],[66648,],]]],[70031,],]]]],[71311,],]]],
[72224,[72166,],]],]]]]],[80235,[74053,],[77374,[75155,],]],]]],[88032,[85209,[85144,[83566,[81375,],]],[84818,],]]],]],
[87024,],]]],[89715,[89345,[88423,],]],]],[90565,[90411,],]],[90632,],]]]]]]],[95005,[92928,],[94259,[93332,],]],[94318,],
[94705,],]]]]],[99637,[99232,],]],]]]]]]]
3) walking the canopy is fast:
{BT.sort {T}}
-> 432 433 1850 1926 2583 4213 5094 7772 7963 9575 9815 10126 10490 13011 18985 19018 19382 21647 22075 22162 23218 26503
27502 27592 27814 27816 27969 30120 33545 35904 36594 36696 37322 37831 38527 38961 39786 39950 40640 40850 43942 45818
46585 47361 48407 49037 50540 50602 51623 52422 53120 53194 54309 55183 55291 55382 58117 61120 61365 63478 63597 64119
65625 65756 66010 66648 68766 70031 70467 71311 71675 72166 72224 72892 74053 75155 77374 80235 81375 83566 84818 85144
85209 87024 88032 88423 89345 89715 90411 90565 90632 92667 92928 93332 94259 94318 94705 95005 99232 99637
4) walking with new leaves is fast:
{BT.sort {BT.add -1 {T}}}
-> -1 432 433 1850 1926 2583 4213 5094 7772 7963 9575 9815 10126 10490 13011 18985 19018 19382 21647 22075 22162 23218 26503
27502 27592 27814 27816 27969 30120 33545 35904 36594 36696 37322 37831 38527 38961 39786 39950 40640 40850 43942 45818 46585
47361 48407 49037 50540 50602 51623 52422 53120 53194 54309 55183 55291 55382 58117 61120 61365 63478 63597 64119 65625 65756
66010 66648 68766 70031 70467 71311 71675 72166 72224 72892 74053 75155 77374 80235 81375 83566 84818 85144 85209 87024 88032
88423 89345 89715 90411 90565 90632 92667 92928 93332 94259 94318 94705 95005 99232 99637
{BT.sort {BT.add 50000 {T}}}
-> 432 433 1850 1926 2583 4213 5094 7772 7963 9575 9815 10126 10490 13011 18985 19018 19382 21647 22075 22162 23218 26503
27502 27592 27814 27816 27969 30120 33545 35904 36594 36696 37322 37831 38527 38961 39786 39950 40640 40850 43942 45818 46585
47361 48407 49037 50000 50540 50602 51623 52422 53120 53194 54309 55183 55291 55382 58117 61120 61365 63478 63597 64119 65625
65756 66010 66648 68766 70031 70467 71311 71675 72166 72224 72892 74053 75155 77374 80235 81375 83566 84818 85144 85209 87024
88032 88423 89345 89715 90411 90565 90632 92667 92928 93332 94259 94318 94705 95005 99232 99637
{BT.sort {BT.add 100000 {T}}}
-> 432 433 1850 1926 2583 4213 5094 7772 7963 9575 9815 10126 10490 13011 18985 19018 19382 21647 22075 22162 23218 26503
27502 27592 27814 27816 27969 30120 33545 35904 36594 36696 37322 37831 38527 38961 39786 39950 40640 40850 43942 45818 46585
47361 48407 49037 50540 50602 51623 52422 53120 53194 54309 55183 55291 55382 58117 61120 61365 63478 63597 64119 65625 65756
66010 66648 68766 70031 70467 71311 71675 72166 72224 72892 74053 75155 77374 80235 81375 83566 84818 85144 85209 87024 88032
88423 89345 89715 90411 90565 90632 92667 92928 93332 94259 94318 94705 95005 99232 99637 100000
|
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Insertion sort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
An O(n²) sorting algorithm which moves elements one at a time into the correct position.
The algorithm consists of inserting one element at a time into the previously sorted part of the array, moving higher ranked elements up as necessary.
To start off, the first (or smallest, or any arbitrary) element of the unsorted array is considered to be the sorted part.
Although insertion sort is an O(n²) algorithm, its simplicity, low overhead, good locality of reference and efficiency make it a good choice in two cases:
small n,
as the final finishing-off algorithm for O(n log n) algorithms such as mergesort and quicksort.
The algorithm is as follows (from Wikipedia):
function insertionSort(array A)
for i from 1 to length[A]-1 do
value := A[i]
j := i-1
while j >= 0 and A[j] > value do
A[j+1] := A[j]
j := j-1
done
A[j+1] = value
done
Writing the algorithm for integers will suffice.
| #Liberty_BASIC | Liberty BASIC | itemCount = 20
dim A(itemCount)
for i = 1 to itemCount
A(i) = int(rnd(1) * 100)
next i
print "Before Sort"
gosub [printArray]
'--- Insertion sort algorithm
for i = 2 to itemCount
value = A(i)
j = i-1
while j >= 0 and A(j) > value
A(j+1) = A(j)
j = j-1
wend
A(j+1) = value
next
'--- end of (Insertion sort algorithm)
print "After Sort"
gosub [printArray]
end
[printArray]
for i = 1 to itemCount
print using("###", A(i));
next i
print
return |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst-case and average complexity of O(n log n).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
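As an illustration of the pseudocode only (not one of the collected implementations below), a direct Python transcription using 0-based indices might look like this:
def heap_sort(a):
    """Sort the list a in place: build a max-heap, then repeatedly move the root to the end."""
    def sift_down(start, end):
        root = start
        while root * 2 + 1 <= end:                  # while the root has at least one child
            child = root * 2 + 1                    # left child
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1                          # right child is larger
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return
    count = len(a)
    for start in range((count - 2) // 2, -1, -1):   # heapify: sift down every parent node
        sift_down(start, count - 1)
    for end in range(count - 1, 0, -1):
        a[end], a[0] = a[0], a[end]                 # move current maximum to the end
        sift_down(0, end - 1)
    return a
print(heap_sort([4, 65, 2, -31, 0, 99, 2, 83, 782, 1]))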
| #Perl | Perl | #!/usr/bin/perl
my @a = (4, 65, 2, -31, 0, 99, 2, 83, 782, 1);
print "@a\n";
heap_sort(\@a);
print "@a\n";
sub heap_sort {
my ($a) = @_;
my $n = @$a;
for (my $i = int(($n - 2) / 2); $i >= 0; $i--) {
down_heap($a, $n, $i);
}
for (my $i = 0; $i < $n; $i++) {
my $t = $a->[$n - $i - 1];
$a->[$n - $i - 1] = $a->[0];
$a->[0] = $t;
down_heap($a, $n - $i - 1, 0);
}
}
sub down_heap {
my ($a, $n, $i) = @_;
while (1) {
my $j = max($a, $n, $i, 2 * $i + 1, 2 * $i + 2);
last if $j == $i;
my $t = $a->[$i];
$a->[$i] = $a->[$j];
$a->[$j] = $t;
$i = $j;
}
}
sub max {
my ($a, $n, $i, $j, $k) = @_;
my $m = $i;
$m = $j if $j < $n && $a->[$j] > $a->[$m];
$m = $k if $k < $n && $a->[$k] > $a->[$m];
return $m;
}
|
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
The merge sort is a recursive sort of order n*log(n).
It is notable for having a worst case and average complexity of O(n*log(n)), and a best case complexity of O(n) (for pre-sorted input).
The basic idea is to split the collection into smaller groups by halving it until the groups only have one element or no elements (which are both entirely sorted groups).
Then merge the groups back together so that their elements are in order.
This is how the algorithm gets its divide and conquer description.
Task
Write a function to sort a collection of integers using the merge sort.
The merge sort algorithm comes in two parts:
a sort function and
a merge function
The functions in pseudocode look like this:
function mergesort(m)
var list left, right, result
if length(m) ≤ 1
return m
else
var middle = length(m) / 2
for each x in m up to middle - 1
add x to left
for each x in m at and after middle
add x to right
left = mergesort(left)
right = mergesort(right)
if last(left) ≤ first(right)
append right to left
return left
result = merge(left, right)
return result
function merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
append rest(left) to result
if length(right) > 0
append rest(right) to result
return result
See also
the Wikipedia entry: merge sort
Note: better performance can be expected if, rather than recursing until length(m) ≤ 1, an insertion sort is used for length(m) smaller than some threshold larger than 1. However, this complicates the example code, so it is not shown here.
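For reference, the two functions above translate almost mechanically into Python; this sketch allocates new lists rather than sorting in place and is shown only to make the pseudocode concrete (it is not one of the collected implementations below):
def merge(left, right):
    result = []
    i = j = 0
    # Repeatedly take the smaller head element until one list runs out.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])     # at most one of these two is non-empty
    result.extend(right[j:])
    return result
def merge_sort(m):
    if len(m) <= 1:
        return m
    middle = len(m) // 2
    return merge(merge_sort(m[:middle]), merge_sort(m[middle:]))
print(merge_sort([8, 1, 5, 3, 9, 0, 2, 7, 6, 4]))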
| #Julia | Julia | function mergesort(arr::Vector)
if length(arr) ≤ 1 return arr end
mid = length(arr) ÷ 2
lpart = mergesort(arr[1:mid])
rpart = mergesort(arr[mid+1:end])
rst = similar(arr)
i = ri = li = 1
@inbounds while li ≤ length(lpart) && ri ≤ length(rpart)
if lpart[li] ≤ rpart[ri]
rst[i] = lpart[li]
li += 1
else
rst[i] = rpart[ri]
ri += 1
end
i += 1
end
if li ≤ length(lpart)
copy!(rst, i, lpart, li)
else
copy!(rst, i, rpart, ri)
end
return rst
end
v = rand(-10:10, 10)
println("# unordered: $v\n -> ordered: ", mergesort(v)) |
http://rosettacode.org/wiki/Soundex | Soundex | Soundex is an algorithm for creating indices for words based on their pronunciation.
Task
The goal is for homophones to be encoded to the same representation so that they can be matched despite minor differences in spelling (from the soundex Wikipedia article).
Caution
There is a major issue in many of the implementations concerning the separation of two consonants that have the same soundex code, according to the official rules [[1]]. So check, for instance, whether Ashcraft is coded to A-261.
If a vowel (A, E, I, O, U) separates two consonants that have the same soundex code, the consonant to the right of the vowel is coded. Tymczak is coded as T-522 (T, 5 for the M, 2 for the C, Z ignored (see "Side-by-Side" rule above), 2 for the K). Since the vowel "A" separates the Z and K, the K is coded.
If "H" or "W" separate two consonants that have the same soundex code, the consonant to the right of the vowel is not coded. Example: Ashcraft is coded A-261 (A, 2 for the S, C ignored, 6 for the R, 1 for the F). It is not coded A-226.
| #XPL0 | XPL0 | code CrLf=9, Text=12;
string 0; \use zero-terminated strings
func Soundex(S1); \Convert name to Soundex string (e.g: Rubin = R150)
char S1;
char S2(80), Tbl;
int I, J, Char, Dig, Dig0;
[ \abcdefghijklmnopqrstuvwxyz
Tbl:= "01230120022455012623010202";
I:= 0; J:= 0; \convert all letters to digits
repeat Char:= S1(I); I:= I+1;
if Char>=^A & Char<=^Z then \convert letter to lowercase
Char:= Char + $20;
if Char>=^a & Char<=^z & \eliminate non letters
Char#^h & Char#^w then \eliminate h and w
[Dig:= Tbl(Char-^a); \convert letter to digit
if Dig#^0 & Dig#Dig0 ! J=0 then \filter out zeros and doubles
[S2(J):= Dig; J:= J+1]; \ but always store first digit
Dig0:= Dig; \save digit to detect doubles
];
until S1(I) = 0;
while J<4 do [S2(J):= ^0; J:= J+1]; \pad with zeros to get 3 digits
S2(0):= S1(0) & ~$20; S2(4):= 0; \insert first letter & terminate
return S2; \BEWARE: very temporary string
];
int I, Name;
[Name:=["Ashcraft", "Ashcroft", "de la Rosa", "Gauss", "Ghosh", "Heilbronn",
"Hilbert", "Knuth", "Lee", "Lloyd", "Moses", "O'Hara", "Pfister",
"R2-D2", "Robert", "Rubin", "Rupert", "Tymczak", "Soundex", "Example"];
for I:= 0 to 20-1 do
[Text(0, Soundex(Name(I))); Text(0, " ");
Text(0, Name(I)); CrLf(0);
];
] |
http://rosettacode.org/wiki/Stack | Stack |
Data Structure
This illustrates a data structure, a means of storing data within a program.
You may see other such structures in the Data Structures category.
A stack is a container of elements with a last-in, first-out access policy. Sometimes it is also called a LIFO.
The stack is accessed through its top.
The basic stack operations are:
push stores a new element onto the stack top;
pop returns the last pushed stack element, while removing it from the stack;
empty tests if the stack contains no elements.
Sometimes the last pushed stack element is made accessible for immutable access (for read) or mutable access (for write):
top (sometimes called peek to keep with the p theme) returns the topmost element without modifying the stack.
Stacks allow a very simple hardware implementation.
They are common in almost all processors.
In programming, stacks are also very popular for their way (LIFO) of resource management, usually memory.
Nested scopes of language objects are naturally implemented by a stack (sometimes by multiple stacks).
This is a classical way to implement local variables of a re-entrant or recursive subprogram. Stacks are also used to describe a formal computational framework.
See stack machine.
Many algorithms in pattern matching, compiler construction (e.g. recursive descent parsers), and machine learning (e.g. based on tree traversal) have a natural representation in terms of stacks.
Task
Create a stack supporting the basic operations: push, pop, empty.
See also
Array
Associative array: Creation, Iteration
Collections
Compound data type
Doubly-linked list: Definition, Element definition, Element insertion, List Traversal, Element Removal
Linked list
Queue: Definition, Usage
Set
Singly-linked list: Element definition, Element insertion, List Traversal, Element Removal
Stack
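Before the collected implementations, a minimal Python sketch of the three required operations, backed by a built-in list (illustrative only):
class Stack:
    def __init__(self):
        self._items = []            # the top of the stack is the end of the list
    def push(self, value):
        self._items.append(value)
    def pop(self):
        return self._items.pop()    # raises IndexError if the stack is empty
    def empty(self):
        return not self._items
s = Stack()
s.push(42)
print(s.pop())      # 42
print(s.empty())    # True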
| #Sidef | Sidef | var stack = [];
stack.push(42); # pushing
say stack.pop; # popping
say stack.is_empty; # is_empty? |
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Quicksort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Task
Sort an array (or list) elements using the quicksort algorithm.
The elements must have a strict weak order and the index of the array can be of any discrete type.
For languages where this is not possible, sort an array of integers.
Quicksort, also known as partition-exchange sort, uses these steps.
Choose any element of the array to be the pivot.
Divide all other elements (except the pivot) into two partitions.
All elements less than the pivot must be in the first partition.
All elements greater than the pivot must be in the second partition.
Use recursion to sort both partitions.
Join the first sorted partition, the pivot, and the second sorted partition.
The best pivot creates partitions of equal length (or lengths differing by 1).
The worst pivot creates an empty partition (for example, if the pivot is the first or last element of a sorted array).
The run-time of Quicksort ranges from O(n log n) with the best pivots, to O(n2) with the worst pivots, where n is the number of elements in the array.
This is a simple quicksort algorithm, adapted from Wikipedia.
function quicksort(array)
less, equal, greater := three empty arrays
if length(array) > 1
pivot := select any element of array
for each x in array
if x < pivot then add x to less
if x = pivot then add x to equal
if x > pivot then add x to greater
quicksort(less)
quicksort(greater)
array := concatenate(less, equal, greater)
A better quicksort algorithm works in place, by swapping elements within the array, to avoid the memory allocation of more arrays.
function quicksort(array)
if length(array) > 1
pivot := select any element of array
left := first index of array
right := last index of array
while left ≤ right
while array[left] < pivot
left := left + 1
while array[right] > pivot
right := right - 1
if left ≤ right
swap array[left] with array[right]
left := left + 1
right := right - 1
quicksort(array from first index to right)
quicksort(array from left to last index)
Quicksort has a reputation as the fastest sort. Optimized variants of quicksort are common features of many languages and libraries. One often contrasts quicksort with merge sort, because both sorts have an average time of O(n log n).
"On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times." — http://perldoc.perl.org/sort.html
Quicksort is at one end of the spectrum of divide-and-conquer algorithms, with merge sort at the opposite end.
Quicksort is a conquer-then-divide algorithm, which does most of the work during the partitioning and the recursive calls. The subsequent reassembly of the sorted partitions involves trivial effort.
Merge sort is a divide-then-conquer algorithm. The partitioning happens in a trivial way, by splitting the input array in half. Most of the work happens during the recursive calls and the merge phase.
With quicksort, every element in the first partition is less than or equal to every element in the second partition. Therefore, the merge phase of quicksort is so trivial that it needs no mention!
This task has not specified whether to allocate new arrays or to sort in place, nor how to choose the pivot element. (Common ways are to choose the first element, the middle element, or the median of three elements.) Thus there is variety among the following implementations.
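To make the pivot discussion concrete, here is an illustrative in-place Python sketch that follows the second pseudocode block and uses the median-of-three pivot mentioned above (a sketch under those assumptions, not one of the collected implementations below):
def quicksort(a, first=0, last=None):
    if last is None:
        last = len(a) - 1
    if first >= last:
        return a
    # Median-of-three: the middle value of the first, middle and last elements.
    mid = (first + last) // 2
    pivot = sorted((a[first], a[mid], a[last]))[1]
    left, right = first, last
    while left <= right:
        while a[left] < pivot:
            left += 1
        while a[right] > pivot:
            right -= 1
        if left <= right:
            a[left], a[right] = a[right], a[left]
            left += 1
            right -= 1
    quicksort(a, first, right)   # elements <= pivot
    quicksort(a, left, last)     # elements >= pivot
    return a
print(quicksort([3, 9, 5, 4, 1, 3, 9, 5, 4, 1]))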
| #Lobster | Lobster | include "std.lobster"
def quicksort(xs, lt):
if xs.length <= 1:
xs
else:
pivot := xs[0]
tail := xs.slice(1, -1)
f1 := filter tail: lt(_, pivot)
f2 := filter tail: !lt(_, pivot)
append(append(quicksort(f1, lt), [ pivot ]),
quicksort(f2, lt))
sorted := [ 3, 9, 5, 4, 1, 3, 9, 5, 4, 1 ].quicksort(): _a < _b
print sorted |
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Insertion sort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
An O(n2) sorting algorithm which moves elements one at a time into the correct position.
The algorithm consists of inserting one element at a time into the previously sorted part of the array, moving higher ranked elements up as necessary.
To start off, the first (or smallest, or any arbitrary) element of the unsorted array is considered to be the sorted part.
Although insertion sort is an O(n2) algorithm, its simplicity, low overhead, good locality of reference and efficiency make it a good choice in two cases:
small n,
as the final finishing-off algorithm for O(n logn) algorithms such as mergesort and quicksort.
The algorithm is as follows (from wikipedia):
function insertionSort(array A)
for i from 1 to length[A]-1 do
value := A[i]
j := i-1
while j >= 0 and A[j] > value do
A[j+1] := A[j]
j := j-1
done
A[j+1] = value
done
Writing the algorithm for integers will suffice.
| #Lua | Lua | do
local function lower_bound(container, container_begin, container_end, value, comparator)
local count = container_end - container_begin + 1
while count > 0 do
local half = math.floor(count / 2) -- portable; on LuaJIT, bit.rshift(count, 1) also works
local middle = container_begin + half
if comparator(container[middle], value) then
container_begin = middle + 1
count = count - half - 1
else
count = half
end
end
return container_begin
end
local function binary_insertion_sort_impl(container, comparator)
for i = 2, #container do
local j = i - 1
local selected = container[i]
local loc = lower_bound(container, 1, j, selected, comparator)
while j >= loc do
container[j + 1] = container[j]
j = j - 1
end
container[j + 1] = selected
end
end
local function binary_insertion_sort_comparator(a, b)
return a < b
end
function table.bininsertionsort(container, comparator)
if not comparator then
comparator = binary_insertion_sort_comparator
end
binary_insertion_sort_impl(container, comparator)
end
end |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #Phix | Phix | with javascript_semantics
function siftDown(sequence arr, integer s, integer last)
integer root = s
while root*2<=last do
integer child = root*2
if child<last and arr[child]<arr[child+1] then
child += 1
end if
if arr[root]>=arr[child] then exit end if
object tmp = arr[root]
arr[root] = arr[child]
arr[child] = tmp
root = child
end while
return arr
end function
function heapify(sequence arr, integer count)
integer s = floor(count/2)
while s>0 do
arr = siftDown(arr,s,count)
s -= 1
end while
return arr
end function
function heap_sort(sequence arr)
integer last = length(arr)
arr = heapify(arr,last)
while last>1 do
object tmp = arr[1]
arr[1] = arr[last]
arr[last] = tmp
last -= 1
arr = siftDown(arr,1,last)
end while
return arr
end function
?heap_sort({5,"oranges","and",3,"apples"})
|
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
The merge sort is a recursive sort of order n*log(n).
It is notable for having a worst case and average complexity of O(n*log(n)), and a best case complexity of O(n) (for pre-sorted input).
The basic idea is to split the collection into smaller groups by halving it until the groups only have one element or no elements (which are both entirely sorted groups).
Then merge the groups back together so that their elements are in order.
This is how the algorithm gets its divide and conquer description.
Task
Write a function to sort a collection of integers using the merge sort.
The merge sort algorithm comes in two parts:
a sort function and
a merge function
The functions in pseudocode look like this:
function mergesort(m)
var list left, right, result
if length(m) ≤ 1
return m
else
var middle = length(m) / 2
for each x in m up to middle - 1
add x to left
for each x in m at and after middle
add x to right
left = mergesort(left)
right = mergesort(right)
if last(left) ≤ first(right)
append right to left
return left
result = merge(left, right)
return result
function merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
append rest(left) to result
if length(right) > 0
append rest(right) to result
return result
See also
the Wikipedia entry: merge sort
Note: better performance can be expected if, rather than recursing until length(m) ≤ 1, an insertion sort is used for length(m) smaller than some threshold larger than 1. However, this complicates the example code, so it is not shown here.
| #Kotlin | Kotlin | fun mergeSort(list: List<Int>): List<Int> {
if (list.size <= 1) {
return list
}
val left = mutableListOf<Int>()
val right = mutableListOf<Int>()
val middle = list.size / 2
list.forEachIndexed { index, number ->
if (index < middle) {
left.add(number)
} else {
right.add(number)
}
}
fun merge(left: List<Int>, right: List<Int>): List<Int> = mutableListOf<Int>().apply {
var indexLeft = 0
var indexRight = 0
while (indexLeft < left.size && indexRight < right.size) {
if (left[indexLeft] <= right[indexRight]) {
add(left[indexLeft])
indexLeft++
} else {
add(right[indexRight])
indexRight++
}
}
while (indexLeft < left.size) {
add(left[indexLeft])
indexLeft++
}
while (indexRight < right.size) {
add(right[indexRight])
indexRight++
}
}
return merge(mergeSort(left), mergeSort(right))
}
fun main(args: Array<String>) {
val numbers = listOf(5, 2, 3, 17, 12, 1, 8, 3, 4, 9, 7)
println("Unsorted: $numbers")
println("Sorted: ${mergeSort(numbers)}")
} |
http://rosettacode.org/wiki/Stack | Stack |
Data Structure
This illustrates a data structure, a means of storing data within a program.
You may see other such structures in the Data Structures category.
A stack is a container of elements with a last-in, first-out access policy. Sometimes it is also called a LIFO.
The stack is accessed through its top.
The basic stack operations are:
push stores a new element onto the stack top;
pop returns the last pushed stack element, while removing it from the stack;
empty tests if the stack contains no elements.
Sometimes the last pushed stack element is made accessible for immutable access (for read) or mutable access (for write):
top (sometimes called peek to keep with the p theme) returns the topmost element without modifying the stack.
Stacks allow a very simple hardware implementation.
They are common in almost all processors.
In programming, stacks are also very popular for their way (LIFO) of resource management, usually memory.
Nested scopes of language objects are naturally implemented by a stack (sometimes by multiple stacks).
This is a classical way to implement local variables of a re-entrant or recursive subprogram. Stacks are also used to describe a formal computational framework.
See stack machine.
Many algorithms in pattern matching, compiler construction (e.g. recursive descent parsers), and machine learning (e.g. based on tree traversal) have a natural representation in terms of stacks.
Task
Create a stack supporting the basic operations: push, pop, empty.
See also
Array
Associative array: Creation, Iteration
Collections
Compound data type
Doubly-linked list: Definition, Element definition, Element insertion, List Traversal, Element Removal
Linked list
Queue: Definition, Usage
Set
Singly-linked list: Element definition, Element insertion, List Traversal, Element Removal
Stack
| #Slate | Slate | collections define: #Stack &parents: {ExtensibleArray}.
"An abstraction over ExtensibleArray implementations to follow the stack
protocol. The convention is that the Sequence indices run least-to-greatest
from bottom to top."
s@(Stack traits) push: obj
[s addLast: obj].
s@(Stack traits) pop
[s removeLast].
s@(Stack traits) pop: n
[s removeLast: n].
s@(Stack traits) top
[s last].
s@(Stack traits) top: n
[s last: n].
s@(Stack traits) bottom
[s first]. |
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Quicksort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Task
Sort an array (or list) elements using the quicksort algorithm.
The elements must have a strict weak order and the index of the array can be of any discrete type.
For languages where this is not possible, sort an array of integers.
Quicksort, also known as partition-exchange sort, uses these steps.
Choose any element of the array to be the pivot.
Divide all other elements (except the pivot) into two partitions.
All elements less than the pivot must be in the first partition.
All elements greater than the pivot must be in the second partition.
Use recursion to sort both partitions.
Join the first sorted partition, the pivot, and the second sorted partition.
The best pivot creates partitions of equal length (or lengths differing by 1).
The worst pivot creates an empty partition (for example, if the pivot is the first or last element of a sorted array).
The run-time of Quicksort ranges from O(n log n) with the best pivots, to O(n2) with the worst pivots, where n is the number of elements in the array.
This is a simple quicksort algorithm, adapted from Wikipedia.
function quicksort(array)
less, equal, greater := three empty arrays
if length(array) > 1
pivot := select any element of array
for each x in array
if x < pivot then add x to less
if x = pivot then add x to equal
if x > pivot then add x to greater
quicksort(less)
quicksort(greater)
array := concatenate(less, equal, greater)
A better quicksort algorithm works in place, by swapping elements within the array, to avoid the memory allocation of more arrays.
function quicksort(array)
if length(array) > 1
pivot := select any element of array
left := first index of array
right := last index of array
while left ≤ right
while array[left] < pivot
left := left + 1
while array[right] > pivot
right := right - 1
if left ≤ right
swap array[left] with array[right]
left := left + 1
right := right - 1
quicksort(array from first index to right)
quicksort(array from left to last index)
Quicksort has a reputation as the fastest sort. Optimized variants of quicksort are common features of many languages and libraries. One often contrasts quicksort with merge sort, because both sorts have an average time of O(n log n).
"On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times." — http://perldoc.perl.org/sort.html
Quicksort is at one end of the spectrum of divide-and-conquer algorithms, with merge sort at the opposite end.
Quicksort is a conquer-then-divide algorithm, which does most of the work during the partitioning and the recursive calls. The subsequent reassembly of the sorted partitions involves trivial effort.
Merge sort is a divide-then-conquer algorithm. The partitioning happens in a trivial way, by splitting the input array in half. Most of the work happens during the recursive calls and the merge phase.
With quicksort, every element in the first partition is less than or equal to every element in the second partition. Therefore, the merge phase of quicksort is so trivial that it needs no mention!
This task has not specified whether to allocate new arrays or to sort in place, nor how to choose the pivot element. (Common ways are to choose the first element, the middle element, or the median of three elements.) Thus there is variety among the following implementations.
| #Logo | Logo | ; quicksort (lists, functional)
to small? :list
output or [empty? :list] [empty? butfirst :list]
end
to quicksort :list
if small? :list [output :list]
localmake "pivot first :list
output (sentence
quicksort filter [? < :pivot] butfirst :list
filter [? = :pivot] :list
quicksort filter [? > :pivot] butfirst :list
)
end
show quicksort [1 3 5 7 9 8 6 4 2] |
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Insertion sort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
An O(n2) sorting algorithm which moves elements one at a time into the correct position.
The algorithm consists of inserting one element at a time into the previously sorted part of the array, moving higher ranked elements up as necessary.
To start off, the first (or smallest, or any arbitrary) element of the unsorted array is considered to be the sorted part.
Although insertion sort is an O(n2) algorithm, its simplicity, low overhead, good locality of reference and efficiency make it a good choice in two cases:
small n,
as the final finishing-off algorithm for O(n logn) algorithms such as mergesort and quicksort.
The algorithm is as follows (from wikipedia):
function insertionSort(array A)
for i from 1 to length[A]-1 do
value := A[i]
j := i-1
while j >= 0 and A[j] > value do
A[j+1] := A[j]
j := j-1
done
A[j+1] = value
done
Writing the algorithm for integers will suffice.
| #Maple | Maple | arr := Array([17,3,72,0,36,2,3,8,40,0]):
len := numelems(arr):
for i from 2 to len do
val := arr[i]:
j := i-1:
while(j > 0 and arr[j] > val) do
arr[j+1] := arr[j]:
j--:
end do:
arr[j+1] := val:
end do:
arr; |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #Picat | Picat | main =>
_ = random2(),
A = [random(-10,10) : _ in 1..30],
println(A),
heapSort(A),
println(A).
heapSort(A) =>
heapify(A),
End = A.len,
while (End > 1)
swap(A, End, 1),
End := End - 1,
siftDown(A, 1, End)
end.
heapify(A) =>
Count = A.len,
Start = Count // 2,
while (Start >= 1)
siftDown(A, Start, Count),
Start := Start - 1
end.
siftDown(A, Start, End) =>
Root = Start,
Loop = true,
while (Root * 2 <= End, Loop == true)
Child := Root * 2,  % children of Root are at 2*Root and 2*Root+1 (1-based)
if Child + 1 <= End, A[Child] @< A[Child+1] then
Child := Child + 1
end,
if A[Root] @< A[Child] then
swap(A,Root, Child),
Root := Child
else
Loop := false
end
end.
swap(L,I,J) =>
T = L[I],
L[I] := L[J],
L[J] := T. |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #PicoLisp | PicoLisp | (de heapSort (A Cnt)
(let Cnt (length A)
(for (Start (/ Cnt 2) (gt0 Start) (dec Start))
(siftDown A Start (inc Cnt)) )
(for (End Cnt (> End 1) (dec End))
(xchg (nth A End) A)
(siftDown A 1 End) ) )
A )
(de siftDown (A Start End)
(use Child
(for (Root Start (> End (setq Child (* 2 Root))))
(and
(> End (inc Child))
(> (get A (inc Child)) (get A Child))
(inc 'Child) )
(NIL (> (get A Child) (get A Root)))
(xchg (nth A Root) (nth A Child))
(setq Root Child) ) ) ) |
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
The merge sort is a recursive sort of order n*log(n).
It is notable for having a worst case and average complexity of O(n*log(n)), and a best case complexity of O(n) (for pre-sorted input).
The basic idea is to split the collection into smaller groups by halving it until the groups only have one element or no elements (which are both entirely sorted groups).
Then merge the groups back together so that their elements are in order.
This is how the algorithm gets its divide and conquer description.
Task
Write a function to sort a collection of integers using the merge sort.
The merge sort algorithm comes in two parts:
a sort function and
a merge function
The functions in pseudocode look like this:
function mergesort(m)
var list left, right, result
if length(m) ≤ 1
return m
else
var middle = length(m) / 2
for each x in m up to middle - 1
add x to left
for each x in m at and after middle
add x to right
left = mergesort(left)
right = mergesort(right)
if last(left) ≤ first(right)
append right to left
return left
result = merge(left, right)
return result
function merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
append rest(left) to result
if length(right) > 0
append rest(right) to result
return result
See also
the Wikipedia entry: merge sort
Note: better performance can be expected if, rather than recursing until length(m) ≤ 1, an insertion sort is used for length(m) smaller than some threshold larger than 1. However, this complicates the example code, so it is not shown here.
| #Lambdatalk | Lambdatalk |
{def alt
{lambda {:list}
{if {A.empty? :list}
then {A.new}
else {A.addfirst! {A.first :list}
{alt {A.rest {A.rest :list}}}} }}}
-> alt
{def merge
{lambda {:l1 :l2}
{if {A.empty? :l2}
then :l1
else {if {< {A.first :l1} {A.first :l2}}
then {A.addfirst! {A.first :l1} {merge :l2 {A.rest :l1}}}
else {A.addfirst! {A.first :l2} {merge :l1 {A.rest :l2}}} }}}}
-> merge
{def mergesort
{lambda {:list}
{if {A.empty? {A.rest :list}}
then :list
else {merge {mergesort {alt :list}}
{mergesort {alt {A.rest :list}}}} }}}
-> mergesort
{mergesort {A.new 8 1 5 3 9 0 2 7 6 4}}
-> [0,1,2,3,4,5,6,7,8,9]
|
http://rosettacode.org/wiki/Stack | Stack |
Data Structure
This illustrates a data structure, a means of storing data within a program.
You may see other such structures in the Data Structures category.
A stack is a container of elements with a last-in, first-out access policy. Sometimes it is also called a LIFO.
The stack is accessed through its top.
The basic stack operations are:
push stores a new element onto the stack top;
pop returns the last pushed stack element, while removing it from the stack;
empty tests if the stack contains no elements.
Sometimes the last pushed stack element is made accessible for immutable access (for read) or mutable access (for write):
top (sometimes called peek to keep with the p theme) returns the topmost element without modifying the stack.
Stacks allow a very simple hardware implementation.
They are common in almost all processors.
In programming, stacks are also very popular for their way (LIFO) of resource management, usually memory.
Nested scopes of language objects are naturally implemented by a stack (sometimes by multiple stacks).
This is a classical way to implement local variables of a re-entrant or recursive subprogram. Stacks are also used to describe a formal computational framework.
See stack machine.
Many algorithms in pattern matching, compiler construction (e.g. recursive descent parsers), and machine learning (e.g. based on tree traversal) have a natural representation in terms of stacks.
Task
Create a stack supporting the basic operations: push, pop, empty.
See also
Array
Associative array: Creation, Iteration
Collections
Compound data type
Doubly-linked list: Definition, Element definition, Element insertion, List Traversal, Element Removal
Linked list
Queue: Definition, Usage
Set
Singly-linked list: Element definition, Element insertion, List Traversal, Element Removal
Stack
| #Smalltalk | Smalltalk |
s := Stack new.
s push: 1.
s push: 2.
s push: 3.
s pop.
s top. "2"
|
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Quicksort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Task
Sort an array (or list) elements using the quicksort algorithm.
The elements must have a strict weak order and the index of the array can be of any discrete type.
For languages where this is not possible, sort an array of integers.
Quicksort, also known as partition-exchange sort, uses these steps.
Choose any element of the array to be the pivot.
Divide all other elements (except the pivot) into two partitions.
All elements less than the pivot must be in the first partition.
All elements greater than the pivot must be in the second partition.
Use recursion to sort both partitions.
Join the first sorted partition, the pivot, and the second sorted partition.
The best pivot creates partitions of equal length (or lengths differing by 1).
The worst pivot creates an empty partition (for example, if the pivot is the first or last element of a sorted array).
The run-time of Quicksort ranges from O(n log n) with the best pivots, to O(n2) with the worst pivots, where n is the number of elements in the array.
This is a simple quicksort algorithm, adapted from Wikipedia.
function quicksort(array)
less, equal, greater := three empty arrays
if length(array) > 1
pivot := select any element of array
for each x in array
if x < pivot then add x to less
if x = pivot then add x to equal
if x > pivot then add x to greater
quicksort(less)
quicksort(greater)
array := concatenate(less, equal, greater)
A better quicksort algorithm works in place, by swapping elements within the array, to avoid the memory allocation of more arrays.
function quicksort(array)
if length(array) > 1
pivot := select any element of array
left := first index of array
right := last index of array
while left ≤ right
while array[left] < pivot
left := left + 1
while array[right] > pivot
right := right - 1
if left ≤ right
swap array[left] with array[right]
left := left + 1
right := right - 1
quicksort(array from first index to right)
quicksort(array from left to last index)
Quicksort has a reputation as the fastest sort. Optimized variants of quicksort are common features of many languages and libraries. One often contrasts quicksort with merge sort, because both sorts have an average time of O(n log n).
"On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times." — http://perldoc.perl.org/sort.html
Quicksort is at one end of the spectrum of divide-and-conquer algorithms, with merge sort at the opposite end.
Quicksort is a conquer-then-divide algorithm, which does most of the work during the partitioning and the recursive calls. The subsequent reassembly of the sorted partitions involves trivial effort.
Merge sort is a divide-then-conquer algorithm. The partitioning happens in a trivial way, by splitting the input array in half. Most of the work happens during the recursive calls and the merge phase.
With quicksort, every element in the first partition is less than or equal to every element in the second partition. Therefore, the merge phase of quicksort is so trivial that it needs no mention!
This task has not specified whether to allocate new arrays or to sort in place, nor how to choose the pivot element. (Common ways are to choose the first element, the middle element, or the median of three elements.) Thus there is variety among the following implementations.
| #Logtalk | Logtalk | quicksort(List, Sorted) :-
quicksort(List, [], Sorted).
quicksort([], Sorted, Sorted).
quicksort([Pivot| Rest], Acc, Sorted) :-
partition(Rest, Pivot, Smaller0, Bigger0),
quicksort(Smaller0, [Pivot| Bigger], Sorted),
quicksort(Bigger0, Acc, Bigger).
partition([], _, [], []).
partition([X| Xs], Pivot, Smalls, Bigs) :-
( X @< Pivot ->
Smalls = [X| Rest],
partition(Xs, Pivot, Rest, Bigs)
; Bigs = [X| Rest],
partition(Xs, Pivot, Smalls, Rest)
). |
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Insertion sort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
An O(n2) sorting algorithm which moves elements one at a time into the correct position.
The algorithm consists of inserting one element at a time into the previously sorted part of the array, moving higher ranked elements up as necessary.
To start off, the first (or smallest, or any arbitrary) element of the unsorted array is considered to be the sorted part.
Although insertion sort is an O(n2) algorithm, its simplicity, low overhead, good locality of reference and efficiency make it a good choice in two cases:
small n,
as the final finishing-off algorithm for O(n logn) algorithms such as mergesort and quicksort.
The algorithm is as follows (from wikipedia):
function insertionSort(array A)
for i from 1 to length[A]-1 do
value := A[i]
j := i-1
while j >= 0 and A[j] > value do
A[j+1] := A[j]
j := j-1
done
A[j+1] = value
done
Writing the algorithm for integers will suffice.
| #Mathematica.2FWolfram_Language | Mathematica/Wolfram Language | insertionSort[a_List] := Module[{A = a},
For[i = 2, i <= Length[A], i++,
value = A[[i]]; j = i - 1;
While[j >= 1 && A[[j]] > value, A[[j + 1]] = A[[j]]; j--;];
A[[j + 1]] = value;];
A
] |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so it can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
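Before the individual language entries, a direct Python transcription of the pseudocode above may serve as a reference sketch. (Function and variable names follow the pseudocode; the sample list is arbitrary.)

def heap_sort(a):
    count = len(a)
    heapify(a, count)                       # put a in max-heap order
    end = count - 1
    while end > 0:
        a[end], a[0] = a[0], a[end]         # move the current maximum to its final place
        end -= 1                            # shrink the heap
        sift_down(a, 0, end)                # restore max-heap order
    return a

def heapify(a, count):
    start = (count - 2) // 2                # index of the last parent node
    while start >= 0:
        sift_down(a, start, count - 1)
        start -= 1

def sift_down(a, start, end):
    root = start
    while root * 2 + 1 <= end:              # while the root has at least one child
        child = root * 2 + 1                # left child
        if child + 1 <= end and a[child] < a[child + 1]:
            child += 1                      # right child is larger
        if a[root] < a[child]:
            a[root], a[child] = a[child], a[root]
            root = child
        else:
            return

print(heap_sort([60, 21, 19, 36, 63, 8, 100, 80, 3, 87, 11]))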
| #PL.2FI | PL/I | *process source xref attributes or(!);
/*********************************************************************
* Pseudocode found here:
* http://en.wikipedia.org/wiki/Heapsort#Pseudocode
* Sample data from REXX
* 27.07.2013 Walter Pachl
*********************************************************************/
heaps: Proc Options(main);
Dcl a(0:25) Char(50) Var Init(
'---letters of the modern Greek Alphabet---',
'==========================================',
'alpha','beta','gamma','delta','epsilon','zeta','eta','theta',
'iota','kappa','lambda','mu','nu','xi','omicron','pi',
'rho','sigma','tau','upsilon','phi','chi','psi','omega');
Dcl n Bin Fixed(31) Init((hbound(a)+1));
Call showa('before sort');
Call heapsort((n));
Call showa(' after sort');
heapSort: Proc(count);
Dcl (count,end) Bin Fixed(31);
Call heapify((count));
end=count-1;
do while(end>0);
Call swap(end,0);
end=end-1;
Call siftDown(0,(end));
End;
End;
heapify: Proc(count);
Dcl (count,start) Bin Fixed(31);
start=(count-2)/2;
Do while (start>=0);
Call siftDown((start),count-1);
start=start-1;
End;
End;
siftDown: Proc(start,end);
Dcl (count,start,root,end,child,sw) Bin Fixed(31);
root=start;
Do while(root*2+1<= end);
child=root*2+1;
sw=root;
if a(sw)<a(child) Then
sw=child;
if child+1<=end & a(sw)<a(child+1) Then
sw=child+1;
if sw^=root Then Do;
Call swap(root,sw);
root=sw;
End;
else
return;
End;
End;
swap: Proc(x,y);
Dcl (x,y) Bin Fixed(31);
Dcl temp Char(50) Var;
temp=a(x);
a(x)=a(y);
a(y)=temp;
End;
showa: Proc(txt);
Dcl txt Char(*);
Dcl j Bin Fixed(31);
Do j=0 To hbound(a);
Put Edit('element',j,txt,a(j))(skip,a,f(3),x(1),a,x(1),a);
End;
End;
End; |
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
The merge sort is a recursive sort of order n*log(n).
It is notable for having a worst case and average complexity of O(n*log(n)), and a best case complexity of O(n) (for pre-sorted input).
The basic idea is to split the collection into smaller groups by halving it until the groups only have one element or no elements (which are both entirely sorted groups).
Then merge the groups back together so that their elements are in order.
This is how the algorithm gets its divide and conquer description.
Task
Write a function to sort a collection of integers using the merge sort.
The merge sort algorithm comes in two parts:
a sort function and
a merge function
The functions in pseudocode look like this:
function mergesort(m)
var list left, right, result
if length(m) ≤ 1
return m
else
var middle = length(m) / 2
for each x in m up to middle - 1
add x to left
for each x in m at and after middle
add x to right
left = mergesort(left)
right = mergesort(right)
if last(left) ≤ first(right)
append right to left
return left
result = merge(left, right)
return result
function merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
    if length(left) > 0
        append left to result
    if length(right) > 0
        append right to result
return result
See also
the Wikipedia entry: merge sort
Note: better performance can be expected if, rather than recursing until length(m) ≤ 1, an insertion sort is used for length(m) smaller than some threshold larger than 1. However, this complicates the example code, so it is not shown here.
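As a compact reference alongside the entries below, here is a Python sketch of the two functions. (It uses index counters and slicing instead of explicit rest() calls, and omits the last(left) ≤ first(right) shortcut; the names and the sample list are arbitrary.)

def merge_sort(m):
    if len(m) <= 1:
        return m
    middle = len(m) // 2
    left = merge_sort(m[:middle])
    right = merge_sort(m[middle:])
    return merge(left, right)

def merge(left, right):
    result = []
    i = j = 0
    # Repeatedly take the smaller of the two front elements.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # One list is exhausted; append whatever remains of the other.
    result.extend(left[i:])
    result.extend(right[j:])
    return result

print(merge_sort([4, 65, 2, -31, 0, 99, 83, 782, 1]))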
| #Liberty_BASIC | Liberty BASIC | itemCount = 20
dim A(itemCount)
dim tmp(itemCount) 'merge sort needs additionally same amount of storage
for i = 1 to itemCount
A(i) = int(rnd(1) * 100)
next i
print "Before Sort"
call printArray itemCount
call mergeSort 1,itemCount
print "After Sort"
call printArray itemCount
end
'------------------------------------------
sub mergeSort start, theEnd
if theEnd-start < 1 then exit sub
if theEnd-start = 1 then
if A(start)>A(theEnd) then
tmp=A(start)
A(start)=A(theEnd)
A(theEnd)=tmp
end if
exit sub
end if
middle = int((start+theEnd)/2)
call mergeSort start, middle
call mergeSort middle+1, theEnd
call merge start, middle, theEnd
end sub
sub merge start, middle, theEnd
i = start: j = middle+1: k = start
while i<=middle OR j<=theEnd
select case
case i<=middle AND j<=theEnd
if A(i)<=A(j) then
tmp(k)=A(i)
i=i+1
else
tmp(k)=A(j)
j=j+1
end if
k=k+1
case i<=middle
tmp(k)=A(i)
i=i+1
k=k+1
case else 'j<=theEnd
tmp(k)=A(j)
j=j+1
k=k+1
end select
wend
for i = start to theEnd
A(i)=tmp(i)
next
end sub
'===========================================
sub printArray itemCount
for i = 1 to itemCount
print using("###", A(i));
next i
print
end sub |
http://rosettacode.org/wiki/Stack | Stack |
Data Structure
This illustrates a data structure, a means of storing data within a program.
You may see other such structures in the Data Structures category.
A stack is a container of elements with a last-in, first-out access policy. Sometimes it is also called LIFO.
The stack is accessed through its top.
The basic stack operations are:
push stores a new element onto the stack top;
pop returns the last pushed stack element, while removing it from the stack;
empty tests if the stack contains no elements.
Sometimes the last pushed stack element is made accessible for immutable access (for read) or mutable access (for write):
top (sometimes called peek to keep with the p theme) returns the topmost element without modifying the stack.
Stacks allow a very simple hardware implementation.
They are common in almost all processors.
In programming, stacks are also very popular for their way (LIFO) of resource management, usually memory.
Nested scopes of language objects are naturally implemented by a stack (sometimes by multiple stacks).
This is a classical way to implement local variables of a re-entrant or recursive subprogram. Stacks are also used to describe a formal computational framework.
See stack machine.
Many algorithms in pattern matching, compiler construction (e.g. recursive descent parsers), and machine learning (e.g. based on tree traversal) have a natural representation in terms of stacks.
Task
Create a stack supporting the basic operations: push, pop, empty.
See also
Array
Associative array: Creation, Iteration
Collections
Compound data type
Doubly-linked list: Definition, Element definition, Element insertion, List Traversal, Element Removal
Linked list
Queue: Definition, Usage
Set
Singly-linked list: Element definition, Element insertion, List Traversal, Element Removal
Stack
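As an illustrative reference before the language entries, here is a minimal Python sketch of a stack supporting the operations described above, built on a Python list. (The class name Stack and the underscore-prefixed attribute are arbitrary choices.)

class Stack:
    # Minimal LIFO container built on a Python list.
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)        # store a new element on top

    def pop(self):
        return self._items.pop()        # remove and return the top element

    def peek(self):
        return self._items[-1]          # read the top element without removing it

    def empty(self):
        return not self._items          # True when the stack holds no elements

s = Stack()
s.push(1)
s.push(2)
print(s.pop())     # prints 2
print(s.peek())    # prints 1
print(s.empty())   # prints False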
| #Standard_ML | Standard ML | signature STACK =
sig
type 'a stack
exception EmptyStack
val empty : 'a stack
val isEmpty : 'a stack -> bool
val push : ('a * 'a stack) -> 'a stack
val pop : 'a stack -> 'a stack
val top : 'a stack -> 'a
val popTop : 'a stack -> 'a stack * 'a
val map : ('a -> 'b) -> 'a stack -> 'b stack
val app : ('a -> unit) -> 'a stack -> unit
end |
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
"On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times." — http://perldoc.perl.org/sort.html
Quicksort is at one end of the spectrum of divide-and-conquer algorithms, with merge sort at the opposite end.
Quicksort is a conquer-then-divide algorithm, which does most of the work during the partitioning and the recursive calls. The subsequent reassembly of the sorted partitions involves trivial effort.
Merge sort is a divide-then-conquer algorithm. The partitioning happens in a trivial way, by splitting the input array in half. Most of the work happens during the recursive calls and the merge phase.
With quicksort, every element in the first partition is less than or equal to every element in the second partition. Therefore, the merge phase of quicksort is so trivial that it needs no mention!
This task has not specified whether to allocate new arrays, or sort in place. This task also has not specified how to choose the pivot element. (Common ways are to choose the first element, the middle element, or the median of three elements.) Thus there is some variety among the following implementations.
| #Lua | Lua | table.sort(tableName) |
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
| #MATLAB_.2F_Octave | MATLAB / Octave | function list = insertionSort(list)
for i = (2:numel(list))
value = list(i);
j = i - 1;
while (j >= 1) && (list(j) > value)
list(j+1) = list(j);
j = j-1;
end
list(j+1) = value;
end %for
end %insertionSort |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
| #PL.2FM | PL/M | 100H:
/* HEAP SORT AN ARRAY OF 16-BIT INTEGERS */
HEAP$SORT: PROCEDURE (AP, COUNT);
SIFT$DOWN: PROCEDURE (AP, START, ENDV);
DECLARE (AP, A BASED AP) ADDRESS;
DECLARE (START, ENDV, ROOT, CHILD, TEMP) ADDRESS;
ROOT = START;
DO WHILE (CHILD := SHL(ROOT,1) + 1) <= ENDV;
IF CHILD + 1 <= ENDV AND A(CHILD) < A(CHILD+1) THEN
CHILD = CHILD + 1;
IF A(ROOT) < A(CHILD) THEN DO;
TEMP = A(ROOT);
A(ROOT) = A(CHILD);
A(CHILD) = TEMP;
ROOT = CHILD;
END;
ELSE RETURN;
END;
END SIFT$DOWN;
HEAPIFY: PROCEDURE (AP, COUNT);
DECLARE (AP, COUNT, START) ADDRESS;
START = (COUNT-2) / 2;
LOOP:
CALL SIFT$DOWN(AP, START, COUNT-1);
IF START = 0 THEN RETURN;
START = START - 1;
GO TO LOOP;
END HEAPIFY;
DECLARE (AP, COUNT, ENDV, TEMP, A BASED AP) ADDRESS;
CALL HEAPIFY(AP, COUNT);
ENDV = COUNT - 1;
DO WHILE ENDV > 0;
TEMP = A(0);
A(0) = A(ENDV);
A(ENDV) = TEMP;
ENDV = ENDV - 1;
CALL SIFT$DOWN(AP, 0, ENDV);
END;
END HEAP$SORT;
/* CP/M CALLS AND FUNCTION TO PRINT INTEGERS */
BDOS: PROCEDURE (FN, ARG);
DECLARE FN BYTE, ARG ADDRESS;
GO TO 5;
END BDOS;
PRINT$NUMBER: PROCEDURE (N);
DECLARE S (7) BYTE INITIAL ('..... $');
DECLARE (N, P) ADDRESS, C BASED P BYTE;
P = .S(5);
DIGIT:
P = P-1;
C = N MOD 10 + '0';
N = N / 10;
IF N > 0 THEN GO TO DIGIT;
CALL BDOS(9, P);
END PRINT$NUMBER;
/* SORT AN ARRAY */
DECLARE NUMBERS (11) ADDRESS INITIAL (4, 65, 2, 31, 0, 99, 2, 8, 3, 782, 1);
CALL HEAP$SORT(.NUMBERS, LENGTH(NUMBERS));
/* PRINT THE SORTED ARRAY */
DECLARE N BYTE;
DO N = 0 TO LAST(NUMBERS);
CALL PRINT$NUMBER(NUMBERS(N));
END;
CALL BDOS(0,0);
EOF |
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
| #Logo | Logo | to split :size :front :list
if :size < 1 [output list :front :list]
output split :size-1 (lput first :list :front) (butfirst :list)
end
to merge :small :large
if empty? :small [output :large]
ifelse lessequal? first :small first :large ~
[output fput first :small merge butfirst :small :large] ~
[output fput first :large merge butfirst :large :small]
end
to mergesort :list
localmake "half split (count :list) / 2 [] :list
if empty? first :half [output :list]
output merge mergesort first :half mergesort last :half
end |
http://rosettacode.org/wiki/Stack | Stack |
| #Stata | Stata | struct Stack<T> {
var items = [T]()
var empty:Bool {
return items.count == 0
}
func peek() -> T {
return items[items.count - 1]
}
mutating func pop() -> T {
return items.removeLast()
}
mutating func push(obj:T) {
items.append(obj)
}
}
var stack = Stack<Int>()
stack.push(1)
stack.push(2)
println(stack.pop())
println(stack.peek())
stack.pop()
println(stack.empty) |
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
| #Lucid | Lucid | qsort(a) = if eof(first a) then a else follow(qsort(b0),qsort(b1)) fi
where
p = first a < a;
b0 = a whenever p;
b1 = a whenever not p;
follow(x,y) = if xdone then y upon xdone else x fi
where
xdone = iseod x fby xdone or iseod x;
end;
end |
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
| #Maxima | Maxima | insertion_sort(u) := block(
[n: length(u), x, j],
for i from 2 thru n do (
x: u[i],
j: i - 1,
while j >= 1 and u[j] > x do (
u[j + 1]: u[j],
j: j - 1
),
u[j + 1]: x
)
)$ |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
| #PowerShell | PowerShell |
function heapsort($a, $count) {
$a = heapify $a $count
$end = $count - 1
while( $end -gt 0) {
$a[$end], $a[0] = $a[0], $a[$end]
$end--
$a = siftDown $a 0 $end
}
$a
}
function heapify($a, $count) {
$start = [Math]::Floor(($count - 2) / 2)
while($start -ge 0) {
$a = siftDown $a $start ($count-1)
$start--
}
$a
}
function siftdown($a, $start, $end) {
$b, $root = $true, $start
while(( ($root * 2 + 1) -le $end) -and $b) {
$child = $root * 2 + 1
if( ($child + 1 -le $end) -and ($a[$child] -lt $a[$child + 1]) ) {
$child++
}
if($a[$root] -lt $a[$child]) {
$a[$root], $a[$child] = $a[$child], $a[$root]
$root = $child
}
else { $b = $false}
}
$a
}
$array = @(60, 21, 19, 36, 63, 8, 100, 80, 3, 87, 11)
"$(heapsort $array $array.Count)"
|
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
| #Logtalk | Logtalk | msort([], []) :- !.
msort([X], [X]) :- !.
msort([X, Y| Xs], Ys) :-
split([X, Y| Xs], X1s, X2s),
msort(X1s, Y1s),
msort(X2s, Y2s),
merge(Y1s, Y2s, Ys).
split([], [], []).
split([X| Xs], [X| Ys], Zs) :-
split(Xs, Zs, Ys).
merge([X| Xs], [Y| Ys], [X| Zs]) :-
X @=< Y, !,
merge(Xs, [Y| Ys], Zs).
merge([X| Xs], [Y| Ys], [Y| Zs]) :-
X @> Y, !,
merge([X | Xs], Ys, Zs).
merge([], Xs, Xs) :- !.
merge(Xs, [], Xs). |
http://rosettacode.org/wiki/Stack | Stack |
| #Swift | Swift | struct Stack<T> {
var items = [T]()
var empty:Bool {
return items.count == 0
}
func peek() -> T {
return items[items.count - 1]
}
mutating func pop() -> T {
return items.removeLast()
}
mutating func push(obj:T) {
items.append(obj)
}
}
var stack = Stack<Int>()
stack.push(1)
stack.push(2)
println(stack.pop())
println(stack.peek())
stack.pop()
println(stack.empty) |
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
| #M2000_Interpreter | M2000 Interpreter |
Module Checkit1 {
Group Quick {
Private:
Function partition {
Read &A(), p, r
x = A(r)
i = p-1
For j=p to r-1 {
If .LE(A(j), x) Then {
i++
Swap A(i),A(j)
}
}
Swap A(i+1),A(r)
= i+1
}
Public:
LE=Lambda->Number<=Number
Function quicksort {
Read &A(), p, r
If p < r Then {
q = .partition(&A(), p, r)
Call .quicksort(&A(), p, q - 1)
Call .quicksort(&A(), q + 1, r)
}
}
}
Dim A(10)<<Random(50, 100)
Print A()
Call Quick.quicksort(&A(), 0, Len(A())-1)
Print A()
}
Checkit1
|
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
| #MAXScript | MAXScript |
fn inSort arr =
(
arr = deepcopy arr
for i = 1 to arr.count do
(
j = i
while j > 1 and arr[j-1] > arr[j] do
(
swap arr[j] arr[j-1]
j -= 1
)
)
return arr
)
|
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
| #PureBasic | PureBasic | Declare heapify(Array a(1), count)
Declare siftDown(Array a(1), start, ending)
Procedure heapSort(Array a(1), count)
Protected ending=count-1
heapify(a(), count)
While ending>0
Swap a(ending),a(0)
siftDown(a(), 0, ending-1)
ending-1
Wend
EndProcedure
Procedure heapify(Array a(1), count)
Protected start=(count-2)/2
While start>=0
siftDown(a(),start,count-1)
start-1
Wend
EndProcedure
Procedure siftDown(Array a(1), start, ending)
Protected root=start, child
While (root*2+1)<=ending
child=root*2+1
If child+1<=ending And a(child)<a(child+1)
child+1
EndIf
If a(root)<a(child)
Swap a(root), a(child)
root=child
Else
Break
EndIf
Wend
EndProcedure |
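The PureBasic entry above is a direct translation of the pseudocode. As a complementary sketch of the same underlying idea (build a heap, then repeatedly extract the extreme element), the following Python fragment uses the standard heapq module. This is only an illustration, not a translation of the PureBasic code: heapq maintains a min-heap rather than a max-heap, so the minimum is popped each time and the result comes out in ascending order without the final swap-to-the-back step.
import heapq

def heapsort_with_heapq(values):
    # Illustrates heapsort's two phases with a library heap:
    # heapify in O(n), then pop the smallest element n times.
    heap = list(values)      # work on a copy; heapq.heapify mutates its argument
    heapq.heapify(heap)      # corresponds to the heapify() step in the pseudocode
    return [heapq.heappop(heap) for _ in range(len(heap))]

print(heapsort_with_heapq([6, 8, 5, 9, 3, 2, 1, 4, 7]))  # [1, 2, 3, 4, 5, 6, 7, 8, 9]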
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
The merge sort is a recursive sort of order n*log(n).
It is notable for having a worst case and average complexity of O(n*log(n)), and a best case complexity of O(n) (for pre-sorted input).
The basic idea is to split the collection into smaller groups by halving it until the groups only have one element or no elements (which are both entirely sorted groups).
Then merge the groups back together so that their elements are in order.
This is how the algorithm gets its divide and conquer description.
Task
Write a function to sort a collection of integers using the merge sort.
The merge sort algorithm comes in two parts:
a sort function and
a merge function
The functions in pseudocode look like this:
function mergesort(m)
var list left, right, result
if length(m) ≤ 1
return m
else
var middle = length(m) / 2
for each x in m up to middle - 1
add x to left
for each x in m at and after middle
add x to right
left = mergesort(left)
right = mergesort(right)
if last(left) ≤ first(right)
append right to left
return left
result = merge(left, right)
return result
function merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
append rest(left) to result
if length(right) > 0
append rest(right) to result
return result
See also
the Wikipedia entry: merge sort
Note: better performance can be expected if, rather than recursing until length(m) ≤ 1, an insertion sort is used for length(m) smaller than some threshold larger than 1. However, this complicates the example code, so it is not shown here.
| #Lua | Lua | local function merge(left_container, left_container_begin, left_container_end, right_container, right_container_begin, right_container_end, result_container, result_container_begin, comparator)
while left_container_begin <= left_container_end do
if right_container_begin > right_container_end then
for i = left_container_begin, left_container_end do
result_container[result_container_begin] = left_container[i]
result_container_begin = result_container_begin + 1
end
return
end
if comparator(right_container[right_container_begin], left_container[left_container_begin]) then
result_container[result_container_begin] = right_container[right_container_begin]
right_container_begin = right_container_begin + 1
else
result_container[result_container_begin] = left_container[left_container_begin]
left_container_begin = left_container_begin + 1
end
result_container_begin = result_container_begin + 1
end
for i = right_container_begin, right_container_end do
result_container[result_container_begin] = right_container[i]
result_container_begin = result_container_begin + 1
end
end
local function mergesort_impl(container, container_begin, container_end, comparator)
local range_length = (container_end - container_begin) + 1
if range_length < 2 then return end
local copy = {}
local copy_len = 0
for it = container_begin, container_end do
copy_len = copy_len + 1
copy[copy_len] = container[it]
end
local middle = math.floor(range_length / 2) -- equivalently bit.rshift(range_length, 1) where a bit library (LuaJIT / bit32) is available
mergesort_impl(copy, 1, middle, comparator)
mergesort_impl(copy, middle + 1, copy_len, comparator)
merge(copy, 1, middle, copy, middle + 1, copy_len, container, container_begin, comparator)
end
local function mergesort_default_comparator(a, b)
return a < b
end
function table.mergesort(container, comparator)
if not comparator then
comparator = mergesort_default_comparator
end
mergesort_impl(container, 1, #container, comparator)
end |
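For readers less familiar with Lua, the following Python sketch renders the same top-down pseudocode (a mergesort function plus a separate merge helper). It allocates new lists rather than merging through a scratch buffer as the Lua version does, and the names are illustrative only.
def merge(left, right):
    # Merge two already-sorted lists into one sorted list.
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    result.extend(left[i:])    # at most one of these two tails is non-empty
    result.extend(right[j:])
    return result

def mergesort(m):
    if len(m) <= 1:
        return m
    middle = len(m) // 2
    left = mergesort(m[:middle])
    right = mergesort(m[middle:])
    if left[-1] <= right[0]:   # halves already in order: skip the merge
        return left + right
    return merge(left, right)

print(mergesort([7, 5, 2, 6, 1, 4, 2, 6, 3]))  # [1, 2, 2, 3, 4, 5, 6, 6, 7]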
http://rosettacode.org/wiki/Stack | Stack |
Data Structure
This illustrates a data structure, a means of storing data within a program.
You may see other such structures in the Data Structures category.
A stack is a container of elements with a last in, first out access policy. It is sometimes also called LIFO.
The stack is accessed through its top.
The basic stack operations are:
push stores a new element onto the stack top;
pop returns the last pushed stack element, while removing it from the stack;
empty tests if the stack contains no elements.
Sometimes the last pushed stack element is made accessible for immutable access (for read) or mutable access (for write):
top (sometimes called peek to keep with the p theme) returns the topmost element without modifying the stack.
Stacks allow a very simple hardware implementation.
They are common in almost all processors.
In programming, stacks are also very popular for their way (LIFO) of resource management, usually memory.
Nested scopes of language objects are naturally implemented by a stack (sometimes by multiple stacks).
This is a classical way to implement local variables of a re-entrant or recursive subprogram. Stacks are also used to describe a formal computational framework.
See stack machine.
Many algorithms in pattern matching, compiler construction (e.g. recursive descent parsers), and machine learning (e.g. based on tree traversal) have a natural representation in terms of stacks.
Task
Create a stack supporting the basic operations: push, pop, empty.
See also
Array
Associative array: Creation, Iteration
Collections
Compound data type
Doubly-linked list: Definition, Element definition, Element insertion, List Traversal, Element Removal
Linked list
Queue: Definition, Usage
Set
Singly-linked list: Element definition, Element insertion, List Traversal, Element Removal
Stack
| #Tailspin | Tailspin |
processor Stack
@: $;
sink push
..|@Stack: $;
end push
source peek
$@Stack(last) !
end peek
source pop
^@Stack(last) !
end pop
source empty
$@Stack::length -> #
<=0> 1 !
<> 0 !
end empty
end Stack
def myStack: [1] -> Stack;
2 -> !myStack::push
'$myStack::empty; $myStack::pop;
' -> !OUT::write
'$myStack::empty; $myStack::pop;
' -> !OUT::write
'$myStack::empty;
' -> !OUT::write
3 -> !myStack::push
'$myStack::empty; $myStack::peek;
' -> !OUT::write
'$myStack::empty; $myStack::pop;
' -> !OUT::write
'$myStack::empty;' -> !OUT::write
|
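As a point of comparison with the Tailspin processor above, here is a minimal Python sketch of the same push/pop/empty (plus peek) interface, built on a plain list. The class and method names are illustrative, not taken from any particular library.
class Stack:
    def __init__(self):
        self._items = []            # the end of the list is the top of the stack
    def push(self, value):
        self._items.append(value)
    def pop(self):
        return self._items.pop()    # raises IndexError if the stack is empty
    def peek(self):
        return self._items[-1]
    def empty(self):
        return not self._items

s = Stack()
s.push(1); s.push(2)
print(s.pop())    # 2
print(s.empty())  # False
print(s.pop())    # 1
print(s.empty())  # True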
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Quicksort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Task
Sort an array (or list) of elements using the quicksort algorithm.
The elements must have a strict weak order and the index of the array can be of any discrete type.
For languages where this is not possible, sort an array of integers.
Quicksort, also known as partition-exchange sort, uses these steps.
Choose any element of the array to be the pivot.
Divide all other elements (except the pivot) into two partitions.
All elements less than the pivot must be in the first partition.
All elements greater than the pivot must be in the second partition.
Use recursion to sort both partitions.
Join the first sorted partition, the pivot, and the second sorted partition.
The best pivot creates partitions of equal length (or lengths differing by 1).
The worst pivot creates an empty partition (for example, if the pivot is the first or last element of a sorted array).
The run-time of Quicksort ranges from O(n log n) with the best pivots, to O(n2) with the worst pivots, where n is the number of elements in the array.
This is a simple quicksort algorithm, adapted from Wikipedia.
function quicksort(array)
less, equal, greater := three empty arrays
if length(array) > 1
pivot := select any element of array
for each x in array
if x < pivot then add x to less
if x = pivot then add x to equal
if x > pivot then add x to greater
quicksort(less)
quicksort(greater)
array := concatenate(less, equal, greater)
A better quicksort algorithm works in place, by swapping elements within the array, to avoid the memory allocation of more arrays.
function quicksort(array)
if length(array) > 1
pivot := select any element of array
left := first index of array
right := last index of array
while left ≤ right
while array[left] < pivot
left := left + 1
while array[right] > pivot
right := right - 1
if left ≤ right
swap array[left] with array[right]
left := left + 1
right := right - 1
quicksort(array from first index to right)
quicksort(array from left to last index)
Quicksort has a reputation as the fastest sort. Optimized variants of quicksort are common features of many languages and libraries. One often contrasts quicksort with merge sort, because both sorts have an average time of O(n log n).
"On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times." — http://perldoc.perl.org/sort.html
Quicksort is at one end of the spectrum of divide-and-conquer algorithms, with merge sort at the opposite end.
Quicksort is a conquer-then-divide algorithm, which does most of the work during the partitioning and the recursive calls. The subsequent reassembly of the sorted partitions involves trivial effort.
Merge sort is a divide-then-conquer algorithm. The partitioning happens in a trivial way, by splitting the input array in half. Most of the work happens during the recursive calls and the merge phase.
With quicksort, every element in the first partition is less than or equal to every element in the second partition. Therefore, the merge phase of quicksort is so trivial that it needs no mention!
This task has not specified whether to allocate new arrays, or sort in place. This task also has not specified how to choose the pivot element. (Common ways are to choose the first element, the middle element, or the median of three elements.) Thus there is some variety among the following implementations.
| #M4 | M4 | dnl return the first element of a list when called in the funny way seen below
define(`arg1', `$1')dnl
dnl
dnl append lists 1 and 2
define(`append',
`ifelse(`$1',`()',
`$2',
`ifelse(`$2',`()',
`$1',
`substr($1,0,decr(len($1))),substr($2,1)')')')dnl
dnl
dnl separate list 2 based on pivot 1, appending to left 3 and right 4,
dnl until 2 is empty, and then combine the sort of left with pivot with
dnl sort of right
define(`sep',
`ifelse(`$2', `()',
`append(append(quicksort($3),($1)),quicksort($4))',
`ifelse(eval(arg1$2<=$1),1,
`sep($1,(shift$2),append($3,(arg1$2)),$4)',
`sep($1,(shift$2),$3,append($4,(arg1$2)))')')')dnl
dnl
dnl pick first element of list 1 as pivot and separate based on that
define(`quicksort',
`ifelse(`$1', `()',
`()',
`sep(arg1$1,(shift$1),`()',`()')')')dnl
dnl
quicksort((3,1,4,1,5,9)) |
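The M4 macros above follow the task's first (allocate-new-lists) pseudocode. As a hedged sketch, the same three-way partition in Python, using the first element as pivot, might look like this; it is not in-place and builds fresh lists at every level.
def quicksort(array):
    if len(array) <= 1:
        return array
    pivot = array[0]                           # any element would do as pivot
    less    = [x for x in array if x < pivot]
    equal   = [x for x in array if x == pivot]
    greater = [x for x in array if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([3, 1, 4, 1, 5, 9]))  # [1, 1, 3, 4, 5, 9]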
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Insertion sort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
An O(n2) sorting algorithm which moves elements one at a time into the correct position.
The algorithm consists of inserting one element at a time into the previously sorted part of the array, moving higher ranked elements up as necessary.
To start off, the first (or smallest, or any arbitrary) element of the unsorted array is considered to be the sorted part.
Although insertion sort is an O(n2) algorithm, its simplicity, low overhead, good locality of reference and efficiency make it a good choice in two cases:
small n,
as the final finishing-off algorithm for O(n logn) algorithms such as mergesort and quicksort.
The algorithm is as follows (from wikipedia):
function insertionSort(array A)
for i from 1 to length[A]-1 do
value := A[i]
j := i-1
while j >= 0 and A[j] > value do
A[j+1] := A[j]
j := j-1
done
A[j+1] = value
done
Writing the algorithm for integers will suffice.
| #ML | ML | fun insertion_sort L =
let
fun insert
(x,[]) = [x]
| (x, y :: ys) =
if x <= y then
x :: y :: ys
else
y :: insert (x, ys)
in
foldr (insert,[]) L
end;
println ` insertion_sort [6,8,5,9,3,2,1,4,7];
|
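The ML version above builds a new sorted list by repeated insertion. A Python sketch that instead follows the in-place pseudocode (shift larger elements one slot right, then drop the saved value into the gap) is shown below; it is illustrative rather than canonical.
def insertion_sort(A):
    for i in range(1, len(A)):
        value = A[i]
        j = i - 1
        while j >= 0 and A[j] > value:
            A[j + 1] = A[j]      # shift the larger element one slot to the right
            j -= 1
        A[j + 1] = value         # insert the saved value into the gap
    return A

print(insertion_sort([6, 8, 5, 9, 3, 2, 1, 4, 7]))  # [1, 2, 3, 4, 5, 6, 7, 8, 9]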
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #Python | Python | def heapsort(lst):
''' Heapsort. Note: this function sorts in-place (it mutates the list). '''
# in pseudo-code, heapify only called once, so inline it here
for start in range((len(lst)-2)//2, -1, -1):  # integer division, so this also runs under Python 3
siftdown(lst, start, len(lst)-1)
for end in range(len(lst)-1, 0, -1):
lst[end], lst[0] = lst[0], lst[end]
siftdown(lst, 0, end - 1)
return lst
def siftdown(lst, start, end):
root = start
while True:
child = root * 2 + 1
if child > end: break
if child + 1 <= end and lst[child] < lst[child + 1]:
child += 1
if lst[root] < lst[child]:
lst[root], lst[child] = lst[child], lst[root]
root = child
else:
break |
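A short usage check for the heapsort function above, with an arbitrary sample list (this demo is not part of the original entry and assumes the definitions of heapsort and siftdown just given):
data = [31, 41, 59, 26, 53, 58, 97, 93, 23, 84]
assert heapsort(data) == sorted([31, 41, 59, 26, 53, 58, 97, 93, 23, 84])
print(data)  # the list has been sorted in place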
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
The merge sort is a recursive sort of order n*log(n).
It is notable for having a worst case and average complexity of O(n*log(n)), and a best case complexity of O(n) (for pre-sorted input).
The basic idea is to split the collection into smaller groups by halving it until the groups only have one element or no elements (which are both entirely sorted groups).
Then merge the groups back together so that their elements are in order.
This is how the algorithm gets its divide and conquer description.
Task
Write a function to sort a collection of integers using the merge sort.
The merge sort algorithm comes in two parts:
a sort function and
a merge function
The functions in pseudocode look like this:
function mergesort(m)
var list left, right, result
if length(m) ≤ 1
return m
else
var middle = length(m) / 2
for each x in m up to middle - 1
add x to left
for each x in m at and after middle
add x to right
left = mergesort(left)
right = mergesort(right)
if last(left) ≤ first(right)
append right to left
return left
result = merge(left, right)
return result
function merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
append rest(left) to result
if length(right) > 0
append rest(right) to result
return result
See also
the Wikipedia entry: merge sort
Note: better performance can be expected if, rather than recursing until length(m) ≤ 1, an insertion sort is used for length(m) smaller than some threshold larger than 1. However, this complicates the example code, so it is not shown here.
| #Lucid | Lucid | msort(a) = if iseod(first next a) then a else merge(msort(b0),msort(b1)) fi
where
p = false fby not p;
b0 = a whenever p;
b1 = a whenever not p;
just(a) = ja
where
ja = a fby if iseod ja then eod else next a fi;
end;
merge(x,y) = if takexx then xx else yy fi
where
xx = (x) upon takexx;
yy = (y) upon not takexx;
takexx = if iseod(yy) then true elseif
iseod(xx) then false else xx <= yy fi;
end;
end; |
http://rosettacode.org/wiki/Stack | Stack |
Data Structure
This illustrates a data structure, a means of storing data within a program.
You may see other such structures in the Data Structures category.
A stack is a container of elements with a last in, first out access policy. It is sometimes also called LIFO.
The stack is accessed through its top.
The basic stack operations are:
push stores a new element onto the stack top;
pop returns the last pushed stack element, while removing it from the stack;
empty tests if the stack contains no elements.
Sometimes the last pushed stack element is made accessible for immutable access (for read) or mutable access (for write):
top (sometimes called peek to keep with the p theme) returns the topmost element without modifying the stack.
Stacks allow a very simple hardware implementation.
They are common in almost all processors.
In programming, stacks are also very popular for their way (LIFO) of resource management, usually memory.
Nested scopes of language objects are naturally implemented by a stack (sometimes by multiple stacks).
This is a classical way to implement local variables of a re-entrant or recursive subprogram. Stacks are also used to describe a formal computational framework.
See stack machine.
Many algorithms in pattern matching, compiler construction (e.g. recursive descent parsers), and machine learning (e.g. based on tree traversal) have a natural representation in terms of stacks.
Task
Create a stack supporting the basic operations: push, pop, empty.
See also
Array
Associative array: Creation, Iteration
Collections
Compound data type
Doubly-linked list: Definition, Element definition, Element insertion, List Traversal, Element Removal
Linked list
Queue: Definition, Usage
Set
Singly-linked list: Element definition, Element insertion, List Traversal, Element Removal
Stack
| #Tcl | Tcl | proc push {stackvar value} {
upvar 1 $stackvar stack
lappend stack $value
}
proc pop {stackvar} {
upvar 1 $stackvar stack
set value [lindex $stack end]
set stack [lrange $stack 0 end-1]
return $value
}
proc size {stackvar} {
upvar 1 $stackvar stack
llength $stack
}
proc empty {stackvar} {
upvar 1 $stackvar stack
expr {[size stack] == 0}
}
proc peek {stackvar} {
upvar 1 $stackvar stack
lindex $stack end
}
set S [list]
empty S ;# ==> 1 (true)
push S foo
empty S ;# ==> 0 (false)
push S bar
peek S ;# ==> bar
pop S ;# ==> bar
peek S ;# ==> foo |
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Quicksort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Task
Sort an array (or list) of elements using the quicksort algorithm.
The elements must have a strict weak order and the index of the array can be of any discrete type.
For languages where this is not possible, sort an array of integers.
Quicksort, also known as partition-exchange sort, uses these steps.
Choose any element of the array to be the pivot.
Divide all other elements (except the pivot) into two partitions.
All elements less than the pivot must be in the first partition.
All elements greater than the pivot must be in the second partition.
Use recursion to sort both partitions.
Join the first sorted partition, the pivot, and the second sorted partition.
The best pivot creates partitions of equal length (or lengths differing by 1).
The worst pivot creates an empty partition (for example, if the pivot is the first or last element of a sorted array).
The run-time of Quicksort ranges from O(n log n) with the best pivots, to O(n2) with the worst pivots, where n is the number of elements in the array.
This is a simple quicksort algorithm, adapted from Wikipedia.
function quicksort(array)
less, equal, greater := three empty arrays
if length(array) > 1
pivot := select any element of array
for each x in array
if x < pivot then add x to less
if x = pivot then add x to equal
if x > pivot then add x to greater
quicksort(less)
quicksort(greater)
array := concatenate(less, equal, greater)
A better quicksort algorithm works in place, by swapping elements within the array, to avoid the memory allocation of more arrays.
function quicksort(array)
if length(array) > 1
pivot := select any element of array
left := first index of array
right := last index of array
while left ≤ right
while array[left] < pivot
left := left + 1
while array[right] > pivot
right := right - 1
if left ≤ right
swap array[left] with array[right]
left := left + 1
right := right - 1
quicksort(array from first index to right)
quicksort(array from left to last index)
Quicksort has a reputation as the fastest sort. Optimized variants of quicksort are common features of many languages and libraries. One often contrasts quicksort with merge sort, because both sorts have an average time of O(n log n).
"On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times." — http://perldoc.perl.org/sort.html
Quicksort is at one end of the spectrum of divide-and-conquer algorithms, with merge sort at the opposite end.
Quicksort is a conquer-then-divide algorithm, which does most of the work during the partitioning and the recursive calls. The subsequent reassembly of the sorted partitions involves trivial effort.
Merge sort is a divide-then-conquer algorithm. The partitioning happens in a trivial way, by splitting the input array in half. Most of the work happens during the recursive calls and the merge phase.
With quicksort, every element in the first partition is less than or equal to every element in the second partition. Therefore, the merge phase of quicksort is so trivial that it needs no mention!
This task has not specified whether to allocate new arrays, or sort in place. This task also has not specified how to choose the pivot element. (Common ways are to choose the first element, the middle element, or the median of three elements.) Thus there is some variety among the following implementations.
| #Maclisp | Maclisp |
;; While not strictly required, it simplifies the
;; implementation considerably to use filter. MACLisp
;; doesn't have one out of the box, so we bring our own
(DEFUN FILTER (F LIST)
(COND
((EQ LIST NIL) NIL)
((FUNCALL F (CAR LIST))
(CONS (CAR LIST) (FILTER F (CDR LIST))))
(T
(FILTER F (CDR LIST)))))
;; And then, quicksort.
(DEFUN QUICKSORT (LIST)
(COND
((OR (EQ LIST ())
(EQ (CDR LIST) ()))
LIST)
(T
(LET
((PIVOT (CAR LIST))
(REST (CDR LIST)))
(APPEND
(QUICKSORT (FILTER #'(LAMBDA (X) (<= X PIVOT)) REST))
(LIST PIVOT)
(QUICKSORT (FILTER #'(LAMBDA (X) (> X PIVOT)) REST)))))))
|
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Insertion sort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
An O(n2) sorting algorithm which moves elements one at a time into the correct position.
The algorithm consists of inserting one element at a time into the previously sorted part of the array, moving higher ranked elements up as necessary.
To start off, the first (or smallest, or any arbitrary) element of the unsorted array is considered to be the sorted part.
Although insertion sort is an O(n2) algorithm, its simplicity, low overhead, good locality of reference and efficiency make it a good choice in two cases:
small n,
as the final finishing-off algorithm for O(n logn) algorithms such as mergesort and quicksort.
The algorithm is as follows (from wikipedia):
function insertionSort(array A)
for i from 1 to length[A]-1 do
value := A[i]
j := i-1
while j >= 0 and A[j] > value do
A[j+1] := A[j]
j := j-1
done
A[j+1] = value
done
Writing the algorithm for integers will suffice.
| #Modula-3 | Modula-3 | MODULE InsertSort;
PROCEDURE IntSort(VAR item: ARRAY OF INTEGER) =
VAR j, value: INTEGER;
BEGIN
FOR i := FIRST(item) + 1 TO LAST(item) DO
value := item[i];
j := i - 1;
WHILE j >= FIRST(item) AND item[j] > value DO
item[j + 1] := item[j];
DEC(j);
END;
item[j + 1] := value;
END;
END IntSort;
END InsertSort. |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #Quackery | Quackery | [ [] swap pqwith >
dup pqsize times
[ frompq rot join swap ]
drop ] is hsort ( [ --> [ )
[] 23 times [ 90 random 10 + join ]
say " " dup echo cr
say " --> " hsort echo |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #Racket | Racket |
#lang racket
(require (only-in srfi/43 vector-swap!))
(define (heap-sort! xs)
(define (ref i) (vector-ref xs i))
(define (swap! i j) (vector-swap! xs i j))
(define size (vector-length xs))
(define (sift-down! r end)
(define c (+ (* 2 r) 1))
(define c+1 (+ c 1))
(when (<= c end)
(define child
(if (and (<= c+1 end) (< (ref c) (ref c+1)))
c+1 c))
(when (< (ref r) (ref child))
(swap! r child))
(sift-down! child end)))
(for ([i (in-range (quotient (- size 2) 2) -1 -1)])
(sift-down! i (- size 1)))
(for ([end (in-range (- size 1) 0 -1)])
(swap! 0 end)
(sift-down! 0 (- end 1)))
xs)
|
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
The merge sort is a recursive sort of order n*log(n).
It is notable for having a worst case and average complexity of O(n*log(n)), and a best case complexity of O(n) (for pre-sorted input).
The basic idea is to split the collection into smaller groups by halving it until the groups only have one element or no elements (which are both entirely sorted groups).
Then merge the groups back together so that their elements are in order.
This is how the algorithm gets its divide and conquer description.
Task
Write a function to sort a collection of integers using the merge sort.
The merge sort algorithm comes in two parts:
a sort function and
a merge function
The functions in pseudocode look like this:
function mergesort(m)
var list left, right, result
if length(m) ≤ 1
return m
else
var middle = length(m) / 2
for each x in m up to middle - 1
add x to left
for each x in m at and after middle
add x to right
left = mergesort(left)
right = mergesort(right)
if last(left) ≤ first(right)
append right to left
return left
result = merge(left, right)
return result
function merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
append rest(left) to result
if length(right) > 0
append rest(right) to result
return result
See also
the Wikipedia entry: merge sort
Note: better performance can be expected if, rather than recursing until length(m) ≤ 1, an insertion sort is used for length(m) smaller than some threshold larger than 1. However, this complicates the example code, so it is not shown here.
| #M2000_Interpreter | M2000 Interpreter |
module checkit {
\\ merge sort
group merge {
function sort(right as stack) {
if len(right)<=1 then =right : exit
left=.sort(stack up right, len(right) div 2 )
right=.sort(right)
\\ stackitem(right) is same as stackitem(right,1)
if stackitem(left, len(left))<=stackitem(right) then
\\ !left take items from left for merging
\\ so after this left and right became empty stacks
=stack:=!left, !right
exit
end if
=.merge(left, right)
}
function sortdown(right as stack) {
if len(right)<=1 then =right : exit
left=.sortdown(stack up right, len(right) div 2 )
right=.sortdown(right)
if stackitem(left, len(left))>stackitem(right) then
=stack:=!left, !right : exit
end if
=.mergedown(left, right)
}
\\ left and right are pointers to stack objects
\\ here we pass by value the pointer not the data
function merge(left as stack, right as stack) {
result=stack
while len(left) > 0 and len(right) > 0
if stackitem(left,1) <= stackitem(right) then
result=stack:=!result, !(stack up left, 1)
else
result=stack:=!result, !(stack up right, 1)
end if
end while
if len(right) > 0 then result=stack:= !result,!right
if len(left) > 0 then result=stack:= !result,!left
=result
}
function mergedown(left as stack, right as stack) {
result=stack
while len(left) > 0 and len(right) > 0
if stackitem(left,1) > stackitem(right) then
result=stack:=!result, !(stack up left, 1)
else
result=stack:=!result, !(stack up right, 1)
end if
end while
if len(right) > 0 then result=stack:= !result,!right
if len(left) > 0 then result=stack:= !result,!left
=result
}
}
k=stack:=7, 5, 2, 6, 1, 4, 2, 6, 3
print merge.sort(k)
print len(k)=0 ' we have to use merge.sort(stack(k)) to pass a copy of k
\\ input array (arr is a pointer to array)
arr=(10,8,9,7,5,6,2,3,0,1)
\\ stack(array pointer) return a stack with a copy of array items
\\ array(stack pointer) return an array, empty the stack
arr2=array(merge.sort(stack(arr)))
Print type$(arr2)
Dim a()
\\ a() is an array as a value, so we just copy arr2 to a()
a()=arr2
\\ to prove we add 1 to each element of arr2
arr2++
Print a() ' 0,1,2,3,4,5,6,7,8,9
Print arr2 ' 1,2,3,4,5,6,7,8,9,11
p=a() ' we get a pointer
\\ a() has a double pointer inside
\\ so a() get just the inner pointer
a()=array(merge.sortdown(stack(p)))
\\ so now p (which use the outer pointer)
\\ still points to a()
print p ' p point to a()
}
checkit
|
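The task note above points out that falling back to an insertion sort below a small threshold usually improves a merge sort in practice. A brief Python sketch of that hybrid follows; the threshold of 4 used in the demo call and all names are arbitrary choices for illustration, not a reference implementation.
def insertion_sort(a):
    for i in range(1, len(a)):
        value, j = a[i], i - 1
        while j >= 0 and a[j] > value:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = value
    return a

def merge(left, right):
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def hybrid_mergesort(m, threshold=16):
    if len(m) <= threshold:            # small runs: insertion sort is cheaper
        return insertion_sort(list(m))
    middle = len(m) // 2
    return merge(hybrid_mergesort(m[:middle], threshold),
                 hybrid_mergesort(m[middle:], threshold))

print(hybrid_mergesort([5, 3, 8, 1, 9, 2, 7, 4, 6, 0], threshold=4))  # [0, 1, ..., 9]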
http://rosettacode.org/wiki/Stack | Stack |
Data Structure
This illustrates a data structure, a means of storing data within a program.
You may see other such structures in the Data Structures category.
A stack is a container of elements with a last in, first out access policy. It is sometimes also called LIFO.
The stack is accessed through its top.
The basic stack operations are:
push stores a new element onto the stack top;
pop returns the last pushed stack element, while removing it from the stack;
empty tests if the stack contains no elements.
Sometimes the last pushed stack element is made accessible for immutable access (for read) or mutable access (for write):
top (sometimes called peek to keep with the p theme) returns the topmost element without modifying the stack.
Stacks allow a very simple hardware implementation.
They are common in almost all processors.
In programming, stacks are also very popular for their way (LIFO) of resource management, usually memory.
Nested scopes of language objects are naturally implemented by a stack (sometimes by multiple stacks).
This is a classical way to implement local variables of a re-entrant or recursive subprogram. Stacks are also used to describe a formal computational framework.
See stack machine.
Many algorithms in pattern matching, compiler construction (e.g. recursive descent parsers), and machine learning (e.g. based on tree traversal) have a natural representation in terms of stacks.
Task
Create a stack supporting the basic operations: push, pop, empty.
See also
Array
Associative array: Creation, Iteration
Collections
Compound data type
Doubly-linked list: Definition, Element definition, Element insertion, List Traversal, Element Removal
Linked list
Queue: Definition, Usage
Set
Singly-linked list: Element definition, Element insertion, List Traversal, Element Removal
Stack
| #UnixPipes | UnixPipes | init() { if [ -e stack ]; then rm stack; fi } # force pop to blow up if empty
push() { echo $1 >> stack; }
pop() {
tail -1 stack;
x=`head -n -1 stack | wc -c`
if [ $x -eq '0' ]; then rm stack; else
truncate -s `head -n -1 stack | wc -c` stack
fi
}
empty() { head -n -1 stack |wc -l; }
stack_top() { tail -1 stack; } |
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Quicksort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Task
Sort an array (or list) of elements using the quicksort algorithm.
The elements must have a strict weak order and the index of the array can be of any discrete type.
For languages where this is not possible, sort an array of integers.
Quicksort, also known as partition-exchange sort, uses these steps.
Choose any element of the array to be the pivot.
Divide all other elements (except the pivot) into two partitions.
All elements less than the pivot must be in the first partition.
All elements greater than the pivot must be in the second partition.
Use recursion to sort both partitions.
Join the first sorted partition, the pivot, and the second sorted partition.
The best pivot creates partitions of equal length (or lengths differing by 1).
The worst pivot creates an empty partition (for example, if the pivot is the first or last element of a sorted array).
The run-time of Quicksort ranges from O(n log n) with the best pivots, to O(n2) with the worst pivots, where n is the number of elements in the array.
This is a simple quicksort algorithm, adapted from Wikipedia.
function quicksort(array)
less, equal, greater := three empty arrays
if length(array) > 1
pivot := select any element of array
for each x in array
if x < pivot then add x to less
if x = pivot then add x to equal
if x > pivot then add x to greater
quicksort(less)
quicksort(greater)
array := concatenate(less, equal, greater)
A better quicksort algorithm works in place, by swapping elements within the array, to avoid the memory allocation of more arrays.
function quicksort(array)
if length(array) > 1
pivot := select any element of array
left := first index of array
right := last index of array
while left ≤ right
while array[left] < pivot
left := left + 1
while array[right] > pivot
right := right - 1
if left ≤ right
swap array[left] with array[right]
left := left + 1
right := right - 1
quicksort(array from first index to right)
quicksort(array from left to last index)
Quicksort has a reputation as the fastest sort. Optimized variants of quicksort are common features of many languages and libraries. One often contrasts quicksort with merge sort, because both sorts have an average time of O(n log n).
"On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times." — http://perldoc.perl.org/sort.html
Quicksort is at one end of the spectrum of divide-and-conquer algorithms, with merge sort at the opposite end.
Quicksort is a conquer-then-divide algorithm, which does most of the work during the partitioning and the recursive calls. The subsequent reassembly of the sorted partitions involves trivial effort.
Merge sort is a divide-then-conquer algorithm. The partitioning happens in a trivial way, by splitting the input array in half. Most of the work happens during the recursive calls and the merge phase.
With quicksort, every element in the first partition is less than or equal to every element in the second partition. Therefore, the merge phase of quicksort is so trivial that it needs no mention!
This task has not specified whether to allocate new arrays, or sort in place. This task also has not specified how to choose the pivot element. (Common ways are to choose the first element, the middle element, or the median of three elements.) Thus there is some variety among the following implementations.
| #Maple | Maple | swap := proc(arr, a, b)
local temp := arr[a]:
arr[a] := arr[b]:
arr[b] := temp:
end proc:
quicksort := proc(arr, low, high)
local pi:
if (low < high) then
pi := qpart(arr,low,high):
quicksort(arr, low, pi-1):
quicksort(arr, pi+1, high):
end if:
end proc:
qpart := proc(arr, low, high)
local i,j,pivot;
pivot := arr[high]:
i := low-1:
for j from low to high-1 by 1 do
if (arr[j] <= pivot) then
i++:
swap(arr, i, j):
end if;
end do;
swap(arr, i+1, high):
return (i+1):
end proc:
a:=Array([12,4,2,1,0]);
quicksort(a,1,5);
a; |
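The Maple code above uses a Lomuto-style partition with the last element as pivot. For comparison, here is a hedged Python sketch of the task's in-place pseudocode, which scans with two indices from both ends and takes the middle element as pivot; the function name and default arguments are illustrative.
def quicksort_inplace(a, first=0, last=None):
    if last is None:
        last = len(a) - 1
    if first >= last:
        return a
    pivot = a[(first + last) // 2]
    left, right = first, last
    while left <= right:
        while a[left] < pivot:          # advance past elements already on the left
            left += 1
        while a[right] > pivot:         # retreat past elements already on the right
            right -= 1
        if left <= right:
            a[left], a[right] = a[right], a[left]
            left += 1
            right -= 1
    quicksort_inplace(a, first, right)  # sort the two partitions recursively
    quicksort_inplace(a, left, last)
    return a

print(quicksort_inplace([12, 4, 2, 1, 0]))  # [0, 1, 2, 4, 12]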
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Insertion sort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
An O(n2) sorting algorithm which moves elements one at a time into the correct position.
The algorithm consists of inserting one element at a time into the previously sorted part of the array, moving higher ranked elements up as necessary.
To start off, the first (or smallest, or any arbitrary) element of the unsorted array is considered to be the sorted part.
Although insertion sort is an O(n2) algorithm, its simplicity, low overhead, good locality of reference and efficiency make it a good choice in two cases:
small n,
as the final finishing-off algorithm for O(n logn) algorithms such as mergesort and quicksort.
The algorithm is as follows (from wikipedia):
function insertionSort(array A)
for i from 1 to length[A]-1 do
value := A[i]
j := i-1
while j >= 0 and A[j] > value do
A[j+1] := A[j]
j := j-1
done
A[j+1] = value
done
Writing the algorithm for integers will suffice.
| #N.2Ft.2Froff | N/t/roff | .de end
..
.de array
. nr \\$1.c 0 1
. de \\$1.push end
. nr \\$1..\\\\n+[\\$1.c] \\\\$1
. end
. de \\$1.pushln end
. if \\\\n(.$>0 .\\$1.push \\\\$1
. if \\\\n(.$>1 \{ \
. shift
. \\$1.pushln \\\\$@
. \}
. end
. de \\$1.dump end
. nr i 0 1
. ds out "
. while \\\\n+i<=\\\\n[\\$1.c] .as out "\\\\n[\\$1..\\\\ni]
. tm \\\\*[out]
. rm out
. rr i
. end
. de \\$1.slideright end
. nr i \\\\$1
. nr i+1 \\\\ni+1
. nr \\$1..\\\\n[i+1] \\\\n[\\$1..\\\\ni]
. rr i
. rr i+1
. end
..
.de insertionsort
. nr keyidx 1 1
. while \\n+[keyidx]<=\\n[\\$1.c] \{ \
. nr key \\n[\\$1..\\n[keyidx]]
. nr compidx \\n[keyidx] 1
. while \\n-[compidx]>=0 \{ \
. if \\n[compidx]=0 \{ \
. nr \\$1..1 \\n[key]
. break
. \}
. ie \\n[\\$1..\\n[compidx]]>\\n[key] \{ \
. \\$1.slideright \\n[compidx]
. \}
. el \{ \
. nr compidx+1 \\n[compidx]+1
. nr \\$1..\\n[compidx+1] \\n[key]
. break
. \}
. \}
. \}
..
.array a
.a.pushln 13 64 22 87 54 87 23 92 11 64 5 9 3 3 0
.insertionsort a
.a.dump |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
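As a runnable companion to the heapSort/heapify/siftDown pseudocode above, here is a Python sketch; it is illustrative only, and the snake_case names are assumptions rather than part of any entry below.
def heap_sort(a):
    count = len(a)
    heapify(a, count)                  # put a into max-heap order
    end = count - 1
    while end > 0:
        a[end], a[0] = a[0], a[end]    # move the current maximum to its final slot
        end -= 1
        sift_down(a, 0, end)           # restore max-heap order on the shrunken heap
    return a
def heapify(a, count):
    start = (count - 2) // 2           # index of the last parent node
    while start >= 0:
        sift_down(a, start, count - 1)
        start -= 1
def sift_down(a, start, end):
    root = start
    while root * 2 + 1 <= end:         # while the root has at least one child
        child = root * 2 + 1           # left child
        if child + 1 <= end and a[child] < a[child + 1]:
            child += 1                 # the right child is larger, sift toward it
        if a[root] < a[child]:
            a[root], a[child] = a[child], a[root]
            root = child
        else:
            return
# example: heap_sort([6, 7, 2, 1, 8, 9, 5, 3, 4]) returns [1, 2, 3, 4, 5, 6, 7, 8, 9]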
| #Raku | Raku | sub heap_sort ( @list ) {
for ( 0 ..^ +@list div 2 ).reverse -> $start {
_sift_down $start, @list.end, @list;
}
for ( 1 ..^ +@list ).reverse -> $end {
@list[ 0, $end ] .= reverse;
_sift_down 0, $end-1, @list;
}
}
sub _sift_down ( $start, $end, @list ) {
my $root = $start;
while ( my $child = $root * 2 + 1 ) <= $end {
$child++ if $child + 1 <= $end and [<] @list[ $child, $child+1 ];
return if @list[$root] >= @list[$child];
@list[ $root, $child ] .= reverse;
$root = $child;
}
}
my @data = 6, 7, 2, 1, 8, 9, 5, 3, 4;
say 'Input = ' ~ @data;
@data.&heap_sort;
say 'Output = ' ~ @data; |
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
The merge sort is a recursive sort of order n*log(n).
It is notable for having a worst case and average complexity of O(n*log(n)), and a best case complexity of O(n) (for pre-sorted input).
The basic idea is to split the collection into smaller groups by halving it until the groups only have one element or no elements (which are both entirely sorted groups).
Then merge the groups back together so that their elements are in order.
This is how the algorithm gets its divide and conquer description.
Task
Write a function to sort a collection of integers using the merge sort.
The merge sort algorithm comes in two parts:
a sort function and
a merge function
The functions in pseudocode look like this:
function mergesort(m)
var list left, right, result
if length(m) ≤ 1
return m
else
var middle = length(m) / 2
for each x in m up to middle - 1
add x to left
for each x in m at and after middle
add x to right
left = mergesort(left)
right = mergesort(right)
if last(left) ≤ first(right)
append right to left
return left
result = merge(left, right)
return result
function merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
        append left to result
    if length(right) > 0
        append right to result
return result
See also
the Wikipedia entry: merge sort
Note: better performance can be expected if, rather than recursing until length(m) ≤ 1, an insertion sort is used for length(m) smaller than some threshold larger than 1. However, this complicates the example code, so it is not shown here.
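A compact Python sketch of the two pseudocode functions above, for reference only (it merges by index instead of repeatedly taking rest(list), which avoids copying a list on every step; the names are illustrative):
def merge_sort(m):
    if len(m) <= 1:
        return m
    middle = len(m) // 2
    left = merge_sort(m[:middle])
    right = merge_sort(m[middle:])
    if left[-1] <= right[0]:           # halves already in order: skip the merge
        return left + right
    return merge(left, right)
def merge(left, right):
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    result.extend(left[i:])            # at most one of these two is non-empty
    result.extend(right[j:])
    return result
# example: merge_sort([17, 3, 72, 0, 36, 2]) returns [0, 2, 3, 17, 36, 72]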
| #Maple | Maple | merge := proc(arr, left, mid, right)
local i, j, k, n1, n2, L, R;
n1 := mid-left+1:
n2 := right-mid:
L := Array(1..n1):
R := Array(1..n2):
for i from 0 to n1-1 do
L(i+1) :=arr(left+i):
end do:
for j from 0 to n2-1 do
R(j+1) := arr(mid+j+1):
end do:
i := 1:
j := 1:
k := left:
while(i <= n1 and j <= n2) do
if (L[i] <= R[j]) then
arr[k] := L[i]:
i++:
else
arr[k] := R[j]:
j++:
end if:
k++:
end do:
while(i <= n1) do
arr[k] := L[i]:
i++:
k++:
end do:
while(j <= n2) do
arr[k] := R[j]:
j++:
k++:
end do:
end proc:
# mergeSort driver, reconstructed: the call below uses mergeSort, but only the
# merge procedure survived in this copy, so a standard recursive split is assumed.
mergeSort := proc(arr, left, right)
    local mid;
    if left < right then
        mid := floor((left+right)/2):
        mergeSort(arr, left, mid):
        mergeSort(arr, mid+1, right):
        merge(arr, left, mid, right):
    end if:
end proc:
arr := Array([17,3,72,0,36,2,3,8,40,0]);
mergeSort(arr,1,numelems(arr)):
arr; |
http://rosettacode.org/wiki/Stack | Stack |
Data Structure
This illustrates a data structure, a means of storing data within a program.
You may see other such structures in the Data Structures category.
A stack is a container of elements with last in, first out access policy. Sometimes it is also called LIFO.
The stack is accessed through its top.
The basic stack operations are:
push stores a new element onto the stack top;
pop returns the last pushed stack element, while removing it from the stack;
empty tests if the stack contains no elements.
Sometimes the last pushed stack element is made accessible for immutable access (for read) or mutable access (for write):
top (sometimes called peek to keep with the p theme) returns the topmost element without modifying the stack.
Stacks allow a very simple hardware implementation.
They are common in almost all processors.
In programming, stacks are also very popular for their way (LIFO) of resource management, usually memory.
Nested scopes of language objects are naturally implemented by a stack (sometimes by multiple stacks).
This is a classical way to implement local variables of a re-entrant or recursive subprogram. Stacks are also used to describe a formal computational framework.
See stack machine.
Many algorithms in pattern matching, compiler construction (e.g. recursive descent parsers), and machine learning (e.g. based on tree traversal) have a natural representation in terms of stacks.
Task
Create a stack supporting the basic operations: push, pop, empty.
See also
Array
Associative array: Creation, Iteration
Collections
Compound data type
Doubly-linked list: Definition, Element definition, Element insertion, List Traversal, Element Removal
Linked list
Queue: Definition, Usage
Set
Singly-linked list: Element definition, Element insertion, List Traversal, Element Removal
Stack
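Before the individual language entries, a minimal Python sketch of the operations described above (push, pop, empty, plus peek), built on a plain list; the class and method names are illustrative, not taken from any entry.
class Stack:
    def __init__(self):
        self._items = []               # the top of the stack is the end of the list
    def push(self, value):
        self._items.append(value)
    def pop(self):
        return self._items.pop()       # removes and returns the top; IndexError if empty
    def peek(self):
        return self._items[-1]         # returns the top without removing it
    def empty(self):
        return not self._items
# example
s = Stack()
s.push('fred'); s.push('wilma')
assert s.peek() == 'wilma'
assert s.pop() == 'wilma'
assert not s.empty()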
| #UNIX_Shell | UNIX Shell | init() {
if [[ -n $KSH_VERSION ]]; then
set -A stack
else
stack=(); # this sets stack to '()' in ksh
fi
}
push() {
stack=("$1" "${stack[@]}")
}
stack_top() {
# this approach sidesteps zsh indexing difference
set -- "${stack[@]}"
printf '%s\n' "$1"
}
pop() {
stack_top
stack=("${stack[@]:1}")
}
empty() {
(( ${#stack[@]} == 0 ))
}
# Demo
push fred; push wilma; push betty; push barney
printf 'peek(stack)==%s\n' "$(stack_top)"
while ! empty; do
pop
done |
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Quicksort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Task
Sort an array (or list) elements using the quicksort algorithm.
The elements must have a strict weak order and the index of the array can be of any discrete type.
For languages where this is not possible, sort an array of integers.
Quicksort, also known as partition-exchange sort, uses these steps.
Choose any element of the array to be the pivot.
Divide all other elements (except the pivot) into two partitions.
All elements less than the pivot must be in the first partition.
All elements greater than the pivot must be in the second partition.
Use recursion to sort both partitions.
Join the first sorted partition, the pivot, and the second sorted partition.
The best pivot creates partitions of equal length (or lengths differing by 1).
The worst pivot creates an empty partition (for example, if the pivot is the first or last element of a sorted array).
The run-time of Quicksort ranges from O(n log n) with the best pivots, to O(n2) with the worst pivots, where n is the number of elements in the array.
This is a simple quicksort algorithm, adapted from Wikipedia.
function quicksort(array)
less, equal, greater := three empty arrays
if length(array) > 1
pivot := select any element of array
for each x in array
if x < pivot then add x to less
if x = pivot then add x to equal
if x > pivot then add x to greater
quicksort(less)
quicksort(greater)
array := concatenate(less, equal, greater)
A better quicksort algorithm works in place, by swapping elements within the array, to avoid the memory allocation of more arrays.
function quicksort(array)
if length(array) > 1
pivot := select any element of array
left := first index of array
right := last index of array
while left ≤ right
while array[left] < pivot
left := left + 1
while array[right] > pivot
right := right - 1
if left ≤ right
swap array[left] with array[right]
left := left + 1
right := right - 1
quicksort(array from first index to right)
quicksort(array from left to last index)
Quicksort has a reputation as the fastest sort. Optimized variants of quicksort are common features of many languages and libraries. One often contrasts quicksort with merge sort, because both sorts have an average time of O(n log n).
"On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times." — http://perldoc.perl.org/sort.html
Quicksort is at one end of the spectrum of divide-and-conquer algorithms, with merge sort at the opposite end.
Quicksort is a conquer-then-divide algorithm, which does most of the work during the partitioning and the recursive calls. The subsequent reassembly of the sorted partitions involves trivial effort.
Merge sort is a divide-then-conquer algorithm. The partitioning happens in a trivial way, by splitting the input array in half. Most of the work happens during the recursive calls and the merge phase.
With quicksort, every element in the first partition is less than or equal to every element in the second partition. Therefore, the merge phase of quicksort is so trivial that it needs no mention!
This task has not specified whether to allocate new arrays, or sort in place. This task also has not specified how to choose the pivot element. (Common ways are to choose the first element, the middle element, or the median of three elements.) Thus there is a variety among the following implementations.
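For reference alongside the entries below, here is a Python sketch of the in-place pseudocode above, using the middle element as the pivot (one of the common choices just mentioned); the function name and default arguments are illustrative.
def quicksort(a, first=0, last=None):
    # in-place partition-exchange sort, following the pseudocode above
    if last is None:
        last = len(a) - 1
    if first >= last:
        return a
    pivot = a[(first + last) // 2]     # middle element as the pivot
    left, right = first, last
    while left <= right:
        while a[left] < pivot:
            left += 1
        while a[right] > pivot:
            right -= 1
        if left <= right:
            a[left], a[right] = a[right], a[left]
            left += 1
            right -= 1
    quicksort(a, first, right)         # recurse into both partitions
    quicksort(a, left, last)
    return a
# example: quicksort([12, 4, 2, 1, 0]) returns [0, 1, 2, 4, 12]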
| #Mathematica.2FWolfram_Language | Mathematica/Wolfram Language | QuickSort[x_List] := Module[{pivot},
If[Length@x <= 1, Return[x]];
pivot = RandomChoice@x;
Flatten@{QuickSort[Cases[x, j_ /; j < pivot]], Cases[x, j_ /; j == pivot], QuickSort[Cases[x, j_ /; j > pivot]]}
] |
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Insertion sort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
An O(n2) sorting algorithm which moves elements one at a time into the correct position.
The algorithm consists of inserting one element at a time into the previously sorted part of the array, moving higher ranked elements up as necessary.
To start off, the first (or smallest, or any arbitrary) element of the unsorted array is considered to be the sorted part.
Although insertion sort is an O(n2) algorithm, its simplicity, low overhead, good locality of reference and efficiency make it a good choice in two cases:
small n,
as the final finishing-off algorithm for O(n logn) algorithms such as mergesort and quicksort.
The algorithm is as follows (from wikipedia):
function insertionSort(array A)
for i from 1 to length[A]-1 do
value := A[i]
j := i-1
while j >= 0 and A[j] > value do
A[j+1] := A[j]
j := j-1
done
A[j+1] = value
done
Writing the algorithm for integers will suffice.
| #Nanoquery | Nanoquery | def insertion_sort(L)
for i in range(1, len(L) - 1)
j = i - 1
key = L[i]
while (L[j] > key) and (j >= 0)
L[j + 1] = L[j]
j -= 1
end
L[j+1] = key
end
return L
end |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #REXX | REXX | /*REXX pgm sorts an array (names of epichoric Greek letters) using a heapsort algorithm.*/
parse arg x; call init /*use args or default, define @ array.*/
call show "before sort:" /*#: the number of elements in array*/
call heapSort #; say copies('▒', 40) /*sort; then after sort, show separator*/
call show " after sort:"
exit /*stick a fork in it, we're all done. */
/*──────────────────────────────────────────────────────────────────────────────────────*/
init: _= 'alpha beta gamma delta digamma epsilon zeta eta theta iota kappa lambda mu nu' ,
"xi omicron pi san qoppa rho sigma tau upsilon phi chi psi omega"
if x='' then x= _; #= words(x) /*#: number of words in X*/
do j=1 for #; @.j= word(x, j); end; return /*assign letters to array*/
/*──────────────────────────────────────────────────────────────────────────────────────*/
heapSort: procedure expose @.; arg n; do j=n%2 by -1 to 1; call heapSuff j,n; end /*j*/
do n=n by -1 to 2; _= @.1; @.1= @.n; @.n= _; call heapSuff 1,n-1
end /*n*/; return /* [↑] swap two elements; and shuffle.*/
/*──────────────────────────────────────────────────────────────────────────────────────*/
heapSuff: procedure expose @.; parse arg i,n; $= @.i /*obtain parent.*/
do while i+i<=n; j= i+i; k= j+1; if k<=n then if @.k>@.j then j= k
if $>=@.j then leave; @.i= @.j; i= j
end /*while*/; @.i= $; return /*define lowest.*/
/*──────────────────────────────────────────────────────────────────────────────────────*/
show: do s=1 for #; say ' element' right(s, length(#)) arg(1) @.s; end; return |
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
The merge sort is a recursive sort of order n*log(n).
It is notable for having a worst case and average complexity of O(n*log(n)), and a best case complexity of O(n) (for pre-sorted input).
The basic idea is to split the collection into smaller groups by halving it until the groups only have one element or no elements (which are both entirely sorted groups).
Then merge the groups back together so that their elements are in order.
This is how the algorithm gets its divide and conquer description.
Task
Write a function to sort a collection of integers using the merge sort.
The merge sort algorithm comes in two parts:
a sort function and
a merge function
The functions in pseudocode look like this:
function mergesort(m)
var list left, right, result
if length(m) ≤ 1
return m
else
var middle = length(m) / 2
for each x in m up to middle - 1
add x to left
for each x in m at and after middle
add x to right
left = mergesort(left)
right = mergesort(right)
if last(left) ≤ first(right)
append right to left
return left
result = merge(left, right)
return result
function merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
        append left to result
    if length(right) > 0
        append right to result
return result
See also
the Wikipedia entry: merge sort
Note: better performance can be expected if, rather than recursing until length(m) ≤ 1, an insertion sort is used for length(m) smaller than some threshold larger than 1. However, this complicates the example code, so it is not shown here.
| #Mathematica_.2F_Wolfram_Language | Mathematica / Wolfram Language | MergeSort[m_List] := Module[{middle},
If[Length[m] >= 2,
middle = Ceiling[Length[m]/2];
Apply[Merge,
Map[MergeSort, Partition[m, middle, middle, {1, 1}, {}]]],
m
]
]
Merge[left_List, right_List] := Module[
{leftIndex = 1, rightIndex = 1},
Table[
Which[
leftIndex > Length[left], right[[rightIndex++]],
rightIndex > Length[right], left[[leftIndex++]],
left[[leftIndex]] <= right[[rightIndex]], left[[leftIndex++]],
True, right[[rightIndex++]]],
{Length[left] + Length[right]}]
] |
http://rosettacode.org/wiki/Stack | Stack |
Data Structure
This illustrates a data structure, a means of storing data within a program.
You may see other such structures in the Data Structures category.
A stack is a container of elements with last in, first out access policy. Sometimes it is also called LIFO.
The stack is accessed through its top.
The basic stack operations are:
push stores a new element onto the stack top;
pop returns the last pushed stack element, while removing it from the stack;
empty tests if the stack contains no elements.
Sometimes the last pushed stack element is made accessible for immutable access (for read) or mutable access (for write):
top (sometimes called peek to keep with the p theme) returns the topmost element without modifying the stack.
Stacks allow a very simple hardware implementation.
They are common in almost all processors.
In programming, stacks are also very popular for their way (LIFO) of resource management, usually memory.
Nested scopes of language objects are naturally implemented by a stack (sometimes by multiple stacks).
This is a classical way to implement local variables of a re-entrant or recursive subprogram. Stacks are also used to describe a formal computational framework.
See stack machine.
Many algorithms in pattern matching, compiler construction (e.g. recursive descent parsers), and machine learning (e.g. based on tree traversal) have a natural representation in terms of stacks.
Task
Create a stack supporting the basic operations: push, pop, empty.
See also
Array
Associative array: Creation, Iteration
Collections
Compound data type
Doubly-linked list: Definition, Element definition, Element insertion, List Traversal, Element Removal
Linked list
Queue: Definition, Usage
Set
Singly-linked list: Element definition, Element insertion, List Traversal, Element Removal
Stack
| #VBA | VBA | 'Simple Stack class
'uses a dynamic array of Variants to stack the values
'has read-only property "Size"
'and methods "Push", "Pop", "IsEmpty"
Private myStack()
Private myStackHeight As Integer
'method Push
Public Function Push(aValue)
'increase stack height
myStackHeight = myStackHeight + 1
ReDim Preserve myStack(myStackHeight)
myStack(myStackHeight) = aValue
End Function
'method Pop
Public Function Pop()
'check for nonempty stack
If myStackHeight > 0 Then
Pop = myStack(myStackHeight)
myStackHeight = myStackHeight - 1
Else
MsgBox "Pop: stack is empty!"
End If
End Function
'method IsEmpty
Public Function IsEmpty() As Boolean
IsEmpty = (myStackHeight = 0)
End Function
'property Size
Property Get Size() As Integer
Size = myStackHeight
End Property |
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Quicksort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Task
Sort an array (or list) elements using the quicksort algorithm.
The elements must have a strict weak order and the index of the array can be of any discrete type.
For languages where this is not possible, sort an array of integers.
Quicksort, also known as partition-exchange sort, uses these steps.
Choose any element of the array to be the pivot.
Divide all other elements (except the pivot) into two partitions.
All elements less than the pivot must be in the first partition.
All elements greater than the pivot must be in the second partition.
Use recursion to sort both partitions.
Join the first sorted partition, the pivot, and the second sorted partition.
The best pivot creates partitions of equal length (or lengths differing by 1).
The worst pivot creates an empty partition (for example, if the pivot is the first or last element of a sorted array).
The run-time of Quicksort ranges from O(n log n) with the best pivots, to O(n2) with the worst pivots, where n is the number of elements in the array.
This is a simple quicksort algorithm, adapted from Wikipedia.
function quicksort(array)
less, equal, greater := three empty arrays
if length(array) > 1
pivot := select any element of array
for each x in array
if x < pivot then add x to less
if x = pivot then add x to equal
if x > pivot then add x to greater
quicksort(less)
quicksort(greater)
array := concatenate(less, equal, greater)
A better quicksort algorithm works in place, by swapping elements within the array, to avoid the memory allocation of more arrays.
function quicksort(array)
if length(array) > 1
pivot := select any element of array
left := first index of array
right := last index of array
while left ≤ right
while array[left] < pivot
left := left + 1
while array[right] > pivot
right := right - 1
if left ≤ right
swap array[left] with array[right]
left := left + 1
right := right - 1
quicksort(array from first index to right)
quicksort(array from left to last index)
Quicksort has a reputation as the fastest sort. Optimized variants of quicksort are common features of many languages and libraries. One often contrasts quicksort with merge sort, because both sorts have an average time of O(n log n).
"On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times." — http://perldoc.perl.org/sort.html
Quicksort is at one end of the spectrum of divide-and-conquer algorithms, with merge sort at the opposite end.
Quicksort is a conquer-then-divide algorithm, which does most of the work during the partitioning and the recursive calls. The subsequent reassembly of the sorted partitions involves trivial effort.
Merge sort is a divide-then-conquer algorithm. The partitioning happens in a trivial way, by splitting the input array in half. Most of the work happens during the recursive calls and the merge phase.
With quicksort, every element in the first partition is less than or equal to every element in the second partition. Therefore, the merge phase of quicksort is so trivial that it needs no mention!
This task has not specified whether to allocate new arrays, or sort in place. This task also has not specified how to choose the pivot element. (Common ways are to choose the first element, the middle element, or the median of three elements.) Thus there is a variety among the following implementations.
| #MATLAB | MATLAB | function sortedArray = quickSort(array)
if numel(array) <= 1 %If the array has 1 element then it can't be sorted
sortedArray = array;
return
end
pivot = array(end);
array(end) = [];
%Create two new arrays which contain the elements that are less than or
%equal to the pivot called "less" and greater than the pivot called
%"greater"
less = array( array <= pivot );
greater = array( array > pivot );
%The sorted array is the concatenation of the sorted "less" array, the
%pivot and the sorted "greater" array in that order
sortedArray = [quickSort(less) pivot quickSort(greater)];
end |
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Insertion sort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
An O(n2) sorting algorithm which moves elements one at a time into the correct position.
The algorithm consists of inserting one element at a time into the previously sorted part of the array, moving higher ranked elements up as necessary.
To start off, the first (or smallest, or any arbitrary) element of the unsorted array is considered to be the sorted part.
Although insertion sort is an O(n2) algorithm, its simplicity, low overhead, good locality of reference and efficiency make it a good choice in two cases:
small n,
as the final finishing-off algorithm for O(n logn) algorithms such as mergesort and quicksort.
The algorithm is as follows (from wikipedia):
function insertionSort(array A)
for i from 1 to length[A]-1 do
value := A[i]
j := i-1
while j >= 0 and A[j] > value do
A[j+1] := A[j]
j := j-1
done
A[j+1] = value
done
Writing the algorithm for integers will suffice.
| #Nemerle | Nemerle | using System.Console;
using Nemerle.English;
module InsertSort
{
public static Sort(this a : array[int]) : void
{
mutable value = 0; mutable j = 0;
foreach (i in [1 .. (a.Length - 1)])
{
value = a[i]; j = i - 1;
while (j >= 0 and a[j] > value)
{
a[j + 1] = a[j];
j = j - 1;
}
a[j + 1] = value;
}
}
Main() : void
{
def arr = array[1, 4, 8, 3, 8, 3, 5, 2, 6];
arr.Sort();
foreach (i in arr) Write($"$i ");
}
} |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #Ring | Ring |
# Project : Sorting algorithms/Heapsort
test = [4, 65, 2, -31, 0, 99, 2, 83, 782, 1]
see "before sort:" + nl
showarray(test)
heapsort(test)
see "after sort:" + nl
showarray(test)
func heapsort(a)
cheapify(a)
for e = len(a) to 1 step -1
temp = a[e]
a[e] = a[1]
a[1] = temp
siftdown(a, 1, e-1)
next
func cheapify(a)
m = len(a)
        for s = floor(m / 2) to 1 step -1   # Ring arrays are 1-based: the last parent node is at floor(m/2)
siftdown(a,s,m)
next
func siftdown(a,s,e)
r = s
        while r * 2 <= e                    # 1-based indexing: node r has a left child at index 2*r
c = r * 2
if c + 1 <= e
if a[c] < a[c + 1]
c = c + 1
ok
ok
if a[r] < a[c]
temp = a[r]
a[r] = a[c]
a[c] = temp
r = c
else
exit
ok
end
func showarray(vect)
svect = ""
for n = 1 to len(vect)
svect = svect + vect[n] + " "
next
svect = left(svect, len(svect) - 1)
see svect + nl
|
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
The merge sort is a recursive sort of order n*log(n).
It is notable for having a worst case and average complexity of O(n*log(n)), and a best case complexity of O(n) (for pre-sorted input).
The basic idea is to split the collection into smaller groups by halving it until the groups only have one element or no elements (which are both entirely sorted groups).
Then merge the groups back together so that their elements are in order.
This is how the algorithm gets its divide and conquer description.
Task
Write a function to sort a collection of integers using the merge sort.
The merge sort algorithm comes in two parts:
a sort function and
a merge function
The functions in pseudocode look like this:
function mergesort(m)
var list left, right, result
if length(m) ≤ 1
return m
else
var middle = length(m) / 2
for each x in m up to middle - 1
add x to left
for each x in m at and after middle
add x to right
left = mergesort(left)
right = mergesort(right)
if last(left) ≤ first(right)
append right to left
return left
result = merge(left, right)
return result
function merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
append rest(left) to result
if length(right) > 0
append rest(right) to result
return result
See also
the Wikipedia entry: merge sort
Note: better performance can be expected if, rather than recursing until length(m) ≤ 1, an insertion sort is used for length(m) smaller than some threshold larger than 1. However, this complicates the example code, so it is not shown here.
| #MATLAB | MATLAB | function list = mergeSort(list)
if numel(list) <= 1
return
else
middle = ceil(numel(list) / 2);
left = list(1:middle);
right = list(middle+1:end);
left = mergeSort(left);
right = mergeSort(right);
if left(end) <= right(1)
list = [left right];
return
end
%merge(left,right)
counter = 1;
while (numel(left) > 0) && (numel(right) > 0)
if(left(1) <= right(1))
list(counter) = left(1);
left(1) = [];
else
list(counter) = right(1);
right(1) = [];
end
counter = counter + 1;
end
if numel(left) > 0
list(counter:end) = left;
elseif numel(right) > 0
list(counter:end) = right;
end
%end merge
end %if
end %mergeSort |
http://rosettacode.org/wiki/Stack | Stack |
Data Structure
This illustrates a data structure, a means of storing data within a program.
You may see other such structures in the Data Structures category.
A stack is a container of elements with last in, first out access policy. Sometimes it is also called LIFO.
The stack is accessed through its top.
The basic stack operations are:
push stores a new element onto the stack top;
pop returns the last pushed stack element, while removing it from the stack;
empty tests if the stack contains no elements.
Sometimes the last pushed stack element is made accessible for immutable access (for read) or mutable access (for write):
top (sometimes called peek to keep with the p theme) returns the topmost element without modifying the stack.
Stacks allow a very simple hardware implementation.
They are common in almost all processors.
In programming, stacks are also very popular for their way (LIFO) of resource management, usually memory.
Nested scopes of language objects are naturally implemented by a stack (sometimes by multiple stacks).
This is a classical way to implement local variables of a re-entrant or recursive subprogram. Stacks are also used to describe a formal computational framework.
See stack machine.
Many algorithms in pattern matching, compiler construction (e.g. recursive descent parsers), and machine learning (e.g. based on tree traversal) have a natural representation in terms of stacks.
Task
Create a stack supporting the basic operations: push, pop, empty.
See also
Array
Associative array: Creation, Iteration
Collections
Compound data type
Doubly-linked list: Definition, Element definition, Element insertion, List Traversal, Element Removal
Linked list
Queue: Definition, Usage
Set
Singly-linked list: Element definition, Element insertion, List Traversal, Element Removal
Stack
| #VBScript | VBScript | class stack
dim tos
dim stack()
dim stacksize
private sub class_initialize
stacksize = 100
redim stack( stacksize )
tos = 0
end sub
public sub push( x )
stack(tos) = x
tos = tos + 1
end sub
public property get stackempty
stackempty = ( tos = 0 )
end property
public property get stackfull
stackfull = ( tos > stacksize )
end property
public property get stackroom
stackroom = stacksize - tos
end property
public function pop()
pop = stack( tos - 1 )
tos = tos - 1
end function
public sub resizestack( n )
redim preserve stack( n )
stacksize = n
if tos > stacksize then
tos = stacksize
end if
end sub
end class
dim s
set s = new stack
s.resizestack 10
wscript.echo s.stackempty
dim i
for i = 1 to 10
s.push rnd
wscript.echo s.stackroom
if s.stackroom = 0 then exit for
next
for i = 1 to 10
wscript.echo s.pop
if s.stackempty then exit for
next |
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Quicksort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Task
Sort an array (or list) elements using the quicksort algorithm.
The elements must have a strict weak order and the index of the array can be of any discrete type.
For languages where this is not possible, sort an array of integers.
Quicksort, also known as partition-exchange sort, uses these steps.
Choose any element of the array to be the pivot.
Divide all other elements (except the pivot) into two partitions.
All elements less than the pivot must be in the first partition.
All elements greater than the pivot must be in the second partition.
Use recursion to sort both partitions.
Join the first sorted partition, the pivot, and the second sorted partition.
The best pivot creates partitions of equal length (or lengths differing by 1).
The worst pivot creates an empty partition (for example, if the pivot is the first or last element of a sorted array).
The run-time of Quicksort ranges from O(n log n) with the best pivots, to O(n2) with the worst pivots, where n is the number of elements in the array.
This is a simple quicksort algorithm, adapted from Wikipedia.
function quicksort(array)
less, equal, greater := three empty arrays
if length(array) > 1
pivot := select any element of array
for each x in array
if x < pivot then add x to less
if x = pivot then add x to equal
if x > pivot then add x to greater
quicksort(less)
quicksort(greater)
array := concatenate(less, equal, greater)
A better quicksort algorithm works in place, by swapping elements within the array, to avoid the memory allocation of more arrays.
function quicksort(array)
if length(array) > 1
pivot := select any element of array
left := first index of array
right := last index of array
while left ≤ right
while array[left] < pivot
left := left + 1
while array[right] > pivot
right := right - 1
if left ≤ right
swap array[left] with array[right]
left := left + 1
right := right - 1
quicksort(array from first index to right)
quicksort(array from left to last index)
Quicksort has a reputation as the fastest sort. Optimized variants of quicksort are common features of many languages and libraries. One often contrasts quicksort with merge sort, because both sorts have an average time of O(n log n).
"On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times." — http://perldoc.perl.org/sort.html
Quicksort is at one end of the spectrum of divide-and-conquer algorithms, with merge sort at the opposite end.
Quicksort is a conquer-then-divide algorithm, which does most of the work during the partitioning and the recursive calls. The subsequent reassembly of the sorted partitions involves trivial effort.
Merge sort is a divide-then-conquer algorithm. The partitioning happens in a trivial way, by splitting the input array in half. Most of the work happens during the recursive calls and the merge phase.
With quicksort, every element in the first partition is less than or equal to every element in the second partition. Therefore, the merge phase of quicksort is so trivial that it needs no mention!
This task has not specified whether to allocate new arrays, or sort in place. This task also has not specified how to choose the pivot element. (Common ways are to choose the first element, the middle element, or the median of three elements.) Thus there is a variety among the following implementations.
| #MAXScript | MAXScript | fn quickSort arr =
(
less = #()
pivotList = #()
more = #()
if arr.count <= 1 then
(
arr
)
else
(
pivot = arr[arr.count/2]
for i in arr do
(
case of
(
(i < pivot): (append less i)
(i == pivot): (append pivotList i)
(i > pivot): (append more i)
)
)
less = quickSort less
more = quickSort more
less + pivotList + more
)
)
a = #(4, 89, -3, 42, 5, 0, 2, 889)
a = quickSort a |
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Insertion sort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
An O(n2) sorting algorithm which moves elements one at a time into the correct position.
The algorithm consists of inserting one element at a time into the previously sorted part of the array, moving higher ranked elements up as necessary.
To start off, the first (or smallest, or any arbitrary) element of the unsorted array is considered to be the sorted part.
Although insertion sort is an O(n2) algorithm, its simplicity, low overhead, good locality of reference and efficiency make it a good choice in two cases:
small n,
as the final finishing-off algorithm for O(n logn) algorithms such as mergesort and quicksort.
The algorithm is as follows (from wikipedia):
function insertionSort(array A)
for i from 1 to length[A]-1 do
value := A[i]
j := i-1
while j >= 0 and A[j] > value do
A[j+1] := A[j]
j := j-1
done
A[j+1] = value
done
Writing the algorithm for integers will suffice.
| #NetRexx | NetRexx | /* NetRexx */
options replace format comments java crossref savelog symbols binary
import java.util.List
placesList = [String -
"UK London", "US New York", "US Boston", "US Washington" -
, "UK Washington", "US Birmingham", "UK Birmingham", "UK Boston" -
]
lists = [ -
placesList -
, insertionSort(String[] Arrays.copyOf(placesList, placesList.length)) -
]
loop ln = 0 to lists.length - 1
cl = lists[ln]
loop ct = 0 to cl.length - 1
say cl[ct]
end ct
say
end ln
return
method insertionSort(A = String[]) public constant binary returns String[]
rl = String[A.length]
al = List insertionSort(Arrays.asList(A))
al.toArray(rl)
return rl
method insertionSort(A = List) public constant binary returns ArrayList
loop i_ = 1 to A.size - 1
value = A.get(i_)
j_ = i_ - 1
loop label j_ while j_ >= 0
if (Comparable A.get(j_)).compareTo(Comparable value) <= 0 then leave j_
A.set(j_ + 1, A.get(j_))
j_ = j_ - 1
end j_
A.set(j_ + 1, value)
end i_
return ArrayList(A)
|
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
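Before the implementation below, here is a minimal Python sketch of the same heapify/siftDown scheme (illustrative only; names and test data are assumptions):
def heapsort(a):
    # In-place heapsort: build a max-heap, then repeatedly move the root to the end.
    def sift_down(start, end):
        root = start
        while root * 2 + 1 <= end:
            child = root * 2 + 1                      # left child
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1                            # right child is larger
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return
    n = len(a)
    for start in range((n - 2) // 2, -1, -1):         # heapify
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]                   # current maximum goes to its final place
        sift_down(0, end - 1)
    return a
print(heapsort([3, 5, 7, 9, 0, 8, 1, 4, 2, 6]))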
| #Ruby | Ruby | class Array
def heapsort
self.dup.heapsort!
end
def heapsort!
# in pseudo-code, heapify only called once, so inline it here
((length - 2) / 2).downto(0) {|start| siftdown(start, length - 1)}
# "end" is a ruby keyword
(length - 1).downto(1) do |end_|
self[end_], self[0] = self[0], self[end_]
siftdown(0, end_ - 1)
end
self
end
def siftdown(start, end_)
root = start
loop do
child = root * 2 + 1
break if child > end_
if child + 1 <= end_ and self[child] < self[child + 1]
child += 1
end
if self[root] < self[child]
self[root], self[child] = self[child], self[root]
root = child
else
break
end
end
end
end |
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
The merge sort is a recursive sort of order n*log(n).
It is notable for having a worst case and average complexity of O(n*log(n)), and a best case complexity of O(n) (for pre-sorted input).
The basic idea is to split the collection into smaller groups by halving it until the groups only have one element or no elements (which are both entirely sorted groups).
Then merge the groups back together so that their elements are in order.
This is how the algorithm gets its divide and conquer description.
Task
Write a function to sort a collection of integers using the merge sort.
The merge sort algorithm comes in two parts:
a sort function and
a merge function
The functions in pseudocode look like this:
function mergesort(m)
var list left, right, result
if length(m) ≤ 1
return m
else
var middle = length(m) / 2
for each x in m up to middle - 1
add x to left
for each x in m at and after middle
add x to right
left = mergesort(left)
right = mergesort(right)
if last(left) ≤ first(right)
append right to left
return left
result = merge(left, right)
return result
function merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
    append left to result
if length(right) > 0
    append right to result
return result
See also
the Wikipedia entry: merge sort
Note: better performance can be expected if, rather than recursing until length(m) ≤ 1, an insertion sort is used for length(m) smaller than some threshold larger than 1. However, this complicates the example code, so it is not shown here.
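To make the pseudocode concrete, here is a minimal Python sketch of the top-down version (illustrative only; names and test data are assumptions, and it is separate from the implementation that follows):
def merge(left, right):
    # Merge two already-sorted lists into one sorted list.
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])    # at most one of these two is non-empty
    result.extend(right[j:])
    return result
def mergesort(m):
    # Top-down merge sort returning a new sorted list.
    if len(m) <= 1:
        return m
    middle = len(m) // 2
    left = mergesort(m[:middle])
    right = mergesort(m[middle:])
    if left[-1] <= right[0]:   # halves already in order: skip the merge
        return left + right
    return merge(left, right)
print(mergesort([1, 3, 9, 5, 8, 6, 5, 1, 7, 9, 8, 6, 4, 2]))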
| #Maxima | Maxima | merge(a, b) := block(
[c: [ ], i: 1, j: 1, p: length(a), q: length(b)],
while i <= p and j <= q do (
if a[i] < b[j] then (
c: endcons(a[i], c),
i: i + 1
) else (
c: endcons(b[j], c),
j: j + 1
)
),
if i > p then append(c, rest(b, j - 1)) else append(c, rest(a, i - 1))
)$
mergesort(u) := block(
[n: length(u), k, a, b],
if n <= 1 then u else (
a: rest(u, k: quotient(n, 2)),
b: rest(u, k - n),
merge(mergesort(a), mergesort(b))
)
)$ |
http://rosettacode.org/wiki/Stack | Stack |
Data Structure
This illustrates a data structure, a means of storing data within a program.
You may see other such structures in the Data Structures category.
A stack is a container of elements with a last-in, first-out access policy. Sometimes it is also called a LIFO.
The stack is accessed through its top.
The basic stack operations are:
push stores a new element onto the stack top;
pop returns the last pushed stack element, while removing it from the stack;
empty tests if the stack contains no elements.
Sometimes the last pushed stack element is made accessible for immutable access (for read) or mutable access (for write):
top (sometimes called peek to keep with the p theme) returns the topmost element without modifying the stack.
Stacks allow a very simple hardware implementation.
They are common in almost all processors.
In programming, stacks are also very popular for their way (LIFO) of resource management, usually memory.
Nested scopes of language objects are naturally implemented by a stack (sometimes by multiple stacks).
This is a classical way to implement local variables of a re-entrant or recursive subprogram. Stacks are also used to describe a formal computational framework.
See stack machine.
Many algorithms in pattern matching, compiler construction (e.g. recursive descent parsers), and machine learning (e.g. based on tree traversal) have a natural representation in terms of stacks.
Task
Create a stack supporting the basic operations: push, pop, empty.
See also
Array
Associative array: Creation, Iteration
Collections
Compound data type
Doubly-linked list: Definition, Element definition, Element insertion, List Traversal, Element Removal
Linked list
Queue: Definition, Usage
Set
Singly-linked list: Element definition, Element insertion, List Traversal, Element Removal
Stack
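A minimal Python sketch of the three basic operations (illustrative only; the class name is an assumption, and a plain Python list already provides push/pop via append/pop):
class Stack:
    # LIFO stack backed by a Python list; the end of the list is the top.
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        if self.empty():
            raise IndexError("pop from empty stack")
        return self._items.pop()
    def peek(self):
        if self.empty():
            raise IndexError("peek at empty stack")
        return self._items[-1]
    def empty(self):
        return not self._items
s = Stack()
s.push(1)
s.push(2)
print(s.pop(), s.peek(), s.empty())   # prints: 2 1 False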
| #Vlang | Vlang | const (
max_depth = 256
)
struct Stack {
mut:
data []f32 = []f32{len: max_depth}
depth int
}
fn (mut s Stack) push(v f32) {
if s.depth >= max_depth {
return
}
println('Push: ${v:3.2f}')
s.data[s.depth] = v
s.depth++
}
fn (mut s Stack) pop() ?f32 {
if s.depth > 0 {
s.depth--
result := s.data[s.depth]
println('Pop: top of stack was ${result:3.2f}')
return result
}
return error('Stack Underflow!!')
}
fn (s Stack) peek() ?f32 {
if s.depth > 0 {
result := s.data[s.depth - 1]
println('Peek: top of stack is ${result:3.2f}')
return result
}
return error('Out of Bounds...')
}
fn (s Stack) empty() bool {
return s.depth == 0
}
fn main() {
mut stack := Stack{}
println('Stack is empty? ' + if stack.empty() { 'Yes' } else { 'No' })
stack.push(5.0)
stack.push(4.2)
println('Stack is empty? ' + if stack.empty() { 'Yes' } else { 'No' })
stack.peek() or { return }
stack.pop() or { return }
stack.pop() or { return }
}
|
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Quicksort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Task
Sort an array (or list) elements using the quicksort algorithm.
The elements must have a strict weak order and the index of the array can be of any discrete type.
For languages where this is not possible, sort an array of integers.
Quicksort, also known as partition-exchange sort, uses these steps.
Choose any element of the array to be the pivot.
Divide all other elements (except the pivot) into two partitions.
All elements less than the pivot must be in the first partition.
All elements greater than the pivot must be in the second partition.
Use recursion to sort both partitions.
Join the first sorted partition, the pivot, and the second sorted partition.
The best pivot creates partitions of equal length (or lengths differing by 1).
The worst pivot creates an empty partition (for example, if the pivot is the first or last element of a sorted array).
The run-time of Quicksort ranges from O(n log n) with the best pivots, to O(n2) with the worst pivots, where n is the number of elements in the array.
This is a simple quicksort algorithm, adapted from Wikipedia.
function quicksort(array)
less, equal, greater := three empty arrays
if length(array) > 1
pivot := select any element of array
for each x in array
if x < pivot then add x to less
if x = pivot then add x to equal
if x > pivot then add x to greater
quicksort(less)
quicksort(greater)
array := concatenate(less, equal, greater)
A better quicksort algorithm works in place, by swapping elements within the array, to avoid the memory allocation of more arrays.
function quicksort(array)
if length(array) > 1
pivot := select any element of array
left := first index of array
right := last index of array
while left ≤ right
while array[left] < pivot
left := left + 1
while array[right] > pivot
right := right - 1
if left ≤ right
swap array[left] with array[right]
left := left + 1
right := right - 1
quicksort(array from first index to right)
quicksort(array from left to last index)
Quicksort has a reputation as the fastest sort. Optimized variants of quicksort are common features of many languages and libraries. One often contrasts quicksort with merge sort, because both sorts have an average time of O(n log n).
"On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times." — http://perldoc.perl.org/sort.html
Quicksort is at one end of the spectrum of divide-and-conquer algorithms, with merge sort at the opposite end.
Quicksort is a conquer-then-divide algorithm, which does most of the work during the partitioning and the recursive calls. The subsequent reassembly of the sorted partitions involves trivial effort.
Merge sort is a divide-then-conquer algorithm. The partitioning happens in a trivial way, by splitting the input array in half. Most of the work happens during the recursive calls and the merge phase.
With quicksort, every element in the first partition is less than or equal to every element in the second partition. Therefore, the merge phase of quicksort is so trivial that it needs no mention!
This task has not specified whether to allocate new arrays, or sort in place. This task also has not specified how to choose the pivot element. (Common ways are to choose the first element, the middle element, or the median of three elements.) Thus there is a variety among the following implementations.
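A minimal Python sketch of the simple (non-in-place) variant from the first pseudocode block, using the middle element as pivot (illustrative only; the pivot choice, names, and test data are assumptions):
def quicksort(array):
    # Return a new sorted list; values equal to the pivot form the middle partition.
    if len(array) <= 1:
        return array
    pivot = array[len(array) // 2]
    less = [x for x in array if x < pivot]
    equal = [x for x in array if x == pivot]
    greater = [x for x in array if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
print(quicksort([1, 3, 9, 5, 8, 6, 5, 1, 7, 9, 8, 6, 4, 2]))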
| #Mercury | Mercury | %%%-------------------------------------------------------------------
:- module quicksort_task_for_lists.
:- interface.
:- import_module io.
:- pred main(io, io).
:- mode main(di, uo) is det.
:- implementation.
:- import_module int.
:- import_module list.
%%%-------------------------------------------------------------------
%%%
%%% Partitioning a list into three:
%%%
%%% Left elements less than the pivot
%%% Middle elements equal to the pivot
%%% Right elements greater than the pivot
%%%
%%% The implementation is tail-recursive.
%%%
:- pred partition(comparison_func(T), T, list(T),
list(T), list(T), list(T)).
:- mode partition(in, in, in, out, out, out) is det.
partition(Compare, Pivot, Lst, Left, Middle, Right) :-
partition(Compare, Pivot, Lst, [], Left, [], Middle, [], Right).
:- pred partition(comparison_func(T), T, list(T),
list(T), list(T),
list(T), list(T),
list(T), list(T)).
:- mode partition(in, in, in,
in, out,
in, out,
in, out) is det.
partition(_, _, [], Left0, Left, Middle0, Middle, Right0, Right) :-
Left = Left0,
Middle = Middle0,
Right = Right0.
partition(Compare, Pivot, [Head | Tail], Left0, Left,
Middle0, Middle, Right0, Right) :-
Compare(Head, Pivot) = Cmp,
(if (Cmp = (<))
then partition(Compare, Pivot, Tail,
[Head | Left0], Left,
Middle0, Middle,
Right0, Right)
else if (Cmp = (=))
then partition(Compare, Pivot, Tail,
Left0, Left,
[Head | Middle0], Middle,
Right0, Right)
else partition(Compare, Pivot, Tail,
Left0, Left,
Middle0, Middle,
[Head | Right0], Right)).
%%%-------------------------------------------------------------------
%%%
%%% Quicksort using the first element as pivot.
%%%
%%% This is not the world's best choice of pivot, but it is the
%%% easiest one to get from a linked list.
%%%
%%% This implementation is *not* tail-recursive--as most quicksort
%%% implementations also are not. (However, do an online search on
%%% "quicksort fortran 77" and you will find some "tail-recursive"
%%% implementations, with the tail recursions expressed as gotos.)
%%%
:- func quicksort(comparison_func(T), list(T)) = list(T).
quicksort(_, []) = [].
quicksort(Compare, [Pivot | Tail]) = Sorted_Lst :-
partition(Compare, Pivot, Tail, Left, Middle, Right),
quicksort(Compare, Left) = Sorted_Left,
quicksort(Compare, Right) = Sorted_Right,
Sorted_Left ++ [Pivot | Middle] ++ Sorted_Right = Sorted_Lst.
%%%-------------------------------------------------------------------
:- func example_numbers = list(int).
example_numbers = [1, 3, 9, 5, 8, 6, 5, 1, 7, 9, 8, 6, 4, 2].
:- func int_compare(int, int) = comparison_result.
int_compare(I, J) = Cmp :-
if (I < J) then (Cmp = (<))
else if (I = J) then (Cmp = (=))
else (Cmp = (>)).
main(!IO) :-
quicksort(int_compare, example_numbers) = Sorted_Numbers,
print("unsorted: ", !IO),
print_line(example_numbers, !IO),
print("sorted: ", !IO),
print_line(Sorted_Numbers, !IO).
%%%-------------------------------------------------------------------
%%% local variables:
%%% mode: mercury
%%% prolog-indent-width: 2
%%% end: |
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Insertion sort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
An O(n2) sorting algorithm which moves elements one at a time into the correct position.
The algorithm consists of inserting one element at a time into the previously sorted part of the array, moving higher ranked elements up as necessary.
To start off, the first (or smallest, or any arbitrary) element of the unsorted array is considered to be the sorted part.
Although insertion sort is an O(n2) algorithm, its simplicity, low overhead, good locality of reference and efficiency make it a good choice in two cases:
small n,
as the final finishing-off algorithm for O(n logn) algorithms such as mergesort and quicksort.
The algorithm is as follows (from Wikipedia):
function insertionSort(array A)
for i from 1 to length[A]-1 do
value := A[i]
j := i-1
while j >= 0 and A[j] > value do
A[j+1] := A[j]
j := j-1
done
A[j+1] := value
done
Writing the algorithm for integers will suffice.
| #Nim | Nim | proc insertSort[T](a: var openarray[T]) =
for i in 1 .. a.high:
let value = a[i]
var j = i
while j > 0 and value < a[j-1]:
a[j] = a[j-1]
dec j
a[j] = value
var a = @[4, 65, 2, -31, 0, 99, 2, 83, 782]
insertSort a
echo a |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #Rust | Rust | fn main() {
let mut v = [4, 6, 8, 1, 0, 3, 2, 2, 9, 5];
heap_sort(&mut v, |x, y| x < y);
println!("{:?}", v);
}
fn heap_sort<T, F>(array: &mut [T], order: F)
where
F: Fn(&T, &T) -> bool,
{
let len = array.len();
// Create heap
for start in (0..len / 2).rev() {
shift_down(array, &order, start, len - 1)
}
for end in (1..len).rev() {
array.swap(0, end);
shift_down(array, &order, 0, end - 1)
}
}
fn shift_down<T, F>(array: &mut [T], order: &F, start: usize, end: usize)
where
F: Fn(&T, &T) -> bool,
{
let mut root = start;
loop {
let mut child = root * 2 + 1;
if child > end {
break;
}
if child + 1 <= end && order(&array[child], &array[child + 1]) {
child += 1;
}
if order(&array[root], &array[child]) {
array.swap(root, child);
root = child
} else {
break;
}
}
} |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #Scala | Scala | def heapSort[T](a: Array[T])(implicit ord: Ordering[T]) {
import scala.annotation.tailrec // Ensure functions are tail-recursive
import ord._
val indexOrdering = Ordering by a.apply
def numberOfLeaves(heapSize: Int) = (heapSize + 1) / 2
def children(i: Int, heapSize: Int) = {
val leftChild = i * 2 + 1
leftChild to leftChild + 1 takeWhile (_ < heapSize)
}
def swap(i: Int, j: Int) = {
val tmp = a(i)
a(i) = a(j)
a(j) = tmp
}
// Maintain partial ordering by bubbling down elements
@tailrec
def siftDown(i: Int, heapSize: Int) {
val childrenOfI = children(i, heapSize)
if (childrenOfI nonEmpty) {
val biggestChild = childrenOfI max indexOrdering
if (a(i) < a(biggestChild)) {
swap(i, biggestChild)
siftDown(biggestChild, heapSize)
}
}
}
// Prepare heap by sifting down all non-leaf elements
for (i <- a.indices.reverse drop numberOfLeaves(a.size)) siftDown(i, a.size)
// Sort from the end of the array forward, by swapping the highest element,
// which is always the top of the heap, to the end of the unsorted array
for (i <- a.indices.reverse) {
swap(0, i)
siftDown(0, i)
}
} |
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
The merge sort is a recursive sort of order n*log(n).
It is notable for having a worst case and average complexity of O(n*log(n)), and a best case complexity of O(n) (for pre-sorted input).
The basic idea is to split the collection into smaller groups by halving it until the groups only have one element or no elements (which are both entirely sorted groups).
Then merge the groups back together so that their elements are in order.
This is how the algorithm gets its divide and conquer description.
Task
Write a function to sort a collection of integers using the merge sort.
The merge sort algorithm comes in two parts:
a sort function and
a merge function
The functions in pseudocode look like this:
function mergesort(m)
var list left, right, result
if length(m) ≤ 1
return m
else
var middle = length(m) / 2
for each x in m up to middle - 1
add x to left
for each x in m at and after middle
add x to right
left = mergesort(left)
right = mergesort(right)
if last(left) ≤ first(right)
append right to left
return left
result = merge(left, right)
return result
function merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
    append left to result
if length(right) > 0
    append right to result
return result
See also
the Wikipedia entry: merge sort
Note: better performance can be expected if, rather than recursing until length(m) ≤ 1, an insertion sort is used for length(m) smaller than some threshold larger than 1. However, this complicates the example code, so it is not shown here.
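The note above about switching to an insertion sort below some threshold can be sketched in Python as follows (illustrative only; the threshold value, names, and the use of heapq.merge for the merge step are assumptions made for brevity):
import heapq
def hybrid_mergesort(m, threshold=16):
    # Insertion-sort short lists; otherwise split in half, recurse, and merge.
    if len(m) <= threshold:
        out = []
        for x in m:
            i = len(out)
            while i > 0 and out[i - 1] > x:
                i -= 1
            out.insert(i, x)   # insert x into the already-sorted prefix
        return out
    middle = len(m) // 2
    return list(heapq.merge(hybrid_mergesort(m[:middle], threshold),
                            hybrid_mergesort(m[middle:], threshold)))
print(hybrid_mergesort([1, 3, 9, 5, 8, 6, 5, 1, 7, 9, 8, 6, 4, 2], threshold=4))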
| #MAXScript | MAXScript | fn mergesort arr =
(
local left = #()
local right = #()
local result = #()
if arr.count < 2 then return arr
else
(
local mid = arr.count/2
for i = 1 to mid do
(
append left arr[i]
)
for i = (mid+1) to arr.count do
(
append right arr[i]
)
left = mergesort left
right = mergesort right
if left[left.count] <= right[1] do
(
join left right
return left
)
result = _merge left right
return result
)
)
fn _merge a b =
(
local result = #()
while a.count > 0 and b.count > 0 do
(
if a[1] <= b[1] then
(
append result a[1]
a = for i in 2 to a.count collect a[i]
)
else
(
append result b[1]
b = for i in 2 to b.count collect b[i]
)
)
if a.count > 0 do
(
join result a
)
if b.count > 0 do
(
join result b
)
return result
) |
http://rosettacode.org/wiki/Stack | Stack |
Data Structure
This illustrates a data structure, a means of storing data within a program.
You may see other such structures in the Data Structures category.
A stack is a container of elements with a last-in, first-out access policy. Sometimes it is also called a LIFO.
The stack is accessed through its top.
The basic stack operations are:
push stores a new element onto the stack top;
pop returns the last pushed stack element, while removing it from the stack;
empty tests if the stack contains no elements.
Sometimes the last pushed stack element is made accessible for immutable access (for read) or mutable access (for write):
top (sometimes called peek to keep with the p theme) returns the topmost element without modifying the stack.
Stacks allow a very simple hardware implementation.
They are common in almost all processors.
In programming, stacks are also very popular for their way (LIFO) of resource management, usually memory.
Nested scopes of language objects are naturally implemented by a stack (sometimes by multiple stacks).
This is a classical way to implement local variables of a re-entrant or recursive subprogram. Stacks are also used to describe a formal computational framework.
See stack machine.
Many algorithms in pattern matching, compiler construction (e.g. recursive descent parsers), and machine learning (e.g. based on tree traversal) have a natural representation in terms of stacks.
Task
Create a stack supporting the basic operations: push, pop, empty.
See also
Array
Associative array: Creation, Iteration
Collections
Compound data type
Doubly-linked list: Definition, Element definition, Element insertion, List Traversal, Element Removal
Linked list
Queue: Definition, Usage
Set
Singly-linked list: Element definition, Element insertion, List Traversal, Element Removal
Stack
| #Wart | Wart | def (stack)
(tag 'stack nil)
mac (push! x s) :qcase `(isa stack ,s)
`(push! ,x (rep ,s))
mac (pop! s) :qcase `(isa stack ,s)
`(pop! (rep ,s))
def (empty? s) :case (isa stack s)
(empty? rep.s) |
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Quicksort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Task
Sort an array (or list) elements using the quicksort algorithm.
The elements must have a strict weak order and the index of the array can be of any discrete type.
For languages where this is not possible, sort an array of integers.
Quicksort, also known as partition-exchange sort, uses these steps.
Choose any element of the array to be the pivot.
Divide all other elements (except the pivot) into two partitions.
All elements less than the pivot must be in the first partition.
All elements greater than the pivot must be in the second partition.
Use recursion to sort both partitions.
Join the first sorted partition, the pivot, and the second sorted partition.
The best pivot creates partitions of equal length (or lengths differing by 1).
The worst pivot creates an empty partition (for example, if the pivot is the first or last element of a sorted array).
The run-time of Quicksort ranges from O(n log n) with the best pivots, to O(n2) with the worst pivots, where n is the number of elements in the array.
This is a simple quicksort algorithm, adapted from Wikipedia.
function quicksort(array)
less, equal, greater := three empty arrays
if length(array) > 1
pivot := select any element of array
for each x in array
if x < pivot then add x to less
if x = pivot then add x to equal
if x > pivot then add x to greater
quicksort(less)
quicksort(greater)
array := concatenate(less, equal, greater)
A better quicksort algorithm works in place, by swapping elements within the array, to avoid the memory allocation of more arrays.
function quicksort(array)
if length(array) > 1
pivot := select any element of array
left := first index of array
right := last index of array
while left ≤ right
while array[left] < pivot
left := left + 1
while array[right] > pivot
right := right - 1
if left ≤ right
swap array[left] with array[right]
left := left + 1
right := right - 1
quicksort(array from first index to right)
quicksort(array from left to last index)
Quicksort has a reputation as the fastest sort. Optimized variants of quicksort are common features of many languages and libraries. One often contrasts quicksort with merge sort, because both sorts have an average time of O(n log n).
"On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times." — http://perldoc.perl.org/sort.html
Quicksort is at one end of the spectrum of divide-and-conquer algorithms, with merge sort at the opposite end.
Quicksort is a conquer-then-divide algorithm, which does most of the work during the partitioning and the recursive calls. The subsequent reassembly of the sorted partitions involves trivial effort.
Merge sort is a divide-then-conquer algorithm. The partitioning happens in a trivial way, by splitting the input array in half. Most of the work happens during the recursive calls and the merge phase.
With quicksort, every element in the first partition is less than or equal to every element in the second partition. Therefore, the merge phase of quicksort is so trivial that it needs no mention!
This task has not specified whether to allocate new arrays, or sort in place. This task also has not specified how to choose the pivot element. (Common ways are to choose the first element, the middle element, or the median of three elements.) Thus there is a variety among the following implementations.
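Complementing the simple sketch given earlier, here is a minimal Python version of the in-place pseudocode above, partitioning around the middle element (illustrative only; names, the pivot choice, and test data are assumptions):
def quicksort_in_place(array, first=0, last=None):
    # Sort array[first..last] in place by swapping around a middle pivot.
    if last is None:
        last = len(array) - 1
    if first >= last:
        return array
    pivot = array[(first + last) // 2]
    left, right = first, last
    while left <= right:
        while array[left] < pivot:
            left += 1
        while array[right] > pivot:
            right -= 1
        if left <= right:
            array[left], array[right] = array[right], array[left]
            left += 1
            right -= 1
    quicksort_in_place(array, first, right)   # recurse into both partitions
    quicksort_in_place(array, left, last)
    return array
print(quicksort_in_place([1, 3, 9, 5, 8, 6, 5, 1, 7, 9, 8, 6, 4, 2]))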
| #Modula-2 | Modula-2 | (*#####################*)
DEFINITION MODULE QSORT;
(*#####################*)
FROM SYSTEM IMPORT ADDRESS;
TYPE CmpFuncPtrs = PROCEDURE(ADDRESS, ADDRESS):INTEGER;
PROCEDURE QuickSortPtrs(VAR Array:ARRAY OF ADDRESS; N:CARDINAL;
Compare:CmpFuncPtrs);
END QSORT.
|
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Insertion sort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
An O(n2) sorting algorithm which moves elements one at a time into the correct position.
The algorithm consists of inserting one element at a time into the previously sorted part of the array, moving higher ranked elements up as necessary.
To start off, the first (or smallest, or any arbitrary) element of the unsorted array is considered to be the sorted part.
Although insertion sort is an O(n2) algorithm, its simplicity, low overhead, good locality of reference and efficiency make it a good choice in two cases:
small n,
as the final finishing-off algorithm for O(n logn) algorithms such as mergesort and quicksort.
The algorithm is as follows (from Wikipedia):
function insertionSort(array A)
for i from 1 to length[A]-1 do
value := A[i]
j := i-1
while j >= 0 and A[j] > value do
A[j+1] := A[j]
j := j-1
done
A[j+1] := value
done
Writing the algorithm for integers will suffice.
| #Objeck | Objeck |
bundle Default {
class Insert {
function : Main(args : String[]) ~ Nil {
values := [9, 7, 10, 2, 9, 7, 4, 3, 10, 2, 7, 10];
InsertionSort(values);
each(i : values) {
values[i]->PrintLine();
};
}
function : InsertionSort (a : Int[]) ~ Nil {
each(i : a) {
value := a[i];
j := i - 1;
while(j >= 0 & a[j] > value) {
a[j + 1] := a[j];
j -= 1;
};
a[j + 1] := value;
};
}
}
}
|
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #Scheme | Scheme | ; swap two elements of a vector
(define (swap! v i j)
(define temp (vector-ref v i))
(vector-set! v i (vector-ref v j))
(vector-set! v j temp))
; sift element at node start into place
(define (sift-down! v start end)
(let ((child (+ (* start 2) 1)))
(cond
((> child end) 'done) ; start has no children
(else
(begin
; if child has a sibling node whose value is greater ...
(and (and (<= (+ child 1) end)
(< (vector-ref v child) (vector-ref v (+ child 1))))
; ... then we'll look at the sibling instead
(set! child (+ child 1)))
(if (< (vector-ref v start) (vector-ref v child))
(begin
(swap! v start child)
(sift-down! v child end))
'done))))))
; transform v into a binary max-heap
(define (heapify v)
(define (iter v start)
(if (>= start 0)
(begin (sift-down! v start (- (vector-length v) 1))
(iter v (- start 1)))
'done))
; start sifting with final parent node of v
(iter v (quotient (- (vector-length v) 2) 2)))
(define (heapsort v)
; swap root and end node values,
; sift the first element into place
; and recurse with new root and next-to-end node
(define (iter v end)
(if (zero? end)
'done
(begin
(swap! v 0 end)
(sift-down! v 0 (- end 1))
(iter v (- end 1)))))
(begin
(heapify v)
; start swapping with root and final node
(iter v (- (vector-length v) 1))))
; testing
(define uriah (list->vector '(3 5 7 9 0 8 1 4 2 6)))
(heapsort uriah)
uriah |
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
The merge sort is a recursive sort of order n*log(n).
It is notable for having a worst case and average complexity of O(n*log(n)), and a best case complexity of O(n) (for pre-sorted input).
The basic idea is to split the collection into smaller groups by halving it until the groups only have one element or no elements (which are both entirely sorted groups).
Then merge the groups back together so that their elements are in order.
This is how the algorithm gets its divide and conquer description.
Task
Write a function to sort a collection of integers using the merge sort.
The merge sort algorithm comes in two parts:
a sort function and
a merge function
The functions in pseudocode look like this:
function mergesort(m)
var list left, right, result
if length(m) ≤ 1
return m
else
var middle = length(m) / 2
for each x in m up to middle - 1
add x to left
for each x in m at and after middle
add x to right
left = mergesort(left)
right = mergesort(right)
if last(left) ≤ first(right)
append right to left
return left
result = merge(left, right)
return result
function merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
    append left to result
if length(right) > 0
    append right to result
return result
See also
the Wikipedia entry: merge sort
Note: better performance can be expected if, rather than recursing until length(m) ≤ 1, an insertion sort is used for length(m) smaller than some threshold larger than 1. However, this complicates the example code, so it is not shown here.
| #Mercury | Mercury |
:- module merge_sort.
:- interface.
:- import_module list.
:- type split_error ---> split_error.
:- func merge_sort(list(T)) = list(T).
:- pred merge_sort(list(T)::in, list(T)::out) is det.
:- implementation.
:- import_module int, exception.
merge_sort(U) = S :- merge_sort(U, S).
merge_sort(U, S) :- merge_sort(list.length(U), U, S).
:- pred merge_sort(int::in, list(T)::in, list(T)::out) is det.
merge_sort(L, U, S) :-
( L > 1 ->
H = L // 2,
( split(H, U, F, B) ->
merge_sort(H, F, SF),
merge_sort(L - H, B, SB),
merge_sort.merge(SF, SB, S)
; throw(split_error) )
; S = U ).
:- pred split(int::in, list(T)::in, list(T)::out, list(T)::out) is semidet.
split(N, L, S, E) :-
( N = 0 -> S = [], E = L
; N > 0, L = [H | L1], S = [H | S1],
split(N - 1, L1, S1, E) ).
:- pred merge(list(T)::in, list(T)::in, list(T)::out) is det.
merge([], [], []).
merge([X|Xs], [], [X|Xs]).
merge([], [Y|Ys], [Y|Ys]).
merge([X|Xs], [Y|Ys], M) :-
( compare(>, X, Y) ->
merge_sort.merge([X|Xs], Ys, M0),
M = [Y|M0]
; merge_sort.merge(Xs, [Y|Ys], M0),
M = [X|M0] ).
|
http://rosettacode.org/wiki/Stack | Stack |
Data Structure
This illustrates a data structure, a means of storing data within a program.
You may see other such structures in the Data Structures category.
A stack is a container of elements with a last-in, first-out access policy. Sometimes it is also called a LIFO.
The stack is accessed through its top.
The basic stack operations are:
push stores a new element onto the stack top;
pop returns the last pushed stack element, while removing it from the stack;
empty tests if the stack contains no elements.
Sometimes the last pushed stack element is made accessible for immutable access (for read) or mutable access (for write):
top (sometimes called peek to keep with the p theme) returns the topmost element without modifying the stack.
Stacks allow a very simple hardware implementation.
They are common in almost all processors.
In programming, stacks are also very popular for their way (LIFO) of resource management, usually memory.
Nested scopes of language objects are naturally implemented by a stack (sometimes by multiple stacks).
This is a classical way to implement local variables of a re-entrant or recursive subprogram. Stacks are also used to describe a formal computational framework.
See stack machine.
Many algorithms in pattern matching, compiler construction (e.g. recursive descent parsers), and machine learning (e.g. based on tree traversal) have a natural representation in terms of stacks.
Task
Create a stack supporting the basic operations: push, pop, empty.
See also
Array
Associative array: Creation, Iteration
Collections
Compound data type
Doubly-linked list: Definition, Element definition, Element insertion, List Traversal, Element Removal
Linked list
Queue: Definition, Usage
Set
Singly-linked list: Element definition, Element insertion, List Traversal, Element Removal
Stack
| #Wren | Wren | import "/seq" for Stack
var s = Stack.new()
s.push(1)
s.push(2)
System.print("Stack contains %(s.toList)")
System.print("Number of elements in stack = %(s.count)")
var item = s.pop()
System.print("'%(item)' popped from the stack")
System.print("Last element is now %(s.peek())")
s.clear()
System.print("Stack cleared")
System.print("Is stack now empty? %((s.isEmpty) ? "yes" : "no")") |
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Quicksort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Task
Sort an array (or list) elements using the quicksort algorithm.
The elements must have a strict weak order and the index of the array can be of any discrete type.
For languages where this is not possible, sort an array of integers.
Quicksort, also known as partition-exchange sort, uses these steps.
Choose any element of the array to be the pivot.
Divide all other elements (except the pivot) into two partitions.
All elements less than the pivot must be in the first partition.
All elements greater than the pivot must be in the second partition.
Use recursion to sort both partitions.
Join the first sorted partition, the pivot, and the second sorted partition.
The best pivot creates partitions of equal length (or lengths differing by 1).
The worst pivot creates an empty partition (for example, if the pivot is the first or last element of a sorted array).
The run-time of Quicksort ranges from O(n log n) with the best pivots, to O(n2) with the worst pivots, where n is the number of elements in the array.
This is a simple quicksort algorithm, adapted from Wikipedia.
function quicksort(array)
less, equal, greater := three empty arrays
if length(array) > 1
pivot := select any element of array
for each x in array
if x < pivot then add x to less
if x = pivot then add x to equal
if x > pivot then add x to greater
quicksort(less)
quicksort(greater)
array := concatenate(less, equal, greater)
A better quicksort algorithm works in place, by swapping elements within the array, to avoid the memory allocation of more arrays.
function quicksort(array)
if length(array) > 1
pivot := select any element of array
left := first index of array
right := last index of array
while left ≤ right
while array[left] < pivot
left := left + 1
while array[right] > pivot
right := right - 1
if left ≤ right
swap array[left] with array[right]
left := left + 1
right := right - 1
quicksort(array from first index to right)
quicksort(array from left to last index)
Quicksort has a reputation as the fastest sort. Optimized variants of quicksort are common features of many languages and libraries. One often contrasts quicksort with merge sort, because both sorts have an average time of O(n log n).
"On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times." — http://perldoc.perl.org/sort.html
Quicksort is at one end of the spectrum of divide-and-conquer algorithms, with merge sort at the opposite end.
Quicksort is a conquer-then-divide algorithm, which does most of the work during the partitioning and the recursive calls. The subsequent reassembly of the sorted partitions involves trivial effort.
Merge sort is a divide-then-conquer algorithm. The partitioning happens in a trivial way, by splitting the input array in half. Most of the work happens during the recursive calls and the merge phase.
With quicksort, every element in the first partition is less than or equal to every element in the second partition. Therefore, the merge phase of quicksort is so trivial that it needs no mention!
This task has not specified whether to allocate new arrays, or sort in place. This task also has not specified how to choose the pivot element. (Common ways are to choose the first element, the middle element, or the median of three elements.) Thus there is a variety among the following implementations.
| #Modula-3 | Modula-3 | GENERIC INTERFACE ArraySort(Elem);
PROCEDURE Sort(VAR a: ARRAY OF Elem.T; cmp := Elem.Compare);
END ArraySort. |
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Insertion sort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
An O(n2) sorting algorithm which moves elements one at a time into the correct position.
The algorithm consists of inserting one element at a time into the previously sorted part of the array, moving higher ranked elements up as necessary.
To start off, the first (or smallest, or any arbitrary) element of the unsorted array is considered to be the sorted part.
Although insertion sort is an O(n2) algorithm, its simplicity, low overhead, good locality of reference and efficiency make it a good choice in two cases:
small n,
as the final finishing-off algorithm for O(n logn) algorithms such as mergesort and quicksort.
The algorithm is as follows (from Wikipedia):
function insertionSort(array A)
for i from 1 to length[A]-1 do
value := A[i]
j := i-1
while j >= 0 and A[j] > value do
A[j+1] := A[j]
j := j-1
done
A[j+1] := value
done
Writing the algorithm for integers will suffice.
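A direct transcription of the pseudocode into Python could look like the following sketch, which sorts the list in place:

def insertion_sort(A):
    for i in range(1, len(A)):
        value = A[i]
        j = i - 1
        while j >= 0 and A[j] > value:
            A[j + 1] = A[j]              # shift larger elements one slot to the right
            j -= 1
        A[j + 1] = value                 # drop the saved value into the gap
    return A

print(insertion_sort([6, 8, 5, 9, 3, 2, 1, 4, 7]))   # [1, 2, 3, 4, 5, 6, 7, 8, 9]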
| #OCaml | OCaml | let rec insert lst x =
match lst with
[] -> [x]
| y :: ys when x <= y -> x :: y :: ys
| y :: ys -> y :: insert ys x
;;
let insertion_sort = List.fold_left insert [];;
insertion_sort [6;8;5;9;3;2;1;4;7];; |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
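The pseudocode maps directly onto Python; this sketch keeps the same function names and uses 0-based indexing:

def sift_down(a, start, end):
    root = start
    while 2 * root + 1 <= end:           # while the root has at least one child
        child = 2 * root + 1
        if child + 1 <= end and a[child] < a[child + 1]:
            child += 1                   # point at the larger of the two children
        if a[root] < a[child]:
            a[root], a[child] = a[child], a[root]
            root = child
        else:
            return

def heapify(a, count):
    for start in range((count - 2) // 2, -1, -1):
        sift_down(a, start, count - 1)

def heap_sort(a):
    count = len(a)
    heapify(a, count)                    # first place a in max-heap order
    for end in range(count - 1, 0, -1):
        a[0], a[end] = a[end], a[0]      # move the current maximum to its final slot
        sift_down(a, 0, end - 1)
    return a

print(heap_sort([7, 6, 5, 9, 8, 4, 3, 1, 2, 0]))   # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]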
| #Seed7 | Seed7 | const proc: downheap (inout array elemType: arr, in var integer: k, in integer: n) is func
local
var elemType: help is elemType.value;
var integer: j is 0;
begin
if k <= n div 2 then
help := arr[k];
repeat
j := 2 * k;
if j < n and arr[j] < arr[succ(j)] then
incr(j);
end if;
if help < arr[j] then
arr[k] := arr[j];
k := j;
end if;
until help >= arr[j] or k > n div 2;
arr[k] := help;
end if;
end func;
const proc: heapSort (inout array elemType: arr) is func
local
var integer: n is 0;
var integer: k is 0;
var elemType: help is elemType.value;
begin
n := length(arr);
for k range n div 2 downto 1 do
downheap(arr, k, n);
end for;
repeat
help := arr[1];
arr[1] := arr[n];
arr[n] := help;
decr(n);
downheap(arr, 1, n);
until n <= 1;
end func; |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #SequenceL | SequenceL |
import <Utilities/Sequence.sl>;
TUPLE<T> ::= (A: T, B: T);
heapSort(x(1)) :=
let
heapified := heapify(x, (size(x) - 2) / 2 + 1);
in
sortLoop(heapified, size(heapified));
heapify(x(1), i) :=
x when i <= 0 else
heapify(siftDown(x, i, size(x)), i - 1);
sortLoop(x(1), i) :=
x when i <= 2 else
sortLoop( siftDown(swap(x, 1, i), 1, i - 1), i - 1);
siftDown(x(1), start, end) :=
let
child := start * 2;
child1 := child + 1 when child + 1 <= end and x[child] < x[child + 1] else child;
in
x when child >= end else
x when x[start] >= x[child1] else
siftDown(swap(x, child1, start), child1, end);
swap(list(1), i, j) :=
let
vals := (A: list[i], B: list[j]);
in
setElementAt(setElementAt(list, i, vals.B), j, vals.A);
|
http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort | Sorting algorithms/Merge sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
The merge sort is a recursive sort of order n*log(n).
It is notable for having a worst case and average complexity of O(n*log(n)), and a best case complexity of O(n) (for pre-sorted input).
The basic idea is to split the collection into smaller groups by halving it until the groups only have one element or no elements (which are both entirely sorted groups).
Then merge the groups back together so that their elements are in order.
This is how the algorithm gets its divide and conquer description.
Task
Write a function to sort a collection of integers using the merge sort.
The merge sort algorithm comes in two parts:
a sort function and
a merge function
The functions in pseudocode look like this:
function mergesort(m)
var list left, right, result
if length(m) ≤ 1
return m
else
var middle = length(m) / 2
for each x in m up to middle - 1
add x to left
for each x in m at and after middle
add x to right
left = mergesort(left)
right = mergesort(right)
if last(left) ≤ first(right)
append right to left
return left
result = merge(left, right)
return result
function merge(left,right)
var list result
while length(left) > 0 and length(right) > 0
if first(left) ≤ first(right)
append first(left) to result
left = rest(left)
else
append first(right) to result
right = rest(right)
if length(left) > 0
append left to result
if length(right) > 0
append right to result
return result
See also
the Wikipedia entry: merge sort
Note: better performance can be expected if, rather than recursing until length(m) ≤ 1, an insertion sort is used for length(m) smaller than some threshold larger than 1. However, this complicates the example code, so it is not shown here.
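For reference, the pseudocode (with the remainder of each half appended whole after the main loop) transcribes into Python roughly as follows; this is a sketch, not one of the entries below:

def merge_sort(m):
    if len(m) <= 1:
        return m
    middle = len(m) // 2
    left = merge_sort(m[:middle])
    right = merge_sort(m[middle:])
    if left[-1] <= right[0]:             # halves already in order: skip the merge
        return left + right
    return merge(left, right)

def merge(left, right):
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])              # at most one of these two extends is non-empty
    result.extend(right[j:])
    return result

print(merge_sort([6, 8, 5, 9, 3, 2, 1, 4, 7]))   # [1, 2, 3, 4, 5, 6, 7, 8, 9]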
| #Modula-2 | Modula-2 |
DEFINITION MODULE MSIterat;
PROCEDURE IterativeMergeSort( VAR a : ARRAY OF INTEGER);
END MSIterat.
|
http://rosettacode.org/wiki/Stack | Stack |
Data Structure
This illustrates a data structure, a means of storing data within a program.
You may see other such structures in the Data Structures category.
A stack is a container of elements with last in, first out access policy. Sometimes it is also called a LIFO.
The stack is accessed through its top.
The basic stack operations are:
push stores a new element onto the stack top;
pop returns the last pushed stack element, while removing it from the stack;
empty tests if the stack contains no elements.
Sometimes the last pushed stack element is made accessible for immutable access (for read) or mutable access (for write):
top (sometimes called peek to keep with the p theme) returns the topmost element without modifying the stack.
Stacks allow a very simple hardware implementation.
They are common in almost all processors.
In programming, stacks are also very popular for their way (LIFO) of resource management, usually memory.
Nested scopes of language objects are naturally implemented by a stack (sometimes by multiple stacks).
This is a classical way to implement local variables of a re-entrant or recursive subprogram. Stacks are also used to describe a formal computational framework.
See stack machine.
Many algorithms in pattern matching, compiler construction (e.g. recursive descent parsers), and machine learning (e.g. based on tree traversal) have a natural representation in terms of stacks.
Task
Create a stack supporting the basic operations: push, pop, empty.
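A minimal Python sketch of such a stack, built on a list (class and method names are illustrative):

class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)         # store a new element on top

    def pop(self):
        if self.empty():
            raise IndexError("pop from empty stack")
        return self._items.pop()         # return and remove the top element

    def top(self):
        return self._items[-1]           # peek without removing

    def empty(self):
        return len(self._items) == 0

s = Stack()
s.push(1)
s.push(2)
print(s.pop(), s.empty())                # 2 False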
See also
Array
Associative array: Creation, Iteration
Collections
Compound data type
Doubly-linked list: Definition, Element definition, Element insertion, List Traversal, Element Removal
Linked list
Queue: Definition, Usage
Set
Singly-linked list: Element definition, Element insertion, List Traversal, Element Removal
Stack
| #X86_Assembly | X86 Assembly |
; x86_64 linux nasm
struc Stack
maxSize: resb 8
currentSize: resb 8
contents:
endStruc
section .data
soError: db "Stack Overflow Exception", 10
seError: db "Stack Empty Error", 10
section .text
createStack:
; IN: max number of elements (rdi)
; OUT: pointer to new stack (rax)
push rdi
xor rdx, rdx
mov rbx, 8
mul rbx
mov rcx, rax
mov rax, 12
mov rdi, 0
syscall
push rax
mov rdi, rax
add rdi, rcx
mov rax, 12
syscall
pop rax
pop rbx
mov qword [rax + maxSize], rbx
mov qword [rax + currentSize], 0
ret
push:
; IN: stack to operate on (stack argument), element to push (rdi)
; OUT: void
mov rax, qword [rsp + 8]
mov rbx, qword [rax + currentSize]
cmp rbx, qword [rax + maxSize]
je stackOverflow
lea rsi, [rax + contents + 8*rbx]
mov qword [rsi], rdi
add qword [rax + currentSize], 1
ret
pop:
; pop
; IN: stack to operate on (stack argument)
; OUT: element from stack top
mov rax, qword [rsp + 8]
mov rbx, qword [rax + currentSize]
cmp rbx, 0
je stackEmpty
sub rbx, 1
lea rsi, [rax + contents + 8*rbx]
mov qword [rax + currentSize], rbx
mov rax, qword [rsi]
ret
; stack operation exceptions
stackOverflow:
mov rsi, soError
mov rdx, 25
jmp errExit
stackEmpty:
mov rsi, seError
mov rdx, 18
errExit:
mov rax, 1
mov rdi, 1
syscall
mov rax, 60
mov rdi, 1
syscall
|
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort | Sorting algorithms/Quicksort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Quicksort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Task
Sort an array (or list) elements using the quicksort algorithm.
The elements must have a strict weak order and the index of the array can be of any discrete type.
For languages where this is not possible, sort an array of integers.
Quicksort, also known as partition-exchange sort, uses these steps.
Choose any element of the array to be the pivot.
Divide all other elements (except the pivot) into two partitions.
All elements less than the pivot must be in the first partition.
All elements greater than the pivot must be in the second partition.
Use recursion to sort both partitions.
Join the first sorted partition, the pivot, and the second sorted partition.
The best pivot creates partitions of equal length (or lengths differing by 1).
The worst pivot creates an empty partition (for example, if the pivot is the first or last element of a sorted array).
The run-time of Quicksort ranges from O(n log n) with the best pivots, to O(n2) with the worst pivots, where n is the number of elements in the array.
This is a simple quicksort algorithm, adapted from Wikipedia.
function quicksort(array)
less, equal, greater := three empty arrays
if length(array) > 1
pivot := select any element of array
for each x in array
if x < pivot then add x to less
if x = pivot then add x to equal
if x > pivot then add x to greater
quicksort(less)
quicksort(greater)
array := concatenate(less, equal, greater)
A better quicksort algorithm works in place, by swapping elements within the array, to avoid the memory allocation of more arrays.
function quicksort(array)
if length(array) > 1
pivot := select any element of array
left := first index of array
right := last index of array
while left ≤ right
while array[left] < pivot
left := left + 1
while array[right] > pivot
right := right - 1
if left ≤ right
swap array[left] with array[right]
left := left + 1
right := right - 1
quicksort(array from first index to right)
quicksort(array from left to last index)
Quicksort has a reputation as the fastest sort. Optimized variants of quicksort are common features of many languages and libraries. One often contrasts quicksort with merge sort, because both sorts have an average time of O(n log n).
"On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct values, repeated many times." — http://perldoc.perl.org/sort.html
Quicksort is at one end of the spectrum of divide-and-conquer algorithms, with merge sort at the opposite end.
Quicksort is a conquer-then-divide algorithm, which does most of the work during the partitioning and the recursive calls. The subsequent reassembly of the sorted partitions involves trivial effort.
Merge sort is a divide-then-conquer algorithm. The partitioning happens in a trivial way, by splitting the input array in half. Most of the work happens during the recursive calls and the merge phase.
With quicksort, every element in the first partition is less than or equal to every element in the second partition. Therefore, the merge phase of quicksort is so trivial that it needs no mention!
This task has not specified whether to allocate new arrays, or sort in place. This task also has not specified how to choose the pivot element. (Common ways are to choose the first element, the middle element, or the median of three elements.) Thus there is some variety among the following implementations.
| #Mond | Mond | fun quicksort( arr, cmp )
{
if( arr.length() < 2 )
return arr;
if( !cmp )
cmp = ( a, b ) -> a - b;
var a = [ ], b = [ ];
var pivot = arr[0];
var len = arr.length();
for( var i = 1; i < len; ++i )
{
var item = arr[i];
if( cmp( item, pivot ) < cmp( pivot, item ) )
a.add( item );
else
b.add( item );
}
a = quicksort( a, cmp );
b = quicksort( b, cmp );
a.add( pivot );
foreach( var item in b )
a.add( item );
return a;
} |
http://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort | Sorting algorithms/Insertion sort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Insertion sort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
An O(n2) sorting algorithm which moves elements one at a time into the correct position.
The algorithm consists of inserting one element at a time into the previously sorted part of the array, moving higher ranked elements up as necessary.
To start off, the first (or smallest, or any arbitrary) element of the unsorted array is considered to be the sorted part.
Although insertion sort is an O(n2) algorithm, its simplicity, low overhead, good locality of reference and efficiency make it a good choice in two cases:
small n,
as the final finishing-off algorithm for O(n logn) algorithms such as mergesort and quicksort.
The algorithm is as follows (from Wikipedia):
function insertionSort(array A)
for i from 1 to length[A]-1 do
value := A[i]
j := i-1
while j >= 0 and A[j] > value do
A[j+1] := A[j]
j := j-1
done
A[j+1] := value
done
Writing the algorithm for integers will suffice.
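The pseudocode above repeats the earlier insertion sort entry verbatim. As a variation (a sketch, not one of the entries below), the number of comparisons can be reduced to O(n log n) by locating each insertion point with a binary search, here via Python's standard bisect module; element moves remain O(n2).

import bisect

def binary_insertion_sort(a):
    result = []
    for x in a:
        bisect.insort_right(result, x)   # binary-search the slot, then insert
    return result

print(binary_insertion_sort([6, 8, 5, 9, 3, 2, 1, 4, 7]))   # [1, 2, 3, 4, 5, 6, 7, 8, 9]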
| #Oforth | Oforth | : insertionSort(a)
| l i j v |
a asListBuffer ->l
2 l size for: i [
l at(i) ->v
i 1- ->j
while(j) [
l at(j) dup v <= ifTrue: [ drop break ]
j 1+ swap l put
j 1- ->j
]
l put(j 1 +, v)
]
l ; |
http://rosettacode.org/wiki/Sorting_algorithms/Heapsort | Sorting algorithms/Heapsort |
Sorting Algorithm
This is a sorting algorithm. It may be applied to a set of data in order to sort it.
For comparing various sorts, see compare sorts.
For other sorting algorithms, see sorting algorithms, or:
O(n logn) sorts
Heap sort |
Merge sort |
Patience sort |
Quick sort
O(n log2n) sorts
Shell Sort
O(n2) sorts
Bubble sort |
Cocktail sort |
Cocktail sort with shifting bounds |
Comb sort |
Cycle sort |
Gnome sort |
Insertion sort |
Selection sort |
Strand sort
other sorts
Bead sort |
Bogo sort |
Common sorted list |
Composite structures sort |
Custom comparator sort |
Counting sort |
Disjoint sublist sort |
External sort |
Jort sort |
Lexicographical sort |
Natural sorting |
Order by pair comparisons |
Order disjoint list items |
Order two numerical lists |
Object identifier (OID) sort |
Pancake sort |
Quickselect |
Permutation sort |
Radix sort |
Ranking methods |
Remove duplicate elements |
Sleep sort |
Stooge sort |
[Sort letters of a string] |
Three variable sort |
Topological sort |
Tree sort
This page uses content from Wikipedia. The original article was at Heapsort. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
Heapsort is an in-place sorting algorithm with worst case and average complexity of O(n logn).
The basic idea is to turn the array into a binary heap structure, which has the property that it allows efficient retrieval and removal of the maximal element.
We repeatedly "remove" the maximal element from the heap, thus building the sorted list from back to front.
A heap sort requires random access, so can only be used on an array-like data structure.
Pseudocode:
function heapSort(a, count) is
input: an unordered array a of length count
(first place a in max-heap order)
heapify(a, count)
end := count - 1
while end > 0 do
(swap the root(maximum value) of the heap with the
last element of the heap)
swap(a[end], a[0])
(decrement the size of the heap so that the previous
max value will stay in its proper place)
end := end - 1
(put the heap back in max-heap order)
siftDown(a, 0, end)
function heapify(a,count) is
(start is assigned the index in a of the last parent node)
start := (count - 2) / 2
while start ≥ 0 do
(sift down the node at index start to the proper place
such that all nodes below the start index are in heap
order)
siftDown(a, start, count-1)
start := start - 1
(after sifting down the root all nodes/elements are in heap order)
function siftDown(a, start, end) is
(end represents the limit of how far down the heap to sift)
root := start
while root * 2 + 1 ≤ end do (While the root has at least one child)
child := root * 2 + 1 (root*2+1 points to the left child)
(If the child has a sibling and the child's value is less than its sibling's...)
if child + 1 ≤ end and a[child] < a[child + 1] then
child := child + 1 (... then point to the right child instead)
if a[root] < a[child] then (out of max-heap order)
swap(a[root], a[child])
root := child (repeat to continue sifting down the child now)
else
return
Write a function to sort a collection of integers using heapsort.
| #Sidef | Sidef | func sift_down(a, start, end) {
var root = start;
while ((2*root + 1) <= end) {
var child = (2*root + 1);
if ((child+1 <= end) && (a[child] < a[child + 1])) {
child += 1;
}
if (a[root] < a[child]) {
a[child, root] = a[root, child];
root = child;
} else {
return;
}
}
}
func heapify(a, count) {
var start = ((count - 2) / 2);
while (start >= 0) {
sift_down(a, start, count-1);
start -= 1;
}
}
func heap_sort(a, count) {
heapify(a, count);
var end = (count - 1);
while (end > 0) {
a[0, end] = a[end, 0];
end -= 1;
sift_down(a, 0, end)
}
return a
}
var arr = (1..10 -> shuffle); # creates a shuffled array
say arr; # prints the unsorted array
heap_sort(arr, arr.len); # sorts the array in-place
say arr; # prints the sorted array |