56435250
https://en.wikipedia.org/wiki/Robert%20W.%20Doran
Robert W. Doran
Robert William Doran HFNZCS (5 November 1944 – 13 October 2018) was a New Zealand-based computer scientist and historian of computing. He was Professor Emeritus of Computer Science at the University of Auckland, New Zealand. Robert W. Doran studied at the University of Canterbury (New Zealand) and completed a master's degree in computer science at Stanford University (California, United States) in 1967. He taught at City University (London, England) and Massey University (Palmerston North, New Zealand). He first worked with computers in 1963. He was a Principal Computer Architect at Amdahl Corporation (Sunnyvale, California) during 1976–1982. He joined the Department of Computer Science at the University of Auckland in 1982 and was Head of Department. He maintained computing history displays in the department, especially of totalisators. The history displays are now part of The Bob Doran Museum of Computing. Doran's research interests included computer architecture, parallel algorithms, and computer programming. He was also interested in the history of computing. In 2017, he contributed to The Turing Guide. Robert Doran was made an Honorary Fellow of the New Zealand Computer Society, now the Institute of IT Professionals. Bob Doran died on 13 October 2018 at home in Auckland. Selected publications Carpenter, B. E. & Doran, R. W. (1977). The other Turing machine. The Computer Journal, 20(3):269–279. Doran, R. W. (1979). Computer Architecture: A Structured Approach. Academic Press. Doran, R. W. & Thomas, L. K. (1980). Variants of the software solution to mutual exclusion. Information Processing Letters, 10(4/5):206–208, July. Carpenter, B. E. & Doran, R. W. (1986). A. M. Turing's ACE Report of 1946 and Other Papers, Vol. 10, Charles Babbage Institute Reprint Series for the History of Computing, MIT Press. Doran, R. W. (1988). Variants of an improved carry look-ahead adder. IEEE Transactions on Computers, 37(9):1110–1113. Doran, R. W. (1988). Amdahl Multiple-Domain Architecture. Computer, 21(10):20–28. Thomborson, C. D. & Doran, R. W. (2005). Incredible Codes. In A. Brook (ed.), Incredible Science: Explore the Wonderful World of Science (pp. 16–17). New Zealand: Penguin Books. Doran, R. W. (1995). Special cases of division. Journal of Universal Computer Science, 1(3):176–194. Doran, R. W. (2005). Computer architecture and the ACE computers. In B. J. Copeland (ed.), Alan Turing's Automatic Computing Engine (pp. 193–206). Oxford: Oxford University Press. Doran, R. W. (2007). The Gray code. Journal of Universal Computer Science, 13(11):1573–1597. Doran, R. W. (2007). The First Automatic Totalisator. The Rutherford Journal, 2. Carpenter, B. E. & Doran, R. W. (2014). John Womersley: Applied Mathematician and Pioneer of Modern Computing. IEEE Annals of the History of Computing, 36(2):60–70. Doran is also a listed inventor on the following US patents assigned to Amdahl Corporation: 4503512 (1985), 4967342 (1990), 5109522 (1992). References External links Emeritus Professor Bob William Doran home page via Archive.org Bob Doran personal home page via Archive.org 1944 births 2018 deaths People from Auckland University of Canterbury alumni Stanford University alumni New Zealand computer scientists Historians of technology Academics of City, University of London Massey University faculty University of Auckland faculty
36362946
https://en.wikipedia.org/wiki/1973%20USC%20Trojans%20baseball%20team
1973 USC Trojans baseball team
The 1973 USC Trojans baseball team represented the University of Southern California in the 1973 NCAA University Division baseball season. The team was coached Rod Dedeaux in his 32nd season. The Trojans won the College World Series, defeating the Arizona State Sun Devils in the championship game, winning their fourth of five consecutive national championships, and the fifth in six years. Roster Schedule ! style="background:#FFCC00;color:#990000;"| Regular Season |- |- align="center" bgcolor="ddffdd" | February 17 || || 6โ€“1 || 1โ€“0 || โ€“ |- align="center" bgcolor="ddffdd" | February 17 || San Diego State || 5โ€“3 || 2โ€“0 || โ€“ |- align="center" bgcolor="#ddffdd" | February 20 || at || 3โ€“1 || 3โ€“0 || โ€“ |- align="center" bgcolor="ddffdd" | February 23 || || 5โ€“2 || 4โ€“0 || โ€“ |- align="center" bgcolor="#ddffdd" | February 24 || || 5โ€“4 || 5โ€“0 || โ€“ |- align="center" bgcolor="#ddffdd" | February 24 || UC Santa Barbara || 10โ€“6 || 6โ€“0 || โ€“ |- |- align="center" bgcolor="ddffdd" | March 2 || at || 5โ€“0 || 7โ€“0 || โ€“ |- align="center" bgcolor="ddffdd" | March 3 || at Fresno State || 6โ€“2 || 8โ€“0 || โ€“ |- align="center" bgcolor="ddffdd" | March 3 || at Fresno State || 2โ€“0 || 9โ€“0 || โ€“ |- align="center" bgcolor="ddffdd" | March 7 || || 10โ€“2 || 10โ€“0 || โ€“ |- align="center" bgcolor="#ddffdd" | March 10 || || 6โ€“3 || 11โ€“0 || 1โ€“0 |- align="center" bgcolor="#ddffdd" | March 10 || UCLA || 10โ€“1 || 12โ€“0 || 2โ€“0 |- align="center" bgcolor="ffdddd" | March 15 || at Arizona State || 2โ€“4 || 12โ€“1 || โ€“ |- align="center" bgcolor="ffdddd" | March 16 || at Arizona State || 4โ€“8 || 12โ€“2 || โ€“ |- align="center" bgcolor="ffdddd" | March 17 || at Arizona State || 5โ€“12 || 12โ€“3 || โ€“ |- align="center" bgcolor="ddffdd" | March 21 || || 13โ€“12 || 13โ€“3 || โ€“ |- align="center" bgcolor="ddffdd" | March 23 || || 2โ€“1 || 14โ€“3 || โ€“ |- align="center" bgcolor="ddffdd" | March 26 || vs. Arizona State || 3โ€“1 || 15โ€“3 || โ€“ |- align="center" bgcolor="ddffdd" | March 27 || vs. || 4โ€“0 || 16โ€“3 || โ€“ |- align="center" bgcolor="ffdddd" | March 27 || vs. || 4โ€“5 || 16โ€“4 || โ€“ |- align="center" bgcolor="ddffdd" | March 29 || vs. || 9โ€“2 || 17โ€“4 || โ€“ |- align="center" bgcolor="ddffdd" | March 30 || vs. || 16โ€“4 || 18โ€“4 || โ€“ |- align="center" bgcolor="#ddffdd" | March 30 || at || 7โ€“4 || 19โ€“4 || โ€“ |- align="center" bgcolor="ddffdd" | March 31 || vs. || 8โ€“2 || 20โ€“4 || โ€“ |- align="center" bgcolor="ddffdd" | March 31 || vs. 
Arizona State || 2โ€“0 || 21โ€“4 || โ€“ |- |- align="center" bgcolor="ddffdd" | April 1 || at || 9โ€“2 || 22โ€“4 || โ€“ |- align="center" bgcolor="ffdddd" | April 3 || at || 3โ€“15 || 22โ€“5 || โ€“ |- align="center" bgcolor="ddffdd" | April 6 || || 14โ€“0 || 23โ€“5 || 3โ€“0 |- align="center" bgcolor="ddffdd" | April 7 || California || 2โ€“1 || 24โ€“5 || 4โ€“0 |- align="center" bgcolor="ddffdd" | April 7 || California || 8โ€“4 || 25โ€“5 || 5โ€“0 |- align="center" bgcolor="ddffdd" | April 9 || || 14โ€“4 || 26โ€“5 || โ€“ |- align="center" bgcolor="ffdddd" | April 10 || UC Irvine || 0โ€“5 || 26โ€“6 || โ€“ |- align="center" bgcolor="ddffdd" | April 11 || || 14โ€“3 || 27โ€“6 || โ€“ |- align="center" bgcolor="ddffdd" | April 13 || at Stanford || 2โ€“1 || 28โ€“6 || 6โ€“0 |- align="center" bgcolor="ffdddd" | April 14 || at Stanford || 0โ€“1 || 28โ€“7 || 6โ€“1 |- align="center" bgcolor="ddffdd" | April 14 || at Stanford || 3โ€“0 || 29โ€“7 || 7โ€“1 |- align="center" bgcolor="ddffdd" | April 17 || at Hawaii || 10โ€“6 || 30โ€“7 || โ€“ |- align="center" bgcolor="ddffdd" | April 24 || at Cal State Los Angeles || 9โ€“4 || 31โ€“7 || โ€“ |- align="center" bgcolor="ffdddd" | April 25 || at Chapman || 5โ€“6 || 31โ€“8 || โ€“ |- align="center" bgcolor="ddffdd" | April 27 || Stanford || 12โ€“8 || 32โ€“8 || 8โ€“1 |- align="center" bgcolor="ddffdd" | April 28 || Stanford || 1โ€“0 || 33โ€“8 || 9โ€“1 |- align="center" bgcolor="ffdddd" | April 28 || Stanford || 0โ€“3 || 33โ€“9 || 9โ€“2 |- |- align="center" bgcolor="ddffdd" | May 1 || at Cal Poly Pomona || 18โ€“13 || 34โ€“9 || โ€“ |- align="center" bgcolor="ffdddd" | May 4 || at California || 4โ€“5 || 34โ€“10 || 9โ€“3 |- align="center" bgcolor="ddffdd" | May 5 || at California || 8โ€“2 || 35โ€“10 || 10โ€“3 |- align="center" bgcolor="ddffdd" | May 5 || at California || 2โ€“0 || 36โ€“10 || 11โ€“3 |- align="center" bgcolor="ddffdd" | May 8 || || 10โ€“3 || 37โ€“10 || โ€“ |- align="center" bgcolor="ffdddd" | May 10 || at UCLA || 5โ€“6 || 37โ€“11 || 11โ€“4 |- align="center" bgcolor="ddffdd" | May 11 || UCLA || 6โ€“2 || 38โ€“11 || 12โ€“4 |- align="center" bgcolor="ddffdd" | May 12 || at UCLA || 8โ€“4 || 39โ€“11 || 13โ€“4 |- align="center" bgcolor="ddffdd" | May 12 || at UCLA || 6โ€“4 || 40โ€“11 || 14โ€“4 |- |- ! style="background:#FFCC00;color:#990000;"| Postโ€“Season |- |- |- align="center" bgcolor="ddffdd" | May 18 || vs. Washington State || Buck Bailey Field || 13โ€“4 || 41โ€“11 |- align="center" bgcolor="ddffdd" | May 19 || vs. Washington State || Buck Bailey Field || 11โ€“9 || 42โ€“11 |- |- align="center" bgcolor="ddffdd" | May 26 || vs. Loyola Marymount || Bovard Field || 9โ€“8 || 43โ€“11 |- align="center" bgcolor="ddffdd" | May 27 || vs. Loyola Marymount || Bovard Field || 2โ€“1 || 44โ€“11 |- align="center" bgcolor="ddffdd" | June 1 || vs. Cal State Los Angeles || Bovard Field || 4โ€“3 || 45โ€“11 |- align="center" bgcolor="ddffdd" | June 2 || vs. Cal State Los Angeles || Bovard Field || 13โ€“6 || 46โ€“11 |- |- align="center" bgcolor="ddffdd" | June 9 || vs. || Rosenblatt Stadium || 4โ€“1 || 47โ€“11 |- align="center" bgcolor="ddffdd" | June 10 || vs. Texas || Rosenblatt Stadium || 4โ€“1 || 48โ€“11 |- align="center" bgcolor="ddffdd" | June 11 || vs. Arizona State || Rosenblatt Stadium || 3โ€“1 || 49โ€“11 |- align="center" bgcolor="ddffdd" | June 12 || vs. Minnesota || Rosenblatt Stadium || 8โ€“7 || 50โ€“11 |- align="center" bgcolor="ddffdd" | June 13 || vs. 
Arizona State || Rosenblatt Stadium || 4โ€“3 || 51โ€“11 |- Awards and honors Rich Dauer All-Pacific-8 First Team Ken Huizenga College World Series All-Tournament Team Ed Putnam All-Pacific-8 First Team Randy Scarbery College World Series All-Tournament Team All-America First Team All-Pacific-8 First Team Roy Smalley College World Series All-Tournament Team All-America First Team All-Pacific-8 First Team Trojans in the 1973 MLB Draft The following members of the USC baseball program were drafted in the 1973 Major League Baseball Draft. June regular draft June secondary draft References USC USC Trojans baseball seasons Pac-12 Conference baseball champion seasons College World Series seasons NCAA Division I Baseball Championship seasons USC Trojans
1802669
https://en.wikipedia.org/wiki/Ken%20Musgrave
Ken Musgrave
Forest Kenton Musgrave (16 September 1955 – 14 December 2018) was a professor at The George Washington University in the USA. A computer artist who worked with fractal images, he worked on the Bryce landscape software and later, as CEO/CTO of Pandromeda, Inc., developed and designed the innovative MojoWorld software. Education He obtained his Ph.D. in computer science from Yale University in 1993, writing his thesis on Methods for Realistic Landscape Imaging. He was referred to by fractal pioneer Benoît Mandelbrot as being "the first true fractal-based artist". Software work Musgrave designed the initial fractal-based programs on which Bryce was based, and later worked on designing the Deep Materials Lab component of Bryce. His work was featured in an article in the January 1996 Scientific American (Gibbs, "Playing Slartibartfast with Fractals") which discussed fractal curves. The article also described software he had designed which would generate entire Earth-size planets using semi-random procedural 3D, and then allow a user to fly or walk about that world, exploring mountains or forests, and choosing a scene to render to an image. The software eventually became a commercial release called MojoWorld, which went through three releases to end with version 3.1.1. Cinema work Musgrave received screen credits for digital effects in the films Titanic, Dante's Peak and Lawnmower Man. His MojoWorld software was used to procedurally generate background mattes and terrains on big-budget movies such as The Day After Tomorrow. ZeniMax Media Musgrave was technical advisor at ZeniMax Media, parent company of videogame publisher Bethesda Softworks, at the time of famous releases such as the RPG Morrowind. Publications Texturing and Modeling: A Procedural Approach – F. Kenton Musgrave et al., 1998 See also Fractal landscape References External links Ken Musgrave's website Pandromeda's website Methods for Realistic Landscape Imaging – doctoral dissertation 1955 births American computer scientists Yale University alumni 2018 deaths
3821
https://en.wikipedia.org/wiki/Binary-coded%20decimal
Binary-coded decimal
In computing and electronic systems, binary-coded decimal (BCD) is a class of binary encodings of decimal numbers where each digit is represented by a fixed number of bits, usually four or eight. Sometimes, special bit patterns are used for a sign or other indications (e.g. error or overflow). In byte-oriented systems (i.e. most modern computers), the term unpacked BCD usually implies a full byte for each digit (often including a sign), whereas packed BCD typically encodes two digits within a single byte by taking advantage of the fact that four bits are enough to represent the range 0 to 9. The precise 4-bit encoding, however, may vary for technical reasons (e.g. Excess-3). The ten states representing a BCD digit are sometimes called tetrades (the nibble typically needed to hold them is also known as a tetrade), while the unused, don't-care states are named pseudo-tetrades, pseudo-decimals or pseudo-decimal digits. BCD's main virtue, in comparison to binary positional systems, is its more accurate representation and rounding of decimal quantities, as well as its ease of conversion into conventional human-readable representations. Its principal drawbacks are a slight increase in the complexity of the circuits needed to implement basic arithmetic as well as slightly less dense storage. BCD was used in many early decimal computers, and is implemented in the instruction set of machines such as the IBM System/360 series and its descendants, Digital Equipment Corporation's VAX, the Burroughs B1700, and the Motorola 68000-series processors. BCD per se is not as widely used as in the past, and is unavailable or limited in newer instruction sets (e.g., ARM; x86 in long mode). However, decimal fixed-point and floating-point formats are still important and continue to be used in financial, commercial, and industrial computing, where the subtle conversion and fractional rounding errors that are inherent in binary floating-point representations cannot be tolerated. Background BCD takes advantage of the fact that any one decimal numeral can be represented by a four-bit pattern. The most obvious way of encoding digits is Natural BCD (NBCD), where each decimal digit is represented by its corresponding four-bit binary value, as shown in the following table. This is also called "8421" encoding. This scheme can also be referred to as Simple Binary-Coded Decimal (SBCD) or BCD 8421, and is the most common encoding. Others include the so-called "4221" and "7421" encodings – named after the weighting used for the bits – and "Excess-3". For example, the BCD digit 6 is 0110 in 8421 notation, 1100 or 1010 in 4221 (two encodings are possible), 0110 in 7421, and 1001 (6 + 3) in Excess-3. The following table represents decimal digits from 0 to 9 in various BCD encoding systems. In the headers, the "8421" indicates the weight of each bit. In the fifth column ("BCD 84−2−1"), two of the weights are negative. Both ASCII and EBCDIC character codes for the digits, which are examples of zoned BCD, are also shown. As most computers deal with data in 8-bit bytes, it is possible to use one of the following methods to encode a BCD number: Unpacked: Each decimal digit is encoded into one byte, with four bits representing the number and the remaining bits having no significance. Packed: Two decimal digits are encoded into a single byte, with one digit in the least significant nibble (bits 0 through 3) and the other numeral in the most significant nibble (bits 4 through 7).
As an example, encoding the decimal number 91 using unpacked BCD results in the following binary pattern of two bytes:
Decimal:         9         1
Binary :  0000 1001 0000 0001
In packed BCD, the same number would fit into a single byte:
Decimal:    9    1
Binary : 1001 0001
Hence the numerical range for one unpacked BCD byte is zero through nine inclusive, whereas the range for one packed BCD byte is zero through ninety-nine inclusive. To represent numbers larger than the range of a single byte any number of contiguous bytes may be used. For example, to represent the decimal number 12345 in packed BCD, using big-endian format, a program would encode as follows:
Decimal:    0    1    2    3    4    5
Binary : 0000 0001 0010 0011 0100 0101
Here, the most significant nibble of the most significant byte has been encoded as zero, so the number is stored as 012345 (but formatting routines might replace or remove leading zeros). Packed BCD is more efficient in storage usage than unpacked BCD; encoding the same number (with the leading zero) in unpacked format would consume twice the storage. Shifting and masking operations are used to pack or unpack a packed BCD digit. Other bitwise operations are used to convert a numeral to its equivalent bit pattern or reverse the process. Packed BCD In packed BCD (or simply packed decimal), each of the two nibbles of each byte represents a decimal digit. Packed BCD has been in use since at least the 1960s and is implemented in all IBM mainframe hardware since then. Most implementations are big endian, i.e. with the more significant digit in the upper half of each byte, and with the leftmost byte (residing at the lowest memory address) containing the most significant digits of the packed decimal value. The lower nibble of the rightmost byte is usually used as the sign flag, although some unsigned representations lack a sign flag. As an example, a 4-byte value consists of 8 nibbles, wherein the upper 7 nibbles store the digits of a 7-digit decimal value, and the lowest nibble indicates the sign of the decimal integer value. Standard sign values are 1100 (hex C) for positive (+) and 1101 (D) for negative (−). This convention comes from the zone field for EBCDIC characters and the signed overpunch representation. Other allowed signs are 1010 (A) and 1110 (E) for positive and 1011 (B) for negative. IBM System/360 processors will use the 1010 (A) and 1011 (B) signs if the A bit is set in the PSW, for the ASCII-8 standard that never passed. Most implementations also provide unsigned BCD values with a sign nibble of 1111 (F). ILE RPG uses 1111 (F) for positive and 1101 (D) for negative. These match the EBCDIC zone for digits without a sign overpunch. In packed BCD, the number 127 is represented by 0001 0010 0111 1100 (127C) and −127 is represented by 0001 0010 0111 1101 (127D). Burroughs systems used 1101 (D) for negative, and any other value is considered a positive sign value (the processors will normalize a positive sign to 1100 (C)). No matter how many bytes wide a word is, there is always an even number of nibbles because each byte has two of them. Therefore, a word of n bytes can contain up to (2n)−1 decimal digits, which is always an odd number of digits. A decimal number with d digits requires (d+1) bytes of storage space. For example, a 4-byte (32-bit) word can hold seven decimal digits plus a sign and can represent values ranging from ±9,999,999.
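The packing and unpacking just described are straightforward to express in code. The following is a minimal C sketch, not taken from the article or its sources; the function names and the choice to pad an odd digit count with a leading zero nibble are illustrative assumptions. It packs an ASCII digit string into big-endian packed BCD (digits only, no sign nibble) and unpacks it again:
#include <stdio.h>
#include <string.h>

/* Pack an ASCII digit string into big-endian packed BCD.
   An odd digit count is padded with a leading zero nibble,
   as in the "012345" example above. Returns the byte count. */
size_t bcd_pack(const char *digits, unsigned char *out)
{
    size_t n = strlen(digits);
    size_t bytes = (n + 1) / 2;
    size_t i = 0, d = 0;

    if (n % 2) {                 /* odd count: high nibble of first byte is 0 */
        out[i++] = (unsigned char)(digits[d++] - '0');
    }
    for (; i < bytes; i++, d += 2) {
        out[i] = (unsigned char)(((digits[d] - '0') << 4) | (digits[d + 1] - '0'));
    }
    return bytes;
}

/* Unpack big-endian packed BCD back into an ASCII digit string. */
void bcd_unpack(const unsigned char *in, size_t bytes, char *digits)
{
    size_t d = 0;
    for (size_t i = 0; i < bytes; i++) {
        digits[d++] = (char)('0' + (in[i] >> 4));
        digits[d++] = (char)('0' + (in[i] & 0x0F));
    }
    digits[d] = '\0';
}

int main(void)
{
    unsigned char buf[8];
    char text[17];
    size_t n = bcd_pack("12345", buf);   /* stored as 0x01 0x23 0x45 */
    bcd_unpack(buf, n, text);
    printf("%s\n", text);                /* prints 012345 */
    return 0;
}
A signed packed value would instead reserve the final nibble for one of the sign codes described above (hex C, D, or F).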
Thus the number −1,234,567 is 7 digits wide and is encoded as:
0001 0010 0011 0100 0101 0110 0111 1101
   1    2    3    4    5    6    7    −
Like character strings, the first byte of the packed decimal – the one with the most significant two digits – is usually stored in the lowest address in memory, independent of the endianness of the machine. In contrast, a 4-byte binary two's complement integer can represent values from −2,147,483,648 to +2,147,483,647. While packed BCD does not make optimal use of storage (using about 20% more memory than binary notation to store the same numbers), conversion to ASCII, EBCDIC, or the various encodings of Unicode is made trivial, as no arithmetic operations are required. The extra storage requirements are usually offset by the need for the accuracy and compatibility with calculator or hand calculation that fixed-point decimal arithmetic provides. Denser packings of BCD exist which avoid the storage penalty and also need no arithmetic operations for common conversions. Packed BCD is supported in the COBOL programming language as the "COMPUTATIONAL-3" (an IBM extension adopted by many other compiler vendors) or "PACKED-DECIMAL" (part of the 1985 COBOL standard) data type. It is supported in PL/I as "FIXED DECIMAL". Beside the IBM System/360 and later compatible mainframes, packed BCD is implemented in the native instruction set of the original VAX processors from Digital Equipment Corporation and some models of the SDS Sigma series mainframes, and is the native format for the Burroughs Corporation Medium Systems line of mainframes (descended from the 1950s Electrodata 200 series). Ten's complement representations for negative numbers offer an alternative approach to encoding the sign of packed (and other) BCD numbers. In this case, positive numbers always have a most significant digit between 0 and 4 (inclusive), while negative numbers are represented by the 10's complement of the corresponding positive number. As a result, this system allows for 32-bit packed BCD numbers to range from −50,000,000 to +49,999,999, and −1 is represented as 99999999. (As with two's complement binary numbers, the range is not symmetric about zero.) Fixed-point packed decimal Fixed-point decimal numbers are supported by some programming languages (such as COBOL and PL/I). These languages allow the programmer to specify an implicit decimal point in front of one of the digits. For example, a packed decimal value encoded with the bytes 12 34 56 7C represents the fixed-point value +1,234.567 when the implied decimal point is located between the 4th and 5th digits:
12 34 56 7C
12 34.56 7+
The decimal point is not actually stored in memory, as the packed BCD storage format does not provide for it. Its location is simply known to the compiler, and the generated code acts accordingly for the various arithmetic operations. Higher-density encodings If a decimal digit requires four bits, then three decimal digits require 12 bits. However, since 2^10 (1,024) is greater than 10^3 (1,000), if three decimal digits are encoded together, only 10 bits are needed. Two such encodings are Chen–Ho encoding and densely packed decimal (DPD). The latter has the advantage that subsets of the encoding encode two digits in the optimal seven bits and one digit in four bits, as in regular BCD. Zoned decimal Some implementations, for example IBM mainframe systems, support zoned decimal numeric representations. Each decimal digit is stored in one byte, with the lower four bits encoding the digit in BCD form.
The upper four bits, called the "zone" bits, are usually set to a fixed value so that the byte holds a character value corresponding to the digit. EBCDIC systems use a zone value of 1111 (hex F); this yields bytes in the range F0 to F9 (hex), which are the EBCDIC codes for the characters "0" through "9". Similarly, ASCII systems use a zone value of 0011 (hex 3), giving character codes 30 to 39 (hex). For signed zoned decimal values, the rightmost (least significant) zone nibble holds the sign digit, which is the same set of values that are used for signed packed decimal numbers (see above). Thus a zoned decimal value encoded as the hex bytes F1 F2 D3 represents the signed decimal value −123:
F1 F2 D3
 1  2 −3
EBCDIC zoned decimal conversion table (*) Note: These characters vary depending on the local character code page setting. Fixed-point zoned decimal Some languages (such as COBOL and PL/I) directly support fixed-point zoned decimal values, assigning an implicit decimal point at some location between the decimal digits of a number. For example, given a six-byte signed zoned decimal value with an implied decimal point to the right of the fourth digit, the hex bytes F1 F2 F7 F9 F5 C0 represent the value +1,279.50:
F1 F2 F7 F9 F5 C0
 1  2  7  9. 5 +0
BCD in computers IBM IBM used the terms Binary-Coded Decimal Interchange Code (BCDIC, sometimes just called BCD), for 6-bit alphanumeric codes that represented numbers, upper-case letters and special characters. Some variation of BCDIC alphamerics is used in most early IBM computers, including the IBM 1620 (introduced in 1959), IBM 1400 series, and non-Decimal Architecture members of the IBM 700/7000 series. The IBM 1400 series are character-addressable machines, each location being six bits labeled B, A, 8, 4, 2 and 1, plus an odd parity check bit (C) and a word mark bit (M). For encoding digits 1 through 9, B and A are zero and the digit value is represented by standard 4-bit BCD in bits 8 through 1. For most other characters bits B and A are derived simply from the "12", "11", and "0" "zone punches" in the punched card character code, and bits 8 through 1 from the 1 through 9 punches. A "12 zone" punch set both B and A, an "11 zone" set B, and a "0 zone" (a 0 punch combined with any others) set A. Thus the letter A, which is (12,1) in the punched card format, is encoded (B,A,1). The currency symbol $, (11,8,3) in the punched card, was encoded in memory as (B,8,2,1). This allows the circuitry to convert between the punched card format and the internal storage format to be very simple with only a few special cases. One important special case is digit 0, represented by a lone 0 punch in the card, and (8,2) in core memory. The memory of the IBM 1620 is organized into 6-bit addressable digits, the usual 8, 4, 2, 1 plus F, used as a flag bit and C, an odd parity check bit. BCD alphamerics are encoded using digit pairs, with the "zone" in the even-addressed digit and the "digit" in the odd-addressed digit, the "zone" being related to the 12, 11, and 0 "zone punches" as in the 1400 series. Input/Output translation hardware converted between the internal digit pairs and the external standard 6-bit BCD codes. In the Decimal Architecture IBM 7070, IBM 7072, and IBM 7074 alphamerics are encoded using digit pairs (using two-out-of-five code in the digits, not BCD) of the 10-digit word, with the "zone" in the left digit and the "digit" in the right digit.
Input/Output translation hardware converted between the internal digit pairs and the external standard 6-bit BCD codes. With the introduction of System/360, IBM expanded 6-bit BCD alphamerics to 8-bit EBCDIC, allowing the addition of many more characters (e.g., lowercase letters). A variable length Packed BCD numeric data type is also implemented, providing machine instructions that perform arithmetic directly on packed decimal data. On the IBM 1130 and 1800, packed BCD is supported in software by IBM's Commercial Subroutine Package. Today, BCD data is still heavily used in IBM processors and databases, such as IBM DB2, mainframes, and Power6. In these products, the BCD is usually zoned BCD (as in EBCDIC or ASCII), Packed BCD (two decimal digits per byte), or "pure" BCD encoding (one decimal digit stored as BCD in the low four bits of each byte). All of these are used within hardware registers and processing units, and in software. To convert packed decimals in EBCDIC table unloads to readable numbers, you can use the OUTREC FIELDS mask of the JCL utility DFSORT. Other computers The Digital Equipment Corporation VAX-11 series includes instructions that can perform arithmetic directly on packed BCD data and convert between packed BCD data and other integer representations. The VAX's packed BCD format is compatible with that on IBM System/360 and IBM's later compatible processors. The MicroVAX and later VAX implementations dropped this ability from the CPU but retained code compatibility with earlier machines by implementing the missing instructions in an operating system-supplied software library. This is invoked automatically via exception handling when the defunct instructions are encountered, so that programs using them can execute without modification on the newer machines. The Intel x86 architecture supports a unique 18-digit (ten-byte) BCD format that can be loaded into and stored from the floating point registers, from where computations can be performed. The Motorola 68000 series had BCD instructions. In more recent computers such capabilities are almost always implemented in software rather than the CPU's instruction set, but BCD numeric data are still extremely common in commercial and financial applications. There are tricks for implementing packed BCD and zoned decimal add-or-subtract operations using short but difficult to understand sequences of word-parallel logic and binary arithmetic operations. For example, the following code (written in C) computes an unsigned 8-digit packed BCD addition using 32-bit binary operations:
uint32_t BCDadd(uint32_t a, uint32_t b)
{
    uint32_t t1, t2;            // unsigned 32-bit intermediate values
    t1 = a + 0x06666666;
    t2 = t1 ^ b;                // sum without carry propagation
    t1 = t1 + b;                // provisional sum
    t2 = t1 ^ t2;               // all the binary carry bits
    t2 = ~t2 & 0x11111110;      // just the BCD carry bits
    t2 = (t2 >> 2) | (t2 >> 3); // correction
    return t1 - t2;             // corrected BCD sum
}
BCD in electronics BCD is very common in electronic systems where a numeric value is to be displayed, especially in systems consisting solely of digital logic, and not containing a microprocessor. By employing BCD, the manipulation of numerical data for display can be greatly simplified by treating each digit as a separate single sub-circuit. This matches much more closely the physical reality of display hardware – a designer might choose to use a series of separate identical seven-segment displays to build a metering circuit, for example.
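To make the per-digit decoding concrete, here is a small illustrative C sketch (not from the article): one BCD digit maps to a segment pattern through a ten-entry lookup table. The gfedcba bit ordering and the specific patterns are a common convention for common-cathode displays, assumed here purely for illustration.
#include <stdio.h>

/* Segment patterns for digits 0-9, one bit per segment (bit 0 = a ... bit 6 = g). */
static const unsigned char SEGMENTS[10] = {
    0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07, 0x7F, 0x6F
};

/* Decode one BCD digit to its segment pattern; the six pseudo-tetrade
   values 10-15 are shown blank. */
unsigned char bcd_to_segments(unsigned char bcd_digit)
{
    return (bcd_digit < 10) ? SEGMENTS[bcd_digit] : 0x00;
}

int main(void)
{
    unsigned char packed = 0x42;   /* packed BCD byte holding the digits 4 and 2 */
    printf("%02X %02X\n",
           bcd_to_segments(packed >> 4),
           bcd_to_segments(packed & 0x0F));
    return 0;
}
Each digit's sub-circuit is independent of the others, which is exactly the simplification described above.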
If the numeric quantity were stored and manipulated as pure binary, interfacing with such a display would require complex circuitry. Therefore, in cases where the calculations are relatively simple, working throughout with BCD can lead to an overall simpler system than converting to and from binary. Most pocket calculators do all their calculations in BCD. The same argument applies when hardware of this type uses an embedded microcontroller or other small processor. Often, representing numbers internally in BCD format results in smaller code, since a conversion from or to binary representation can be expensive on such limited processors. For these applications, some small processors feature dedicated arithmetic modes, which assist when writing routines that manipulate BCD quantities. Operations with BCD Addition It is possible to perform addition by first adding in binary, and then converting to BCD afterwards. Conversion of the simple sum of two digits can be done by adding 6 (that is, 16 − 10) when the five-bit result of adding a pair of digits has a value greater than 9. The reason for adding 6 is that there are 16 possible 4-bit BCD values (since 2^4 = 16), but only 10 values are valid (0000 through 1001). For example:
1001 + 1000 = 10001
   9 +    8 =    17
10001 is the binary, not decimal, representation of the desired result, but the most significant 1 (the "carry") cannot fit in a 4-bit binary number. In BCD as in decimal, there cannot exist a value greater than 9 (1001) per digit. To correct this, 6 (0110) is added to the total, and then the result is treated as two nibbles:
10001 + 0110 = 00010111 => 0001 0111
   17 +    6 =       23      1    7
The two nibbles of the result, 0001 and 0111, correspond to the digits "1" and "7". This yields "17" in BCD, which is the correct result. This technique can be extended to adding multiple digits by adding in groups from right to left, propagating the second digit as a carry, always comparing the 5-bit result of each digit-pair sum to 9. Some CPUs provide a half-carry flag to facilitate BCD arithmetic adjustments following binary addition and subtraction operations. The Intel 8080, the Zilog Z80 and the CPUs of the x86 family provide the opcode DAA (Decimal Adjust Accumulator). Subtraction Subtraction is done by adding the ten's complement of the subtrahend to the minuend. To represent the sign of a number in BCD, the number 0000 is used to represent a positive number, and 1001 is used to represent a negative number. The remaining 14 combinations are invalid signs. To illustrate signed BCD subtraction, consider the following problem: 357 − 432. In signed BCD, 357 is 0000 0011 0101 0111. The ten's complement of 432 can be obtained by taking the nine's complement of 432, and then adding one. So, 999 − 432 = 567, and 567 + 1 = 568. By preceding 568 in BCD by the negative sign code, the number −432 can be represented. So, −432 in signed BCD is 1001 0101 0110 1000. Now that both numbers are represented in signed BCD, they can be added together:
  0000 0011 0101 0111
     0    3    5    7
+ 1001 0101 0110 1000
     9    5    6    8
= 1001 1000 1011 1111
     9    8   11   15
Since BCD is a form of decimal representation, several of the digit sums above are invalid. In the event that an invalid entry (any BCD digit greater than 1001) exists, 6 is added to generate a carry bit and cause the sum to become a valid entry.
So, adding 6 to the invalid entries results in the following:
  1001 1000 1011 1111
     9    8   11   15
+ 0000 0000 0110 0110
     0    0    6    6
= 1001 1001 0010 0101
     9    9    2    5
Thus the result of the subtraction is 1001 1001 0010 0101 (−925). To confirm the result, note that the first digit is 9, which means negative. This seems to be correct since 357 − 432 should result in a negative number. The remaining nibbles are BCD, so 1001 0010 0101 is 925. The ten's complement of 925 is 1000 − 925 = 75, so the calculated answer is −75. If there are a different number of nibbles being added together (such as 1053 − 2), the number with the fewer digits must first be prefixed with zeros before taking the ten's complement or subtracting. So, with 1053 − 2, 2 would have to first be represented as 0002 in BCD, and the ten's complement of 0002 would have to be calculated. Comparison with pure binary Advantages Many non-integral values, such as decimal 0.2, have an infinite place-value representation in binary (.001100110011...) but have a finite place-value in binary-coded decimal (0.0010). Consequently, a system based on binary-coded decimal representations of decimal fractions avoids errors representing and calculating such values. This is useful in financial calculations. Scaling by a power of 10 is simple. Rounding at a decimal digit boundary is simpler. Addition and subtraction in decimal do not require rounding. The alignment of two decimal numbers (for example 1.3 + 27.08) is a simple, exact shift. Conversion to a character form or for display (e.g., to a text-based format such as XML, or to drive signals for a seven-segment display) is a simple per-digit mapping, and can be done in linear (O(n)) time. Conversion from pure binary involves relatively complex logic that spans digits, and for large numbers, no linear-time conversion algorithm is known (see ). Disadvantages Some operations are more complex to implement. Adders require extra logic to cause them to wrap and generate a carry early. 15 to 20 per cent more circuitry is needed for BCD add compared to pure binary. Multiplication requires the use of algorithms that are somewhat more complex than shift-mask-add (a binary multiplication, requiring binary shifts and adds or the equivalent, per-digit or group of digits is required). Standard BCD requires four bits per digit, roughly 20 per cent more space than a binary encoding (the ratio of 4 bits to log2(10) bits is 1.204). When packed so that three digits are encoded in ten bits, the storage overhead is greatly reduced, at the expense of an encoding that is unaligned with the 8-bit byte boundaries common on existing hardware, resulting in slower implementations on these systems. Practical existing implementations of BCD are typically slower than operations on binary representations, especially on embedded systems, due to limited processor support for native BCD operations. Representational variations Various BCD implementations exist that employ other representations for numbers. Programmable calculators manufactured by Texas Instruments, Hewlett-Packard, and others typically employ a floating-point BCD format, typically with two or three digits for the (decimal) exponent. The extra bits of the sign digit may be used to indicate special numeric values, such as infinity, underflow/overflow, and error (a blinking display). Signed variations Signed decimal values may be represented in several ways.
The COBOL programming language, for example, supports five zoned decimal formats, with each one encoding the numeric sign in a different way. Telephony binary-coded decimal (TBCD) 3GPP developed TBCD, an expansion to BCD where the remaining (unused) bit combinations are used to add specific telephony characters, with digits similar to those found in the original design of telephone keypads. The mentioned 3GPP document defines TBCD-STRING with swapped nibbles in each byte. Bits, octets and digits are indexed from 1, bits from the right, digits and octets from the left:
bits 8765 of octet n encoding digit 2n
bits 4321 of octet n encoding digit 2(n − 1) + 1
Meaning that the number 1234 would become 21 43 in TBCD. Alternative encodings If errors in representation and computation are more important than the speed of conversion to and from display, a scaled binary representation may be used, which stores a decimal number as a binary-encoded integer and a binary-encoded signed decimal exponent. For example, 0.2 can be represented as 2 × 10^−1. This representation allows rapid multiplication and division, but may require shifting by a power of 10 during addition and subtraction to align the decimal points. It is appropriate for applications with a fixed number of decimal places that do not then require this adjustment – particularly financial applications where 2 or 4 digits after the decimal point are usually enough. Indeed, this is almost a form of fixed-point arithmetic since the position of the radix point is implied. The Hertz and Chen–Ho encodings provide Boolean transformations for converting groups of three BCD-encoded digits to and from 10-bit values that can be efficiently encoded in hardware with only 2 or 3 gate delays. Densely packed decimal (DPD) is a similar scheme that is used for most of the significand, except the lead digit, for one of the two alternative decimal encodings specified in the IEEE 754-2008 floating-point standard. Application The BIOS in many personal computers stores the date and time in BCD because the MC6818 real-time clock chip used in the original IBM PC AT motherboard provided the time encoded in BCD. This form is easily converted into ASCII for display. The Atari 8-bit family of computers used BCD to implement floating-point algorithms. The MOS 6502 processor has a BCD mode that affects the addition and subtraction instructions. The Psion Organiser 1 handheld computer's manufacturer-supplied software also entirely used BCD to implement floating point; later Psion models used binary exclusively. Early models of the PlayStation 3 store the date and time in BCD. This led to a worldwide outage of the console on 1 March 2010. The last two digits of the year stored as BCD were misinterpreted as 16, causing an error in the unit's date and rendering most functions inoperable. This has been referred to as the Year 2010 problem. Legal history In the 1972 case Gottschalk v. Benson, the U.S. Supreme Court overturned a lower court's decision that had allowed a patent for converting BCD-encoded numbers to binary on a computer. The decision noted that a patent "would wholly pre-empt the mathematical formula and in practical effect would be a patent on the algorithm itself". This was a landmark judgement regarding the patentability of software and algorithms.
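Returning to the application above: the two-digit BCD bytes provided by real-time clock chips such as the MC6818 convert to and from binary with only a few operations. The following is a minimal illustrative C sketch; the function names are assumptions of this example, not taken from any BIOS or driver:
#include <stdio.h>

/* Convert one packed BCD byte (two digits, e.g. 0x59 for 59) to binary. */
unsigned int bcd_byte_to_bin(unsigned char bcd)
{
    return (unsigned int)(bcd >> 4) * 10u + (bcd & 0x0Fu);
}

/* Convert a binary value in the range 0-99 back to a packed BCD byte. */
unsigned char bin_to_bcd_byte(unsigned int value)
{
    return (unsigned char)(((value / 10u) << 4) | (value % 10u));
}

int main(void)
{
    unsigned char seconds_bcd = 0x59;              /* e.g. an RTC reporting 59 seconds */
    printf("%u\n", bcd_byte_to_bin(seconds_bcd));  /* prints 59 */
    printf("%02X\n", bin_to_bcd_byte(7));          /* prints 07 */
    return 0;
}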
See also Bi-quinary coded decimal Binary-coded ternary (BCT) Binary integer decimal (BID) Bitmask Chen–Ho encoding Decimal computer Densely packed decimal (DPD) Double dabble, an algorithm for converting binary numbers to BCD Year 2000 problem Notes References Further reading and (NB. At least some batches of the Krieger reprint edition were misprints with defective pages 115–146.) (Also: ACM SIGPLAN Notices, Vol. 22 #10, IEEE Computer Society Press #87CH2440-6, October 1987) External links Convert BCD to decimal, binary and hexadecimal and vice versa BCD for Java Computer arithmetic Numeral systems Non-standard positional numeral systems Binary arithmetic Articles with example C code
1629883
https://en.wikipedia.org/wiki/Brian%20Paul
Brian Paul
Brian E. Paul is a computer programmer who originally wrote and maintained the source code for the open source Mesa graphics library until 2012 and is still active in the project. Paul began programming initial source code in August 1993. Mesa is a free software/open source graphics library that provides a generic OpenGL implementation for rendering three-dimensional graphics on multiple platforms. Education Paul obtained his bachelor's degree at the University of Wisconsin–Oshkosh in 1990. He worked on the SSEC Visualization Project while obtaining his master's degree at the University of Wisconsin–Madison. Mesa development Paul was a graphics hobbyist. He thought it would be fun to implement a simple 3D graphics library using the OpenGL API, which he might then use instead of VOGL. He spent eighteen months of part-time development before he released the software on the Internet. The software was well received, and people began contributing to its development. Graphics hardware support was added to Mesa in 1997 in the form of a Glide driver for the new 3dfx Voodoo graphics card. Career Paul continued working on the SSEC Project after graduation. He has also worked for Silicon Graphics, Avid Technology, and Precision Insight (bought out by VA Linux Systems). In 2000, Paul won the third Free Software Foundation Award for the Advancement of Free Software. In November 2001, he co-founded Tungsten Graphics, which was acquired by VMware in December 2008, where he now works. Other contributions Paul has also contributed to or written: Chromium Direct Rendering Infrastructure in XFree86 Blockbuster – a high-res movie player for scientific visualization applications Glean – OpenGL validation Togl – an OpenGL widget for Tcl/Tk Vis5D visualization system VisAD visualization system Cave5D – an adaptation of Vis5D to immersive virtual reality TR – OpenGL tile rendering library V-Blocks – virtual building blocks Avid Marquee – video animation, 3D text, graphics References External links Brian Paul's Home Page "Interview: Brian Paul Answers"; slashdot; December 17, 1999; Retrieved February 11, 2007 Free software programmers Living people Year of birth missing (living people) University of Wisconsin–Oshkosh alumni University of Wisconsin–Madison alumni
47449130
https://en.wikipedia.org/wiki/Lars%20Eilebrecht
Lars Eilebrecht
Lars Eilebrecht (born March 1972) is a German software engineer, solutions architect, IT security expert, and Open Source evangelist. He is one of the original developers of the Apache HTTP Server, and co-founder and former Vice President of the Apache Software Foundation. Lars was based in the United Kingdom between 2009 and 2019, where he founded the IT consultancy company Primevation Ltd. Since 2019 he has been based in Germany, where he works as Chief Information Security Officer for polypoly. Open Source Lars has been active in open source software projects, most notably the Apache HTTP Server project. He was a member of the Apache Group, and is co-founder and member of the Apache Software Foundation. Since the beginning of the Apache Software Foundation he has been a member of the Conferences Committee, helping the foundation to organise ApacheCon events. He served as Vice President, Conference Planning from 2007 to 2009. Additionally he is a member of the ASF Security Team and the ASF Public Relations Committee. Lars is an Open Source evangelist and received O'Reilly's Appaloosa Award for raising awareness of Apache. Career Between 2008 and 2019 Lars worked as an independent IT consultant for companies such as the BBC, Channel 4, Heise Media, El Tiempo and Pearson. Lars was owner and managing director of Primevation Ltd, and partner at pliXos GmbH. Previous employers of Lars Eilebrecht include Ciphire Labs, Quam, Parc Technologies, CyberSolutions, and Cable & Wireless. Lars has an interest in IT security and cryptography. He is the CISO at polypoly, was Director Security Solutions and Chief Security Architect at Ciphire Labs, and has been a speaker at conferences such as Financial Cryptography and Data Security and the 21st Chaos Communication Congress (21C3). Lars was a member of the International Financial Cryptography Association from 2005 to 2006. In 1998 Lars received a Master of Science degree in Computer Engineering from the University of Siegen in Germany. Publications Lars is the author of Apache Webserver, the first German-language book about the Apache HTTP Server. He published 5 editions of the book between 1997 and 2003. See also Apache Software Foundation External links Lars Eilebrecht's personal website References Computer security specialists Computer programmers German computer programmers Free software programmers Web developers Living people 1972 births
18909988
https://en.wikipedia.org/wiki/Joseph%20M.%20Hellerstein
Joseph M. Hellerstein
Joseph M. Hellerstein (born ) is an American professor of Computer Science at the University of California, Berkeley, where he works on database systems and computer networks. He co-founded Trifacta with Jeffrey Heer and Sean Kandel in 2012, which stemmed from their research project, Wrangler. Education Hellerstein attended Harvard University from 1986 to 1990 (AB, computer science) and pursued his master's in computer science at the University of California, Berkeley from 1991 to 1992. He received his Ph.D., also in computer science, from the University of Wisconsin–Madison in 1995, for a thesis on query optimization supervised by Jeffrey Naughton and Michael Stonebraker. Research Hellerstein has made contributions to many areas of database systems, such as ad-hoc sensor networks, adaptive query processing, approximate query processing and online aggregation, declarative networking, and data stream processing. Awards and recognition Hellerstein's work has been recognized with an Alfred P. Sloan Fellowship, MIT Technology Review's inaugural TR100 list and TR10 list, Fortune 50 smartest in Tech, and three ACM-SIGMOD "Test of Time" awards. He is a Fellow of the Association for Computing Machinery (2009). References American computer scientists Database researchers Living people Harvard University alumni University of California, Berkeley alumni University of Wisconsin–Madison College of Letters and Science alumni Fellows of the Association for Computing Machinery UC Berkeley College of Engineering faculty Year of birth missing (living people)
44047094
https://en.wikipedia.org/wiki/Flock%20%28messaging%20service%29
Flock (messaging service)
Flock is a proprietary messaging and collaboration tool, founded by tech entrepreneur Bhavin Turakhia in 2014. The app is available on Windows, MacOS, Android, iOS and Web. Flock allows users to configure external apps and integrations from the Flock App Store, and receive notifications and updates directly in Flock. Flock functions on a freemium pricing model. The application was launched in 2014. Features The primary features of Flock are direct & channel messaging, video conferencing, screen & file sharing, and unlimited chat history. Teams Flock users can create multiple teams for the entire company, a department or for selective members of the organisation. To join a team, users can send invites to others or share the Team URL. Channels Flock users can create public channels and private channels. Public channels are open for everyone to discover and join, and do not require an invitation from the team admin. These channels are meant for sharing knowledge, interests and experiences. Private channels are meant for more focused discussions, and can be joined by invite only. Native apps Flock comes pre-installed with business apps such as: Poll app Shared To-dos Mailcast Code snippet sharing Reminders Note sharing My Favourites API Flock provides its platform, FlockOS, for developers to build apps, bots and custom integrations on top of Flock. Flock conducts regular hackathons to help young developers build innovative apps by using FlockOS's capabilities. App Store Flock lets users integrate external apps and services from the Flock App Store. Some common apps include Google Drive, Google Analytics, Trello, GitHub, Twitter and Mailchimp. Developers can also publish apps built on FlockOS to the Flock App Store. Awards Best Business Communication App of the Year by Global Mobile App Summit & Awards, July 2016 Best Mobile Enterprise Product/Service Award by India Digital Awards 2017, February 2017 References External links 2014 software Android (operating system) software Collaborative software IOS software MacOS software Project management software
18568
https://en.wikipedia.org/wiki/List%20of%20algorithms
List of algorithms
The following is a list of algorithms along with one-line descriptions for each. Automated planning Combinatorial algorithms General combinatorial algorithms Brent's algorithm: finds a cycle in function value iterations using only two iterators Floyd's cycle-finding algorithm: finds a cycle in function value iterations Gale–Shapley algorithm: solves the stable marriage problem Pseudorandom number generators (uniformly distributed – see also List of pseudorandom number generators for other PRNGs with varying degrees of convergence and varying statistical quality): ACORN generator Blum Blum Shub Lagged Fibonacci generator Linear congruential generator Mersenne Twister Graph algorithms Coloring algorithm: Graph coloring algorithm. Hopcroft–Karp algorithm: convert a bipartite graph to a maximum cardinality matching Hungarian algorithm: algorithm for finding a perfect matching Prüfer coding: conversion between a labeled tree and its Prüfer sequence Tarjan's off-line lowest common ancestors algorithm: computes lowest common ancestors for pairs of nodes in a tree Topological sort: finds linear order of nodes (e.g. jobs) based on their dependencies. Graph drawing Force-based algorithms (also known as force-directed algorithms or spring-based algorithm) Spectral layout Network theory Network analysis Link analysis Girvan–Newman algorithm: detect communities in complex systems Web link analysis Hyperlink-Induced Topic Search (HITS) (also known as Hubs and authorities) PageRank TrustRank Flow networks Dinic's algorithm: is a strongly polynomial algorithm for computing the maximum flow in a flow network. Edmonds–Karp algorithm: implementation of Ford–Fulkerson Ford–Fulkerson algorithm: computes the maximum flow in a graph Karger's algorithm: a Monte Carlo method to compute the minimum cut of a connected graph Push–relabel algorithm: computes a maximum flow in a graph Routing for graphs Edmonds' algorithm (also known as Chu–Liu/Edmonds' algorithm): find maximum or minimum branchings Euclidean minimum spanning tree: algorithms for computing the minimum spanning tree of a set of points in the plane Longest path problem: find a simple path of maximum length in a given graph Minimum spanning tree Borůvka's algorithm Kruskal's algorithm Prim's algorithm Reverse-delete algorithm Nonblocking minimal spanning switch say, for a telephone exchange Shortest path problem Bellman–Ford algorithm: computes shortest paths in a weighted graph (where some of the edge weights may be negative) Dijkstra's algorithm: computes shortest paths in a graph with non-negative edge weights Floyd–Warshall algorithm: solves the all pairs shortest path problem in a weighted, directed graph Johnson's algorithm: All pairs shortest path algorithm in sparse weighted directed graph Transitive closure problem: find the transitive closure of a given binary relation Traveling salesman problem Christofides algorithm Nearest neighbour algorithm Warnsdorff's rule: A heuristic method for solving the Knight's tour problem. 
Graph search A*: special case of best-first search that uses heuristics to improve speed B*: a best-first graph search algorithm that finds the least-cost path from a given initial node to any goal node (out of one or more possible goals) Backtracking: abandons partial solutions when they are found not to satisfy a complete solution Beam search: is a heuristic search algorithm that is an optimization of best-first search that reduces its memory requirement Beam stack search: integrates backtracking with beam search Best-first search: traverses a graph in the order of likely importance using a priority queue Bidirectional search: find the shortest path from an initial vertex to a goal vertex in a directed graph Breadth-first search: traverses a graph level by level Brute-force search: An exhaustive and reliable search method, but computationally inefficient in many applications. D*: an incremental heuristic search algorithm Depth-first search: traverses a graph branch by branch Dijkstra's algorithm: A special case of A* for which no heuristic function is used General Problem Solver: a seminal theorem-proving algorithm intended to work as a universal problem solver machine. Iterative deepening depth-first search (IDDFS): a state space search strategy Jump point search: An optimization to A* which may reduce computation time by an order of magnitude using further heuristics. Lexicographic breadth-first search (also known as Lex-BFS): a linear time algorithm for ordering the vertices of a graph Uniform-cost search: a tree search that finds the lowest-cost route where costs vary SSS*: state space search traversing a game tree in a best-first fashion similar to that of the A* search algorithm F*: Special algorithm to merge the two arrays Subgraphs Cliques Bron–Kerbosch algorithm: a technique for finding maximal cliques in an undirected graph MaxCliqueDyn maximum clique algorithm: find a maximum clique in an undirected graph Strongly connected components Path-based strong component algorithm Kosaraju's algorithm Tarjan's strongly connected components algorithm Subgraph isomorphism problem Sequence algorithms Approximate sequence matching Bitap algorithm: fuzzy algorithm that determines if strings are approximately equal. 
Phonetic algorithms Daitch–Mokotoff Soundex: a Soundex refinement which allows matching of Slavic and Germanic surnames Double Metaphone: an improvement on Metaphone Match rating approach: a phonetic algorithm developed by Western Airlines Metaphone: an algorithm for indexing words by their sound, when pronounced in English NYSIIS: phonetic algorithm, improves on Soundex Soundex: a phonetic algorithm for indexing names by sound, as pronounced in English String metrics: computes a similarity or dissimilarity (distance) score between two pairs of text strings Damerau–Levenshtein distance: computes a distance measure between two strings, improves on Levenshtein distance Dice's coefficient (also known as the Dice coefficient): a similarity measure related to the Jaccard index Hamming distance: sum number of positions which are different Jaro–Winkler distance: is a measure of similarity between two strings Levenshtein edit distance: computes a metric for the amount of difference between two sequences Trigram search: search for text when the exact syntax or spelling of the target object is not precisely known Selection algorithms Quickselect Introselect Sequence search Linear search: locates an item in an unsorted sequence Selection algorithm: finds the kth largest item in a sequence Ternary search: a technique for finding the minimum or maximum of a function that is either strictly increasing and then strictly decreasing or vice versa Sorted lists Binary search algorithm: locates an item in a sorted sequence Fibonacci search technique: search a sorted sequence using a divide and conquer algorithm that narrows down possible locations with the aid of Fibonacci numbers Jump search (or block search): linear search on a smaller subset of the sequence Predictive search: binary-like search which factors in magnitude of search term versus the high and low values in the search. Sometimes called dictionary search or interpolated search. Uniform binary search: an optimization of the classic binary search algorithm Sequence merging Simple merge algorithm k-way merge algorithm Union (merge, with elements on the output not repeated) Sequence permutations Fisher–Yates shuffle (also known as the Knuth shuffle): randomly shuffle a finite set Schensted algorithm: constructs a pair of Young tableaux from a permutation Steinhaus–Johnson–Trotter algorithm (also known as the Johnson–Trotter algorithm): generates permutations by transposing elements Heap's permutation generation algorithm: interchange elements to generate next permutation Sequence combinations Sequence alignment Dynamic time warping: measure similarity between two sequences which may vary in time or speed Hirschberg's algorithm: finds the least cost sequence alignment between two sequences, as measured by their Levenshtein distance Needleman–Wunsch algorithm: find global alignment between two sequences Smith–Waterman algorithm: find local sequence alignment Sequence sorting Exchange sorts Bubble sort: for each pair of indices, swap the items if out of order Cocktail shaker sort or bidirectional bubble sort, a bubble sort traversing the list alternately from front to back and back to front Comb sort Gnome sort Odd–even sort Quicksort: divide list into two, with all items on the first list coming before all items on the second list; then sort the two lists. 
Often the method of choice Humorous or ineffective Bogosort Stooge sort Hybrid Flashsort Introsort: begin with quicksort and switch to heapsort when the recursion depth exceeds a certain level Timsort: adaptative algorithm derived from merge sort and insertion sort. Used in Python 2.3 and up, and Java SE 7. Insertion sorts Insertion sort: determine where the current item belongs in the list of sorted ones, and insert it there Library sort Patience sorting Shell sort: an attempt to improve insertion sort Tree sort (binary tree sort): build binary tree, then traverse it to create sorted list Cycle sort: in-place with theoretically optimal number of writes Merge sorts Merge sort: sort the first and second half of the list separately, then merge the sorted lists Slowsort Strand sort Non-comparison sorts Bead sort Bucket sort Burstsort: build a compact, cache efficient burst trie and then traverse it to create sorted output Counting sort Pigeonhole sort Postman sort: variant of Bucket sort which takes advantage of hierarchical structure Radix sort: sorts strings letter by letter Selection sorts Heapsort: convert the list into a heap, keep removing the largest element from the heap and adding it to the end of the list Selection sort: pick the smallest of the remaining elements, add it to the end of the sorted list Smoothsort Other Bitonic sorter Pancake sorting Spaghetti sort Topological sort Unknown class Samplesort Subsequences Kadane's algorithm: finds maximum sub-array of any size Longest common subsequence problem: Find the longest subsequence common to all sequences in a set of sequences Longest increasing subsequence problem: Find the longest increasing subsequence of a given sequence Shortest common supersequence problem: Find the shortest supersequence that contains two or more sequences as subsequences Substrings Longest common substring problem: find the longest string (or strings) that is a substring (or are substrings) of two or more strings Substring search Ahoโ€“Corasick string matching algorithm: trie based algorithm for finding all substring matches to any of a finite set of strings Boyerโ€“Moore string-search algorithm: amortized linear (sublinear in most times) algorithm for substring search Boyerโ€“Mooreโ€“Horspool algorithm: Simplification of Boyerโ€“Moore Knuthโ€“Morrisโ€“Pratt algorithm: substring search which bypasses reexamination of matched characters Rabinโ€“Karp string search algorithm: searches multiple patterns efficiently Zhuโ€“Takaoka string matching algorithm: a variant of Boyerโ€“Moore Ukkonen's algorithm: a linear-time, online algorithm for constructing suffix trees Matching wildcards Rich Salz' wildmat: a widely used open-source recursive algorithm Krauss matching wildcards algorithm: an open-source non-recursive algorithm Computational mathematics Abstract algebra Chien search: a recursive algorithm for determining roots of polynomials defined over a finite field Schreierโ€“Sims algorithm: computing a base and strong generating set (BSGS) of a permutation group Toddโ€“Coxeter algorithm: Procedure for generating cosets. 
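Of the substring-search algorithms listed above, Knuth–Morris–Pratt is compact enough to sketch directly; this is an illustrative Python version, not taken from any particular source, and the sample strings are invented.

def kmp_search(text, pattern):
    """Find all start indices of `pattern` in `text` using Knuth–Morris–Pratt."""
    if not pattern:
        return []
    # Build the failure table: longest proper prefix that is also a suffix.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # Scan the text without re-examining characters that already matched.
    matches, k = [], 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            matches.append(i - k + 1)
            k = fail[k - 1]
    return matches

print(kmp_search("abracadabra", "abra"))  # [0, 7]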
Computer algebra Buchberger's algorithm: finds a Grรถbner basis Cantorโ€“Zassenhaus algorithm: factor polynomials over finite fields Faugรจre F4 algorithm: finds a Grรถbner basis (also mentions the F5 algorithm) Gosper's algorithm: find sums of hypergeometric terms that are themselves hypergeometric terms Knuthโ€“Bendix completion algorithm: for rewriting rule systems Multivariate division algorithm: for polynomials in several indeterminates Pollard's kangaroo algorithm (also known as Pollard's lambda algorithm ): an algorithm for solving the discrete logarithm problem Polynomial long division: an algorithm for dividing a polynomial by another polynomial of the same or lower degree Risch algorithm: an algorithm for the calculus operation of indefinite integration (i.e. finding antiderivatives) Geometry Closest pair problem: find the pair of points (from a set of points) with the smallest distance between them Collision detection algorithms: check for the collision or intersection of two given solids Cone algorithm: identify surface points Convex hull algorithms: determining the convex hull of a set of points Graham scan Quickhull Gift wrapping algorithm or Jarvis march Chan's algorithm Kirkpatrickโ€“Seidel algorithm Euclidean distance transform: computes the distance between every point in a grid and a discrete collection of points. Geometric hashing: a method for efficiently finding two-dimensional objects represented by discrete points that have undergone an affine transformation Gilbertโ€“Johnsonโ€“Keerthi distance algorithm: determining the smallest distance between two convex shapes. Jump-and-Walk algorithm: an algorithm for point location in triangulations Laplacian smoothing: an algorithm to smooth a polygonal mesh Line segment intersection: finding whether lines intersect, usually with a sweep line algorithm Bentleyโ€“Ottmann algorithm Shamosโ€“Hoey algorithm Minimum bounding box algorithms: find the oriented minimum bounding box enclosing a set of points Nearest neighbor search: find the nearest point or points to a query point Point in polygon algorithms: tests whether a given point lies within a given polygon Point set registration algorithms: finds the transformation between two point sets to optimally align them. Rotating calipers: determine all antipodal pairs of points and vertices on a convex polygon or convex hull. Shoelace algorithm: determine the area of a polygon whose vertices are described by ordered pairs in the plane Triangulation Delaunay triangulation Ruppert's algorithm (also known as Delaunay refinement): create quality Delaunay triangulations Chew's second algorithm: create quality constrained Delaunay triangulations Marching triangles: reconstruct two-dimensional surface geometry from an unstructured point cloud Polygon triangulation algorithms: decompose a polygon into a set of triangles Voronoi diagrams, geometric dual of Delaunay triangulation Bowyerโ€“Watson algorithm: create voronoi diagram in any number of dimensions Fortune's Algorithm: create voronoi diagram Quasitriangulation Number theoretic algorithms Binary GCD algorithm: Efficient way of calculating GCD. 
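The binary GCD algorithm that closes the list above replaces division with shifts and subtraction; a minimal Python sketch, assuming non-negative integer inputs, follows.

def binary_gcd(a, b):
    """Stein's binary GCD: greatest common divisor via shifts and subtraction."""
    if a == 0:
        return b
    if b == 0:
        return a
    # Factor out the powers of two common to both arguments.
    shift = 0
    while ((a | b) & 1) == 0:
        a >>= 1
        b >>= 1
        shift += 1
    while (a & 1) == 0:
        a >>= 1                      # a is now odd
    while b != 0:
        while (b & 1) == 0:
            b >>= 1                  # remove factors of two from b
        if a > b:
            a, b = b, a              # keep a <= b
        b -= a                       # gcd(a, b) = gcd(a, b - a)
    return a << shift

print(binary_gcd(48, 36))  # 12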
Booth's multiplication algorithm Chakravala method: a cyclic algorithm to solve indeterminate quadratic equations, including Pell's equation Discrete logarithm: Baby-step giant-step Index calculus algorithm Pollard's rho algorithm for logarithms Pohlig–Hellman algorithm Euclidean algorithm: computes the greatest common divisor Extended Euclidean algorithm: Also solves the equation ax + by = c. Integer factorization: breaking an integer into its prime factors Congruence of squares Dixon's algorithm Fermat's factorization method General number field sieve Lenstra elliptic curve factorization Pollard's p − 1 algorithm Pollard's rho algorithm: prime factorization algorithm Quadratic sieve Shor's algorithm Special number field sieve Trial division Multiplication algorithms: fast multiplication of two numbers Karatsuba algorithm Schönhage–Strassen algorithm Toom–Cook multiplication Modular square root: computing square roots modulo a prime number Tonelli–Shanks algorithm Cipolla's algorithm Berlekamp's root finding algorithm Odlyzko–Schönhage algorithm: calculates nontrivial zeroes of the Riemann zeta function Lenstra–Lenstra–Lovász algorithm (also known as LLL algorithm): find a short, nearly orthogonal lattice basis in polynomial time Primality tests: determining whether a given number is prime AKS primality test Baillie–PSW primality test Fermat primality test Lucas primality test Miller–Rabin primality test Sieve of Atkin Sieve of Eratosthenes Sieve of Sundaram Numerical algorithms Differential equation solving Euler method Backward Euler method Trapezoidal rule (differential equations) Linear multistep methods Runge–Kutta methods Euler integration Multigrid methods (MG methods), a group of algorithms for solving differential equations using a hierarchy of discretizations Partial differential equation: Finite difference method Crank–Nicolson method for diffusion equations Lax–Wendroff for wave equations Verlet integration: integrate Newton's equations of motion Elementary and special functions Computation of π: Borwein's algorithm: an algorithm to calculate the value of 1/π Gauss–Legendre algorithm: computes the digits of pi Chudnovsky algorithm: A fast method for calculating the digits of π Bailey–Borwein–Plouffe formula: (BBP formula) a spigot algorithm for the computation of the nth binary digit of π Division algorithms: for computing quotient and/or remainder of two numbers Long division Restoring division Non-restoring division SRT division Newton–Raphson division: uses Newton's method to find the reciprocal of D, and multiply that reciprocal by N to find the final quotient Q. 
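A minimal sketch of the Newton–Raphson division just described, assuming a positive divisor; the scaling step and the 48/17 − 32/17·d initial estimate are common textbook choices rather than details of any particular hardware implementation.

def newton_raphson_divide(n, d, iterations=5):
    """Approximate n / d for d > 0 by refining an estimate x of 1/d with
    x <- x * (2 - d * x); each iteration roughly doubles the correct digits."""
    # Scale numerator and denominator together so d lies in [0.5, 1);
    # the quotient is unchanged by the common scaling.
    while d >= 1.0:
        n, d = n / 2.0, d / 2.0
    while d < 0.5:
        n, d = n * 2.0, d * 2.0
    x = 48.0 / 17.0 - (32.0 / 17.0) * d   # standard linear initial estimate of 1/d
    for _ in range(iterations):
        x = x * (2.0 - d * x)             # Newton step for f(x) = 1/x - d
    return n * x

print(newton_raphson_divide(355.0, 113.0))  # ~3.14159 (355/113)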
Goldschmidt division Hyperbolic and Trigonometric Functions: BKM algorithm: computes elementary functions using a table of logarithms CORDIC: computes hyperbolic and trigonometric functions using a table of arctangents Exponentiation: Addition-chain exponentiation: exponentiation by positive integer powers that requires a minimal number of multiplications Exponentiating by squaring: an algorithm used for the fast computation of large integer powers of a number Montgomery reduction: an algorithm that allows modular arithmetic to be performed efficiently when the modulus is large Multiplication algorithms: fast multiplication of two numbers Booth's multiplication algorithm: a multiplication algorithm that multiplies two signed binary numbers in two's complement notation Fรผrer's algorithm: an integer multiplication algorithm for very large numbers possessing a very low asymptotic complexity Karatsuba algorithm: an efficient procedure for multiplying large numbers Schรถnhageโ€“Strassen algorithm: an asymptotically fast multiplication algorithm for large integers Toomโ€“Cook multiplication: (Toom3) a multiplication algorithm for large integers Multiplicative inverse Algorithms: for computing a number's multiplicative inverse (reciprocal). Newton's method Rounding functions: the classic ways to round numbers Spigot algorithm: A way to compute the value of a mathematical constant without knowing preceding digits Square and Nth root of a number: Alpha max plus beta min algorithm: an approximation of the square-root of the sum of two squares Methods of computing square roots nth root algorithm Shifting nth-root algorithm: digit by digit root extraction Summation: Binary splitting: a divide and conquer technique which speeds up the numerical evaluation of many types of series with rational terms Kahan summation algorithm: a more accurate method of summing floating-point numbers Unrestricted algorithm Geometric Filtered back-projection: efficiently computes the inverse 2-dimensional Radon transform. Level set method (LSM): a numerical technique for tracking interfaces and shapes Interpolation and extrapolation Birkhoff interpolation: an extension of polynomial interpolation Cubic interpolation Hermite interpolation Lagrange interpolation: interpolation using Lagrange polynomials Linear interpolation: a method of curve fitting using linear polynomials Monotone cubic interpolation: a variant of cubic interpolation that preserves monotonicity of the data set being interpolated. Multivariate interpolation Bicubic interpolation, a generalization of cubic interpolation to two dimensions Bilinear interpolation: an extension of linear interpolation for interpolating functions of two variables on a regular grid Lanczos resampling ("Lanzosh"): a multivariate interpolation method used to compute new values for any digitally sampled data Nearest-neighbor interpolation Tricubic interpolation, a generalization of cubic interpolation to three dimensions Pareto interpolation: a method of estimating the median and other properties of a population that follows a Pareto distribution. Polynomial interpolation Neville's algorithm Spline interpolation: Reduces error with Runge's phenomenon. 
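Exponentiation by squaring, listed above, is easy to sketch; the optional modulus parameter is an illustrative addition showing the usual modular-exponentiation variant rather than part of the basic algorithm.

def power_mod(base, exponent, modulus=None):
    """Exponentiation by squaring: O(log exponent) multiplications.
    If a modulus is given, the result is reduced at every step."""
    result = 1
    while exponent > 0:
        if exponent & 1:                 # lowest bit set: fold base into result
            result = result * base if modulus is None else (result * base) % modulus
        base = base * base if modulus is None else (base * base) % modulus
        exponent >>= 1
    return result

print(power_mod(3, 13))      # 1594323
print(power_mod(3, 13, 7))   # 3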
De Boor algorithm: B-splines De Casteljau's algorithm: Bรฉzier curves Trigonometric interpolation Linear algebra Eigenvalue algorithms Arnoldi iteration Inverse iteration Jacobi method Lanczos iteration Power iteration QR algorithm Rayleigh quotient iteration Gramโ€“Schmidt process: orthogonalizes a set of vectors Matrix multiplication algorithms Cannon's algorithm: a distributed algorithm for matrix multiplication especially suitable for computers laid out in an N ร— N mesh Coppersmithโ€“Winograd algorithm: square matrix multiplication Freivalds' algorithm: a randomized algorithm used to verify matrix multiplication Strassen algorithm: faster matrix multiplication Solving systems of linear equations Biconjugate gradient method: solves systems of linear equations Conjugate gradient: an algorithm for the numerical solution of particular systems of linear equations Gaussian elimination Gaussโ€“Jordan elimination: solves systems of linear equations Gaussโ€“Seidel method: solves systems of linear equations iteratively Levinson recursion: solves equation involving a Toeplitz matrix Stone's method: also known as the strongly implicit procedure or SIP, is an algorithm for solving a sparse linear system of equations Successive over-relaxation (SOR): method used to speed up convergence of the Gaussโ€“Seidel method Tridiagonal matrix algorithm (Thomas algorithm): solves systems of tridiagonal equations Sparse matrix algorithms Cuthillโ€“McKee algorithm: reduce the bandwidth of a symmetric sparse matrix Minimum degree algorithm: permute the rows and columns of a symmetric sparse matrix before applying the Cholesky decomposition Symbolic Cholesky decomposition: Efficient way of storing sparse matrix Monte Carlo Gibbs sampling: generates a sequence of samples from the joint probability distribution of two or more random variables Hybrid Monte Carlo: generates a sequence of samples using Hamiltonian weighted Markov chain Monte Carlo, from a probability distribution which is difficult to sample directly. 
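Power iteration, the simplest of the eigenvalue algorithms listed above, needs no linear-algebra library; the fixed iteration count and max-norm scaling below are arbitrary choices made for the sketch.

def power_iteration(matrix, num_steps=200):
    """Estimate the dominant eigenvalue and eigenvector of a square matrix
    (given as a list of rows) by repeated multiplication and renormalization."""
    n = len(matrix)
    vec = [1.0] * n
    for _ in range(num_steps):
        w = [sum(matrix[i][j] * vec[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w)
        vec = [x / norm for x in w]       # rescale to avoid overflow
    # Rayleigh quotient gives the eigenvalue estimate for the final vector.
    av = [sum(matrix[i][j] * vec[j] for j in range(n)) for i in range(n)]
    eigenvalue = sum(av[i] * vec[i] for i in range(n)) / sum(v * v for v in vec)
    return eigenvalue, vec

print(power_iteration([[2.0, 1.0], [1.0, 2.0]]))  # eigenvalue 3.0, eigenvector [1.0, 1.0]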
Metropolisโ€“Hastings algorithm: used to generate a sequence of samples from the probability distribution of one or more variables Wang and Landau algorithm: an extension of Metropolisโ€“Hastings algorithm sampling Numerical integration MISER algorithm: Monte Carlo simulation, numerical integration Root finding Bisection method False position method: approximates roots of a function ITP method: minmax optimal and superlinar convergence simultaneously Newton's method: finds zeros of functions with calculus Halley's method: uses first and second derivatives Secant method: 2-point, 1-sided False position method and Illinois method: 2-point, bracketing Ridder's method: 3-point, exponential scaling Muller's method: 3-point, quadratic interpolation Optimization algorithms Alphaโ€“beta pruning: search to reduce number of nodes in minimax algorithm Branch and bound Bruss algorithm: see odds algorithm Chain matrix multiplication Combinatorial optimization: optimization problems where the set of feasible solutions is discrete Greedy randomized adaptive search procedure (GRASP): successive constructions of a greedy randomized solution and subsequent iterative improvements of it through a local search Hungarian method: a combinatorial optimization algorithm which solves the assignment problem in polynomial time Constraint satisfaction General algorithms for the constraint satisfaction AC-3 algorithm Difference map algorithm Min conflicts algorithm Chaff algorithm: an algorithm for solving instances of the boolean satisfiability problem Davisโ€“Putnam algorithm: check the validity of a first-order logic formula Davisโ€“Putnamโ€“Logemannโ€“Loveland algorithm (DPLL): an algorithm for deciding the satisfiability of propositional logic formula in conjunctive normal form, i.e. 
for solving the CNF-SAT problem Exact cover problem Algorithm X: a nondeterministic algorithm Dancing Links: an efficient implementation of Algorithm X Cross-entropy method: a general Monte Carlo approach to combinatorial and continuous multi-extremal optimization and importance sampling Differential evolution Dynamic Programming: problems exhibiting the properties of overlapping subproblems and optimal substructure Ellipsoid method: is an algorithm for solving convex optimization problems Evolutionary computation: optimization inspired by biological mechanisms of evolution Evolution strategy Gene expression programming Genetic algorithms Fitness proportionate selection โ€“ also known as roulette-wheel selection Stochastic universal sampling Truncation selection Tournament selection Memetic algorithm Swarm intelligence Ant colony optimization Bees algorithm: a search algorithm which mimics the food foraging behavior of swarms of honey bees Particle swarm Frank-Wolfe algorithm: an iterative first-order optimization algorithm for constrained convex optimization Golden-section search: an algorithm for finding the maximum of a real function Gradient descent Grid Search Harmony search (HS): a metaheuristic algorithm mimicking the improvisation process of musicians Interior point method Linear programming Benson's algorithm: an algorithm for solving linear vector optimization problems Dantzigโ€“Wolfe decomposition: an algorithm for solving linear programming problems with special structure Delayed column generation Integer linear programming: solve linear programming problems where some or all the unknowns are restricted to integer values Branch and cut Cutting-plane method Karmarkar's algorithm: The first reasonably efficient algorithm that solves the linear programming problem in polynomial time. Simplex algorithm: An algorithm for solving linear programming problems Line search Local search: a metaheuristic for solving computationally hard optimization problems Random-restart hill climbing Tabu search Minimax used in game programming Nearest neighbor search (NNS): find closest points in a metric space Best Bin First: find an approximate solution to the nearest neighbor search problem in very-high-dimensional spaces Newton's method in optimization Nonlinear optimization BFGS method: A nonlinear optimization algorithm Gaussโ€“Newton algorithm: An algorithm for solving nonlinear least squares problems. Levenbergโ€“Marquardt algorithm: An algorithm for solving nonlinear least squares problems. Nelderโ€“Mead method (downhill simplex method): A nonlinear optimization algorithm Odds algorithm (Bruss algorithm): Finds the optimal strategy to predict a last specific event in a random sequence event Random Search Simulated annealing Stochastic tunneling Subset sum algorithm Computational science Astronomy Doomsday algorithm: day of the week Zeller's congruence is an algorithm to calculate the day of the week for any Julian or Gregorian calendar date various Easter algorithms are used to calculate the day of Easter Bioinformatics Basic Local Alignment Search Tool also known as BLAST: an algorithm for comparing primary biological sequence information Kabsch algorithm: calculate the optimal alignment of two sets of points in order to compute the root mean squared deviation between two protein structures. Velvet: a set of algorithms manipulating de Bruijn graphs for genomic sequence assembly Sorting by signed reversals: an algorithm for understanding genomic evolution. 
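Golden-section search, listed among the optimization methods above, can be sketched in a few lines; this version assumes a unimodal function and looks for a minimum (maximizing f is the same search applied to -f), and the tolerance is an arbitrary choice.

import math

def golden_section_minimize(f, a, b, tol=1e-8):
    """Golden-section search for a minimum of a unimodal function f on [a, b]."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0      # 1/phi, about 0.618
    c = b - (b - a) * invphi
    d = a + (b - a) * invphi
    while abs(b - a) > tol:
        if f(c) < f(d):
            b = d                               # minimum lies in [a, d]
        else:
            a = c                               # minimum lies in [c, b]
        c = b - (b - a) * invphi
        d = a + (b - a) * invphi
    return (a + b) / 2.0

print(golden_section_minimize(lambda x: (x - 2.0) ** 2, 0.0, 5.0))  # ~2.0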
Maximum parsimony (phylogenetics): an algorithm for finding the simplest phylogenetic tree to explain a given character matrix. UPGMA: a distance-based phylogenetic tree construction algorithm. Geoscience Vincenty's formulae: a fast algorithm to calculate the distance between two latitude/longitude points on an ellipsoid Geohash: a public domain algorithm that encodes a decimal latitude/longitude pair as a hash string Linguistics Lesk algorithm: word sense disambiguation Stemming algorithm: a method of reducing words to their stem, base, or root form Sukhotin's algorithm: a statistical classification algorithm for classifying characters in a text as vowels or consonants Medicine ESC algorithm for the diagnosis of heart failure Manning Criteria for irritable bowel syndrome Pulmonary embolism diagnostic algorithms Texas Medication Algorithm Project Physics Constraint algorithm: a class of algorithms for satisfying constraints for bodies that obey Newton's equations of motion Demon algorithm: a Monte Carlo method for efficiently sampling members of a microcanonical ensemble with a given energy Featherstone's algorithm: computes the effects of forces applied to a structure of joints and links Ground state approximation Variational method Ritz method n-body problems Barnes–Hut simulation: Solves the n-body problem in an approximate way that has order O(n log n) instead of the O(n²) of a direct-sum simulation. Fast multipole method (FMM): speeds up the calculation of long-ranged forces Rainflow-counting algorithm: Reduces a complex stress history to a count of elementary stress-reversals for use in fatigue analysis Sweep and prune: a broad phase algorithm used during collision detection to limit the number of pairs of solids that need to be checked for collision VEGAS algorithm: a method for reducing error in Monte Carlo simulations Glauber dynamics: a method for simulating the Ising Model on a computer Statistics Algorithms for calculating variance: avoiding instability and numerical overflow Approximate counting algorithm: Allows counting large number of events in a small register Bayesian statistics Nested sampling algorithm: a computational approach to the problem of comparing models in Bayesian statistics Clustering Algorithms Average-linkage clustering: a simple agglomerative clustering algorithm Canopy clustering algorithm: an unsupervised pre-clustering algorithm related to the K-means algorithm Complete-linkage clustering: a simple agglomerative clustering algorithm DBSCAN: a density based clustering algorithm Expectation-maximization algorithm Fuzzy clustering: a class of clustering algorithms where each point has a degree of belonging to clusters Fuzzy c-means FLAME clustering (Fuzzy clustering by Local Approximation of MEmberships): define clusters in the dense parts of a dataset and perform cluster assignment solely based on the neighborhood relationships among objects KHOPCA clustering algorithm: a local clustering algorithm, which produces hierarchical multi-hop clusters in static and mobile environments. 
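For the "algorithms for calculating variance" entry above, one well-known numerically stable method is Welford's online algorithm; the sketch below computes mean and sample variance in a single pass, with the test data invented for the example.

def online_variance(values):
    """Welford's online algorithm: one stable pass to get mean and sample variance."""
    n = 0
    mean = 0.0
    m2 = 0.0                         # running sum of squared deviations from the mean
    for x in values:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    return mean, (m2 / (n - 1) if n > 1 else 0.0)

print(online_variance([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))  # (5.0, ~4.571)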
k-means clustering: cluster objects based on attributes into partitions k-means++: a variation of this, using modified random seeds k-medoids: similar to k-means, but chooses datapoints or medoids as centers Lindeโ€“Buzoโ€“Gray algorithm: a vector quantization algorithm to derive a good codebook Lloyd's algorithm (Voronoi iteration or relaxation): group data points into a given number of categories, a popular algorithm for k-means clustering OPTICS: a density based clustering algorithm with a visual evaluation method Single-linkage clustering: a simple agglomerative clustering algorithm SUBCLU: a subspace clustering algorithm Ward's method: an agglomerative clustering algorithm, extended to more general Lanceโ€“Williams algorithms WACA clustering algorithm: a local clustering algorithm with potentially multi-hop structures; for dynamic networks Estimation Theory Expectation-maximization algorithm A class of related algorithms for finding maximum likelihood estimates of parameters in probabilistic models Ordered subset expectation maximization (OSEM): used in medical imaging for positron emission tomography, single-photon emission computed tomography and X-ray computed tomography. Odds algorithm (Bruss algorithm) Optimal online search for distinguished value in sequential random input Kalman filter: estimate the state of a linear dynamic system from a series of noisy measurements False nearest neighbor algorithm (FNN) estimates fractal dimension Hidden Markov model Baumโ€“Welch algorithm: computes maximum likelihood estimates and posterior mode estimates for the parameters of a hidden Markov model Forward-backward algorithm: a dynamic programming algorithm for computing the probability of a particular observation sequence Viterbi algorithm: find the most likely sequence of hidden states in a hidden Markov model Partial least squares regression: finds a linear model describing some predicted variables in terms of other observable variables Queuing theory Buzen's algorithm: an algorithm for calculating the normalization constant G(K) in the Gordonโ€“Newell theorem RANSAC (an abbreviation for "RANdom SAmple Consensus"): an iterative method to estimate parameters of a mathematical model from a set of observed data which contains outliers Scoring algorithm: is a form of Newton's method used to solve maximum likelihood equations numerically Yamartino method: calculate an approximation to the standard deviation ฯƒฮธ of wind direction ฮธ during a single pass through the incoming data Ziggurat algorithm: generates random numbers from a non-uniform distribution Computer science Computer architecture Tomasulo algorithm: allows sequential instructions that would normally be stalled due to certain dependencies to execute non-sequentially Computer graphics Clipping Line clipping Cohenโ€“Sutherland Cyrusโ€“Beck Fast-clipping Liangโ€“Barsky Nichollโ€“Leeโ€“Nicholl Polygon clipping Sutherlandโ€“Hodgman Vatti Weilerโ€“Atherton Contour lines and Isosurfaces Marching cubes: extract a polygonal mesh of an isosurface from a three-dimensional scalar field (sometimes called voxels) Marching squares: generates contour lines for a two-dimensional scalar field Marching tetrahedrons: an alternative to Marching cubes Discrete Green's Theorem: is an algorithm for computing double integral over a generalized rectangular domain in constant time. 
It is a natural extension to the summed area table algorithm Flood fill: fills a connected region of a multi-dimensional array with a specified symbol Global illumination algorithms: Considers direct illumination and reflection from other objects. Ambient occlusion Beam tracing Cone tracing Image-based lighting Metropolis light transport Path tracing Photon mapping Radiosity Ray tracing Hidden-surface removal or Visual surface determination Newell's algorithm: eliminate polygon cycles in the depth sorting required in hidden-surface removal Painter's algorithm: detects visible parts of a 3-dimensional scenery Scanline rendering: constructs an image by moving an imaginary line over the image Warnock algorithm Line Drawing: graphical algorithm for approximating a line segment on discrete graphical media. Bresenham's line algorithm: plots points of a 2-dimensional array to form a straight line between 2 specified points (uses decision variables) DDA line algorithm: plots points of a 2-dimensional array to form a straight line between 2 specified points (uses floating-point math) Xiaolin Wu's line algorithm: algorithm for line antialiasing. Midpoint circle algorithm: an algorithm used to determine the points needed for drawing a circle Ramerโ€“Douglasโ€“Peucker algorithm: Given a 'curve' composed of line segments to find a curve not too dissimilar but that has fewer points Shading Gouraud shading: an algorithm to simulate the differing effects of light and colour across the surface of an object in 3D computer graphics Phong shading: an algorithm to interpolate surface normal-vectors for surface shading in 3D computer graphics Slerp (spherical linear interpolation): quaternion interpolation for the purpose of animating 3D rotation Summed area table (also known as an integral image): an algorithm for computing the sum of values in a rectangular subset of a grid in constant time Cryptography Asymmetric (public key) encryption: ElGamal Elliptic curve cryptography MAE1 NTRUEncrypt RSA Digital signatures (asymmetric authentication): DSA, and its variants: ECDSA and Deterministic ECDSA EdDSA (Ed25519) RSA Cryptographic hash functions (see also the section on message authentication codes): BLAKE MD5 โ€“ Note that there is now a method of generating collisions for MD5 RIPEMD-160 SHA-1 โ€“ Note that there is now a method of generating collisions for SHA-1 SHA-2 (SHA-224, SHA-256, SHA-384, SHA-512) SHA-3 (SHA3-224, SHA3-256, SHA3-384, SHA3-512, SHAKE128, SHAKE256) Tiger (TTH), usually used in Tiger tree hashes WHIRLPOOL Cryptographically secure pseudo-random number generators Blum Blum Shub โ€“ based on the hardness of factorization Fortuna, intended as an improvement on Yarrow algorithm Linear-feedback shift register (note: many LFSR-based algorithms are weak or have been broken) Yarrow algorithm Key exchange Diffieโ€“Hellman key exchange Elliptic-curve Diffieโ€“Hellman (ECDH) Key derivation functions, often used for password hashing and key stretching bcrypt PBKDF2 scrypt Argon2 Message authentication codes (symmetric authentication algorithms, which take a key as a parameter): HMAC: keyed-hash message authentication Poly1305 SipHash Secret sharing, Secret Splitting, Key Splitting, M of N algorithms Blakey's Scheme Shamir's Scheme Symmetric (secret key) encryption: Advanced Encryption Standard (AES), winner of NIST competition, also known as Rijndael Blowfish Twofish Threefish Data Encryption Standard (DES), sometimes DE Algorithm, winner of NBS selection competition, replaced by AES for most purposes 
IDEA RC4 (cipher) Tiny Encryption Algorithm (TEA) Salsa20, and its updated variant ChaCha20 Post-quantum cryptography Proof-of-work algorithms Digital logic Boolean minimization Quineโ€“McCluskey algorithm: Also called as Q-M algorithm, programmable method for simplifying the boolean equations. Petrick's method: Another algorithm for boolean simplification. Espresso heuristic logic minimizer: Fast algorithm for boolean function minimization. Machine learning and statistical classification ALOPEX: a correlation-based machine-learning algorithm Association rule learning: discover interesting relations between variables, used in data mining Apriori algorithm Eclat algorithm FP-growth algorithm One-attribute rule Zero-attribute rule Boosting (meta-algorithm): Use many weak learners to boost effectiveness AdaBoost: adaptive boosting BrownBoost: a boosting algorithm that may be robust to noisy datasets LogitBoost: logistic regression boosting LPBoost: linear programming boosting Bootstrap aggregating (bagging): technique to improve stability and classification accuracy Computer Vision Grabcut based on Graph cuts Decision Trees C4.5 algorithm: an extension to ID3 ID3 algorithm (Iterative Dichotomiser 3): use heuristic to generate small decision trees Clustering: a class of unsupervised learning algorithms for grouping and bucketing related input vector. k-nearest neighbors (k-NN): a method for classifying objects based on closest training examples in the feature space Lindeโ€“Buzoโ€“Gray algorithm: a vector quantization algorithm used to derive a good codebook Locality-sensitive hashing (LSH): a method of performing probabilistic dimension reduction of high-dimensional data Neural Network Backpropagation: A supervised learning method which requires a teacher that knows, or can calculate, the desired output for any given input Hopfield net: a Recurrent neural network in which all connections are symmetric Perceptron: the simplest kind of feedforward neural network: a linear classifier. Pulse-coupled neural networks (PCNN): Neural models proposed by modeling a cat's visual cortex and developed for high-performance biomimetic image processing. Radial basis function network: an artificial neural network that uses radial basis functions as activation functions Self-organizing map: an unsupervised network that produces a low-dimensional representation of the input space of the training samples Random forest: classify using many decision trees Reinforcement learning: Q-learning: learns an action-value function that gives the expected utility of taking a given action in a given state and following a fixed policy thereafter Stateโ€“Actionโ€“Rewardโ€“Stateโ€“Action (SARSA): learn a Markov decision process policy Temporal difference learning Relevance-Vector Machine (RVM): similar to SVM, but provides probabilistic classification Supervised learning: Learning by examples (labelled data-set split into training-set and test-set) Support-Vector Machine (SVM): a set of methods which divide multidimensional data by finding a dividing hyperplane with the maximum margin between the two sets Structured SVM: allows training of a classifier for general structured output labels. 
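A minimal sketch of the perceptron mentioned above; the learning rate, epoch count, and toy data set are arbitrary choices for illustration, and labels are assumed to be -1 or +1.

def train_perceptron(samples, labels, epochs=20, learning_rate=1.0):
    """Train a single-layer perceptron (linear classifier) on labels in {-1, +1}."""
    n_features = len(samples[0])
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            if y * activation <= 0:                     # misclassified: update
                weights = [w + learning_rate * y * xi for w, xi in zip(weights, x)]
                bias += learning_rate * y
    return weights, bias

# Linearly separable toy data: label +1 only when both inputs are 1 (logical AND).
X = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
y = [-1, -1, -1, 1]
print(train_perceptron(X, y))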
Winnow algorithm: related to the perceptron, but uses a multiplicative weight-update scheme Programming language theory C3 linearization: an algorithm used primarily to obtain a consistent linearization of a multiple inheritance hierarchy in object-oriented programming Chaitin's algorithm: a bottom-up, graph coloring register allocation algorithm that uses cost/degree as its spill metric Hindleyโ€“Milner type inference algorithm Rete algorithm: an efficient pattern matching algorithm for implementing production rule systems Sethi-Ullman algorithm: generates optimal code for arithmetic expressions Parsing CYK algorithm: An O(n3) algorithm for parsing context-free grammars in Chomsky normal form Earley parser: Another O(n3) algorithm for parsing any context-free grammar GLR parser:An algorithm for parsing any context-free grammar by Masaru Tomita. It is tuned for deterministic grammars, on which it performs almost linear time and O(n3) in worst case. Inside-outside algorithm: An O(n3) algorithm for re-estimating production probabilities in probabilistic context-free grammars LL parser: A relatively simple linear time parsing algorithm for a limited class of context-free grammars LR parser: A more complex linear time parsing algorithm for a larger class of context-free grammars. Variants: Canonical LR parser LALR (look-ahead LR) parser Operator-precedence parser SLR (Simple LR) parser Simple precedence parser Packrat parser: A linear time parsing algorithm supporting some context-free grammars and parsing expression grammars Recursive descent parser: A top-down parser suitable for LL(k) grammars Shunting-yard algorithm: convert an infix-notation math expression to postfix Pratt parser Lexical analysis Quantum algorithms Deutschโ€“Jozsa algorithm: criterion of balance for Boolean function Grover's algorithm: provides quadratic speedup for many search problems Shor's algorithm: provides exponential speedup (relative to currently known non-quantum algorithms) for factoring a number Simon's algorithm: provides a provably exponential speedup (relative to any non-quantum algorithm) for a black-box problem Theory of computation and automata Hopcroft's algorithm, Moore's algorithm, and Brzozowski's algorithm: algorithms for minimizing the number of states in a deterministic finite automaton Powerset construction: Algorithm to convert nondeterministic automaton to deterministic automaton. 
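The shunting-yard algorithm listed above can be sketched for a small operator set; this version assumes the expression is already tokenized and handles only left-associative binary operators and parentheses.

def shunting_yard(tokens):
    """Dijkstra's shunting-yard algorithm: convert a tokenized infix expression
    (numbers, + - * /, parentheses) to postfix (reverse Polish) notation."""
    precedence = {'+': 1, '-': 1, '*': 2, '/': 2}
    output, operators = [], []
    for tok in tokens:
        if tok in precedence:
            while (operators and operators[-1] in precedence
                   and precedence[operators[-1]] >= precedence[tok]):
                output.append(operators.pop())
            operators.append(tok)
        elif tok == '(':
            operators.append(tok)
        elif tok == ')':
            while operators and operators[-1] != '(':
                output.append(operators.pop())
            operators.pop()                      # discard the '('
        else:                                    # operand
            output.append(tok)
    while operators:
        output.append(operators.pop())
    return output

print(shunting_yard(['3', '+', '4', '*', '(', '2', '-', '1', ')']))
# ['3', '4', '2', '1', '-', '*', '+']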
Tarskiโ€“Kuratowski algorithm: a non-deterministic algorithm which provides an upper bound for the complexity of formulas in the arithmetical hierarchy and analytical hierarchy Information theory and signal processing Coding theory Error detection and correction BCH Codes Berlekampโ€“Massey algorithm Petersonโ€“Gorensteinโ€“Zierler algorithm Reedโ€“Solomon error correction BCJR algorithm: decoding of error correcting codes defined on trellises (principally convolutional codes) Forward error correction Gray code Hamming codes Hamming(7,4): a Hamming code that encodes 4 bits of data into 7 bits by adding 3 parity bits Hamming distance: sum number of positions which are different Hamming weight (population count): find the number of 1 bits in a binary word Redundancy checks Adler-32 Cyclic redundancy check Damm algorithm Fletcher's checksum Longitudinal redundancy check (LRC) Luhn algorithm: a method of validating identification numbers Luhn mod N algorithm: extension of Luhn to non-numeric characters Parity: simple/fast error detection technique Verhoeff algorithm Lossless compression algorithms Burrowsโ€“Wheeler transform: preprocessing useful for improving lossless compression Context tree weighting Delta encoding: aid to compression of data in which sequential data occurs frequently Dynamic Markov compression: Compression using predictive arithmetic coding Dictionary coders Byte pair encoding (BPE) Deflate Lempelโ€“Ziv LZ77 and LZ78 Lempelโ€“Ziv Jeff Bonwick (LZJB) Lempelโ€“Zivโ€“Markov chain algorithm (LZMA) Lempelโ€“Zivโ€“Oberhumer (LZO): speed oriented Lempelโ€“Zivโ€“Stac (LZS) Lempelโ€“Zivโ€“Storerโ€“Szymanski (LZSS) Lempelโ€“Zivโ€“Welch (LZW) LZWL: syllable-based variant LZX Lempelโ€“Ziv Ross Williams (LZRW) Entropy encoding: coding scheme that assigns codes to symbols so as to match code lengths with the probabilities of the symbols Arithmetic coding: advanced entropy coding Range encoding: same as arithmetic coding, but looked at in a slightly different way Huffman coding: simple lossless compression taking advantage of relative character frequencies Adaptive Huffman coding: adaptive coding technique based on Huffman coding Package-merge algorithm: Optimizes Huffman coding subject to a length restriction on code strings Shannonโ€“Fano coding Shannonโ€“Fanoโ€“Elias coding: precursor to arithmetic encoding Entropy coding with known entropy characteristics Golomb coding: form of entropy coding that is optimal for alphabets following geometric distributions Rice coding: form of entropy coding that is optimal for alphabets following geometric distributions Truncated binary encoding Unary coding: code that represents a number n with n ones followed by a zero Universal codes: encodes positive integers into binary code words Elias delta, gamma, and omega coding Exponential-Golomb coding Fibonacci coding Levenshtein coding Fast Efficient & Lossless Image Compression System (FELICS): a lossless image compression algorithm Incremental encoding: delta encoding applied to sequences of strings Prediction by partial matching (PPM): an adaptive statistical data compression technique based on context modeling and prediction Run-length encoding: lossless data compression taking advantage of strings of repeated characters SEQUITUR algorithm: lossless compression by incremental grammar inference on a string Lossy compression algorithms 3Dc: a lossy data compression algorithm for normal maps Audio and Speech compression A-law algorithm: standard companding algorithm Code-excited linear prediction 
(CELP): low bit-rate speech compression Linear predictive coding (LPC): lossy compression by representing the spectral envelope of a digital signal of speech in compressed form Mu-law algorithm: standard analog signal compression or companding algorithm Warped Linear Predictive Coding (WLPC) Image compression Block Truncation Coding (BTC): a type of lossy image compression technique for greyscale images Embedded Zerotree Wavelet (EZW) Fast Cosine Transform algorithms (FCT algorithms): computes Discrete Cosine Transform (DCT) efficiently Fractal compression: method used to compress images using fractals Set Partitioning in Hierarchical Trees (SPIHT) Wavelet compression: form of data compression well suited for image compression (sometimes also video compression and audio compression) Transform coding: type of data compression for "natural" data like audio signals or photographic images Video compression Vector quantization: technique often used in lossy data compression Digital signal processing Adaptive-additive algorithm (AA algorithm): find the spatial frequency phase of an observed wave source Discrete Fourier transform: determines the frequencies contained in a (segment of a) signal Bluestein's FFT algorithm Bruun's FFT algorithm Cooley–Tukey FFT algorithm Fast Fourier transform Prime-factor FFT algorithm Rader's FFT algorithm Fast folding algorithm: an efficient algorithm for the detection of approximately periodic events within time series data Gerchberg–Saxton algorithm: Phase retrieval algorithm for optical planes Goertzel algorithm: identify a particular frequency component in a signal. Can be used for DTMF digit decoding. Karplus-Strong string synthesis: physical modelling synthesis to simulate the sound of a hammered or plucked string or some types of percussion Image processing Contrast Enhancement Histogram equalization: use histogram to improve image contrast Adaptive histogram equalization: histogram equalization which adapts to local changes in contrast Connected-component labeling: find and label disjoint regions Dithering and half-toning Error diffusion Floyd–Steinberg dithering Ordered dithering Riemersma dithering Elser difference-map algorithm: a search algorithm for general constraint satisfaction problems. Originally used for X-Ray diffraction microscopy Feature detection Canny edge detector: detect a wide range of edges in images Generalised Hough transform Hough transform Marr–Hildreth algorithm: an early edge detection algorithm SIFT (Scale-invariant feature transform): is an algorithm to detect and describe local features in images. SURF (Speeded Up Robust Features): is a robust local feature detector, first presented by Herbert Bay et al. in 2006, that can be used in computer vision tasks like object recognition or 3D reconstruction. It is partly inspired by the SIFT descriptor. The standard version of SURF is several times faster than SIFT and claimed by its authors to be more robust against different image transformations than SIFT. Richardson–Lucy deconvolution: image de-blurring algorithm Blind deconvolution: image de-blurring algorithm when point spread function is unknown. 
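The Goertzel algorithm mentioned above evaluates a single frequency bin without computing a full FFT; in the sketch below the sample rate, tone frequency, and block length are invented for the example.

import math

def goertzel_power(samples, sample_rate, target_freq):
    """Goertzel algorithm: squared magnitude of one DFT bin of a real-valued signal."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)       # nearest DFT bin to the target
    omega = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(omega)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2            # second-order recurrence
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

# A 440 Hz tone sampled at 8 kHz shows far more power at 440 Hz than at 1000 Hz.
tone = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(800)]
print(goertzel_power(tone, 8000, 440) > goertzel_power(tone, 8000, 1000))  # True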
Median filtering Seam carving: content-aware image resizing algorithm Segmentation: partition a digital image into two or more regions GrowCut algorithm: an interactive segmentation algorithm Random walker algorithm Region growing Watershed transformation: a class of algorithms based on the watershed analogy Software engineering Cache algorithms CHS conversion: converting between disk addressing systems Double dabble: Convert binary numbers to BCD Hash Function: convert a large, possibly variable-sized amount of data into a small datum, usually a single integer that may serve as an index into an array Fowlerโ€“Nollโ€“Vo hash function: fast with low collision rate Pearson hashing: computes 8 bit value only, optimized for 8 bit computers Zobrist hashing: used in the implementation of transposition tables Unicode Collation Algorithm Xor swap algorithm: swaps the values of two variables without using a buffer Database algorithms Algorithms for Recovery and Isolation Exploiting Semantics (ARIES): transaction recovery Join algorithms Block nested loop Hash join Nested loop join Sort-Merge Join Distributed systems algorithms Clock synchronization Berkeley algorithm Cristian's algorithm Intersection algorithm Marzullo's algorithm Consensus (computer science): agreeing on a single value or history among unreliable processors Chandraโ€“Toueg consensus algorithm Paxos algorithm Raft (computer science) Detection of Process Termination Dijkstra-Scholten algorithm Huang's algorithm Lamport ordering: a partial ordering of events based on the happened-before relation Leader election: a method for dynamically selecting a coordinator Bully algorithm Mutual exclusion Lamport's Distributed Mutual Exclusion Algorithm Naimi-Trehel's log(n) Algorithm Maekawa's Algorithm Raymond's Algorithm Ricartโ€“Agrawala Algorithm Snapshot algorithm: record a consistent global state for an asynchronous system Chandyโ€“Lamport algorithm Vector clocks: generate a partial ordering of events in a distributed system and detect causality violations Memory allocation and deallocation algorithms Buddy memory allocation: Algorithm to allocate memory such that fragmentation is less. Garbage collectors Cheney's algorithm: An improvement on the Semi-space collector Generational garbage collector: Fast garbage collectors that segregate memory by age Mark-compact algorithm: a combination of the mark-sweep algorithm and Cheney's copying algorithm Mark and sweep Semi-space collector: An early copying collector Reference counting Networking Karn's algorithm: addresses the problem of getting accurate estimates of the round-trip time for messages when using TCP Luleรฅ algorithm: a technique for storing and searching internet routing tables efficiently Network congestion Exponential backoff Nagle's algorithm: improve the efficiency of TCP/IP networks by coalescing packets Truncated binary exponential backoff Operating systems algorithms Banker's algorithm: Algorithm used for deadlock avoidance. Page replacement algorithms: Selecting the victim page under low memory conditions. 
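As a baseline for the page-replacement policies that follow, a least-recently-used (LRU) cache can be sketched with Python's collections.OrderedDict; the two-entry capacity is chosen only for the demonstration.

from collections import OrderedDict

class LRUCache:
    """Least-recently-used eviction: the baseline policy that schemes such as
    ARC and CAR aim to improve on."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)          # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)   # evict the least recently used

cache = LRUCache(2)
cache.put("a", 1); cache.put("b", 2); cache.get("a"); cache.put("c", 3)
print(list(cache.entries))  # ['a', 'c']  ("b" was evicted)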
Adaptive replacement cache: better performance than LRU Clock with Adaptive Replacement (CAR): is a page replacement algorithm that has performance comparable to Adaptive replacement cache Process synchronization Dekker's algorithm Lamport's Bakery algorithm Peterson's algorithm Scheduling Earliest deadline first scheduling Fair-share scheduling Least slack time scheduling List scheduling Multi level feedback queue Rate-monotonic scheduling Round-robin scheduling Shortest job next Shortest remaining time Top-nodes algorithm: resource calendar management I/O scheduling Disk scheduling Elevator algorithm: Disk scheduling algorithm that works like an elevator. Shortest seek first: Disk scheduling algorithm to reduce seek time. Other 'For You' algorithm: a proprietary algorithm developed by the social media network Tik-Tok. Uploaded videos are released first to a selection of users who have been identified by the algorithm as being likely to engage with the video, based on their previous web-site viewing patterns. See also List of data structures List of machine learning algorithms List of pathfinding algorithms List of algorithm general topics List of terms relating to algorithms and data structures Heuristic References Algorithms
58402580
https://en.wikipedia.org/wiki/Aprimo
Aprimo
Aprimo (/æ,primo/) is a United States-based company that develops and sells marketing automation software and digital asset management (DAM) technology for marketing and customer experience departments in enterprise organizations. Its software is designed to help manage the behind-the-scenes activities involved in marketing. History Early History Aprimo was founded in Indianapolis in 1998 by former executives of Software Artistry, which had recently been purchased by IBM. There are suggestions that it was the first supplier of Marketing resource management (MRM) software; it was certainly one of the earliest providers. In 2004, it made its first acquisition, buying British software developer Then. The following year, 2005, saw Aprimo acquire the EMS business of DoubleClick together with about 70 customers before the remainder of that organisation went to Hellman & Friedman. By 2007, Aprimo had about 250 employees and its clients included Bank of America, Nestlé, Warner Bros., and Toyota. Teradata In 2011, the company was acquired by Teradata in a $525 million transaction. Marlin Teradata sold Aprimo in 2016 to Marlin Equity Partners for $90 million; the new owner merged it with Revenew and relocated its headquarters to Chicago. In 2017, Aprimo acquired Belgian company ADAM Software. Products and services The company's products include Digital Asset Management, software for managing videos, images, documents, and other assets; Productivity Management, software for managing ideas, plans, and production workflows; Plan & Spend, a budget planning system; Distributed Marketing, which coordinates marketing activities; and Campaign, a system that offers automated marketing data segmentation. In 2017, according to the company, it moved its products to SaaS-based systems running on the Microsoft Azure cloud computing service. Operations Aprimo is headquartered in Chicago, with R&D and customer service operations primarily based in Indianapolis. References External links 1998 establishments in Indiana Companies based in Chicago Software companies established in 1998 American companies established in 1998 Privately held companies based in Illinois
26314763
https://en.wikipedia.org/wiki/Document%20comparison
Document comparison
Document comparison, also known as redlining or blacklining, is a computer process by which changes are identified between two versions of the same document for the purposes of document editing and review. Document comparison is a common task in the legal and financial industries. The software-based document comparison process compares a reference document to a target document, and produces a third document which indicates (by colored highlighting or by differing font characteristics) information (text, graphics, formulas, etc.) that has either been added to or removed from the reference document to produce the target document. Common documents formats for comparison include word processing documents (e.g. Microsoft Word), spreadsheets, presentations (e.g. PowerPoint), and Portable Document Format (PDF) documents. Overview In the broadest definition, document comparison can refer to any act of marking changes made between two versions of the same document and presenting those changes in a third document via a graphical user interface (GUI). There are several variants in the types of changes registered through the process of document comparison. Some programs limit comparison to solely text and table content in word processing documents, while others register changes made in spreadsheets and presentations, along with changes made in versions of PDF documents. Certain programs also exist that compare changes made to objects like JPEG, TIFF, BMP, PNG images embedded in documents, and plain text files. Document comparison solutions mark changes made to the following types of documents: It is common for document comparison software vendors to present forms of the compared document in separate windows in a GUI. Each window contains the following items and the various windows are displayed on one or more computer display monitors: the original document the modified document the redline (or comparison) document, and the list of changes made between document versions. Presentation of changes made between document versions are also traditionally customizable. While one standard display of showing deletions with red underlines and additions with blue underlines is still used by many document comparison products, some programs allow users to customize the presentation of changes in the redline/comparison document. U.S. contract lawyers typically show deletions as red strikethrough text (red text with a line crossing off the words being deleted) and additions with red underlines. History Prior to personal computers, document comparison entailed the printing of two versions of a single document and reviewing those hard copies in detail for changes and version amendment. Included in this process were the potential for human error and the expansive administrative time necessitated by this arduous process. A ruler was used with a red pen to draw strike-through lines of deleted text and double-underline inserted text. The term "redline" came from using a red pen on the original/current version. When the document was placed in a copy machine, the copies came out black, thus the term "blackline." With the advent of personal computers and the ubiquity of word processing software, the need arose to find a way to manage changes made to document versions shared via disk, and later email. The importance of mitigating risks associated with potential document changes became essential as the amount of document and revision sharing increased. 
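The core text-comparison step described above can be illustrated with Python's standard difflib module, which marks removed and added lines much like a plain-text redline; the two document versions below are invented examples.

import difflib

reference = ["The party shall deliver goods.", "Payment is due in 30 days."]
target = ["The party shall deliver all goods.", "Payment is due in 45 days.", "Late fees may apply."]

# unified_diff prefixes removed lines with '-' and added lines with '+',
# a plain-text analogue of a redline/blackline document.
for line in difflib.unified_diff(reference, target, fromfile="v1", tofile="v2", lineterm=""):
    print(line)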
Early document comparison software solutions provided robust document review, checking all the text in two documents for changes, and then presenting those changes in a third redline/comparison version. As documents changed and evolved, so did document comparison solutions. Software began utilizing tables to manage a multiplicity of document layouts. Many document comparison solutions had difficulty comparing tables in document versions. These solutions first converted tables to text arrays and then compared the created arrays. In many cases, not enough due diligence on the software's part was conducted; users would not be informed of sections that were not successfully compared. In the second generation, Microsoft's Track Changes option was also introduced. With Track Changes, all changes made to documents were captured and stored inside the document. Flaws in the functionality of Track Changes could render the documents unusable, and some comparison offerings again had difficulty managing the complex process of comparing in a Track Changes environment. Before third generation technology, it was common for organizations to be required to use multiple documents for one product. A main document with various supporting documents would be used to present and share necessary information. However, later software (especially Microsoft Word) enabled multiple types of information to be presented in a single document. Compound documents could include text, tables, and various styles, and could also include a range of embedded objects, such as Excel, Visio, ChemDraw, and SmartDraw objects, and inserted images in a range of types (including jpg, tiff, bmp, and gif). While this enhancement greatly increased the usefulness of documents, it added an entirely new layer of risk to organizations that needed to fully understand changes made to document versions. The majority of document comparison software programs have not yet included mechanisms to mitigate the risk related to changes inside of embedded objects. Software that can compare embedded objects provides pixel-to-pixel comparison of images, cell-level comparison of embedded Excel spreadsheets, and detection of other changes made to these complex, compound documents. Business relevance Document comparison provides a method of quality assurance. Individuals and organizations are able to assure that changes requested have been integrated properly into documents. Additionally, document comparison provides assurances that no unwarranted changes were made. Document comparison in the legal industry Lawyers and legal professionals regularly share documents with opposing counsel. As the documents constructed in this business vertical may be binding on either side's clients, it is essential that the risks associated with changes are completely mitigated. If opposing counsel makes a change that is not detected by the lawyer, such a change could negatively affect the lawyer's client and the lawyer could be liable for the damages. Document comparison in banking, finance and accounting Professionals in the banking, finance and accounting industries manage large amounts of data in spreadsheets. As one change to a value or formula could affect a substantial amount of data, these professionals find document comparison (such as comparison of two versions of an MS Excel spreadsheet) to be extremely useful in assuring accuracy in document change management. 
Creative media management and publishing Professionals in these industries regularly work with multiple versions of single documents. Document comparison software helps these professionals ensure that all changes have been acceptably integrated into latest versions and provides them with a speedy understanding of changes made in editing and versioning of the documents they work with. See also File comparison References External links Technical communication tools Document comparison tool https://draftable.com/
13810684
https://en.wikipedia.org/wiki/SoftServe
SoftServe
SoftServe, Inc., founded in 1993 in Lviv, Ukraine, is a technology company specializing in consultancy services and software development. SoftServe provides services in the fields of big data, Internet of things, cloud computing, DevOps, e-commerce, computer security, experience design, and health care. With its United States headquarters in Austin, Texas and European headquarters in Lviv, Ukraine, the company employed more than 10,000 people in offices in 2021. It is one of the largest employers of software developers in Eastern Europe, and the largest outsourcing and outstaffing IT company in Ukraine. History SoftServe was founded in 1993 in Lviv, Ukraine. Started by two post-graduate students of Lviv Polytechnic, it began as a software development company with headquarters in Lviv. The company was initially supported by the Rensselaer Polytechnic Institute Incubator Center and its first known client was General Electric. The company opened its first office in the United States in 2000. SoftServe was instrumental in building Microsoft's Bird's Eye service in 2004, which used the same concept that Google later used for Google Street View. For its work on the project, SoftServe was invited to speak at Microsoft's annual conference, where it was used as an example of business applications that could be built by technology corporations. In 2006, SoftServe founded SoftServe University. It became the company's corporate training program for improving developers and retraining specialists. Based in Ukraine, it also offers international IT Professional certificates to employees who complete the program. With the launch of SoftServe University, the company became the first to establish a corporate university in Ukraine. In 2008, SoftServe also founded Lviv Business School at Ukrainian Catholic University. SoftServe opened its United States headquarters in Fort Myers, Florida in 2008 and began holding an annual conference. By 2012, SoftServe was one of the largest IT outsourcing companies in Ukraine with 2,189 employees, third only to EPAM Systems and Luxoft. In 2014, SoftServe moved its United States headquarters from Florida to One Congress Plaza in Austin, Texas, where it had operated an office since 2013. The same year, SoftServe opened offices in London, Amsterdam, Sofia, Wroclaw, and Stockholm, and its employee base climbed to 3,900. Also in 2014, the company acquired Amsterdam-based tech services firm Initium Consulting Group BV, founded in 2012 and serving mainly the healthcare and private equity industries. SoftServe also acquired European IT company UGE UkrGermanEnterprise GmbH. In 2015 SoftServe opened a new European headquarters in Lviv, Ukraine. It also organized an event in San Francisco, California along with IT professionals from Ukraine and members of the Ukraine consulate to address concerns about the country's operations in light of the geo-political situation in Ukraine. The same year the company named Chris Baker as the new CEO, taking over the role from co-founder Taras Kytsmey. In January 2017, SoftServe acquired Wroclaw-based Coders Center for between $1.5 million and $3 million. In September 2020, the company was targeted by a ransomware attack; in response, SoftServe shut down many of its internal systems to try to stop the spread of the virus. 
SoftServe says there is no evidence that the virus spread to customers' systems, and most of SoftServe's internal systems were back online within a few hours to a few days. The hack resulted in pieces of unfinished customer source code and other information being shared on the internet. The individual who claims to be behind the hack, 'Freedomf0x', also published fragmented personal information of about 200 individuals, but whether this information is linked to SoftServe employees is unclear. The attack targeted the company by exploiting the Windows tool Rainmeter. In response to the breach, SoftServe partnered with multiple cyber and data security firms and instituted new security policies. Growth Around 2013 the company began large-scale growth. It opened new offices in the United States, Poland, London, Amsterdam, Sofia, and Stockholm, and began a growth streak of more than 20% per year that was still running as of 2020. About the same time it reached $100 million in yearly revenue. In the following years SoftServe also purchased Initium Consulting Group BV and UGE UkrGermanEnterprise GmbH. Chris Baker was named the company's new CEO as growth continued and the company opened more offices in the United States and Europe. In 2018 the company's revenue was estimated to have surpassed the $250 million mark. By the end of 2020, despite the global pandemic, the company grew by at least 20%, reaching an estimated $450 million in revenue. The company is targeting a yearly revenue of $1 billion by 2025. Products, Services and Partnerships SoftServe is a software application development company as well as a consulting firm. Its services include software optimization, software as a service, cloud computing, mobile, UI/UX, analytics, and security. It provides its services mainly in the healthcare, retail, and technology sectors. One of the "SoftServe Business System" divisions also releases its own products, which are specially designed for Ukraine in order to find new technological solutions in IT. SoftServe has continuing partnerships with Amazon Web Services, Google Cloud, Microsoft, Salesforce, Apigee, and other organizations. Awards and Recognition Since 2004, SoftServe has been a member of the Microsoft Partner Ecosystem and was a finalist for the global Microsoft Partner of the Year award in both 2006 and 2007. The company was recognized for the same award in 2008 and 2009 in Eastern Europe. SoftServe has continued to receive recognition from Microsoft, including being named Microsoft Ukraine's Partner of the Year for Innovation in Business Analytics in 2012. The company has won additional awards across the industry, including being named to the Global Outsourcing 100 list in 2010, 2011, 2013, 2014, and 2015. In 2010 SoftServe was named Ukraine's Best Employer by Hewitt Associates, and in 2011 it was named Best Employer in Eastern Europe. In 2019, SoftServe ranked seventh out of more than 130 Western European companies in the Clutch software development category. See also EPAM DataArt Ciklum Infopulse Ukraine Eleks References Software companies of Ukraine Outsourcing companies Software companies established in 1993 Economy of Lviv Economy of Austin, Texas Companies based in Lviv
22132318
https://en.wikipedia.org/wiki/68K/OS
68K/OS
68K/OS was a computer operating system developed by GST Computer Systems for the Sinclair QL microcomputer. It was commissioned by Sinclair Research in February 1983. However, after the official launch of the QL in January 1984, 68K/OS was rejected, and production QLs shipped with Sinclair's own Qdos operating system. GST later released 68K/OS as an alternative to Qdos, in the form of an EPROM expansion card, and also planned to use it on single-board computers based on the QL's hardware. The operating system was developed by Chris Scheybeler, Tim Ward, Howard Chalkley and others. Because few ROM cards were made, surviving examples now fetch high prices: on 4 February 2010, one sold for £310 on eBay. References External links GST Assembler, Adder Assembler - Sinclair User, April 1985 QL Pictures Gallery 68k/OS manuals and documentation Proprietary operating systems Sinclair Research Discontinued operating systems 68k architecture
2096603
https://en.wikipedia.org/wiki/IP%20Pascal
IP Pascal
IP Pascal is an implementation of the Pascal programming language using the IP portability platform, a multiple machine, operating system and language implementation system. It implements the language "Pascaline" (named after Blaise Pascal's calculator), and has passed the Pascal Validation Suite. Overview IP Pascal implements the language "Pascaline" (named after Blaise Pascal's calculator), which is a highly extended superset of ISO 7185 Pascal. It adds modularity with namespace control, including the parallel tasking monitor concept, dynamic arrays, overloads and overrides, objects, and a host of other minor extensions to the language. IP implements a porting platform, including a widget toolkit, TCP/IP library, MIDI and sound library and other functions, that allows both programs written under IP Pascal, and IP Pascal itself, to move to multiple operating systems and machines. IP Pascal is one of the few Pascal implementations still in existence that has passed the Pascal Validation Suite, a large suite of tests created to verify compliance with ISO 7185 Pascal. Although Pascaline extends ISO 7185 Pascal, it does not reduce the type safety of Pascal (as many other dialects of Pascal have by including so-called "type escapes"). The functionality of the language is similar to that of C# (which implements a C++-like language but with the type insecurities removed), and Pascaline can be used anywhere that managed programs can be used (even though it is based on a language 30 years older than C#). Open or closed status The author of the Pascaline language has stated that there is no wish to have it remain a proprietary language. IP Pascal is sold as an implementation of Pascaline, but the language itself can and should be open, and should have quality implementations. To that end, the full specification for Pascaline will be published online, and the long-term intention is to create a version of the open source P5 compiler-interpreter (an ISO 7185 version of Wirth's P4 compiler-interpreter) which implements Pascaline compliance. This will be known as the P6 compiler, and it will also be openly published and distributed. The value of IP Pascal as a commercial product will be based on the IDE and compiler-encoder resources of that system. This article follows a fairly old version of Pascaline. A newer version of Pascaline exists as Pascal-P6, part of the Pascal-P series. See the references below. Language IP Pascal starts with ISO 7185 Pascal (which standardized Niklaus Wirth's original language), and adds: Modules, including parallel task constructs process, monitor and share module mymod(input, output); uses extlib; const one = 1; type string = packed array of char; procedure wrtstr(view s: string); private var s: string; procedure wrtstr(view s: string); var i: integer; begin for i := 1 to max(s) do write(s[i]) end; begin { initialize monitor } end; begin { shutdown monitor } end. Modules have entry and exit sections. Declarations in modules form their own interface specifications, and it is not necessary to have both interface and implementation sections. If a separate interface declaration file is needed, it is created by stripping the code out of a module and creating a "skeleton" of the module. This is typically done only if the object for a module is to be sent out without the source. Modules must occupy a single file, and modules reference other modules via a uses or joins statement. To allow this, a module must bear the same name as its file name. 
The uses statement indicates that the referenced module will have its global declarations merged with the referencing module, and any name conflicts that result will cause an error. The joins statement will cause the referenced module to be accessible via the referencing module, but does not merge the name spaces of the two modules. Instead, the referencing module must use a so-called "qualified identifier", giving the name of the referenced module followed by the identifier within it (for example, mymod.one). A program from ISO 7185 Pascal is directly analogous to a module, and is effectively a module without an exit section. Because all modules in the system are "daisy chained" such that each is executed in order, a program assumes "command" of the program simply because it does not exit its initialization until its full function is complete, unlike a module, which does. In fact, it is possible to have multiple program sections, which would execute in sequence. A process module, like a program module, has only an initialization section, and runs its start, full function and completion in that section. However, it gets its own thread for execution aside from the main thread that runs program modules. As such, it can only call monitor and share modules. A monitor is a module that includes task locking on each call to an externally accessible procedure or function, and implements communication between tasks by semaphores. A share module, because it has no global data at all, can be used by any other module in the system, and is used to hold library code. Because the module system directly implements multitasking/multithreading using the monitor concept, it solves the majority of multithreading access problems. Data for a module is bound to the code with mutexes or mutually exclusive sections. Subtasks/subthreads are started transparently with the process module. Multiple subtasks/subthreads can access monitors or share modules. A share module is a module without data, which does not need the locking mechanisms of a monitor. Dynamic arrays In IP Pascal, dynamic arrays are considered "containers" for static arrays. The result is that IP Pascal is perhaps the only Pascal where dynamic arrays are fully compatible with the ISO 7185 static arrays from the original language. A static array can be passed into a dynamic array parameter to a procedure or function, or created with new: program test(output); type string = packed array of char; var s: string; procedure wrtstr(view s: string); var i: integer; begin for i := 1 to max(s) do write(s[i]) end; begin new(s, 12); s := 'Hello, world'; wrtstr(s) end. Such "container" arrays can have any number of dimensions. Constant expressions A constant declaration can contain expressions of other constants const a = 10; b = a+10; Radix for numbers $ff, &76, %011000 Alphanumeric goto labels label exit; goto exit; Underscores in all labels var my_number: integer; Underscores in numbers a := 1234_5678; The '_' (break) character can be included anywhere in a number except for the first digit. It is ignored, and serves only to separate digits in the number. Special character sequences that can be embedded in constant strings const str = 'the rain in Spain\cr\lf'; Using standard ISO 8859-1 mnemonics. Duplication of forwarded headers procedure x(i: integer); forward; ... procedure x(i: integer); begin ... end; This makes it easier to declare a forward by cut-and-paste, and keeps the parameters of the procedure or function in the actual header where they can be seen. 
halt procedure procedure error(view s: string); begin writeln('*** Error: ', s:0); halt { terminate program } end; Special predefined header files program prog(input, output, list); begin writeln(list, 'Start of listing:'); ... program echo(command, output); var c: char; begin while not eof(command) do begin read(command, c); write(c) end end. program newprog(input, output, error); begin ... writeln(error, 'Bad parameter'); halt ... 'command' is a file that connects to the command line, so that it can be read using normal file read operations. Automatic connection of program header files to command line names program copy(source, destination); var source, destination: text; c: char; begin reset(source); rewrite(destination); while not eof(source) do begin while not eoln(source) do begin read(source, c); write(destination, c) end; readln(source); writeln(destination) end end. 'source' and 'destination' files are automatically connected to the parameters on the command line for the program. File naming and handling operations program extfile(output); var f: file of integer; begin assign(f, 'myfile'); { set name of external file } update(f); { keep existing file, and set to write mode } position(f, length(f)); { position to end of file to append to it } writeln('The end of the file is: ', location(f)); { tell user location of new element } write(f, 54321); { write new last element } close(f) { close the file } end. fixed declarations which declare structured constant types fixed table: array [1..5] of record a: integer; b: packed array [1..10] of char end = array record 1, 'data1 ' end, record 2, 'data2 ' end, record 3, 'data3 ' end, record 4, 'data4 ' end, record 5, 'data5 ' end end; Boolean bit operators program test; var a, b: integer; begin a := a and b; b := b or $a5; a := not b; b := a xor b end. Extended range variables program test; var a: linteger; b: cardinal; c: lcardinal; d: 1..maxint*2; ... Extended range specifications give rules for scalars that lie outside the range of -maxint..maxint. It is implementation-specific as to just how large a number is possible, but Pascaline defines a series of standard types that exploit the extended ranges, including linteger for double-range integers, cardinal for unsigned integers, and lcardinal for unsigned double-range integers. Pascaline also defines new limit constants for these types, analogous to maxint. Semaphores monitor test; var notempty, notfull: semaphore; procedure enterqueue; begin while nodata do wait(notempty); ... signalone(notfull) end; ... begin end. Semaphores implement task event queuing directly in the language, using the classical methods outlined by Per Brinch Hansen. Overrides module test1; virtual procedure x; begin ... end; program test; joins test1; override procedure x; begin inherited x end; begin end. Overriding a procedure or function in another module effectively "hooks" that routine, replacing the definition for all callers of it, but makes the original definition available to the hooking module. This allows the overriding module to add new functionality to the old procedure or function. This can be implemented to any depth. Overloads procedure x; begin end; overload procedure x(i: integer); begin end; overload function x: integer; begin x := 1 end; Overload "groups" allow a series of procedures and/or functions to be placed under the same name and accessed by their formal parameter or usage "signature". Unlike other languages that implement the concept, Pascaline will not accept overloads as belonging to the same group unless they are unambiguous with respect to each other. 
This means that there is no "priority" of overloads, nor any question as to which routine of an overload group will be executed for any given actual reference. Objects program test; uses baseclass; class alpha; extends beta; type alpha_ref = reference to alpha; var a, b: integer; next: alpha_ref; virtual procedure x(d: integer); begin a := d; self := next end; private var q: integer; begin end. var r: alpha_ref; begin new(r); ... if r is alpha then r.a := 1; r.x(5); ... end. In Pascaline, classes are a dynamic instance of a module (and modules are a static instance of a class). Classes are a code construct (not a type) that exists between modules and procedures and functions. Because a class is a module, it can define any code construct, such as constants, types, variables, fixed, procedures and functions (which become "methods"), and make them public to clients of the class, or hide them with the "private" keyword. Since a class is a module, it can be accessed via a qualified identifier. Classes as modules have automatic access to their namespace as found in C# and C++ in that they do not require any qualification. Outside of the class, all members of the class can be accessed either by qualified identifier or by a reference. A reference is a pointer to the object that is created according to the class. Any number of instances of a class, known as "objects", can be created with the new() statement, and removed with the dispose() statement. Class members that have instance data associated with them, such as variables (or fields) and methods, must be accessed via a reference. A reference is a type, and resembles a pointer, including the ability to have the value nil, and checking for equality with other reference types. It is not required to qualify the pointer access with "^". Pascaline implements the concept of "reference grace" to allow a reference to access any part of the object regardless of whether or not it is per-instance. This characteristic allows class members to be "promoted", that is, moved from constants to variables, and then to "properties" (which are class fields whose read and write access are provided by "get" and "set" methods). Both overloads and overrides are provided for an object's methods. A method that will be overridden must be indicated as virtual. Object methods can change the reference used to access them with the self keyword. Single inheritance only is implemented. Structured exception handling try ... except ... else ...; throw The "try" statement can guard a series of statements, and any exceptions flagged within the code are routed to the statement after "except". The try statement also features an else clause that allows a statement to be executed on normal termination of the try block. Exceptions are raised in the code via the throw() procedure. Try statements allow the program to bail out of any nested block, and serve as a better replacement for intra-procedure gotos (which are still supported under Pascaline). Since unhandled exceptions generate errors by default, the throw() procedure can serve as a general purpose error flagging system. Assertions assert(expression); The system procedure assert causes the program to terminate if the value tested is false. It is typically coupled to a runtime dump or diagnostic, and can be removed by compiler option. 
Unicode IP Pascal can generate either ISO 8859-1 mode programs (8-bit characters) or Unicode mode programs by a simple switch at compile time (unlike many other languages, there is no source difference between Unicode and non-Unicode programs). The ASCII upward-compatible UTF-8 format is used in text files, and these files are read to and from 8- or 16-bit characters internal to the program (the upper 128 characters of ISO 8859-1 are converted to and from UTF-8 format in text files even in an 8-bit character encoded program). Constant for character high limit Similar to maxint, Pascaline has a maxchr, which is the maximum character that exists in the character set (and may not in fact have a graphical representation). The range of the type char is then defined as 0..maxchr. This is an important addition for dealing with types like "set of char", and aids when dealing with different character set options (such as ISO 8859-1 or Unicode). Modular structure IP Pascal uses a unique stacking concept for modules. Each module is stacked one atop the other in memory, and executed at the bottom. The bottom module calls the next module up, and that module calls the next module, and so on. A typical stack consists, for example, of the modules wrapper, serlib, program and cap. The cap module (sometimes called a "cell" in IP Pascal terminology, after a concept in integrated circuit design) terminates the stack, and begins a return process that ripples back down until the program terminates. Each module has its startup or entry section performed on the way up the stack, and its finalization or exit section performed on the way back down. This matches the natural dependencies in a program. The most primitive modules, such as the basic I/O support in "serlib", perform their initialization first, and their finalization last, before and after the higher level modules in the stack. Porting platform IP Pascal has a series of modules (or "libraries") that form a "porting platform". These libraries present an idealized API for each function that applies, such as files and extended operating system functions, graphics, MIDI and sound, etc. The whole collection forms the basis for an implementation on each operating system and machine that IP Pascal appears on. The two important differences between IP Pascal and many other languages that have simply been mated with portable graphics libraries are that: IP Pascal uses its own porting platform for its own low-level code, so that once the platform is created for a particular operating system and machine, both the IP system and the programs it compiles can run on that. This is similar to the way Java and the UCSD Pascal systems work, but with true highly optimized compiled code, not interpreted code or "just in time" compiled code. Since modules can override lower-level functions such as Pascal's "write" statement, normal, unmodified ISO 7185 Pascal programs can also use advanced aspects of the porting platform. This is unlike many or most portable graphics libraries that force the user to use a completely different I/O methodology to access a windowed graphics system, for example C, other Pascals, and Visual Basic. IP modules can also be created that are system independent, and rely only on the porting platform modules. The result is that IP Pascal is very highly portable. Example: The standard "hello world" program is coupled to output into a graphical window. program HelloWorld(output); begin writeln('Hello, World!') end. Example: "hello world" with graphical commands added. 
Note that standard Pascal output statements are still used. program hello(input, output); uses gralib; var er: evtrec; begin bcolor(output, green); curvis(output, false); auto(output, false); page(output); fcolor(output, red); frect(output, 50, 50, maxxg(output)-50, maxyg(output)-50); fcolorg(output, maxint, maxint-(maxint div 3), maxint-maxint div 3); frect(output, 50, 50, 53, maxyg(output)-50); frect(output, 50, 50, maxxg(output)-50, 53); fcolorg(output, maxint div 2, 0, 0); frect(output, 52, maxyg(output)-53, maxxg(output)-50, maxyg(output)-50); frect(output, maxxg(output)-53, 52, maxxg(output)-50, maxyg(output)-50); font(output, font_sign); fontsiz(output, 100); binvis(output); fcolor(output, cyan); cursorg(output, maxxg(output) div 2-strsiz(output, 'hello, world') div 2+3, maxyg(output) div 2-100 div 2+3); writeln('hello, world'); fcolor(output, blue); cursorg(output, maxxg(output) div 2-strsiz(output, 'hello, world') div 2, maxyg(output) div 2-100 div 2); writeln('hello, world'); repeat event(input, er) until er.etype = etterm end. Because IP Pascal modules can "override" each other, a graphical extension module (or any other type of module) can override the standard I/O calls implemented in a module below it. Thus, paslib implements standard Pascal statements such as read, write, and other support services. overrides these services and redirects all standard Pascal I/O to graphical windows. The difference between this and such libraries in other implementations is that you typically have to stop using the standard I/O statements and switch to a completely different set of calls and paradigms. This means that you cannot "bring forward" programs implemented with the serial I/O paradigm to graphical systems. Another important difference with IP Pascal is that it uses procedural language methods to access the Windowed graphics library. Most graphics toolkits force the use of object-oriented programming methods to the toolkit. One reason for this is because Object orientation is a good match for graphics, but it also occurs because common systems such as Windows force the application program to appear as a service program to the operating system, appearing as a collection of functions called by the operating system, instead of having the program control its own execution and call the operating system. This is commonly known as callback design. Object-oriented code often works better with callbacks because it permits an object's methods to be called as callbacks, instead of a programmer having to register several pointers to functions to event handling code, each of which would be an individual callback. Object-orientation is a good programming method, but IP Pascal makes it an optional, not a required, methodology to write programs. IP Pascal's ability to use procedural methods to access all graphics functions means that there is no "cliff effect" for older programs. They don't need to be rewritten just to take advantage of modern programming environments. Another interesting feature of the IP porting platform is that it supports a character mode, even in graphical environments, by providing a "character grid" that overlays the pixel grid, and programs that use only character mode calls (that would work on any terminal or telnet connection) work under graphical environments automatically. History The Z80 implementation The compiler started out in 1980 on Micropolis Disk Operating System, but was moved rapidly to CP/M running on the Z80. 
The original system was coded in Z80 assembly language, and output direct machine code for the Z80. It was a single-pass compiler without a linker, it included its system support library within the compiler, and relocated and output that into the generated code into the runnable disk file. After the compiler was operational, almost exactly at the new year of 1980, a companion assembler for the compiler was written, in Pascal, followed by a linker, in Z80 assembly language. This odd combination was due to a calculation that showed the linker tables would be a problem in the 64kb limited Z80, so the linker needed to be as small as possible. This was then used to move the compiler and linker Z80 source code off the Micropolis assembler (which was a linkerless assembler that created a single output binary) to the new assembler linker system. After this, the compiler was retooled to output to the linker format, and the support library moved into a separate file and linked in. In 1981, the compiler was extensively redone to add optimization, such as register allocation, boolean to jump, dead code, constant folding, and other optimizations. This created a Pascal implementation that benchmarked better than any existing Z80 compilers, as well as most 8086 compilers. Unfortunately, at 46kb, it also was difficult to use, being able to compile only a few pages of source code before overflowing its tables (this was a common problem with most Pascal implementations on small address processors). The system was able to be used primarily because of the decision to create a compact linker allowed for large systems to be constructed from these small object files. Despite this, the original IP Pascal implementation ran until 1987 as a general purpose compiler. In this phase, IP Pascal was C like in its modular layout. Each source file was a unit, and consisted of some combination of a 'program' module, types, constants, variables, procedures or functions. These were in "free format". Procedures, functions, types, constants and variables could be outside of any block, and in any order. Procedures, functions, and variables in other files were referenced by 'external' declarations, and procedures, functions, and variables in the current file were declared 'global'. Each file was compiled to an object file, and then linked together. There was no type checking across object files. As part of the original compiler, a device independent terminal I/O module was created to allow use of any serial terminal (similar to Turbo Pascal's CRT unit), which remains to this day. In 1985, an effort was begun to rewrite the compiler in Pascal. The new compiler would be two pass with intermediate, which was designed to solve the memory problems associated with the first compiler. The front end of the compiler was created and tested without intermediate code generation capabilities (parse only). in 1987, the Z80 system used for IP was exchanged for an 80386 IBM-PC, and work on it stopped. From that time several other, ISO 7185 standard compilers were used, ending with the SVS Pascal compiler, a 32 bit DPMI extender based implementation. The 80386 implementation By 1993, ISO 7185 compatible compilers that delivered high quality 32 bit code were dying off. At this point, the choice was to stop using Pascal, or to revive the former IP Pascal project and modernize it as an 80386 compiler. At this point, a Pascal parser, and assembler (for Z80) were all that existed which were usable on the IBM-PC. 
From 1993 to 1994, the assembler was made modular to target multiple CPUs including the 80386, a linker to replace the Z80 assembly language linker was created, and a Pascal compiler front end was finished to output to intermediate code. Finally, an intermediate code simulator was constructed, in Pascal, to prove the system out. In 1994, the simulator was used to extend the ISO 7185 IP Pascal "core" language to include features such as dynamic arrays. In 1995, a "check encoder" was created to target 80386 machine code, and a converter program created to take the output object files and create a "Portable Executable" file for Windows. The system support library was created for IP Pascal, itself in IP Pascal. This was an unusual step taken to prevent having to later recode the library from assembly or another Pascal to IP Pascal, but with the problem that both the 80386 code generator and the library would have to be debugged together. At the beginning of 1996, the original target of Windows NT was switched to Windows 95, and IP Pascal became fully operational as an 80386 compiler under Windows. The system bootstrapped itself, and the remaining Pascal code was ported from SVS Pascal to IP Pascal to complete the bootstrap. This process was aided considerably by the ability of the DPMI based SVS Pascal to run under Windows 95, which meant that the need to boot back and forth between DOS and Windows 95 was eliminated. The Linux implementation In 2000, a Linux (Red Hat) version was created for text mode only. This implementation directly uses the system calls and avoids the use of glibc, and thus creates thinner binaries than if the full support system needed for C were used, at the cost of binary portability. The plan is to create a version of the text library that uses termcap info, and the graphical library under X11. Steps to "write once, run anywhere" In 1997, a version of the terminal library from the original 1980 IP Pascal was ported to windows, and a final encoder started for the 80386. However, the main reason for needing an improved encoder, execution speed, was largely made irrelevant by increases in processor speed in the IBM-PC. As a result, the new encoder wasn't finished until 2003. In 2001, a companion program to IP Pascal was created to translate C header files to Pascal header files. This was meant to replace the manual method of creating operating system interfaces for IP Pascal. In 2003, a fully graphical, operating system independent module was created for IP Pascal. In 2005, the windows management and widget kit was added. Lessons In retrospect, the biggest error in the Z80 version was its single-pass structure. There was no real reason for it; the author's preceding (Basic) compiler was multiple pass with intermediate storage. The only argument for it was that single-pass compilation was supposed to be faster. However, single-pass compiling turns out to be a bad match for small machines, and isn't likely to help the advanced optimizations common in large machines. Further, the single pass aspect slowed or prevented getting the compiler bootstrapped out of Z80 assembly language and onto Pascal itself. Since the compiler was monolithic, the conversion to Pascal could not be done one section at a time, but had to proceed as a wholesale replacement. When replacement was started, the project lasted longer than the machine did. 
The biggest help that two pass compiling gave the 80386 implementation was the maintenance of a standard book of intermediate instructions which communicated between front and back ends of the compiler. This well understood "stage" of compilation reduced overall complexity. Intuitively, when two programs of equal size are mated intimately, the complexity is not additive, but multiplicative, because the connections between the program halves multiply out of control. Another lesson from the Z80 days, which was corrected on the 80386 compiler, was to write as much of the code as possible in Pascal itself, even the support library. Having the 80386 support code all written in Pascal has made it so modular and portable that most of it was moved out of the operating system specific area and into the "common code" library section, a section reserved for code that never changes for each machine and operating system. Even the "system specific" code needs only slight modification from implementation to implementation. The result is great amounts of implementation work saved while porting the system. Finally, it was an error to enter into a second round of optimization before bootstrapping the compiler. Although the enhancement of the output code was considerable, the resulting increase in complexity of the compiler caused problems with the limited address space. At the time, better optimized code was seen to be an enabler to bootstrapping the code in Pascal. In retrospect, the remaining assembly written sections were the problem, and needed to be eliminated, the sooner the better. Another way to say this is that the space problems could be transient, but having significant program sections written in assembly is a serious and lasting problem. Further reading Kathleen Jensen and Niklaus Wirth: PASCAL – User Manual and Report. Springer-Verlag, 1974, 1985, 1991 Niklaus Wirth: "The Programming Language Pascal". Acta Informatica, 1 (June 1971), 35–63 ISO/IEC 7185: Programming Languages - PASCAL. References External links The standard, ISO 7185 Pascal website Pascal-P6 Repository Pascal programming language family Pascal (programming language) compilers
45488321
https://en.wikipedia.org/wiki/Tim%20Guleri
Tim Guleri
Tim Guleri is an American venture capitalist and serial entrepreneur. He is the managing director of Sierra Ventures of San Mateo, California. Prior to joining Sierra Ventures, Guleri helped build Scopus Technology and founded Octane Software. Early life and education Tim Guleri received his Bachelor of Science degree in electrical engineering from Punjab Engineering College, Chandigarh, India. He received his master's degree in robotics and industrial engineering from Virginia Tech in 1988 and was inducted into the Academy of Distinguished Alumni in 2006. He sold books door to door to put himself through graduate school. Career In September 1989, Guleri became part of the information technology team at LSI Logic Corporation. In 1992, he joined Scopus Technology, a customer relationship management software company, where he was vice president of field operations until 1996. Scopus went public in 1995 and was acquired by Siebel Systems in 1998 for $750 million. Guleri founded the ecommerce company Octane Software in 1997. He was the CEO of Octane until it was sold to Epiphany, Inc. in 2000. Guleri led the merger of the two companies and served as the executive vice president of Epiphany from March 2000 until February 2001. He joined Sierra Ventures, a venture capital fund, as managing director of the software team in 2001. At Sierra, Guleri worked with investments in software and open source development. Through his investments at Sierra, he has taken the companies Sourcefire and MakeMyTrip public. In February 2014, Shape Security raised $40 million in Series C funding from Sierra Ventures and other funds. Other activities During his time at Sierra Ventures, Guleri has led investments and served on the board of directors for companies such as MakeMyTrip.com, an online flight booking service based in India, CodeGreen Networks, Approva, Ventaso, a business software company, Greenplum, Again Technologies, BINA Technologies and Sourcefire. He currently sits on the board of directors for companies including Treasure Data, LeadGenius, DotNetNuke, and Hired.com, a startup company that aims to make job searching and hiring easier. See also List of venture capital firms References Living people Year of birth missing (living people) American venture capitalists Punjab Engineering College alumni
28788190
https://en.wikipedia.org/wiki/Science%20Software
Science Software
Science Software (formerly Science Software Quarterly) was a journal for scientists of all disciplines who used computers in the 1980s, particularly desktop platforms such as the IBM-PC (introduced in 1981), the Apple Macintosh (introduced in 1984), and the Apple II (introduced in 1977). The journal featured reviews of scientific applications and other software that were available at the time for many different disciplines and branches of science. Each issue also contained articles about scientific computing, and regular features. Available by individual subscription, SSQ was published quarterly, and each issue contained about 110 pages. History Science Software Quarterly was founded in 1984 by executive editor Diana Gabaldon, who at the time was an assistant professor in the Center for Environmental Studies at Arizona State University. SSQ was first published by ASU. In 1987, the journal was acquired by a new publisher, John Wiley & Sons, who changed the title to Science Software. The software reviews and articles in the journal were not peer-reviewed. On the new market for scientific software in 1986, Gabaldon wrote, "Within the last year, scientific and technical computer users have emerged as a significant vertical market." But scientists had been using personal computers before their market was discovered. "This means that computer-using scientists were frequently forced to write their own software if they wanted something specific to their needs." SSQ helped acquaint scientists with the newest software applications on the market, and provided evaluations from peers, who reviewed the products. Scientist reviewers Authors of the SSQ reviews were volunteer scientists who were experts in their field, selected by the executive editor. For software applications new on the market, the scientist reviewer would install and use the product in his or her work, and then evaluate it. Alternatively, a scientist could choose to write a review of software that he or she was already using. Manufacturers supplied a current copy of the software free of charge to each reviewer. Contents Software reviews SSQ scientist reviewers would install, learn to use, and then evaluate a software package based on the following categories: Performance Documentation Ease of Learning Ease of Use Error handling Support provided by the software company Value Reviewers would write a section on each category above within the review. A checkbox graphic for each review article allowed readers to see at a glance the reviewer's marks for each category, giving ratings of Unsatisfactory, Poor, Fair, Good, or Excellent. The review articles would begin with a listing of the vendor for the software, the current price, and the system requirements, which included the type of computer platform, operating system version, minimum RAM (memory) needed, etc. for the software to work properly. Articles Articles of interest to scientists using computers were included in SSQ and Science Software on a wide range of topics, such as "Transferring BASIC programs From the Apple II to the IBM-PC." In this example, converting data from one operating system to another was explored and explained, which could be a difficult problem in the 1980s. 
Other features Besides the software reviews, the backbone of SSQ and Science Software, each issue contained: An editorial article Letters to the Editors A Readers' Survey, which provided feedback to the editor on what readers liked about SSQ and what they wanted to see in future issues. Features The Features section of the journal contained a variety of information in each issue, including: "New Products and Software in Development," which listed what was new, interesting or updated in hardware and software. "Books in Brief" gave a quick look at recent and relevant computer titles. "Database Profiles," condensed information on the latest in online and printed database resources. "The Wanted List," a listing service that allowed readers to post requests for information or a special software package. "Users' Groups," a listing to help readers find other people who shared their interests. "On the Periphery," which listed and described computer resources, including peripherals, add-ons, training videotapes, classes, demos, and "anything else on the periphery of scientific computing." Recent References The Recent References section listed articles that might be of interest to scientists using desktop computers, from a wide variety of sources. Articles were listed by author. Published software reviews The published software reviews section listed scientific software reviews in other journals that might be of interest to readers. Examples: TERMDOK—Multilingual Technical Dictionary. Raitt, D: Online Review 12:304-315, 1988. Microsoft Word, Version 4.0. Small, GW: Journal of Chemical Information and Computer Sciences 28: 234-235, 1988. Availability When published, copies were available to individual subscribers by regular mail. Science Software Quarterly was discontinued in 1990. To date, an archive of the journal has not been established on the World Wide Web. References Quarterly journals English-language journals Publications established in 1984 Computer science journals
15472369
https://en.wikipedia.org/wiki/VirusTotal
VirusTotal
VirusTotal is a website created by the Spanish security company Hispasec Sistemas. Launched in June 2004, it was acquired by Google in September 2012. The company's ownership switched in January 2018 to Chronicle, a subsidiary of Google. VirusTotal aggregates many antivirus products and online scan engines, called Contributors. In November 2018, the Cyber National Mission Force, a unit subordinate to U.S. Cyber Command, became a Contributor. The aggregated data from these Contributors allows a user to check for viruses that the user's own antivirus software may have missed, or to verify against any false positives. Files up to 650 MB can be uploaded to the website, or sent via email (maximum 32 MB). Anti-virus software vendors can receive copies of files that were flagged by other scans but passed by their own engine, to help improve their software and, by extension, VirusTotal's own capability. Users can also scan suspect URLs and search through the VirusTotal dataset. VirusTotal uses the Cuckoo sandbox for dynamic analysis of malware. VirusTotal was selected by PC World as one of the best 100 products of 2007. Products and services Windows Uploader VirusTotal's Windows Uploader was an application that integrated into Windows Explorer's right-click context menu, listed under Send To > Virus Total. The application could also be launched manually to submit a URL or a program currently running in the OS. VirusTotal stores the name and various hashes for each scanned file. Already scanned files can be identified by their known (e.g., VT default) SHA256 hash without uploading complete files. The SHA256 query URL has the form https://www.virustotal.com/latest-scan/SHA256. File uploads are normally limited to 128 MB. In 2017, VirusTotal discontinued support for the Windows Uploader. Uploader for Mac OS X and Linux As with the Windows application, the user uploads a file (via the app's UI or context menu) and is given back a result. The Mac OS X app can be downloaded from the VirusTotal website; for Linux, the app must be compiled and built from the same core (provided in the repository) used in the Mac OS X application. Already scanned files can be identified by their known (e.g., VT default) SHA256 hash without uploading complete files. VirusTotal for Browsers There are several browser extensions available, such as VT4Browsers for Mozilla Firefox and Google Chrome, and vtExplorer for Internet Explorer. They allow the user to scan downloaded files with VirusTotal's web application before storing them on the computer, as well as to scan URLs. VirusTotal for Mobile The service also offers an Android app that employs the public API to check installed applications against those VirusTotal has previously scanned and to show their status. Any application not previously scanned can be submitted, but an API key must be provided and other restrictions on public API usage may apply (see #Public API). Public API VirusTotal provides, as a free service, a public API that allows automation of some of its online features, such as "upload and scan files, submit and scan URLs, access finished scan reports and make automatic comments on URLs and samples". Some restrictions apply to requests made through the public API, such as the need for an individual API key (freely obtained by signing up online), a low-priority scan queue, and a limited number of requests per time frame. Antivirus products Antivirus engines used for detection of uploaded files. 
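As a rough sketch of how the public API described above can be used from a script, the example below computes a file's SHA256 hash locally and asks VirusTotal for an existing report on it. It assumes the version 2 REST endpoint https://www.virustotal.com/vtapi/v2/file/report and the Python requests library; the API key and file name are placeholders, and the current endpoint, parameters and rate limits should be checked against VirusTotal's own documentation.
import hashlib
import requests

API_KEY = "your-api-key"      # placeholder: individual key obtained by signing up
FILE_PATH = "suspect.bin"     # placeholder: file to look up

# Identify an already-scanned file by its SHA256 hash, without uploading it.
with open(FILE_PATH, "rb") as f:
    sha256 = hashlib.sha256(f.read()).hexdigest()

# Query the (assumed) v2 file-report endpoint for an existing scan report.
resp = requests.get("https://www.virustotal.com/vtapi/v2/file/report",
                    params={"apikey": API_KEY, "resource": sha256})
report = resp.json()
print(report.get("positives"), "of", report.get("total"), "engines flagged the file")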
Website/domain scanning engines and datasets Antivirus scanning engines used for URL scanning. File characterization tools & datasets Utilities used to provide additional info on uploaded files. Privacy Files uploaded to VirusTotal may be shared freely with anti-malware companies and will also be retained in a store. The VirusTotal About Page states under VirusTotal and confidentiality: Files and URLs sent to VirusTotal will be shared with antivirus vendors and security companies so as to help them in improving their services and products. We do this because we believe it will eventually lead to a safer Internet and better end-user protection. By default any file/URL submitted to VirusTotal which is detected by at least one scanner is freely sent to all those scanners that do not detect the resource. Additionally, all files and URLs enter a private store that may be accessed by premium (mainly security/antimalware companies/organizations) VirusTotal users so as to improve their security products and services. References External links Antivirus software Freeware Google acquisitions Internet properties established in 2004 2012 mergers and acquisitions
3645753
https://en.wikipedia.org/wiki/DOS%20MZ%20executable
DOS MZ executable
The DOS MZ executable format is the executable file format used for .EXE files in DOS. The file can be identified by the ASCII string "MZ" (hexadecimal: 4D 5A) at the beginning of the file (the "magic number"). "MZ" are the initials of Mark Zbikowski, one of the leading developers of MS-DOS. The MZ DOS executable format is newer than the COM executable format and differs from it. The DOS executable header contains relocation information, which allows multiple segments to be loaded at arbitrary memory addresses, and it supports executables larger than 64 KB; however, the format is still subject to relatively low memory limits. These limits were later bypassed using DOS extenders. Segment handling The environment of an EXE program run by DOS is found in its Program Segment Prefix. EXE files normally have separate segments for the code, data, and stack. Program execution begins at address 0 of the code segment, and the stack pointer register is set to whatever value is contained in the header information (thus if the header specifies a 512-byte stack, the stack pointer is set to 200h). It is possible not to use a separate stack segment and simply use the code segment for the stack if desired. The DS (data segment) register normally contains the same value as the CS (code segment) register and is not loaded with the actual segment address of the data segment when an EXE file is initialized; it is necessary for the programmer to set it themselves, generally done via the following instructions: MOV AX, @DATA MOV DS, AX Termination In the original DOS 1.x API, it was also necessary to have the DS register pointing to the segment with the PSP at program termination; this was done via the following instructions: PUSH DS XOR AX, AX PUSH AX Program termination would then be performed by a RETF instruction, which would retrieve the original segment address of the PSP from the stack and then jump to address 0, which contained an INT 20h instruction. The DOS 2.x API introduced a new program termination function, INT 21h Function 4Ch, which does not require saving the PSP segment address at the start of the program, and Microsoft advised against the use of the older DOS 1.x method. Compatibility MZ DOS executables can be run from DOS and Windows 9x-based operating systems. 32-bit Windows NT-based operating systems can execute them using their built-in Virtual DOS machine (although some graphics modes are unsupported). 64-bit versions of Windows cannot execute them. Alternative ways to run these executables include DOSBox, DOSEMU, Wine, and Cygwin. MZ DOS executables can be created by linkers, like Digital Mars Optlink, MS linker, VALX or Open Watcom's WLINK; additionally, FASM can create them directly. See also DOS DOS extender Portable Executable DOS API Executable compression Further reading References External links A closer look at EXE DOS stub Executable file formats DOS technology
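As an illustration of the header described above, the following Python sketch reads the fixed 28-byte MZ header and checks the "MZ" magic number. It assumes the conventional field layout (signature, byte/page counts, relocation count, header size, memory allocation, SS:SP, checksum, CS:IP, relocation table offset, overlay number) and a hypothetical file name; Python is used here only for brevity.
import struct

# Hypothetical input file; any DOS-era .EXE should carry this header.
with open("program.exe", "rb") as f:
    header = f.read(28)  # fixed part of the classic MZ header

(magic, bytes_last_page, pages, reloc_count, header_paras,
 min_alloc, max_alloc, init_ss, init_sp, checksum,
 init_ip, init_cs, reloc_table_ofs, overlay) = struct.unpack("<2s13H", header)

if magic != b"MZ":
    raise ValueError("not a DOS MZ executable")

print("relocation entries:", reloc_count)
print("header size (paragraphs):", header_paras)
print("initial SS:SP = %04Xh:%04Xh" % (init_ss, init_sp))
print("initial CS:IP = %04Xh:%04Xh" % (init_cs, init_ip))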
35503174
https://en.wikipedia.org/wiki/HP%20Release%20Control
HP Release Control
HP Release Control is an enterprise level software product which is a part of HP IT Performance Suite. Overview In a typical release life cycle, after a change enters the system, the change goes through an approval, implementation, and review phase. HP Release Control supports each one of these phases in the release life cycle. Approval During the approval phase, the Analysis module provides a detailed analysis of each change request in the system. Change Advisory Board (CAB) members can view information such as the potential impact of the change and the possible risk involved in implementation. The CAB uses this information to make more informed and accurate decisions regarding the approval of planned changes. In addition, the collaboration feature enables CAB members to provide feedback about planned changes, and to approve or reject the changes. Implementation During implementation, the Director and Implementor modules provide real-time information regarding change activities. Implementors and release teams are able to monitor the status of all change activities on a 24-hour timeline view. They receive alerts about issues such as scheduling, collisions, and delays, and use the implementation guidelines that were drawn up in the Analysis module during the approval phase. Review After implementation, the Post Implementation Review (PIR) feature provides a platform for reviewers to present their conclusions regarding the implemented change. Using information collected during the implementation phase, reviewers provide feedback about the overall success of the change and satisfaction levels of relevant parties. Management and Administration During the entire release life cycle, IT managers use the HP Release Control Dashboard module to view graphic displays of change request and activity data in real time. HP Release Control Administrators use the Administration module to configure the HP Release Control properties and perform administration tasks in the system. External links HP Release Control on SaaS: http://www8.hp.com/us/en/software-solutions/software.html?compURI=1172894 HP IT Performance Suite: https://web.archive.org/web/20150226064619/http://www8.hp.com/us/en/software/enterprise-software.html HP Service Manager software: http://www8.hp.com/us/en/software-solutions/software.html?compURI=1173779 Release Control
14465724
https://en.wikipedia.org/wiki/Koan%20%28program%29
Koan (program)
Koan is a generative music engine created by SSEYO, a company founded by Pete Cole and Tim Cole. The Koan technology is now owned by Intermorphic Limited, co-founded by the Cole brothers in 2007. Architecture and engine The SSEYO Koan Interactive Audio Platform (SKIAP) consisted of the core Koan generative music engine (the SSEYO Koan Generative Music Engine, or SKME), a set of authoring tools (SSEYO Koan Pro and SSEYO Koan X), a set of stand-alone Koan music players (SSEYO Koan Plus, SSEYO Koan File Player and SSEYO Koan Album Player), and a plug-in for internet browsers such as Internet Explorer and Netscape. Development of the Koan engine started in 1990, when SSEYO was founded, and by 1992 the first version had entered beta testing. Distributed by Koch Media, the first edition of Koan was publicly released in 1994, followed by the Koan Pro authoring tool in 1995. Later that year, SSEYO brought Koan to the attention of Brian Eno, who quickly showed great interest in the product. He began creating pieces with Koan Pro, collecting and publishing them in his 1996 work "Generative Music 1 with SSEYO Koan Software". This release featured a floppy disk containing the SSEYO Koan Plus player and a set of 12 Koan generative-music pieces that he authored. Eno's early relationship with Koan was captured in his 1996 diary A Year with Swollen Appendices. Availability The Koan Pro software was available for Microsoft Windows (16- and 32-bit versions), as well as Mac OS 8 and Mac OS 9. Integration with existing digital audio workstations could be difficult, as the software did not include an audio plug-in interface. Although SKIAP was developed until 2001, the last extension of the SKME itself was in 1998, as SSEYO concentrated on developing technology around the music engine, including real-time music synthesis and a highly programmable internet browser plug-in wrapper. Browser plugins The SSEYO Koan Plugin for web browsers was programmable in real time through JavaScript, and was used to create several interactive applications for web browsers. By 2001, Koan included a modular synthesizer; its engine also featured a file format named Vector Audio, which allowed very complicated generative pieces, complete with full synthesizer sound descriptions, to be delivered in only a few thousand bytes of plain text within a Web page. This development led to SSEYO being awarded a BAFTA Interactive Entertainment Award for Technical Innovation in 2001. Recent versions SSEYO was eventually acquired by Tao Group, which was sold in 2007. As a result, Koan and the Koan Pro authoring tool are no longer commercially available. In 2007, the original creators of Koan (Pete Cole and Tim Cole) founded a company called Intermorphic to create a new generative system called Noatikl. They acquired the Koan technology and subsequently described Noatikl as "the evolution of Koan." Noatikl supports importing data from earlier Koan systems, and offers a variety of audio plug-in implementations for easy integration with modern digital audio workstations. In 2012, Intermorphic released Noatikl 2, the first major update since 2007, which introduced the Partikl software synthesizer and the Mixtikl mixer product. Noatikl 3, released in 2015, added a native iOS app together with extensive improvements to the Partikl software synthesizer. 
Performances In 2003, Ars Electronica held a 96-hour musical event entitled "Dark Symphony", playing live Koan music from various artists over a 160,000-watt PA in Linz's Klangpark on the banks of the Danube. See also A Year with Swollen Appendices โ€“ a book by Brian Eno which documents his use of Koan BAFTA Interactive Entertainment Awards#Technical Innovation References General references "Is the Future of Music Generative?" by Paul Brown "Electronic, aesthetic and social factors in Net music" by GOLO Fร–LLMER Floating Pointsโ€”Dark Symphony - Ars Electronica 2003 Computer Generated Music Composition by Chong (John) Yu http://www.intermorphic.com/tools/noatikl/generative_music.html - Intermorphic on Generative Music and the early history of Koan http://www.intermorphic.com/news/pressReleases/prnoatikl2_Generative_Music_Lab_for_Mac_Windows.html - Noatikl 2 release information Computer music software
1394843
https://en.wikipedia.org/wiki/Drakconf
Drakconf
drakconf, or the Mandriva Control Center, is a computer program written in Perl for the configuration of Mandriva Linux, a Linux distribution. It is licensed under the open-source GNU General Public License. It is also used by Mageia, a fork of Mandriva, where it is called the Mageia Control Center. It is part of the so-called drakxtools and is specifically designed for this Linux distribution; it can be run from the command line or under the X Window System. However, the source code is available, so it could be ported to other distributions. This tool is a key feature of Mandriva Linux because it puts many configuration tools together in one place, making it easier for a user who is new to Linux to configure the system than editing configuration files with a text editor. References Online Help of the Program by the Mandriva Documentation Team Mandriva Control Center on Mandriva Wiki Free software programmed in Perl Linux package management-related software Mandriva Linux
21633118
https://en.wikipedia.org/wiki/Feather%20Linux
Feather Linux
Feather Linux, created by Robert Sullivan, was a Knoppix-based operating system that fits in under 128 MB (older versions were made to fit within 64 MB). It boots from either a CD or a USB flash drive into a Fluxbox desktop environment. It has a wide range of desktop and rescue software, and can load entirely into RAM (if enough RAM is available) or be installed to a hard drive. Feather Linux contains GTK+ applications, such as AbiWord and Pidgin, and tried to include the software that people most frequently use on their desktops. It is only available for the x86 architecture. It can run on a 486 or higher and requires 16 MB of RAM to run on the console, or 24 MB to run the X server. According to DistroWatch, Feather Linux is discontinued; its final release was on 2005-07-04. The Feather Linux home page is no longer available. Feather Linux and Damn Small Linux share some common goals. See also Comparison of Linux distributions Lightweight Linux distribution References External links Former official website, now unavailable Feather Linux archive and downloads Knoppix Light-weight Linux distributions Discontinued Linux distributions Live USB Linux distributions
10135104
https://en.wikipedia.org/wiki/Audiograbber
Audiograbber
Audiograbber is a proprietary freeware CD audio extractor/converter program for Microsoft Windows. It was one of the first programs in the genre to become popular. The data extraction algorithm was designed by Jackie Franck and was included in the Xing Technology software package Xing Audio Catalyst in the mid-1990s. It does not use Xing Technology's proprietary MP3 encoding library; instead, it uses the LAME encoder, the Ogg Vorbis encoder, the WMA codec, or any format supported by an external command-line encoder. The author is no longer developing this software. Audiograbber is able to rip CDs, record audio coming in via the mic jack, or capture audio playing on the computer (but not from the internet), saving it in several formats, including WAV and MP3. It performs the conversions entirely digitally, bypassing the system sound card, which enables accurate conversion. For convenience, it supports the freedb database of Compact Disc track listings (offline as of June 13, 2020), so that ripped tracks can be given the names of songs, artists and albums with little user effort. It also supports normalizing, ID3 tags and CD-Text. A line-in sampling function can automatically split LP recordings into separate tracks, and it can perform noise reduction with a proprietary VST plug-in from Algorithmix. Prior to the release of version 1.83 in February 2004, Audiograbber was shareware. Unregistered versions of the software only allowed a random selection of half the tracks of a given CD to be extracted in each ripping session. These limitations were due to a restrictive clause in an agreement between the author and Xing Technology. After the agreement expired, the software was made available as freeware with no limitations on its function. Version 1.83 from the developer site (as well as the LAME plugin installer on the same site) comes bundled with several adware programs, such as Funmoods Toolbar, Conduit Search, Zapp, VO Package, Browser Utility and AnyProtect. One has to read the installation screens carefully and deselect everything that one does not want to install. References External links Audiograbber homepage Award page Review of version 1.62 on Sonic Spot Audiograbber information on Hydrogenaudio Knowledgebase Windows-only freeware Windows CD ripping software Data compression software
14215011
https://en.wikipedia.org/wiki/FlyBack
FlyBack
FlyBack is an open-source backup utility for Linux based on Git and modeled loosely after Apple's Time Machine. Overview FlyBack creates incremental backups of files, which can be restored at a later date. It presents a chronological view of a file system, allowing individual files or directories to be previewed or retrieved one at a time. FlyBack was originally based on rsync when the project began in 2007, but in October 2009 it was rewritten from scratch using Git. User interface FlyBack presents the user with a typical file-manager style view of the file system, but with additional controls for going forward or backward in time. It shows which files exist, no longer exist, or have changed since the last version, and allows the user to preview them before deciding to restore or ignore them. User settings FlyBack has very few settings in its preferences: The backup location Inclusion list (files or folders) Exclusion list (files or folders) When to automatically start a backup When to automatically delete old backups Beyond these, the FlyBack UI allows users to commence a backup and to restore all or selected files. Requirements FlyBack is written in Python using GTK. These libraries, as well as the program Git, must be installed for the software to function properly. See also List of backup software Revision control Versioning file system References External links Free software programmed in Python Free backup software Backup software for Linux Software that uses GTK
23776578
https://en.wikipedia.org/wiki/Aaron%20Fulkerson
Aaron Fulkerson
Aaron Roe Fulkerson is an information technology businessman and founder of MindTouch, Inc. Fulkerson helped pioneer the open core business model, collaborative networks, and the application of Web Oriented Architecture to enterprise software. Fulkerson is founder and board member at MindTouch, a supplier of open source and collaborative network software. Prior to co-founding MindTouch with Steve Bjorg, Aaron was a member of Microsoft's Advanced Strategies and Policies division, where he worked on distributed systems research. Previously, he owned and operated a successful software and information technology consulting firm, Gurion Digital LLP. He won a Jack Kent Cooke Foundation scholarship in 2002. Aaron advises Microsoft on open source practices, and is a founding advisory member of the OuterCurve Foundation (formerly known as the CodePlex Foundation). He is also the technical editor of McGraw-Hill's "Implementing Enterprise 2.0". Aaron is a contributing blogger and writer for Forbes, GigaOm OSTATIC, TechWeb Internet Evolution, Fortune Magazine, CNNMoney.com, CMSWire and ReadWriteWeb. In 2008, Aaron was cited as one of seven "Leading Corporate Social Media Evangelists" by ReadWriteWeb. Aaron is also a frequent speaker on the topics of enterprise software, Enterprise 2.0, Social CRM (SCRM), open source, education, and entrepreneurship. In March 2010, he was named on the MindTouch website as forty-sixth in the list of "Most Powerful Voices in Open Source". References External links OSCON Speaker at Oscon 2009 Google Profile Fulkerson Personal Blog Open Core Licensing WOA Pioneer WOA Presentation Floss 89: Interview with Aaron Roe Fulkerson about Mindtouch Linux Journal video interview with Aaron Fulkerson Developer Video podcast about Wiki's Podcast interview Read Write Web Fulkerson Interviews American computer businesspeople Living people Year of birth missing (living people) Businesspeople in information technology
332693
https://en.wikipedia.org/wiki/Burroughs%20large%20systems
Burroughs large systems
The Burroughs Large Systems Group produced a family of large 48-bit mainframes using stack machine instruction sets with dense syllables. The first machine in the family was the B5000 in 1961. It was optimized for compiling ALGOL 60 programs extremely well, using single-pass compilers. It evolved into the B5500. Subsequent major redesigns include the B6500/B6700 line and its successors, as well as the separate B8500 line. In the 1970s, the Burroughs Corporation was organized into three divisions with very different product line architectures for high-end, mid-range, and entry-level business computer systems. Each division's product line grew from a different concept for how to optimize a computer's instruction set for particular programming languages. "Burroughs Large Systems" referred to all of these large-system product lines together, in contrast to the COBOL-optimized Medium Systems (B2000, B3000, and B4000) or the flexible-architecture Small Systems (B1000). Background Founded in the 1880s, Burroughs was the oldest continuously operating company in computing (Elliott Brothers was founded before Burroughs, but didn't make computing devices in the 19th century). By the late 1950s its computing equipment was still limited to electromechanical accounting machines such as the Sensimatic. It had nothing to compete with its traditional rivals IBM and NCR, who had started to produce larger-scale computers, or with the recently founded Univac. In 1956, Burroughs purchased a third-party company and rebranded its design as the B205. Burroughs' first internally developed machine, the B5000, was designed in 1961, and Burroughs sought to address its late entry in the market with the strategy of a completely different design based on the most advanced computing ideas available at the time. While the B5000 architecture is dead, it inspired the B6500 (and the subsequent B6700 and B7700). Computers using that architecture were still in production as the Unisys ClearPath Libra servers, which run an evolved but compatible version of the MCP operating system first introduced with the B6700. The third and largest line, the B8500, had no commercial success. In addition to a proprietary CMOS processor design, Unisys also uses Intel Xeon processors and runs MCP, Microsoft Windows and Linux operating systems on their Libra servers; the use of custom chips was gradually eliminated, and by 2018 the Libra servers had been strictly commodity Intel for some years. B5000 The first member of the first series, the B5000, was designed beginning in 1961 by a team under the leadership of Robert (Bob) Barton. It had an unusual architecture. It has been listed by the computing scientist John Mashey as one of the architectures that he admires the most. "I always thought it was one of the most innovative examples of combined hardware/software design I've seen, and far ahead of its time." The B5000 was succeeded by the B5500 (which used disks rather than drum storage) and the B5700 (which allowed multiple CPUs to be clustered around shared disk). While there was no successor to the B5700, the B5000 line heavily influenced the design of the B6500, and Burroughs ported the Master Control Program (MCP) to that machine. Features All code is automatically reentrant (fig. 4.5 from the ACM Monograph shows in a nutshell why): programmers do not have to do anything more to have code in any language spread across processors than to use just the two simple primitives shown there. 
This results from these major features of the architecture:
Partially data-driven tagged and descriptor-based design
Hardware designed to support software requirements
Hardware designed to exclusively support high-level programming languages
No assembly language or assembler; all system software written in an extended variety of ALGOL 60 (however, ESPOL had statements for each of the syllables in the architecture)
Few programmer-accessible registers
Simplified instruction set
Stack machine where all operations use the stack rather than explicit operands (an approach that has by now fallen out of favor)
All interrupts and procedure calls use the stack
Support for an operating system (MCP, the Master Control Program)
Support for asymmetric (master/slave) multiprocessing
Support for other languages such as COBOL
Powerful string manipulation
An attempt at a secure architecture prohibiting unauthorized access of data or disruptions to operations
Early error detection supporting development and testing of software
A commercial implementation of virtual memory, preceded only by the Ferranti Atlas
First segmented memory model
Successors still exist in the Unisys ClearPath/MCP machines
System design The B5000 was unusual at the time in that the architecture and instruction set were designed with the needs of software taken into consideration. This was a large departure from the computer system design of the time, where a processor and its instruction set would be designed and then handed over to the software people. The B5000, B5500 and B5700 in Word Mode have two different addressing modes, depending on whether the machine is executing a main program (SALF off) or a subroutine (SALF on). For a main program, the T field of an Operand Call or Descriptor Call syllable is relative to the Program Reference Table (PRT). For subroutines, the type of addressing is dependent on the high three bits of T and on the Mark Stack FlipFlop (MSFF), as shown in B5x00 Relative Addressing. Language support The B5000 was designed to exclusively support high-level languages. This was at a time when such languages were just coming to prominence with FORTRAN and then COBOL. FORTRAN and COBOL were considered by some to be weaker languages as far as modern software techniques are concerned, so a newer, mostly untried language was adopted, ALGOL 60. The ALGOL dialect chosen for the B5000 was Elliott ALGOL, first designed and implemented by C. A. R. Hoare on an Elliott 503. This was a practical extension of ALGOL with I/O instructions (which ALGOL had ignored) and powerful string processing instructions. Hoare's famous Turing Award lecture was on this subject. Thus the B5000 was based on a very powerful language. Donald Knuth had previously implemented ALGOL 58 on an earlier Burroughs machine during the three months of his summer break, and he was peripherally involved in the B5000 design as a consultant. Many wrote ALGOL off, mistakenly believing that high-level languages could not have the same power as assembler, and thus not realizing ALGOL's potential as a systems programming language. The Burroughs ALGOL compiler was very fast – this impressed the Dutch scientist Edsger Dijkstra when he submitted a program to be compiled at the B5000 Pasadena plant. His deck of cards was compiled almost immediately, and he immediately wanted several machines for his university, Eindhoven University of Technology in the Netherlands. The compiler was fast for several reasons, but the primary reason was that it was a one-pass compiler. 
Early computers did not have enough memory to store the source code, so compilers (and even assemblers) usually needed to read the source code more than once. The Burroughs ALGOL syntax, unlike the official language, requires that each variable (or other object) be declared before it is used, so it is feasible to write an ALGOL compiler that reads the source only once. This concept has profound theoretical implications, but it also permits very fast compiling. Burroughs large systems could compile as fast as they could read the source code from the punched cards, and they had the fastest card readers in the industry. The powerful Burroughs COBOL compiler was also a one-pass compiler and equally fast. A 4000-card COBOL program compiled as fast as the 1000-card/minute readers could read the code. The program was ready to use as soon as the cards went through the reader. B6500 and B7500 The B6500 (delivered in 1969) and B7500 were the first computers in the only line of Burroughs systems to survive to the present day. While they were inspired by the B5000, they had a totally new architecture. Among the most important differences were:
The B6500 had variable-length instructions with an 8-bit syllable instead of fixed-length instructions with a 12-bit syllable.
The B6500 had a 51-bit instead of a 48-bit word, and used 3 bits as a tag.
The B6500 had Symmetric Multiprocessing (SMP).
The B6500 had a Saguaro stack.
The B6500 had paged arrays.
The B6500 had Display Registers, D1 through D32, to allow nested subroutines to access variables in outer blocks.
The B6500 used monolithic integrated circuits with magnetic thin-film memory.
B6700 and B7700 Among other customers were all five New Zealand universities in 1971. B8500 The B8500 line derives from the D825, a military computer that was inspired by the B5000. The B8500 was designed in the 1960s as an attempt to merge the B5500 and the D825 designs. The system used monolithic integrated circuits with magnetic thin-film memory. The architecture employed a 48-bit word, stack, and descriptors like the B5500, but was not advertised as being upward-compatible. The B8500 was never made to work reliably, and the project was canceled after 1970, never having delivered a completed system. History The central concept of virtual memory appeared in the designs of the Ferranti Atlas and the Rice Institute Computer, and the central concepts of descriptors and tagged architecture appeared in the design of the Rice Institute Computer in the late 1950s. However, even if those designs had a direct influence on Burroughs, the architectures of the B5000, B6500 and B8500 were very different from those of the Atlas and the Rice machine; they are also very different from each other. The first of the Burroughs large systems was the B5000. Designed in 1961, it was a second-generation computer using discrete transistor logic and magnetic-core memory. The first machines to replace the B5000 architecture were the B6500 and B7500. The successor machines followed the hardware development trends to re-implement the architectures in new logic over the next 25 years, with the B5500, B6500, B5700, B6700, B7700, B6800, B7800, and finally the Burroughs A series. After a merger in which Burroughs acquired Sperry Corporation and changed its name to Unisys, the company continued to develop new machines based on the MCP CMOS ASIC. These machines were the Libra 100 through the Libra 500, with the Libra 590 being announced in 2005. 
Later Libras, including the 590, also incorporate Intel Xeon processors and can run the Burroughs large systems architecture in emulation as well as on the MCP CMOS processors. It is unclear whether Unisys will continue development of new MCP CMOS ASICs. Primary lines of hardware Hardware and software design, development, and manufacturing were split between two primary locations, in Orange County, California, and the outskirts of Philadelphia. The initial Large Systems Plant, which developed the B5000 and B5500, was located in Pasadena, California, but moved to City of Industry, California, where it developed the B6500. The Orange County location, which was based in a plant in Mission Viejo, California, but at times included facilities in nearby Irvine and Lake Forest, was responsible for the smaller B6x00 line, while the East Coast operations, based in Tredyffrin, Pennsylvania, handled the larger B7x00 line. All machines from both lines were fully object-compatible, meaning a program compiled on one could be executed on another. Newer and larger models had instructions which were not supported on older and slower models, but the hardware, when encountering an unrecognized instruction, invoked an operating system function which interpreted it. Other differences include how process switching and I/O were handled, and maintenance and cold-starting functionality. Larger systems included hardware process scheduling, more capable input/output modules, and more highly functional maintenance processors. When the Bxx00 models were replaced by the A Series models, the differences were retained but were no longer readily identifiable by model number. ALGOL The Burroughs large systems implement an ALGOL-derived stack architecture. The B5000 was the first stack-based system. While the B5000 was specifically designed to support ALGOL, this was only a starting point. Other business-oriented languages such as COBOL were also well supported, most notably by the powerful string operators which were included for the development of fast compilers. The ALGOL used on the B5000 is an extended ALGOL subset. It includes powerful string manipulation instructions but excludes certain ALGOL constructs, notably unspecified formal parameters. A DEFINE mechanism serves a similar purpose to the #defines found in C, but is fully integrated into the language rather than being a preprocessor. The EVENT data type facilitates coordination between processes, and ON FAULT blocks enable handling program faults. The user level of ALGOL does not include many of the insecure constructs needed by the operating system and other system software. Two levels of language extensions provide the additional constructs: ESPOL and NEWP for writing the MCP and closely related software, and DCALGOL and DMALGOL to provide more specific extensions for specific kinds of system software. ESPOL and NEWP Originally, the B5000 MCP operating system was written in an extension of extended ALGOL called ESPOL (Executive Systems Programming Oriented Language). This was replaced in the mid-to-late 1970s by a language called NEWP. Though NEWP probably just meant "New Programming language", legends surround the name. A common (perhaps apocryphal) story within Burroughs at the time suggested it came from "No Executive Washroom Privileges." Another story is that circa 1976, John McClintock of Burroughs (the software engineer developing NEWP) named the language "NEWP" after being asked, yet again, "does it have a name yet": answering "nyoooop", he adopted that as a name. 
NEWP, too, was a subset ALGOL extension, but it was more secure than ESPOL and dropped some little-used complexities of ALGOL. In fact, all unsafe constructs are rejected by the NEWP compiler unless a block is specifically marked to allow those instructions. Such marking of blocks provides a multi-level protection mechanism. NEWP programs that contain unsafe constructs are initially non-executable. The security administrator of a system is able to "bless" such programs and make them executable, but normal users are not able to do this. (Even "privileged users", who normally have essentially root privilege, may be unable to do this depending on the configuration chosen by the site.) While NEWP can be used to write general programs and has a number of features designed for large software projects, it does not support everything ALGOL does. NEWP has a number of facilities to enable large-scale software projects, such as the operating system, including named interfaces (functions and data), groups of interfaces, modules, and super-modules. Modules group data and functions together, allowing easy access to the data as global within the module. Interfaces allow a module to import and export functions and data. Super-modules allow modules to be grouped. DCALGOL and Message Control Systems (MCS) The second intermediate level of security between operating system code (in NEWP) and user programs (in ALGOL) is for middleware programs, which are written in DCALGOL (data comms ALGOL). This is used for message reception and dispatching, removing messages from input queues and placing them on queues for other processes in the system to handle. Middleware such as COMS (introduced around 1984) receives messages from around the network and dispatches these messages to specific handling processes or to an MCS (Message Control System) such as CANDE ("Command AND Edit", the program development environment). MCSs are items of software worth noting – they control user sessions and keep track of user state without having to run per-user processes, since a single MCS stack can be shared by many users. Load balancing can also be achieved at the MCS level. For example, if an MCS handles 30 users per stack, then 31 to 60 users require two stacks, 61 to 90 users require three stacks, and so on. This gives B5000 machines a great performance advantage in a server, since a new user process, and thus a new stack, does not have to be started each time a user attaches to the system. Thus users (whether they require state or not) can be serviced efficiently with MCSs. MCSs also provide the backbone of large-scale transaction processing. The MCS talked with an external co-processor, the DCP (Datacomm Control Processor). This was a 24-bit minicomputer with a conventional register architecture and hardware I/O capability to handle thousands of remote terminals. The DCP and the B6500 communicated by messages in memory, essentially packets in today's terms, and the MCS did the B6500-side processing of those messages. In the early years the DCP had an assembler (Dacoma) and an application program called DCPProgen, written in B6500 ALGOL. Later the NDL (Network Definition Language) compiler generated the DCP code and the NDF (network definition file). There was one ALGOL function for each kind of DCP instruction, and calling that function caused the corresponding DCP instruction bits to be emitted to the output. 
A DCP program was an ALGOL program comprising nothing but a long list of calls on these functions, one for each assembly language statement. Essentially ALGOL acted like the macro pass of a macro assembler. The first pass was the ALGOL compiler; the second pass was running the resulting program (on the B6500) which would then emit the binary for the DCP. DMALGOL and databases Another variant of ALGOL is DMALGOL (Data Management ALGOL). DMALGOL is ALGOL extended for compiling the DMSII database software from database description files created by the DASDL (Data Access and Structure Definition Language) compiler. Database designers and administrators compile database descriptions to generate DMALGOL code tailored for the tables and indexes specified. Administrators never need to write DMALGOL themselves. Normal user-level programs obtain database access by using code written in application languages, mainly ALGOL and COBOL, extended with database instructions and transaction processing directives. The most notable feature of DMALGOL is its preprocessing mechanisms to generate code for handling tables and indices. DMALGOL preprocessing includes variables and loops, and can generate names based on compile-time variables. This enables tailoring far beyond what can be done by preprocessing facilities which lack loops. DMALGOL is used to provide tailored access routines for DMSII databases. After a database is defined using the Data Access and Structure Definition Language (DASDL), the schema is translated by the preprocessor into tailored DMALGOL access routines and then compiled. This means that, unlike in other DBMS implementations, there is often no need for database-specific if/then/else code at run-time. In the 1970s, this "tailoring" was used very extensively to reduce the code footprint and execution time. It became much less used in later years, partly because low-level fine tuning for memory and speed became less critical, and partly because eliminating the preprocessing made coding simpler and thus enabled more important optimizations. DMALGOL included verbs like "find", "lock", "store". Also the verbs "begintransaction" and "endtransaction" were included, solving the deadlock situation when multiple processes accessed and updated the same structures. Roy Guck of Burroughs was one of the main developers of DMSII. In later years, with compiler code size being less of a concern, most of the preprocessing constructs were made available in the user level of ALGOL. Only the unsafe constructs and the direct processing of the database description file remain restricted to DMALGOL. Stack architecture In many early systems and languages, programmers were often told not to make their routines too small. Procedure calls and returns were expensive, because a number of operations had to be performed to maintain the stack. The B5000 was designed as a stack machine โ€“ all program data except for arrays (which include strings and objects) was kept on the stack. This meant that stack operations were optimized for efficiency. As a stack-oriented machine, there are no programmer addressable registers. Multitasking is also very efficient on the B5000 and B6500 lines. There are specific instruction to perform process switches: B5000, B5500, B5700 Initiate P1 (IP1) and Initiate P2 (IP2) B6500, B7500 and successors MVST (move stack). 
Each stack and associated Program Reference Table (PRT) represents a process (task or thread), and tasks can become blocked waiting on resource requests (which includes waiting for a processor to run on if the task has been interrupted because of preemptive multitasking). User programs cannot issue an IP1, IP2 or MVST, and there is only one place in the operating system where this is done. So a process switch proceeds something like this – a process requests a resource that is not immediately available, maybe a read of a record of a file from a block which is not currently in memory, or the system timer has triggered an interrupt. The operating system code is entered and run on top of the user stack. It turns off user process timers. The current process is placed in the appropriate queue for the resource being requested, or in the ready queue waiting for the processor if this is a preemptive context switch. The operating system determines the first process in the ready queue and invokes the instruction move_stack, which makes the process at the head of the ready queue active. Stack speed and performance Stack performance was considered to be slow compared to register-based architectures; such an architecture had, for example, been considered and rejected for the System/360. One way to increase system speed is to keep data as close to the processor as possible. In the B5000 stack, this was done by assigning the top two positions of the stack to two registers, A and B. Most operations are performed on those two top-of-stack positions. On faster machines past the B5000, more of the stack may be kept in registers or cache near the processor. Thus the designers of the current successors to the B5000 systems can optimize using whatever is the latest technique, and programmers do not have to adjust their code for it to run faster – they do not even need to recompile, thus protecting software investment. Some programs have been known to run for years over many processor upgrades. Such speed-up is limited on register-based machines. Another point for speed as promoted by the RISC designers was that processor speed is considerably faster if everything is on a single chip. It was a valid point in the 1970s, when more complex architectures such as the B5000 required too many transistors to fit on a single chip. However, this is not the case today, and every B5000 successor machine now fits on a single chip along with performance support techniques such as caches and instruction pipelines. In fact, the A Series line of B5000 successors included the first single-chip mainframe, the Micro-A of the late 1980s. This "mainframe" chip (named SCAMP, for Single-Chip A-series Mainframe Processor) sat on an Intel-based plug-in PC board. How programs map to the stack Here is an example of how programs map to the stack structure:
begin
   — This is lexical level 2 (level zero is reserved for the operating system
   — and level 1 for code segments).
   — At level 2 we place global variables for our program. 
   integer i, j, k;
   real f, g;
   array a [0:9];
   procedure p (real p1, p2);
      value p1;           — p1 passed by value, p2 implicitly passed by reference.
   begin
      — This block is at lexical level 3
      real r1, r2;
      r2 := p1 * 5;
      p2 := r2;            — This sets g to the value of r2
      p1 := r2;            — This sets p1 to r2, but not f
                           — Since this overwrites the original value of f in p1 it might be a
                           — coding mistake. Some few of ALGOL's successors therefore insist that
                           — value parameters be read only – but most do not.
      if r2 > 10 then
      begin
         — A variable declared here makes this lexical level 4
         integer n;
         — The declaration of a variable makes this a block, which will invoke some
         — stack building code. Normally you won't declare variables here, in which
         — case this would be a compound statement, not a block.
         ...               <== sample stack is executing somewhere here.
      end;
   end;
   .....
   p (f, g);
end.
Each stack frame corresponds to a lexical level in the current execution environment. As you can see, lexical level is the static textual nesting of a program, not the dynamic call nesting. The visibility rules of ALGOL, a language designed for single-pass compilers, mean that only variables declared before the current position are visible at that part of the code, hence the requirement for forward declarations. All variables declared in enclosing blocks are visible. Another case is that variables of the same name may be declared in inner blocks, and these effectively hide the outer variables, which become inaccessible. Lexical nesting is static, unrelated to execution nesting with recursion, etc., so it is very rare to find a procedure nested more than five levels deep, and it could be argued that such programs would be poorly structured. B5000 machines allow nesting of up to 32 levels. This could cause difficulty for some systems that generated Algol source as output (tailored to solve some special problem) if the generation method frequently nested procedure within procedure. Procedures Procedures can be invoked in four ways – normal, call, process, and run. The normal invocation invokes a procedure in the normal way any language invokes a routine, by suspending the calling routine until the invoked procedure returns. The call mechanism invokes a procedure as a coroutine. Coroutines have partner tasks, where control is explicitly passed between the tasks by means of a CONTINUE instruction. These are synchronous processes. The process mechanism invokes a procedure as an asynchronous task, and in this case a separate stack is set up starting at the lexical level of the processed procedure. As an asynchronous task, there is no control over exactly when control will be passed between the tasks, unlike coroutines. The processed procedure still has access to the enclosing environment and this is a very efficient IPC (Inter Process Communication) mechanism. Since two or more tasks now have access to common variables, the tasks must be synchronized to prevent race conditions, which is handled by the EVENT data type, where processes can WAIT on an event until it is caused by another cooperating process. 
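To make the WAIT mechanism just described concrete, here is a minimal Python sketch. It is purely illustrative and not derived from any Burroughs or MCP code: two threads stand in for cooperating tasks, a threading.Event stands in for the EVENT data type, and the names worker and coordinator are invented for the example.

# Illustrative sketch only: Python threads standing in for Burroughs tasks,
# and threading.Event standing in for the EVENT data type (WAIT/CAUSE).
import threading

data_ready = threading.Event()    # plays the role of an EVENT variable
shared = {}                       # stands in for variables in the shared enclosing environment

def worker():
    # WAIT: block until a cooperating task causes the event.
    data_ready.wait()
    print("worker saw:", shared["value"])

def coordinator():
    shared["value"] = 42
    # CAUSE: wake any task waiting on the event.
    data_ready.set()

t1 = threading.Thread(target=worker)
t2 = threading.Thread(target=coordinator)
t1.start(); t2.start()
t1.join(); t2.join()

In the Burroughs scheme the shared dictionary above would simply be variables in the enclosing lexical environment that both tasks can address directly.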
EVENTs also allow for mutual exclusion synchronization through the PROCURE and LIBERATE functions. If for any reason the child task dies, the calling task can continue – however, if the parent process dies, then all child processes are automatically terminated. On a machine with more than one processor, the processes may run simultaneously. This EVENT mechanism is a basic enabler for multiprocessing in addition to multitasking. Run invocation type The last invocation type is run. This runs a procedure as an independent task which can continue after the originating process terminates. For this reason, the child process cannot access variables in the parent's environment, and all parameters passed to the invoked procedure must be call-by-value. Thus Burroughs Extended ALGOL had some of the multi-processing and synchronization features of later languages like Ada. It made use of the support for asynchronous processes that was built into the hardware. Inline procedures One last possibility is that a procedure may be declared INLINE; that is, when the compiler sees a reference to it, the code for the procedure is generated inline to save the overhead of a procedure call. This is best done for small pieces of code. Inline functions are similar to parameterized macros such as C #defines, except without the problems with parameters that macros can have. This facility is available in NEWP. Asynchronous calls In the example program only normal calls are used, so all the information will be on a single stack. For asynchronous calls, the stack would be split into multiple stacks so that the processes share data but run asynchronously. Display registers A stack hardware optimization is the provision of D (or "display") registers. These are registers that point to the start of each called stack frame. These registers are updated automatically as procedures are entered and exited and are not accessible by any software. There are 32 D registers, which is what limits lexical nesting to 32 levels. Consider how we would access a lexical level 2 (D[2]) global variable from lexical level 5 (D[5]). Suppose the variable is 6 words away from the base of lexical level 2. It is thus represented by the address couple (2, 6). If we don't have D registers, we have to look at the control word at the base of the D[5] frame, which points to the frame containing the D[4] environment. We then look at the control word at the base of this environment to find the D[3] environment, and continue in this fashion until we have followed all the links back to the required lexical level. This is not the same path as the return path back through the procedures which have been called in order to get to this point. (The architecture keeps both the data stack and the call stack in the same structure, but uses control words to tell them apart.) As you can see, this is quite inefficient just to access a variable. With D registers, the D[2] register points at the base of the lexical level 2 environment, and all we need to do to generate the address of the variable is to add its offset from the stack frame base to the frame base address in the D register. (There is an efficient linked-list search operator, LLLU, which could search the stack in the above fashion, but the D register approach is still going to be faster.) With D registers, access to entities in outer and global environments is just as efficient as local variable access. 
The stack for the example program, executing at the point marked in the sample above, looks like this:

D register    Tag   Data      — Address couple, Comments
| 0 | n    |   (4, 1) The integer n (declared on entry to a block, not a procedure)
|-----------------------|
| D[4]==>3 | MSCW |   (4, 0) The Mark Stack Control Word containing the link to D[3].
|=======================|
| 0 | r2   |   (3, 5) The real r2
|-----------------------|
| 0 | r1   |   (3, 4) The real r1
|-----------------------|
| 1 | p2   |   (3, 3) A SIRW reference to g at (2, 6)
|-----------------------|
| 0 | p1   |   (3, 2) The parameter p1 from value of f
|-----------------------|
| 3 | RCW  |   (3, 1) A return control word
|-----------------------|
| D[3]==>3 | MSCW |   (3, 0) The Mark Stack Control Word containing the link to D[2].
|=======================|
| 1 | a    |   (2, 7) The array a ======>[ten word memory block]
|-----------------------|
| 0 | g    |   (2, 6) The real g
|-----------------------|
| 0 | f    |   (2, 5) The real f
|-----------------------|
| 0 | k    |   (2, 4) The integer k
|-----------------------|
| 0 | j    |   (2, 3) The integer j
|-----------------------|
| 0 | i    |   (2, 2) The integer i
|-----------------------|
| 3 | RCW  |   (2, 1) A return control word
|-----------------------|
| D[2]==>3 | MSCW |   (2, 0) The Mark Stack Control Word containing the link to the previous stack frame.
|=======================|   — Stack bottom
If we had invoked the procedure p as a coroutine, or as a process instruction, the D[3] environment would have become a separate D[3]-based stack. This means that asynchronous processes still have access to the D[2] environment, as implied in ALGOL program code. Taking this one step further, a totally different program could call another program's code, creating a D[3] stack frame pointing to another process' D[2] environment on top of its own process stack. In an instant the whole address space of the executing code changes: the D[2] environment on the process's own stack is no longer directly addressable and, instead, the D[2] environment in another process's stack becomes directly addressable. This is how library calls are implemented. At such a cross-stack call, the calling code and called code could even originate from programs written in different source languages and be compiled by different compilers. The D[1] and D[0] environments do not occur in the current process's stack. The D[1] environment is the code segment dictionary, which is shared by all processes running the same code. The D[0] environment represents entities exported by the operating system. Stack frames actually don't even have to exist in a process stack. This feature was used early on for file I/O optimization: the FIB (file information block) was linked into the display registers at D[1] during I/O operations. In the early nineties, this ability was implemented as a language feature as STRUCTURE BLOCKs and, combined with library technology, as CONNECTION BLOCKs. The ability to link a data structure into the display register address scope implemented object orientation. Thus, the B5000 actually used a form of object orientation long before the term was ever used. On other systems, the compiler might build its symbol table in a similar manner, but eventually the storage requirements would be collated and the machine code would be written to use flat memory addresses of 16, 32 or even 64 bits. These addresses might contain anything, so that a write to the wrong address could damage anything. Instead, the two-part address scheme was implemented by the hardware. 
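The address-couple lookup just described can be sketched in a few lines of Python. This is only an illustrative model, not Burroughs hardware or microcode: the Frame class is invented, and a plain dictionary stands in for the D registers. It shows why the display registers turn a chain of control-word lookups into a single base-plus-offset access for the couple (2, 6) used in the example.

# Illustrative model only: frames hold a static link to the enclosing
# lexical level; "display" stands in for the D registers.
class Frame:
    def __init__(self, level, static_link, size):
        self.level = level
        self.static_link = static_link    # frame of the enclosing lexical level
        self.slots = [0] * size           # variables at offsets from the frame base

def resolve_by_links(current, level, offset):
    # Without D registers: follow static links until the target level is reached.
    f = current
    while f.level != level:
        f = f.static_link
    return f.slots[offset]

def resolve_by_display(display, level, offset):
    # With D registers: the target frame base is available directly.
    return display[level].slots[offset]

# Frames for lexical levels 2..5, matching the (2, 6) example in the text.
d2 = Frame(2, None, 8); d3 = Frame(3, d2, 6); d4 = Frame(4, d3, 2); d5 = Frame(5, d4, 4)
display = {2: d2, 3: d3, 4: d4, 5: d5}
d2.slots[6] = 3.14                        # the global g at address couple (2, 6)

assert resolve_by_links(d5, 2, 6) == resolve_by_display(display, 2, 6) == 3.14

Both lookups return the same value; the difference is that the first walks a chain of frames whose length grows with the lexical depth, while the second is a single indexed access, which is the point of the hardware display registers.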
At each lexical level, variables were placed at displacements up from the base of the level's stack, typically occupying one word - double precision or complex variables would occupy two. Arrays were not stored in this area, only a one word descriptor for the array was. Thus, at each lexical level the total storage requirement was not great: dozens, hundreds or a few thousand in extreme cases, certainly not a count requiring 32-bits or more. And indeed, this was reflected in the form of the VALC instruction (value call) that loaded an operand onto the stack. This op-code was two bits long and the rest of the byte's bits were concatenated with the following byte to give a fourteen-bit addressing field. The code being executed would be at some lexical level, say six: this meant that only lexical levels zero to six were valid, and so just three bits were needed to specify the lexical level desired. The address part of the VALC operation thus reserved just three bits for that purpose, with the remainder being available for referring to entities at that and lower levels. A deeply nested procedure (thus at a high lexical level) would have fewer bits available to identify entities: for level sixteen upwards five bits would be needed to specify the choice of levels 0โ€“31 thus leaving nine bits to identify no more than the first 512 entities of any lexical level. This is much more compact than addressing entities by their literal memory address in a 32-bit addressing space. Further, only the VALC opcode loaded data: opcodes for ADD, MULT and so forth did no addressing, working entirely on the top elements of the stack. Much more important is that this method meant that many errors available to systems employing flat addressing could not occur because they were simply unspeakable even at the machine code level. A task had no way to corrupt memory in use by another task, because it had no way to develop its address. Offsets from a specified D-register would be checked by the hardware against the stack frame bound: rogue values would be trapped. Similarly, within a task, an array descriptor contained information on the array's bounds, and so any indexing operation was checked by the hardware: put another way, each array formed its own address space. In any case, the tagging of all memory words provided a second level of protection: a misdirected assignment of a value could only go to a data-holding location, not to one holding a pointer or an array descriptor, etc. and certainly not to a location holding machine code. Array storage Arrays were not stored contiguous in memory with other variables, they were each granted their own address space, which was located via the descriptor. The access mechanism was to calculate on the stack the index variable (which therefore had the full integer range potential, not just fourteen bits) and use it as the offset into the array's address space, with bound checking provided by the hardware. Should an array's length exceed 1,024 words, the array would be segmented, and the index be converted into a segment index and an offset into the indexed segment. In ALGOL's case, a multidimensional array would employ multiple levels of such addressing. For a reference to A(i,j), the first index would be into an array of descriptors, one descriptor for each of the rows of A, which row would then be indexed with j as for a single-dimensional array, and so on for higher dimensions. 
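As a rough illustration of the row-of-descriptors scheme just described, the following Python sketch models each dimension as a descriptor that carries its own length and checks every index against it. The Descriptor class and its method names are invented for this example; real Burroughs descriptors are hardware words with a quite different format, and the check here is done in software only to mimic the hardware trap.

# Illustrative sketch: each "descriptor" records a length and points to its
# storage; every index is checked against the bound at its own level.
class Descriptor:
    def __init__(self, storage):
        self.length = len(storage)
        self.storage = storage            # data words, or descriptors for the next dimension

    def index(self, i):
        if not 0 <= i < self.length:
            # The real machine traps to an interrupt; here we just raise.
            raise IndexError("bounds violation at this dimension")
        return self.storage[i]

# A two-dimensional array built as one descriptor per row; rows may differ
# in length, which is what makes "ragged" arrays possible.
rows = [Descriptor([10 * r + c for c in range(4)]) for r in range(3)]
a = Descriptor(rows)

print(a.index(1).index(2))    # element A(1,2) -> 12
try:
    a.index(2).index(7)       # second index out of bounds: caught at that dimension
except IndexError as err:
    print("trapped:", err)

Because each row carries its own length, rows of different sizes, and therefore the ragged and resizable arrays discussed below, fall out of the same mechanism.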
Hardware checking against the known bounds of all the array's indices would prevent erroneous indexing. FORTRAN, however, regards all multidimensional arrays as being equivalent to a single-dimensional array of the same size, and for a multidimensional array simple integer arithmetic is used to calculate the offset where element A(i,j,k) would be found in that single sequence. The single-dimensional equivalent array, possibly segmented if large enough, would then be accessed in the same manner as a single-dimensional array in ALGOL. Although accessing outside this array would be prevented, a wrong value for one index combined with a suitably wrong value for another index might not result in a bounds violation of the single-sequence array; in other words, the indices were not checked individually. Because an array's storage was not bounded on each side by storage for other items, it was easy for the system to "resize" an array – though changing the number of dimensions was precluded, because compilers required all references to have the same number of dimensions. In ALGOL's case, this enabled the development of "ragged" arrays, rather than the usual fixed rectangular (or higher-dimension) arrays. Thus in two dimensions, a ragged array would have rows of different sizes. For instance, given a large array A(100,100) of mostly zero values, a sparse array representation declared as SA(100,0) could have each row resized to have exactly enough elements to hold only the non-zero values of A along that row. Because arrays larger than 1024 words were segmented but smaller arrays were not, on a system that was short of real memory, increasing the declared size of a collection of scratchpad arrays from 1,000 to, say, 1,050 could mean that the program would run with far less "thrashing", as only the smaller individual segments in use were needed in memory. Actual storage for an array segment would be allocated at run time only if an element in that segment were accessed, and all elements of a created segment would be initialised to zero. This therefore encouraged the normally unwise practice of not initialising an array to zero at the start. Stack structure advantages One nice thing about the stack structure is that if a program does happen to fail, a stack dump is taken and it is very easy for a programmer to find out exactly what the state of a running program was. Compare that to the core dumps and exchange packages of other systems. Another thing about the stack structure is that programs are implicitly recursive. FORTRAN was not expected to support recursion, and perhaps one stumbling block to people's understanding of how ALGOL was to be implemented was how to implement recursion. On the B5000, this was not a problem – in fact, they had the reverse problem, how to stop programs from being recursive. In the end they didn't bother. The Burroughs FORTRAN compiler allowed recursive calls (just as every other FORTRAN compiler does), but unlike on many other computers, on a stack-based system the returns from such calls succeeded as well. This could have odd effects, as with a system for the formal manipulation of mathematical expressions whose central subroutines repeatedly invoked each other without ever returning: large jobs were ended by stack overflow! Thus Burroughs FORTRAN had better error checking than other contemporary implementations of FORTRAN. 
For instance, for subroutines and functions it checked that they were invoked with the correct number of parameters, as is normal for ALGOL-style compilers. On other computers, such mismatches were common causes of crashes. Similarly with the array-bound checking: programs that had been used for years on other systems embarrassingly often would fail when run on a Burroughs system. In fact, Burroughs became known for its superior compilers and implementation of languages, including the object-oriented Simula (a superset of ALGOL), and Iverson, the designer of APL, declared that the Burroughs implementation of APL was the best he'd seen. John McCarthy, the language designer of LISP, disagreed: since LISP was based on modifiable code, he did not like the unmodifiable code of the B5000, but most LISP implementations would run in an interpretive environment anyway. The storage required for the multiple processes came from the system's memory pool as needed. There was no need to do SYSGENs on Burroughs systems, as with competing systems, in order to preconfigure memory partitions in which to run tasks. Tagged architecture The most defining aspect of the B5000 is that it is a stack machine, as treated above. However, two other very important features of the architecture are that it is tag-based and descriptor-based. In the original B5000, a flag bit in each control or numeric word was set aside to identify the word as a control word or numeric word. This was partially a security mechanism to stop programs from being able to corrupt control words on the stack. Later, when the B6500 was designed, it was realized that the 1-bit control word/numeric distinction was a powerful idea, and this was extended to three bits outside of the 48-bit word into a tag. The data bits are bits 0–47 and the tag is in bits 48–50. Bit 48 was the read-only bit, thus odd tags indicated control words that could not be written by a user-level program. Code words were given tag 3. The functions of the most important tags are described below. Internally, some of the machines had 60-bit words, with the extra bits being used for engineering purposes such as a Hamming code error-correction field, but these were never seen by programmers. The current incarnation of these machines, the Unisys ClearPath, has extended the tag further to four bits. The microcode level that specified four-bit tags was referred to as level Gamma. Even-tagged words are user data which can be modified by a user program as user state. Odd-tagged words are created and used directly by the hardware and represent a program's execution state. Since these words are created and consumed by specific instructions or the hardware, the exact format of these words can change between hardware implementations, and user programs do not need to be recompiled, since the same code stream will produce the same results even though the system word format may have changed. Tag 1 words represent on-stack data addresses. The normal IRW simply stores an address couple to data on the current stack. The SIRW references data on any stack by including a stack number in the address. Tag 5 words are descriptors, which are more fully described in the next section. Tag 5 words represent off-stack data addresses. Tag 7 is the program control word which describes a procedure entry point. When operators hit a PCW, the procedure is entered. The ENTR operator explicitly enters a procedure (non-value-returning routine). 
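A small Python sketch may help visualize the word layout just described: 48 data bits with a 3-bit tag in bits 48 to 50, where bit 48 acts as the read-only bit so that odd tags mark control words. The constants and helper names below are invented for the illustration and do not correspond to any actual Burroughs or Unisys software.

# Illustrative only: model a 51-bit tagged word as a Python integer,
# with data in bits 0-47 and a 3-bit tag in bits 48-50.
DATA_BITS = 48
TAG_SHIFT = 48
TAG_MASK = 0b111

def make_word(tag: int, data: int) -> int:
    assert 0 <= tag <= 7 and 0 <= data < (1 << DATA_BITS)
    return (tag << TAG_SHIFT) | data

def tag_of(word: int) -> int:
    return (word >> TAG_SHIFT) & TAG_MASK

def user_writable(word: int) -> bool:
    # Bit 48 (the low bit of the tag) is the read-only bit: odd tags
    # mark control words that user-level code may not overwrite.
    return tag_of(word) % 2 == 0

pcw = make_word(7, 0x123456)     # tag 7: a program control word
datum = make_word(0, 42)         # tag 0: ordinary data
assert not user_writable(pcw) and user_writable(datum)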
Functions (value-returning routines) are implicitly entered by operators such as value call (VALC). Global routines are stored in the D[2] environment as SIRWs that point to a PCW stored in the code segment dictionary in the D[1] environment. The D[1] environment is not stored on the current stack because it can be referenced by all processes sharing this code. Thus code is reentrant and shared. Tag 3 represents code words themselves, which won't occur on the stack. Tag 3 is also used for the stack control words MSCW, RCW, TOSCW. Descriptor-based architecture The figure to the left shows how the Burroughs Large System architecture was fundamentally a hardware architecture for object-oriented programming, something that still doesn't exist in conventional architectures. Instruction sets There are three distinct instruction sets for the Burroughs large systems. All three are based on short syllables that fit evenly into words. B5000, B5500 and B5700 Programs on a B5000, B5500 and B5700 are made up of 12-bit syllables, four to a word. The architecture has two modes, Word Mode and Character Mode, and each has a separate repertoire of syllables. A processor may be either Control State or Normal State, and certain syllables are only permissible in Control State. The architecture does not provide for addressing registers or storage directly; all references are through the 1024 word Program Reference Table, current code segment, marked locations within the stack or to the A and B registers holding the top two locations on the stack. Burroughs numbers bits in a syllable from 0 (high bit) to 11 (low bit) B6500, B7500 and successors Programs are made up of 8-bit syllables, which may be Name Call, be Value Call or form an operator, which may be from one to twelve syllables in length. There are less than 200 operators, all of which fit into 8-bit syllables. Many of these operators are polymorphic depending on the kind of data being acted on as given by the tag. If we ignore the powerful string scanning, transfer, and edit operators, the basic set is only about 120 operators. If we remove the operators reserved for the operating system such as MVST and HALT, the set of operators commonly used by user-level programs is less than 100. The Name Call and Value Call syllables contain address couples; the Operator syllables either use no addresses or use control words and descriptors on the stack. Multiple processors The B5000 line also were pioneers in having multiple processors connected together on a high-speed bus. The B7000 line could have up to eight processors, as long as at least one was an I/O module. RDLK is a very low-level way of synchronizing between processors. The high level used by user programs is the EVENT data type. The EVENT data type did have some system overhead. To avoid this overhead, a special locking technique called Dahm locks (named after a Burroughs software guru, Dave Dahm) can be used. Notable operators are: HEYU โ€” send an interrupt to another processor RDLK โ€” Low-level semaphore operator: Load the A register with the memory location given by the A register and place the value in the B register at that memory location in a single uninterruptible cycle. The Algol compiler produced code to invoke this operator via a special function that enabled a "swap" operation on single-word data without an explicit temporary value. 
x:=RDLK(x,y); WHOI โ€” Processor identification IDLE โ€” Idle until an interrupt is received Two processors could infrequently simultaneously send each other a 'HEYU' command resulting in a lockup known as 'a deadly embrace'. Influence of the B5000 The direct influence of the B5000 can be seen in the current Unisys ClearPath range of mainframes which are the direct descendants of the B5000 and still have the MCP operating system after 40 years of consistent development. This architecture is now called emode (for emulation mode) since the B5000 architecture has been implemented on machines built from Intel Xeon processors running the x86 instruction set as the native instruction set, with code running on those processors emulating the B5000 instruction set. In those machines, there was also going to be an nmode (native mode), but this was dropped, so you may often hear the B5000 successor machines being referred to as "emode machines". B5000 machines were programmed exclusively in high-level languages; there is no assembler. The B5000 stack architecture inspired Chuck Moore, the designer of the programming language Forth, who encountered the B5500 while at MIT. In Forth - The Early Years, Moore described the influence, noting that Forth's DUP, DROP and SWAP came from the corresponding B5500 instructions (DUPL, DLET, EXCH). B5000 machines with their stack-based architecture and tagged memory also heavily influenced the Soviet Elbrus series of mainframes and supercomputers. The first two generations of the series featured tagged memory and stack-based CPUs that were programmed only in high-level languages. There existed a kind of an assembly language for them, called El-76, but it was more or less a modification of ALGOL 68 and supported structured programming and first-class procedures. Later generations of the series, though, switched away from this architecture to the EPIC-like VLIW CPUs. The Hewlett-Packard designers of the HP 3000 business system had used a B5500 and were greatly impressed by its hardware and software; they aimed to build a 16-bit minicomputer with similar software. Several other HP divisions created similar minicomputer or microprocessor stack machines. Bob Barton's work on reverse Polish notation (RPN) also found its way into HP calculators beginning with the 9100A, and notably the HP-35 and subsequent calculators. The NonStop systems designed by Tandem Computers in the late 1970s and early 1980s were also 16-bit stack machines, influenced by the B5000 indirectly through the HP 3000 connection, as several of the early Tandem engineers were formerly with HP. Around 1990, these systems migrated to MIPS RISC architecture but continued to support execution of stack machine binaries by object code translation or direct emulation. Sometime after 2000, these systems migrated to Itanium architecture and continued to run the legacy stack machine binaries. Bob Barton was also very influential on Alan Kay. Kay was also impressed by the data-driven tagged architecture of the B5000 and this influenced his thinking in his developments in object-oriented programming and Smalltalk. Another facet of the B5000 architecture was that it was a secure architecture that runs directly on hardware. This technique has descendants in the virtual machines of today in their attempts to provide secure environments. One notable such product is the Java JVM which provides a secure sandbox in which applications run. 
The value of the hardware-architecture binding that existed before emode would be substantially preserved in the x86-based machines to the extent that MCP was the one and only control program, but the support provided by those machines is still inferior to that provided on the machines where the B5000 instruction set is the native instruction set. A little-known Intel processor architecture that actually preceded 32-bit implementations of the x86 instruction set, the Intel iAPX 432, would have provided an equivalent physical basis, as it too was essentially an object-oriented architecture. See also Burroughs Medium Systems Burroughs Small Systems CANDE Network Definition Language (NDL) Work Flow Language (WFL) Octal floating point Notes References The Extended ALGOL Primer (Three Volumes), Donald J. Gregory. Computer Architecture: A Structured Approach, R. Doran, Academic Press (1979). Stack Computers: The New Wave, Philip J. Koopman, available at: B5500, B6500, B6700, B6800, B6900, B7700 manuals at: bitsavers.org Further reading Barton, Robert S. "A New Approach to the Functional Design of a Digital Computer" Proceedings of the Western Joint Computer Conference. ACM (1961). Burroughs B 5000 Oral history, Charles Babbage Institute, University of Minnesota. The Burroughs 5000 computer series is discussed by individuals responsible for its development and marketing from 1957 through the 1960s in a 1985 conference sponsored by AFIPS and Burroughs Corporation. Hauck, E.A., Dent, Ben A. "Burroughs B6500/B7500 Stack Mechanism", SJCC (1968) pp.ย 245โ€“251. McKeeman, William M. "Language Directed Computer Design", Fall Joint Computer Conference, (1967) pp.ย 413โ€“417. Organick, Elliot I. "Computer System Organization The B5700/B6700 series", Academic Press (1973). Waychoff, Richard, "Stories of the B5000 and People Who Were There", September 27, 1979. Allweiss, Jack. "The Burroughs B5900 and E-Mode A bridge to 21st Century Computing", Revised 2010. Martin, Ian. "'Too far ahead of its time': Britain, Burroughs and real-time banking in the 1960s", Society for the History of Technology Annual Meeting, 20 Sep-3 Oct 2010, Tacoma, USA. External links Ian Joyner's Burroughs page The Burroughs B5900 and E-Mode: A bridge to 21st Century Computing - Jack Allweiss (web archive of:) Ralph Klimek on the B7800 at Monash University "Early Burroughs Machines", University of Virginia's Computer Museum. "Computer System Organization", ACM Monograph Series. Index of B8500 manuals B5500 Emulation Project Project to create a functional emulator for the Burroughs B5500 computer system. "Burroughs B6500 film & transcript" Large Systems High-level language computer architecture Stack machines Transistorized computers Unisys Computer-related introductions in 1961 1960s in computing 1970s in computing 1980s in computing Burroughs B5000
30454551
https://en.wikipedia.org/wiki/1916%20USC%20Trojans%20football%20team
1916 USC Trojans football team
The 1916 USC Trojans football team represented the University of Southern California (USC) in the 1916 college football season. In their third non-consecutive year under head coach Dean Cromwell (Cromwell was also coach in 1909 and 1910), the Trojans compiled a 5-3 record and outscored their opponents by a combined total of 129 to 80. The season featured USC's first game against Arizona, a 20-7 victory in Phoenix, its third game against California, a 27-0 loss in Los Angeles, and its second game against Oregon Agricultural, a 16-7 loss in Los Angeles. Schedule References USC Trojans USC Trojans football seasons USC Trojans football
14941891
https://en.wikipedia.org/wiki/Gregory%20Abowd
Gregory Abowd
Gregory Dominic Abowd (born September 12, 1964) is a computer scientist best known for his work in ubiquitous computing, software engineering, and technologies for autism. He is the J.Z. Liang Professor in the School of Interactive Computing at the Georgia Institute of Technology, where he joined the faculty in 1994. Biography Early life Gregory Abowd was born in 1964 and raised in Farmington Hills, a suburb of Detroit, Michigan. He graduated summa cum laude with a B.S. in Honors Mathematics from the University of Notre Dame in 1986. He attended the University of Oxford in the United Kingdom as a Rhodes Scholar, where he received his M.Sc. in 1987 and his D.Phil. in 1991, both in the field of Computation. He was a research associate from 1989 to 1992 at the University of York and a postdoctoral research associate from 1992 to 1994 at Carnegie Mellon University. In 1994, he was appointed to the faculty at the Georgia Institute of Technology, where he remains today. Research interests and achievements Abowd's published work is primarily in the areas of Human-Computer Interaction, Ubiquitous Computing, Software Engineering, and Computer Supported Cooperative Work. He is particularly known for his work in ubiquitous computing, where he has made contributions in the areas of automated capture and access, context-aware computing, and smart home technologies. Abowd's research primarily has an applications focus, where he has worked to develop systems for health care, education, the home, and individuals with autism. At Georgia Tech, he teaches in the School of Interactive Computing in the College of Computing. He is a member of the GVU Center and directs the Ubiquitous Computing and Autism and Technology research groups. Abowd was the founding Director of the Aware Home Research Initiative and is Executive Director of the Health Systems Institute at Georgia Tech. In 2008, he founded the Atlanta Autism Consortium, a group of researchers interested in autism in Atlanta, Georgia. He is one of the authors of Human-Computer Interaction (Prentice Hall), a popular human-computer interaction textbook. Abowd's contributions to the fields of Human-Computer Interaction and Ubiquitous Computing have been recognized through his numerous awards and extensive published work. In 2008, he was named a Fellow of the Association for Computing Machinery, one of the top honors for computer science researchers. Within the field of Human-Computer Interaction, he has been recognized at the CHI Conference, the most prestigious publication venue in HCI, as a top researcher through induction to the CHI Academy in 2008 and was awarded the Social Impact Award in 2007. He is also one of the most prolific authors in computer science and in the field of Human-Computer Interaction. In March 2016, Abowd was named the J.Z. Liang Professor in the School of Interactive Computing. Selected bibliography Kientz, J.A., R.I. Arriaga, and G.D. Abowd: Baby Steps: Evaluation of a System to Support Record-Keeping for Parents of Young Children. CHI 2009. Hayes, G.R., L.M. Gardere, G.D. Abowd, K.N. Truong: CareLog: a selective archiving tool for behavior management in schools. CHI 2008: 685-694 Kientz, J.A., G.R. Hayes, T.L. Westeyn, T. Starner, G.D. Abowd: Pervasive Computing and Autism: Assisting Caregivers of Children with Special Needs. IEEE Pervasive Computing 6(1): 28-35 (2007) Patel, S.N., K.N. Truong, and G.D. Abowd. PowerLine Positioning: A Practical Sub-Room-Level Indoor Location System for Domestic Use. Proceedings of Ubicomp 2006. 
Kientz, J.A. G.R. Hayes, G.D. Abowd, R.E. Grinter: From the war room to the living room: decision support for home-based therapy teams. CSCW 2006: 209-218 Hayes, G.R., J.A. Kientz, K.N. Truong, D.R. White, G.D. Abowd, Trevor Pering: Designing Capture Applications to Support the Education of Children with Autism. Ubicomp 2004: 161-178 Abowd, G.D., and E.D. Mynatt: Charting past, present, and future research in ubiquitous computing. ACM Trans. Comput.-Hum. Interact. 7(1): 29-58 (2000) Abowd, G.D.: Classroom 2000: An Experiment with the Instrumentation of a Living Educational Environment. IBM Systems Journal 38(4): 508-530 (1999) Abowd, G.D., A.K. Dey, P.J. Brown, N. Davies, M. Smith, P. Steggles: Towards a Better Understanding of Context and Context-Awareness. HUC 1999: 304-307 Kidd, C.D. R. Orr, G.D. Abowd, C.G. Atkeson, I.A. Essa, B. MacIntyre, E.D. Mynatt, T. Starner, W. Newstetter: The Aware Home: A Living Laboratory for Ubiquitous Computing Research. CoBuild 1999: 191-198 Abowd, G.D. C.G. Atkeson, J.I. Hong, S. Long, R. Kooper, M. Pinkerton: Cyberguide: A mobile context-aware tour guide. Wireless Networks 3(5): 421-433 (1997) Abowd, G.D., R.B. Allen, D. Garlan: Formalizing Style to Understand Descriptions of Software Architecture. ACM Trans. Softw. Eng. Methodol. 4(4): 319-364 (1995) See also Mark Weiser Jennifer Mankoff Anind Dey Shwetak Patel CHI Academy References External links Abowd's personal home page Georgia Tech Ubiquitous Computing Research Group Georgia Tech Autism and Technology Research Group Aware Home Research Initiative Health Systems Institute Atlanta Autism Consortium Living people American computer scientists Humanโ€“computer interaction researchers Georgia Tech faculty Ubiquitous computing researchers Fellows of the Association for Computing Machinery Alumni of Trinity College, Oxford 1964 births Carnegie Mellon University alumni University of Notre Dame alumni People from Farmington Hills, Michigan Scientists from Michigan
17576402
https://en.wikipedia.org/wiki/ASP.NET%20Web%20Site%20Administration%20Tool
ASP.NET Web Site Administration Tool
ASP.NET Web Site Administration Tool is a utility provided with Microsoft Visual Studio that assists in the configuration and administration of a website created using Microsoft Visual Studio 2005 and later versions. History The Web Site Administration Tool was first introduced with ASP.NET 2.0, along with the ASP.NET Microsoft Management Console (MMC) snap-in. Interface The ASP.NET Web Site Administration Tool can be accessed by clicking ASP.NET Configuration from the Website menu or Project menu in Visual Studio 2010 Professional, or by clicking on the ASP.NET Configuration icon in the Solution Explorer window. Programmatic access to the features provided by the ASP.NET Web Site Administration Tool is made possible by including the System.Web.Security namespace in the ASP.NET application. The classes Membership and Roles are used to store, access and modify user information in the ASPNETDB database. Users can be authenticated using the Membership.ValidateUser or FormsAuthentication.Authenticate methods. Page-based user authorization is implemented with the AuthorizeRequest event of the HttpApplication class. Features The ASP.NET Web Site Administration Tool is a multi-tabbed utility which has the following features: Web Site Administration Tool Security Tab Web Site Administration Tool Application Tab Web Site Administration Tool Provider Tab Web Site Administration Tool Internals Security tab The Security tab is used to create users and roles, group users under different roles, and assign access rules at either the role level or the user level. When the Web Site Administration Tool is opened to modify the existing settings, a new database is created in the App_Data folder of the application. This database stores ASP.NET membership-related information. The name of the database created is ASPNETDB by default. The Security tab simplifies and streamlines user authentication and authorization. It makes configuring user permissions considerably easier than code-based, user-defined authentication systems, which require a great amount of time, cost and manpower. However, a major drawback of this tool is that access rules can be defined only at the folder level and not at the page level. Application tab The Application tab is used to specify application settings, configure SMTP settings and enable or disable debugging and tracing, among other uses. The Application tab interacts with the configuration file of the application (web.config) and not with the ASPNETDB database. Application settings are created as objects and inserted as name-value pairs in the web.config file. Provider tab The Provider tab is used to specify the database provider for the ASPNETDB database used to store ASP.NET membership and role information. The Security page does not appear until the database provider is specified in the Provider tab. An SQL data provider is generally used, but an Oracle data provider can be used for Oracle databases. The provider settings give the user the option of storing all data related to the ASP.NET Web Site Administration Tool in a single database, or of using a different database for each purpose. References ASP.NET Microsoft Visual Studio
8798298
https://en.wikipedia.org/wiki/LedgerSMB
LedgerSMB
LedgerSMB is a free software double-entry accounting and enterprise resource planning (ERP) system. Accounting data is stored in a SQL database server and a standard web browser can be used as its user interface. The system uses the Perl programming language and a Perl database interface module for processing, and PostgreSQL for data storage. LedgerSMB is a client-server application, with server access through a web browser. LedgerSMB is distributed under the terms of the GPL-2.0-or-later license. Features LedgerSMB features a full general ledger with multi-currency support; accounts receivable and payable, with outstanding and aging reports; project accounting and other flexible accounting dimensions; financial reports with multi-period comparisons, including the income statement (profit and loss report), balance sheet and trial balance; quotations and order management; time tracking; invoicing capabilities (mailing, printing), with invoices based on orders (which in turn can be based on quotations), shipments or time cards; inventory tracking, with activity reports; fixed assets; and full separation of duties for invoices and GL transactions. LedgerSMB supports multiple currencies, multiple sales or VAT tax rates and per-user language and locale (number formatting) settings. It also supports per-customer language settings, so invoices can be translated into various languages when printed, and per-language invoice templates are also an option. Releases 1.9.0 was released on 2021-09-24 with a wide variety of improvements and fixes, including repair of the ability to send out AR/AP aging reports by e-mail (which regressed in 1.3.42). Where prior releases had a central theme or special focus, this release is more a general cleanup release which touches all parts of the codebase. 1.8.0 was released on 2020-09-04 with a wide variety of improvements and fixes; in that respect, this release differs from the thematic releases between 1.5 and 1.7, which sought to improve specific areas of functionality. Notable changes in this release include better support for container images, by allowing logos (for inclusion in printed documents) to be stored in the database instead of on disc so that standard container images can be used, as well as the promotion of payments to first-class data items. Where payment data used to be derived from transaction data, this release stores all payments as separate data items, considerably changing the reconciliation experience. 1.7.0 was released on 2019-10-04 with improved support for transactions in foreign currencies, much code cleanup and yet more tests. With the 1.7.0 release, the project continues the trend to shorten the cycle between minor (.0) releases. 1.6.0 (End of Life) was released on 2018-06-10 with a change log focused on stability and a code base to build a future on. 1.5.0 (End of Life) was released on 2016-12-24 with a change log focused on stability and user experience. 1.4.0 (End of Life) was released on 2014-09-15 with another sizeable change log. The 1.3.0 (End of Life) release came out on 2011-10-11, with a sizeable change log, generally focussing on performance, separation of duties and fixing the (design) issues in 1.2. The 1.2.0 (End of Life) release (announced on 2007-04-06) included a number of very deep security fixes and the beginnings of the refactoring process. The tax and price matrix code was centralized. 
This release was quite problematic and the core team ended up pulling 1.2.0 and 1.2.1 from public distribution due to a number of issues in integrating old and new code. Many members of the core team have expressed frustration at the level of problems, but Chris Travers has generally compared the problems to those of Apache 2.0, where changes in architecture have caused problematic releases. The general hope is that 1.2.x will be the most difficult and problematic release, perhaps of all time. At the same time, it cannot be denied that a number of the problems in 1.2.0 were the result of trying to do too much too quickly without adequate review. The 1.1.0 release merged in many patches that had been done for other customers but did not change the structure of the code in any significant way. By this time, however, most of the core members were unhappy with the current architecture and had decided to work on refactoring the code. The initial release (1.0.0 on 2006-09-06) and the events leading up to it are described in the History section. 1.5+ Developments As of 1.5, development has taken the direction of moving to a heavier (in-browser) client with access to web services in the backend. To that end, the 1.5 UI has been realised as a single-page web application. The result is a much more responsive and more modern experience, and a foundation for a much more fundamental separation of front end and back end. Massive efforts have gone into developing quality assurance measures during the 1.5 development cycle, and these continue to be a focus going forward. 1.3+ Developments Prior to 1.3, there were numerous challenges in the code base, such as the fact that the Perl code generated both database queries and web pages by using a combination of string concatenation and string-printing page snippets to compose the resulting HTML. While this functioned reasonably well, it made the interface very difficult to modify, and interoperability with projects written in other languages particularly difficult. Additionally, most state was kept in global variables which were modified all over the place, leading to unexpected results with nearly every code modification. Faced with these challenges, the LedgerSMB team developed a new architecture which addresses these issues by adding support for templates in the user interface, and moving all database calls into stored procedures. Although closely resembling model-view-controller (MVC) in structure, it is not broken down in precisely the same way as other MVC implementations. The overall design considerations included a desire to ensure that multiple programming languages could be used cross-platform to access LedgerSMB logic and that security would be consistently enforced across these applications. Thus the LedgerSMB team envisioned a "one database, many applications" environment typical of SQL. The overall approach heavily leverages PostgreSQL roles (application users are database users, and are assigned roles). Access to the database logic for new code (added in 1.3 or later) goes through stored procedures which act like named queries. Permissions are sometimes granted on underlying relations or on the stored procedures. The stored procedures have semantic argument names, allowing object properties to be mapped in automatically. These are then exposed to the Perl code through fairly light-weight wrappers. User interface code is wrapped around Template Toolkit, which is also used for generating PDFs via LaTeX, CSV files, Excel and OpenDocument output. 
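Because the database logic is exposed as stored procedures acting as named queries, any client with a PostgreSQL driver can, at least in principle, call the same logic that the Perl wrappers use; that is the point of the "one database, many applications" design. A minimal, hedged sketch of the idea in C with libpq follows; the procedure name customer__get, the connection string and the result shape are hypothetical placeholders, not part of a documented LedgerSMB interface:

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=ledgersmb");   /* assumed connection string */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Call a stored procedure that acts as a named query. */
    const char *params[1] = { "42" };                  /* hypothetical entity id */
    PGresult *res = PQexecParams(conn,
        "SELECT * FROM customer__get($1)",             /* hypothetical procedure */
        1, NULL, params, NULL, NULL, 0);

    if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0)
        printf("first column = %s\n", PQgetvalue(res, 0, 0));

    PQclear(res);
    PQfinish(conn);
    return 0;
}

Permission enforcement stays in the database: the connecting database user only sees the relations and procedures that its PostgreSQL roles grant, which is how the design keeps security consistent across client languages.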
Workflow is handled through relatively light-weight Perl scripting. History The project began as a fork of SQL-Ledger when Chris Travers, dissatisfied with the handling of security bugs in SQL-Ledger, joined forces with Christopher Murtagh to produce a fix for CVE-2006-4244. This bug had apparently been reported to the SQL-Ledger author, Dieter Simader, several months before Travers began working on a patch. The initial release of LedgerSMB, along with full disclosure of the bug on the main mailing list, strained relations between SQL-Ledger supporters and the members of the nascent LedgerSMB project. See also Comparison of accounting software Enterprise resource planning (ERP) List of ERP software packages List of free and open source software packages References External links Official website Free accounting software Free software programmed in Perl Software forks Enterprise resource planning software for Linux Business software for Linux Business software for MacOS Business software for Windows Free ERP software ERP software Accounting software for Linux Accounting software
5648968
https://en.wikipedia.org/wiki/Luis%20von%20Ahn
Luis von Ahn
Luis von Ahn (; born 19 August 1978) is a Guatemalan entrepreneur and a Consulting Professor in the Computer Science Department at Carnegie Mellon University in Pittsburgh, Pennsylvania. He is known as one of the pioneers of crowdsourcing. He is the founder of the company reCAPTCHA, which was sold to Google in 2009, and the co-founder and CEO of Duolingo, the world's most popular language-learning platform. Education and early life Luis von Ahn was born in and grew up in Guatemala City. Von Ahn grew up in an upper-middle class household with both of his parents working as physicians. He attended a private English language school in Guatemala City, an experience he cites as a great privilege. When von Ahn was eight years old, his mother bought him a Commodore 64 computer, beginning his fascination with technology and computer science. He is a Guatemalan of German-Jewish descent. At age 18, von Ahn began studying at Duke University, where he received a Bachelor of Science (BS) in Mathematics (summa cum laude) in 2000. He later earned his PhD in Computer Science at Carnegie Mellon University in 2005. In 2006, Von Ahn became a faculty member at the Carnegie Mellon School of Computer Science, Carnegie Mellon University. Career and research Von Ahn's early research was in the field of cryptography. With Nicholas J. Hopper and John Langford, he was the first to provide rigorous definitions of steganography and to prove that private-key steganography is possible. In 2000, he did early pioneering work with Manuel Blum on CAPTCHAs, computer-generated tests that humans are routinely able to pass but that computers have not yet mastered. These devices are used by web sites to prevent automated programs, or bots, from perpetrating large-scale abuse, such as automatically registering for large numbers of accounts or purchasing huge numbers of tickets for resale by scalpers. CAPTCHAs brought von Ahn his first widespread fame among the general public due to their coverage in the New York Times and USA Today and on the Discovery Channel, NOVA scienceNOW, and other mainstream outlets. Von Ahn's Ph.D. thesis, completed in 2005, was the first publication to use the term "human computation" that he had coined, referring to methods that combine human brainpower with computers to solve problems that neither could solve alone. Von Ahn's Ph.D. thesis is also the first work on Games With A Purpose, or GWAPs, which are games played by humans that produce useful computation as a side effect. The most famous example is the ESP Game, an online game in which two randomly paired people are simultaneously shown the same picture, with no way to communicate. Each then lists a number of words or phrases that describe the picture within a time limit, and are rewarded with points for a match. This match turns out to be an accurate description of the picture, and can be successfully used in a database for more accurate image search technology. The ESP Game was licensed by Google in the form of the Google Image Labeler, and is used to improve the accuracy of the Google Image Search. Von Ahn's games brought him further coverage in the mainstream media. His thesis won the Best Doctoral Dissertation Award from Carnegie Mellon University's School of Computer Science. In July 2006, von Ahn gave a tech talk at Google on "Human Computation" (i.e., crowdsourcing) which was watched by over one million viewers. In 2007, von Ahn invented reCAPTCHA, a new form of CAPTCHA that also helps digitize books. 
In reCAPTCHA, the images of words displayed to the user come directly from old books that are being digitized; they are words that optical character recognition could not identify and are sent to people throughout the web to be identified. ReCAPTCHA is currently in use by over 100,000 web sites and is transcribing over 40 million words per day. In 2009, von Ahn and his graduate student Severin Hacker began to develop Duolingo, a language education platform. They founded a company of the same name, with von Ahn as chief executive officer and Hacker as chief technology officer. In November 2011, a private beta test of Duolingo was launched, and the app was released to the public in June 2012. As of May 2020, Duolingo was valued at $1.5 billion. In a talk with NPR, von Ahn shared that Duolingo saw a spike in users during the COVID-19 pandemic. Von Ahn has a chapter giving advice in Tim Ferriss' book Tools of Titans. In May 2021, von Ahn joined the executive committee of the Partnership for Central America, an entity bringing together a variety of businesses, academic organizations and nonprofit organizations "to advance economic opportunity, address urgent climate, education and health challenges, and promote long-term investments and workforce capability building to support a vision of hope for Central America". The Partnership for Central America was presented in the context of United States Vice President Kamala Harris's "call to action" to address irregular migration from Central America to the United States by "deepening investment in the Northern Triangle" (a term coined to refer to Guatemala, El Salvador and Honduras). Awards and honors His research on CAPTCHAs and human computation has earned him international recognition and numerous honors. He was awarded a MacArthur Fellowship in 2006, a Microsoft New Faculty Fellowship in 2007, the David and Lucile Packard Foundation Fellowship in 2009, a Sloan Fellowship in 2009, and the Presidential Early Career Award for Scientists and Engineers in 2012. He has also been named one of the 50 Best Brains in Science by Discover, and has appeared on many recognition lists, including Popular Science's Brilliant 10, Silicon.com's 50 Most Influential People in Technology, MIT Technology Review's TR35: Young Innovators Under 35, and Fast Company's 100 Most Innovative People in Business. Siglo Veintiuno, one of the biggest newspapers in Guatemala, chose him as the person of the year in 2009. In 2011, Foreign Policy Magazine in Spanish named him the most influential intellectual of Latin America and Spain. In 2011, he was awarded the A. Nico Habermann development chair in computer science, which is awarded every three years to a junior faculty member of unusual promise in the School of Computer Science. In 2017, he was awarded the Distinguished Leadership Award for Innovation and Social Impact by the Inter-American Dialogue. In 2018, von Ahn was awarded the Lemelson-MIT prize for his "dedication to improving the world through technology." Teaching Von Ahn has used a number of unusual techniques in his teaching, which have won him multiple teaching awards at Carnegie Mellon University. In the fall of 2008, he began teaching a new course at Carnegie Mellon entitled "Science of the Web". A combination of graph theory and social science, the course covers topics from network and game theory to auction theory. References External links Google Tech Talk on human computation by Luis von Ahn Google Image Labeler John D. and Catherine T. 
MacArthur Foundation Example of SEO Project given at CMU Profile: Luis von Ahn NOVA scienceNOW aired 2009-06-30 Guatemalan computer scientists 1978 births Living people MacArthur Fellows Carnegie Mellon University alumni Duke University Trinity College of Arts and Sciences alumni Duolingo Carnegie Mellon University faculty Human-based computation People from Guatemala City Computer science educators Guatemalan people of German descent Guatemalan academics Hispanic and Latino American scientists
18890
https://en.wikipedia.org/wiki/Microsoft%20Windows
Microsoft Windows
Microsoft Windows, commonly referred to as Windows, is a group of several proprietary graphical operating system families, all of which are developed and marketed by Microsoft. Each family caters to a certain sector of the computing industry. Active Microsoft Windows families include Windows NT and Windows IoT; these may encompass subfamilies (e.g. Windows Server or Windows Embedded Compact (Windows CE)). Defunct Microsoft Windows families include Windows 9x, Windows Mobile and Windows Phone. Microsoft introduced an operating environment named Windows on November 20, 1985, as a graphical operating system shell for MS-DOS in response to the growing interest in graphical user interfaces (GUIs). Microsoft Windows came to dominate the world's personal computer (PC) market with over 90% market share, overtaking Mac OS, which had been introduced in 1984. Apple came to see Windows as an unfair encroachment on its innovation in GUI development as implemented on products such as the Lisa and Macintosh (eventually settled in court in Microsoft's favor in 1993). On PCs, Windows is still the most popular operating system in all countries. However, in 2014, Microsoft admitted losing the majority of the overall operating system market to Android, because of the massive growth in sales of Android smartphones. In 2014, the number of Windows devices sold was less than 25% of the number of Android devices sold. This comparison, however, may not be fully relevant, as the two operating systems traditionally target different platforms. Still, numbers for server use of Windows (which are comparable to those of competitors) show about a one-third market share, similar to that for end-user use. The most recent version of Windows for PCs and tablets is Windows 11, version 21H2. The most recent version for embedded devices is Windows 10, version 21H1. The most recent version for server computers is Windows Server 2022, version 21H2. A specialized version of Windows also runs on the Xbox One and Xbox Series X/S video game consoles. Genealogy By marketing role Microsoft, the developer of Windows, has registered several trademarks, each of which denotes a family of Windows operating systems that target a specific sector of the computing industry. As of 2014, the following Windows families were being actively developed: Windows NT: Started as a family of operating systems with Windows NT 3.1, an operating system for server computers and workstations. It now consists of three operating system subfamilies that are released almost at the same time and share the same kernel: Windows: The operating system for mainstream personal computers and tablets. The latest version is Windows 11. The main competitors of this family are macOS by Apple for personal computers, and iPadOS and Android for tablets. Windows Server: The operating system for server computers. The latest version is Windows Server 2022. Unlike its client sibling, it has adopted a strong naming scheme. The main competitor of this family is Linux. Windows PE: A lightweight version of its Windows sibling, meant to operate as a live operating system, used for installing Windows on bare-metal computers (especially on many computers at once), recovery or troubleshooting purposes. The latest version is Windows PE 10. Windows IoT (previously Windows Embedded): Initially, Microsoft developed Windows CE as a general-purpose operating system for every device that was too resource-limited to be called a full-fledged computer. 
Eventually, however, Windows CE was renamed Windows Embedded Compact and was folded under Windows Compact trademark which also consists of Windows Embedded Industry, Windows Embedded Professional, Windows Embedded Standard, Windows Embedded Handheld and Windows Embedded Automotive. The following Windows families are no longer being developed: Windows 9x: An operating system that targeted the consumer market. Discontinued because of suboptimal performance. (PC World called its last version, Windows Me, one of the worst products of all time.) Microsoft now caters to the consumer market with Windows NT. Windows Mobile: The predecessor to Windows Phone, it was a mobile phone operating system. The first version was called Pocket PC 2000; the third version, Windows Mobile 2003 is the first version to adopt the Windows Mobile trademark. The last version is Windows Mobile 6.5. Windows Phone: An operating system sold only to manufacturers of smartphones. The first version was Windows Phone 7, followed by Windows Phone 8, and Windows Phone 8.1. It was succeeded by Windows 10 Mobile, that is now also discontinued. Version history The term Windows collectively describes any or all of several generations of Microsoft operating system products. These products are generally categorized as follows: Early versions The history of Windows dates back to 1981 when Microsoft started work on a program called "Interface Manager". It was announced in November 1983 (after the Apple Lisa, but before the Macintosh) under the name "Windows", but Windows 1.0 was not released until November 1985. Windows 1.0 was to compete with Apple's operating system, but achieved little popularity. Windows 1.0 is not a complete operating system; rather, it extends MS-DOS. The shell of Windows 1.0 is a program known as the MS-DOS Executive. Components included Calculator, Calendar, Cardfile, Clipboard Viewer, Clock, Control Panel, Notepad, Paint, Reversi, Terminal and Write. Windows 1.0 does not allow overlapping windows. Instead all windows are tiled. Only modal dialog boxes may appear over other windows. Microsoft sold as included Windows Development libraries with the C development environment, which included numerous windows samples. Windows 2.0 was released in December 1987, and was more popular than its predecessor. It features several improvements to the user interface and memory management. Windows 2.03 changed the OS from tiled windows to overlapping windows. The result of this change led to Apple Computer filing a suit against Microsoft alleging infringement on Apple's copyrights. Windows 2.0 also introduced more sophisticated keyboard shortcuts and could make use of expanded memory. Windows 2.1 was released in two different versions: Windows/286 and Windows/386. Windows/386 uses the virtual 8086 mode of the Intel 80386 to multitask several DOS programs and the paged memory model to emulate expanded memory using available extended memory. Windows/286, in spite of its name, runs on both Intel 8086 and Intel 80286 processors. It runs in real mode but can make use of the high memory area. In addition to full Windows-packages, there were runtime-only versions that shipped with early Windows software from third parties and made it possible to run their Windows software on MS-DOS and without the full Windows feature set. The early versions of Windows are often thought of as graphical shells, mostly because they ran on top of MS-DOS and use it for file system services. 
However, even the earliest Windows versions already assumed many typical operating system functions; notably, having their own executable file format and providing their own device drivers (timer, graphics, printer, mouse, keyboard and sound). Unlike MS-DOS, Windows allowed users to execute multiple graphical applications at the same time, through cooperative multitasking. Windows implemented an elaborate, segment-based, software virtual memory scheme, which allows it to run applications larger than available memory: code segments and resources are swapped in and thrown away when memory became scarce; data segments moved in memory when a given application had relinquished processor control. Windows 3.x Windows 3.0, released in 1990, improved the design, mostly because of virtual memory and loadable virtual device drivers (VxDs) that allow Windows to share arbitrary devices between multi-tasked DOS applications. Windows 3.0 applications can run in protected mode, which gives them access to several megabytes of memory without the obligation to participate in the software virtual memory scheme. They run inside the same address space, where the segmented memory provides a degree of protection. Windows 3.0 also featured improvements to the user interface. Microsoft rewrote critical operations from C into assembly. Windows 3.0 is the first Microsoft Windows version to achieve broad commercial success, selling 2ย million copies in the first six months. Windows 3.1, made generally available on March 1, 1992, featured a facelift. In August 1993, Windows for Workgroups, a special version with integrated peer-to-peer networking features and a version number of 3.11, was released. It was sold along with Windows 3.1. Support for Windows 3.1 ended on December 31, 2001. Windows 3.2, released 1994, is an updated version of the Chinese version of Windows 3.1. The update was limited to this language version, as it fixed only issues related to the complex writing system of the Chinese language. Windows 3.2 was generally sold by computer manufacturers with a ten-disk version of MS-DOS that also had Simplified Chinese characters in basic output and some translated utilities. Windows 9x The next major consumer-oriented release of Windows, Windows 95, was released on August 24, 1995. While still remaining MS-DOS-based, Windows 95 introduced support for native 32-bit applications, plug and play hardware, preemptive multitasking, long file names of up to 255 characters, and provided increased stability over its predecessors. Windows 95 also introduced a redesigned, object oriented user interface, replacing the previous Program Manager with the Start menu, taskbar, and Windows Explorer shell. Windows 95 was a major commercial success for Microsoft; Ina Fried of CNET remarked that "by the time Windows 95 was finally ushered off the market in 2001, it had become a fixture on computer desktops around the world." Microsoft published four OEM Service Releases (OSR) of Windows 95, each of which was roughly equivalent to a service pack. The first OSR of Windows 95 was also the first version of Windows to be bundled with Microsoft's web browser, Internet Explorer. Mainstream support for Windows 95 ended on December 31, 2000, and extended support for Windows 95 ended on December 31, 2001. Windows 95 was followed up with the release of Windows 98 on June 25, 1998, which introduced the Windows Driver Model, support for USB composite devices, support for ACPI, hibernation, and support for multi-monitor configurations. 
Windows 98 also included integration with Internet Explorer 4 through Active Desktop and other aspects of the Windows Desktop Update (a series of enhancements to the Explorer shell which were also made available for Windows 95). In May 1999, Microsoft released Windows 98 Second Edition, an updated version of Windows 98. Windows 98 SE added Internet Explorer 5.0 and Windows Media Player 6.2 amongst other upgrades. Mainstream support for Windows 98 ended on June 30, 2002, and extended support for Windows 98 ended on July 11, 2006. On September 14, 2000, Microsoft released Windows Me (Millennium Edition), the last DOS-based version of Windows. Windows Me incorporated visual interface enhancements from its Windows NT-based counterpart Windows 2000, had faster boot times than previous versions (which however, required the removal of the ability to access a real mode DOS environment, removing compatibility with some older programs), expanded multimedia functionality (including Windows Media Player 7, Windows Movie Maker, and the Windows Image Acquisition framework for retrieving images from scanners and digital cameras), additional system utilities such as System File Protection and System Restore, and updated home networking tools. However, Windows Me was faced with criticism for its speed and instability, along with hardware compatibility issues and its removal of real mode DOS support. PC World considered Windows Me to be one of the worst operating systems Microsoft had ever released, and the 4th worst tech product of all time. Windows NT Version history Early versions (Windows NT 3.1/3.5/3.51/4.0/2000) In November 1988, a new development team within Microsoft (which included former Digital Equipment Corporation developers Dave Cutler and Mark Lucovsky) began work on a revamped version of IBM and Microsoft's OS/2 operating system known as "NT OS/2". NT OS/2 was intended to be a secure, multi-user operating system with POSIX compatibility and a modular, portable kernel with preemptive multitasking and support for multiple processor architectures. However, following the successful release of Windows 3.0, the NT development team decided to rework the project to use an extended 32-bit port of the Windows API known as Win32 instead of those of OS/2. Win32 maintained a similar structure to the Windows APIs (allowing existing Windows applications to easily be ported to the platform), but also supported the capabilities of the existing NT kernel. Following its approval by Microsoft's staff, development continued on what was now Windows NT, the first 32-bit version of Windows. However, IBM objected to the changes, and ultimately continued OS/2 development on its own. Windows NT was the first Windows operating system based on a hybrid kernel. The hybrid kernel was designed as a modified microkernel, influenced by the Mach microkernel developed by Richard Rashid at Carnegie Mellon University, but without meeting all of the criteria of a pure microkernel. The first release of the resulting operating system, Windows NT 3.1 (named to associate it with Windows 3.1) was released in July 1993, with versions for desktop workstations and servers. Windows NT 3.5 was released in September 1994, focusing on performance improvements and support for Novell's NetWare, and was followed up by Windows NT 3.51 in May 1995, which included additional improvements and support for the PowerPC architecture. Windows NT 4.0 was released in June 1996, introducing the redesigned interface of Windows 95 to the NT series. 
On February 17, 2000, Microsoft released Windows 2000, a successor to NT 4.0. The Windows NT name was dropped at this point in order to put a greater focus on the Windows brand. Windows XP The next major version of Windows NT, Windows XP, was released on October 25, 2001. The introduction of Windows XP aimed to unify the consumer-oriented Windows 9x series with the architecture introduced by Windows NT, a change which Microsoft promised would provide better performance over its DOS-based predecessors. Windows XP would also introduce a redesigned user interface (including an updated Start menu and a "task-oriented" Windows Explorer), streamlined multimedia and networking features, Internet Explorer 6, integration with Microsoft's .NET Passport services, a "compatibility mode" to help provide backwards compatibility with software designed for previous versions of Windows, and Remote Assistance functionality. At retail, Windows XP was now marketed in two main editions: the "Home" edition was targeted towards consumers, while the "Professional" edition was targeted towards business environments and power users, and included additional security and networking features. Home and Professional were later accompanied by the "Media Center" edition (designed for home theater PCs, with an emphasis on support for DVD playback, TV tuner cards, DVR functionality, and remote controls), and the "Tablet PC" edition (designed for mobile devices meeting its specifications for a tablet computer, with support for stylus pen input and additional pen-enabled applications). Mainstream support for Windows XP ended on April 14, 2009. Extended support ended on April 8, 2014. After Windows 2000, Microsoft also changed its release schedules for server operating systems; the server counterpart of Windows XP, Windows Server 2003, was released in April 2003. It was followed in December 2005, by Windows Server 2003 R2. Windows Vista After a lengthy development process, Windows Vista was released on November 30, 2006, for volume licensing and January 30, 2007, for consumers. It contained a number of new features, from a redesigned shell and user interface to significant technical changes, with a particular focus on security features. It was available in a number of different editions, and has been subject to some criticism, such as drop of performance, longer boot time, criticism of new UAC, and stricter license agreement. Vista's server counterpart, Windows Server 2008 was released in early 2008. Windows 7 On July 22, 2009, Windows 7 and Windows Server 2008 R2 were released as RTM (release to manufacturing) while the former was released to the public 3 months later on October 22, 2009. Unlike its predecessor, Windows Vista, which introduced a large number of new features, Windows 7 was intended to be a more focused, incremental upgrade to the Windows line, with the goal of being compatible with applications and hardware with which Windows Vista was already compatible. Windows 7 has multi-touch support, a redesigned Windows shell with an updated taskbar with revealable jump lists that contain shortcuts to files frequently used with specific applications and shortcuts to tasks within the application, a home networking system called HomeGroup, and performance improvements. Windows 8 and 8.1 Windows 8, the successor to Windows 7, was released generally on October 26, 2012. 
A number of significant changes were made on Windows 8, including the introduction of a user interface based around Microsoft's Metro design language with optimizations for touch-based devices such as tablets and all-in-one PCs. These changes include the Start screen, which uses large tiles that are more convenient for touch interactions and allow for the display of continually updated information, and a new class of apps which are designed primarily for use on touch-based devices. The new Windows version required a minimum resolution of 1024ร—768 pixels, effectively making it unfit for netbooks with 800ร—600-pixel screens. Other changes include increased integration with cloud services and other online platforms (such as social networks and Microsoft's own OneDrive (formerly SkyDrive) and Xbox Live services), the Windows Store service for software distribution, and a new variant known as Windows RT for use on devices that utilize the ARM architecture, and a new keyboard shortcut for screenshots. An update to Windows 8, called Windows 8.1, was released on October 17, 2013, and includes features such as new live tile sizes, deeper OneDrive integration, and many other revisions. Windows 8 and Windows 8.1 have been subject to some criticism, such as removal of the Start menu. Windows 10 On September 30, 2014, Microsoft announced Windows 10 as the successor to Windows 8.1. It was released on July 29, 2015, and addresses shortcomings in the user interface first introduced with Windows 8. Changes on PC include the return of the Start Menu, a virtual desktop system, and the ability to run Windows Store apps within windows on the desktop rather than in full-screen mode. Windows 10 is said to be available to update from qualified Windows 7 with SP1, Windows 8.1 and Windows Phone 8.1 devices from the Get Windows 10 Application (for Windows 7, Windows 8.1) or Windows Update (Windows 7). In February 2017, Microsoft announced the migration of its Windows source code repository from Perforce to Git. This migration involved 3.5 million separate files in a 300 gigabyte repository. By May 2017, 90 percent of its engineering team was using Git, in about 8500 commits and 1760 Windows builds per day. In June 2021, shortly before Microsoft's announcement of Windows 11, Microsoft updated their lifecycle policy pages for Windows 10, revealing that support for their last release of Windows 10 will be October 14, 2025. Windows 11 On June 24, 2021, Windows 11 was announced as the successor to Windows 10 during a livestream. The new operating system was designed to be more user-friendly and understandable. It was released on October 5, 2021. Windows 11 is a free upgrade to some Windows 10 users as of now. Windows 365 In July 2021, Microsoft announced it will start selling subscriptions to virtualized Windows desktops as part of a new Windows 365 service in the following month. It is not a standalone version of Microsoft Windows, but a web service that provides access to Windows 10 and Windows 11 built on top of Azure Virtual Desktop. The new service will allow for cross-platform usage, aiming to make the operating system available for both Apple and Android users. The subscription-based service will be accessible through any operating system with a web browser. 
Microsoft has stated that the new service is an attempt at capitalizing on the growing trend, fostered during the COVID-19 pandemic, for businesses to adopt a hybrid work environment, in which "employees split their time between the office and home" according to vice president Jared Spataro. As the service will be accessible through web-browsers, Microsoft will be able to bypass the need to publish the service through Google Play or the Apple App Store. Microsoft announced Windows 365 availability to business and enterprise customers on August 2, 2021. Multilingual support Multilingual support has been built into Windows since Windows 3.0. The language for both the keyboard and the interface can be changed through the Region and Language Control Panel. Components for all supported input languages, such as Input Method Editors, are automatically installed during Windows installation (in Windows XP and earlier, files for East Asian languages, such as Chinese, and right-to-left scripts, such as Arabic, may need to be installed separately, also from the said Control Panel). Third-party IMEs may also be installed if a user feels that the provided one is insufficient for their needs. Interface languages for the operating system are free for download, but some languages are limited to certain editions of Windows. Language Interface Packs (LIPs) are redistributable and may be downloaded from Microsoft's Download Center and installed for any edition of Windows (XP or later) they translate most, but not all, of the Windows interface, and require a certain base language (the language which Windows originally shipped with). This is used for most languages in emerging markets. Full Language Packs, which translates the complete operating system, are only available for specific editions of Windows (Ultimate and Enterprise editions of Windows Vista and 7, and all editions of Windows 8, 8.1 and RT except Single Language). They do not require a specific base language, and are commonly used for more popular languages such as French or Chinese. These languages cannot be downloaded through the Download Center, but available as optional updates through the Windows Update service (except Windows 8). The interface language of installed applications is not affected by changes in the Windows interface language. The availability of languages depends on the application developers themselves. Windows 8 and Windows Server 2012 introduces a new Language Control Panel where both the interface and input languages can be simultaneously changed, and language packs, regardless of type, can be downloaded from a central location. The PC Settings app in Windows 8.1 and Windows Server 2012 R2 also includes a counterpart settings page for this. Changing the interface language also changes the language of preinstalled Windows Store apps (such as Mail, Maps and News) and certain other Microsoft-developed apps (such as Remote Desktop). The above limitations for language packs are however still in effect, except that full language packs can be installed for any edition except Single Language, which caters to emerging markets. Platform support Windows NT included support for several platforms before the x86-based personal computer became dominant in the professional world. Windows NT 4.0 and its predecessors supported PowerPC, DEC Alpha and MIPS R4000 (although some of the platforms implement 64-bit computing, the OS treated them as 32-bit). 
Windows 2000 dropped support for all platforms except the third-generation x86 (known as IA-32) or newer in 32-bit mode. The client line of the Windows NT family still runs on IA-32, but the Windows Server line ceased supporting this platform with the release of Windows Server 2008 R2. With the introduction of the Intel Itanium architecture (IA-64), Microsoft released new versions of Windows to support it. Itanium versions of Windows XP and Windows Server 2003 were released at the same time as their mainstream x86 counterparts. Windows XP 64-Bit Edition, released in 2005, is the last Windows client operating system to support Itanium. The Windows Server line continued to support this platform until Windows Server 2012; Windows Server 2008 R2 is the last Windows operating system to support the Itanium architecture. On April 25, 2005, Microsoft released Windows XP Professional x64 Edition and Windows Server 2003 x64 Editions to support x86-64 (or simply x64), the 64-bit version of the x86 architecture. Windows Vista was the first client version of Windows NT to be released simultaneously in IA-32 and x64 editions. x64 is still supported. An edition of Windows 8 known as Windows RT was specifically created for computers with the ARM architecture, and while ARM is still used for Windows smartphones with Windows 10, tablets with Windows RT will not be updated. Starting with the Fall Creators Update (version 1709), Windows 10 includes support for PCs with the ARM architecture. Windows 11 is the first version to drop support for 32-bit hardware. Windows CE Windows CE (officially known as Windows Embedded Compact) is an edition of Windows that runs on minimalistic computers, like satellite navigation systems and some mobile phones. Windows Embedded Compact is based on its own dedicated kernel, dubbed the Windows CE kernel. Microsoft licenses Windows CE to OEMs and device makers. The OEMs and device makers can modify and create their own user interfaces and experiences, while Windows CE provides the technical foundation to do so. Windows CE was used in the Dreamcast along with Sega's own proprietary OS for the console. Windows CE was the core from which Windows Mobile was derived. Its successor, Windows Phone 7, was based on components from both Windows CE 6.0 R3 and Windows CE 7.0. Windows Phone 8, however, is based on the same NT kernel as Windows 8. Windows Embedded Compact is not to be confused with Windows XP Embedded or Windows NT 4.0 Embedded, modular editions of Windows based on the Windows NT kernel. Xbox OS Xbox OS is an unofficial name given to the version of Windows that runs on Xbox consoles. From the Xbox One onwards, it is an implementation with an emphasis on virtualization (using Hyper-V), as it consists of three operating systems running at once: the core operating system, a second implemented for games, and a more Windows-like environment for applications. Microsoft updates the Xbox One's OS every month; these updates can be downloaded from the Xbox Live service to the Xbox and subsequently installed, or applied using offline recovery images downloaded via a PC. It was originally based on the NT 6.2 (Windows 8) kernel, and the latest version runs on an NT 10.0 base. This system is sometimes referred to as "Windows 10 on Xbox One" or "OneCore". Xbox One and Xbox Series operating systems also allow limited (due to licensing restrictions and testing resources) backward compatibility with previous-generation hardware, and the Xbox 360's system is backwards compatible with the original Xbox. 
Version control system Before 2017, Microsoft used a proprietary version control system called Source Depot, which could not keep up with the size of Windows. Microsoft had begun to integrate Git into Team Foundation Server in 2013, but Windows continued to rely on Source Depot. The Windows code was divided among 65 different repositories, with a kind of virtualization layer to produce a unified view of all of the code. In 2017, Microsoft announced that it would start using Git, an open-source version control system created by Linus Torvalds, and in May 2017 it reported that it had completed the migration into the Git repository. VFSForGit Because of its large, decades-long history, however, the Windows codebase is not especially well suited to the decentralized nature of Linux development that Git was originally created to manage. Each Git repository contains a complete history of all the files, which proved unworkable for Windows developers because cloning the whole repository takes several hours. Microsoft has been working on a new project called the Virtual File System for Git (VFSForGit) to address these challenges. In 2021, VFS for Git was superseded by Scalar. Timeline of releases Usage share and device sales Use of the latest version, Windows 10, has exceeded that of Windows 7 globally since early 2018. For desktop and laptop computers, according to Net Applications and StatCounter, which track the use of operating systems in devices that are active on the Web, Windows was the most used operating-system family in August 2021, with around 91% usage share according to Net Applications and around 76% usage share according to StatCounter. Including personal computers of all kinds (e.g., desktops, laptops, mobile devices, and game consoles), Windows OSes accounted for 32.67% of usage share in August 2021, compared to Android (highest, at 46.03%), iOS's 13.76%, iPadOS's 2.81%, and macOS's 2.51%, according to Net Applications, and 30.73% of usage share in August 2021, compared to Android (highest, at 42.56%), iOS/iPadOS's 16.53%, and macOS's 6.51%, according to StatCounter. Those statistics do not include servers (including so-called cloud computing, where Microsoft is known not to be a leader, with Linux used more than Windows), as Net Applications and StatCounter use web browsing as a proxy for all use. Security Consumer versions of Windows were originally designed for ease of use on a single-user PC without a network connection, and did not have security features built in from the outset. However, Windows NT and its successors are designed for security (including on a network) and multi-user PCs, but were not initially designed with Internet security in mind as much, since, when Windows NT was first developed in the early 1990s, Internet use was less prevalent. These design issues, combined with programming errors (e.g. buffer overflows) and the popularity of Windows, mean that it is a frequent target of computer worm and virus writers. In June 2005, Bruce Schneier's Counterpane Internet Security reported that it had seen over 1,000 new viruses and worms in the previous six months. In 2005, Kaspersky Lab found around 11,000 malicious programs (viruses, Trojans, back-doors, and exploits) written for Windows. Microsoft releases security patches through its Windows Update service approximately once a month (usually the second Tuesday of the month), although critical updates are made available at shorter intervals when necessary. 
In versions of Windows after and including Windows 2000 SP3 and Windows XP, updates can be automatically downloaded and installed if the user selects to do so. As a result, Service Pack 2 for Windows XP, as well as Service Pack 1 for Windows Server 2003, were installed by users more quickly than they otherwise might have been.
While the Windows 9x series offered the option of having profiles for multiple users, it had no concept of access privileges and did not allow concurrent access, so those systems were not true multi-user operating systems. In addition, they implemented only partial memory protection. They were accordingly widely criticised for lack of security. The Windows NT series of operating systems, by contrast, are true multi-user systems and implement full memory protection. However, many of the advantages of being a true multi-user operating system were nullified by the fact that, prior to Windows Vista, the first user account created during the setup process was an administrator account, which was also the default for new accounts. Though Windows XP did have limited accounts, the majority of home users did not change to an account type with fewer rights – partially due to the number of programs which unnecessarily required administrator rights – and so most home users ran as administrator all the time.
Windows Vista changes this by introducing a privilege elevation system called User Account Control. When logging in as a standard user, a logon session is created and a token containing only the most basic privileges is assigned. In this way, the new logon session is incapable of making changes that would affect the entire system. When logging in as a user in the Administrators group, two separate tokens are assigned. The first token contains all privileges typically awarded to an administrator, and the second is a restricted token similar to what a standard user would receive. User applications, including the Windows shell, are then started with the restricted token, resulting in a reduced-privilege environment even under an Administrator account. When an application requests higher privileges or "Run as administrator" is clicked, UAC will prompt for confirmation and, if consent is given (including administrator credentials if the account requesting the elevation is not a member of the Administrators group), start the process using the unrestricted token (an illustrative code sketch of this token distinction follows this section).
Leaked documents published by WikiLeaks, codenamed Vault 7 and dated from 2013 to 2016, detail the capabilities of the CIA to perform electronic surveillance and cyber warfare, such as the ability to compromise operating systems such as Microsoft Windows. In August 2019, computer experts reported that the BlueKeep security vulnerability, which potentially affects older unpatched Microsoft Windows versions via the operating system's Remote Desktop Protocol and allows for the possibility of remote code execution, may now include related flaws, collectively named DejaBlue, affecting newer Windows versions (i.e., Windows 7 and all recent versions) as well. In addition, experts reported a Microsoft security vulnerability, based on legacy code involving Microsoft CTF and ctfmon (ctfmon.exe), that affects all Windows versions from the older Windows XP version to the most recent Windows 10 versions; a patch to correct the flaw is currently available.
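The following minimal C sketch is an editor-added illustration, not part of Windows documentation or of this article's sources. Assuming it is compiled against the Windows SDK and run on Windows Vista or later, it queries whether the calling process received the filtered (restricted) token or the unrestricted (elevated) one, which makes the UAC split-token model described above concrete.

```c
/*
 * Illustrative sketch only: asks Windows whether the current process token
 * is elevated.  Demonstrates the split-token model introduced by UAC.
 */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE token = NULL;
    TOKEN_ELEVATION elevation;
    DWORD returned = 0;

    /* Open the access token of the current process for querying. */
    if (!OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &token)) {
        fprintf(stderr, "OpenProcessToken failed: %lu\n", GetLastError());
        return 1;
    }

    /* TokenElevation reports whether this token is the unrestricted
       (elevated) token or the filtered standard-user token. */
    if (GetTokenInformation(token, TokenElevation, &elevation,
                            sizeof(elevation), &returned)) {
        printf("Process is running %s.\n",
               elevation.TokenIsElevated ? "elevated"
                                         : "with a restricted token");
    } else {
        fprintf(stderr, "GetTokenInformation failed: %lu\n", GetLastError());
    }

    CloseHandle(token);
    return 0;
}
```

Run from a normal command prompt under an administrator account, such a program would typically report the restricted token; the same program started with "Run as administrator" and confirmed through the UAC prompt would report an elevated token.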
File permissions
All Windows versions from Windows NT 3 onward have been based on a file system permission system referred to as AGDLP (Accounts, Global, Domain Local, Permissions), in which file permissions are applied to the file or folder in the form of a 'local group', which then has other 'global groups' as members. These global groups in turn hold other groups or users, depending on the Windows version used. This differs from products of other vendors, such as Linux and NetWare, where permissions are commonly allocated 'statically', directly on the file or folder. However, using the AGLP/AGDLP/AGUDLP process allows a small number of static permissions to be applied and allows easy changes to the account groups without reapplying the file permissions on the files and folders (an illustrative code sketch of the resulting group-membership check follows this section).
Alternative implementations
Owing to the operating system's popularity, a number of applications have been released that aim to provide compatibility with Windows applications, either as a compatibility layer for another operating system, or as a standalone system that can run software written for Windows out of the box. These include:
Wine – a free and open-source implementation of the Windows API, allowing one to run many Windows applications on x86-based platforms, including UNIX, Linux and macOS. Wine developers refer to it as a "compatibility layer" and use Windows-style APIs to emulate the Windows environment.
CrossOver – a Wine package with licensed fonts. Its developers are regular contributors to Wine, and focus on Wine running officially supported applications.
Cedega – a proprietary fork of Wine by TransGaming Technologies, designed specifically for running Microsoft Windows games on Linux. A version of Cedega known as Cider allows Windows games to run on macOS. Since Wine was licensed under the LGPL, Cedega was unable to port the improvements made to Wine into its proprietary codebase. Cedega ceased its service in February 2011.
Darwine – a port of Wine for macOS and Darwin. Operates by running Wine on QEMU.
Linux Unified Kernel – a set of patches to the Linux kernel allowing many Windows executable files to run in Linux (using Wine DLLs), and some Windows drivers to be used.
ReactOS – an open-source OS intended to run the same software as Windows, originally designed to simulate Windows NT 4.0, now aiming at Windows 7 compatibility. It has been in development since 1996.
Linspire – formerly LindowsOS, a commercial Linux distribution initially created with the goal of running major Windows software. Changed its name to Linspire after Microsoft v. Lindows. Discontinued in favor of Xandros Desktop, which was also later discontinued.
Freedows OS – an open-source attempt at creating a Windows clone for x86 platforms, intended to be released under the GNU General Public License. Started in 1996 by Reece K. Sellin, the project was never completed, getting only to the stage of design discussions, which featured a number of novel concepts, until it was suspended in 2002.
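The practical effect of the group nesting described under "File permissions" is that Windows resolves a caller's effective rights from the groups carried in its access token rather than from per-user entries on every file. The C sketch below is an editor-added illustration under that assumption; it uses the well-known built-in Administrators SID purely as a stand-in for the domain local group whose SID would appear in a real AGDLP deployment.

```c
/*
 * Illustrative sketch only: shows that effective access in Windows is
 * resolved through group membership in the caller's token, which is what
 * makes nesting schemes such as AGDLP practical to administer.
 */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    BYTE  sidBuffer[SECURITY_MAX_SID_SIZE];
    DWORD sidSize = sizeof(sidBuffer);
    BOOL  isMember = FALSE;

    /* Build the well-known SID for the built-in Administrators group;
       a real AGDLP check would use the SID of a domain local group. */
    if (!CreateWellKnownSid(WinBuiltinAdministratorsSid, NULL,
                            (PSID)sidBuffer, &sidSize)) {
        fprintf(stderr, "CreateWellKnownSid failed: %lu\n", GetLastError());
        return 1;
    }

    /* Passing NULL uses the caller's own token (impersonating the
       process token if the thread is not already impersonating). */
    if (!CheckTokenMembership(NULL, (PSID)sidBuffer, &isMember)) {
        fprintf(stderr, "CheckTokenMembership failed: %lu\n", GetLastError());
        return 1;
    }

    printf("Caller %s a member of BUILTIN\\Administrators.\n",
           isMember ? "is" : "is not");
    return 0;
}
```

In an AGDLP setup, only the domain local group appears in the file's access control list; adding or removing users from the nested global groups changes the outcome of checks like this one without touching the files themselves. (Note that under UAC a non-elevated process would report "is not" for the Administrators group, because that SID is marked deny-only in the restricted token.)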
See also
Architecture of Windows NT
Azure Sphere, Microsoft's Linux-based operating system
BlueKeep
De facto standard
Dominant design
Windows Subsystem for Linux, a subsystem in Windows 10 that does not use the Linux kernel but reimplements its interfaces
Wintel
References
External links
Official Windows Blog
Microsoft Developer Network
Windows Developer Center
Microsoft Windows History Timeline
Pearson Education, InformIT – History of Microsoft Windows
Microsoft Business Software Solutions
Windows 10 release information
1985 software
Computer-related introductions in 1985
Computing platforms
Microsoft franchises
Personal computers
Windows
Operating system families
Products introduced in 1985
61202255
https://en.wikipedia.org/wiki/Jiebo%20Luo
Jiebo Luo
Jiebo Luo (; born 1967) is a Chinese-American computer scientist, Professor of Computer Science at the University of Rochester and Distinguished Researcher with the Goergen Institute for Data Science. He is interested in artificial intelligence, data science and computer vision.
Biography
Luo was born in 1967 in Yunnan, China. He obtained his undergraduate degree (1989) and master's degree (1992) from the University of Science and Technology of China, and his PhD (1996) from the University of Rochester, all in electrical engineering. Luo joined the Computer Science Department at the University of Rochester in fall 2011 after over fifteen prolific years at Kodak Research Laboratories, where he last held the position of Senior Principal Scientist. Luo has been actively involved in numerous technical conferences, including serving as General Chair of 2007 SPIE VCIP, 2008 ACM CIVR and 2018 ACM Multimedia, Program Chair of 2010 ACM Multimedia, 2012 IEEE CVPR, 2016 ACM ICMR and 2017 IEEE ICIP, as well as Area Chair or Senior PC member of CVPR, ICCV, ECCV, KDD, AAAI, IJCAI, ACM Multimedia, MICCAI, ICDM, ICWSM, ICPR, ICIP, ICME and ICASSP. He has served as the Editor-in-Chief of the IEEE Transactions on Multimedia, and on the editorial boards of the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), IEEE Transactions on Multimedia (TMM), IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), IEEE Transactions on Big Data (TBD), ACM Transactions on Intelligent Systems and Technology (TIST), Pattern Recognition (PR), Knowledge and Information Systems (KAIS), Machine Vision and Applications (MVA), and Journal of Electronic Imaging (JEI). He was a guest editor for many special issues, including "Image Understanding for Digital Photos" (PR 2005), "Real-World Image Annotation and Retrieval" (TPAMI 2008), "Event Analysis in Video" (TCSVT 2008), "Integration of Content and Context for Multimedia Management" (TMM 2009), "Probabilistic Graphic Models in Computer Vision" (TPAMI 2009), "Knowledge Discovery over Community-Contributed Multimedia Data" (IEEE Multimedia 2010), "Social Media" (ACM TOMM 2011), "Social Media as Sensors" (TMM 2013), "Deep Learning in Multimedia Computing" (TMM 2015), "Video Analytics with Deep Learning" (PR 2020), "Learning with Fewer Labels in Computer Vision" (TPAMI 2021), and so on.
Selected works
Luo's broad research spans image processing, computer vision, natural language processing, machine learning, data mining, social media, biomedical informatics, and ubiquitous computing. He is a pioneer of contextual inference in semantic understanding of visual data and social multimedia data mining. He has published extensively in these fields, with over 500 peer-reviewed technical papers and over 90 US patents. His h-index is 103. Some of his notable works are listed below; details can be found on his website at the University of Rochester.
Books
2011. Social Media Modeling and Computing (Springer)
2011. Interactive Co-segmentation of Objects in Image Collections (Springer Briefs in Computer Science)
2011. Computer Vision (USTC Press)
2010. Multimedia Interaction and Intelligent User Interfaces (Springer Advances in Computer Vision and Pattern Recognition)
Articles
2021. Best Long Paper, North American Chapter of the Association for Computational Linguistics (NAACL)
2018. Best Industrial Related Paper, International Conference on Pattern Recognition (ICPR)
2014. IEEE Multimedia Prize Paper, IEEE Transactions on Multimedia (TMM)
2010. Best Student Paper, IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Patents
https://patents.justia.com/inventor/jiebo-luo
Awards and honors
2004. George Eastman Innovation Award (Kodak's most prestigious technology prize) "For contributions to the market-leading Kodak digital radiography systems."
2008. SPIE Fellow "For achievements in visual communication and electronic imaging."
2009. IEEE Fellow "For contributions to semantic image understanding and intelligent image processing."
2010. IAPR Fellow "For contributions to contextual inference in semantic understanding of images and video."
2018. IEEE Region 1 Technological Innovation in Academic Award "For contributions in computer vision and data mining."
2018. AAAI Fellow "For significant contributions to the fields of computer vision and data mining, and particularly pioneering work on multimodal understanding for sentiment analysis, computational social science, and digital health."
2018. ACM Fellow "For contributions to multimedia content analysis and social multimedia informatics."
2021. ACM SIGMM Technical Achievement Award for Outstanding Technical Contributions to Multimedia Computing, Communications and Applications "For outstanding, pioneering and continued research contributions in the areas of multimedia content analysis and social media analytics and for outstanding and continued service to the multimedia community"
References
External links
Jiebo Luo's homepage at Rochester
1967 births
Living people
Computer scientists
University of Science and Technology of China alumni
University of Rochester alumni
5746922
https://en.wikipedia.org/wiki/Business%20Careers%20High%20School
Business Careers High School
NSITE High School (formerly Business Careers High School) is a business and technology magnet high school that is part of the Northside Independent School District in San Antonio, Texas. It is a "school within a school" located on the campus of Oliver Wendell Holmes High School. The school attracts students who want to study business and other related fields. The school helps students grasp the concepts of the business world more easily by offering each student a laptop to use for school purposes.
History
Northside Independent School District and the business community combined their resources in 1991 to create Business Careers High School.
Curriculum
Business Careers High School is a magnet school that focuses on business. The curriculum at BCHS is intended to expose students to business professions, with classes including business etiquette, finance, and technology; components also include proper business attire and etiquette. The curriculum gives students a plan for college. Students may choose to follow their interests while choosing classes specific to their needs, or participate in one of the academy curricula offered at Business Careers. Additionally, BC students make frequent field trips to local businesses and workplaces.
Campus life
Business Careers High School shares its location with Oliver Wendell Holmes High School. As such, the school mascot and many activities fall under Holmes High School. The mascot of the school is the Siberian Husky and the school colors are green and gold. Students attending BCHS may participate in organizations and clubs in the same manner as Holmes students do. Students are also mixed in with Holmes students in their classes and are only separated when taking strictly BC courses such as Financial Planning. Class sizes are rather small and more one-to-one focused, especially in the business courses and the honors program. Students also participate in the same UIL (University Interscholastic League) events, such as the fine arts program (band, choir, art, orchestra, theatre, etc.) and sports activities. Students are also allowed to participate in many clubs and organizations such as the Young Women's Organization (YWO) and Young Men's Organization (YMO), as well as the Goldenbelles and Silverbelles (Pep Squad, Dance Team, and Cheerleaders), the AFJROTC, the Academic Decathlon team and its UIL event teams.
Community
Business Careers High School serves students from across San Antonio. Students residing in nearby districts, such as Edgewood and San Antonio ISD, may attend Business Careers with the approval of the administration. The traditional high school, Holmes, serves around 1,500 students in grades 9-12.
Accreditation
Business Careers High School, along with its parent school Holmes High School, is accredited by the Texas Education Agency (TEA).
TEA rating
Business Careers High School shares its rating with Holmes. Holmes is currently rated by the Texas Education Agency as "Academically Acceptable". The TEA is the agency that rates schools based on their performance on tests such as the Texas Assessment of Knowledge and Skills.
Population
As a magnet school, the population is significantly lower than at a traditional high school. The population of Business Careers for the 2006–07 year was 573. The mother school, Holmes, includes Business Careers as part of its total population; the total population of Holmes is therefore around 2,100 students, with around 1,500 students of its own.
Application process
The application process for Business Careers is much like that of the other magnet schools in NISD. A student planning to attend BCHS must fill out an application form. The typical form includes a section to sign residential information and a section where the student must briefly explain why they want to attend Business Careers. Middle school 8th-grade students are the target of the BCHS recruiters, since it will be their first year in high school and it is easier for them to take full advantage of the programs offered at BCHS, but occasionally students already in high school may be eligible to apply provided that they meet the requirements to attend. For 8th graders to attend the following year, they must have at least a "C" average and a good attendance record with few disciplinary problems. Those already in high school may apply provided that they meet the previously stated requirements and apply by the end of the 9th-grade year. For 10th graders to be considered for acceptance, they must already be taking business-related courses at their current high school. No 11th graders may apply for acceptance the following year. Should the applicant meet the requirements, he or she will receive a notice of acceptance. As with all magnet schools in NISD, the school cannot accept everyone. The cap was initially stated at 170 students each year, but the school has since increased its quota.
Current schedule
Business Careers High School is currently on a traditional high school schedule. It operates on an 8-period day, divided into two semesters. Classes meet for 45 minutes a day with one lunch period, so students can earn up to seven Texas high school credits a year. A newer program offered at Holmes High School, and open to students at BCHS, is the Zero/9 period schedule. This schedule, which started in the 2006–07 year, adds a 50-minute period before or after school, giving students a chance to get ahead or catch up on their traditional Texas credit requirements.
Higher education ties
Business Careers High School has ties with several colleges and universities. Because of this, students at BCHS are offered field trips to experience first-hand views of certain institutions of higher education. Several of these include The University of Texas at San Antonio, Texas State University, UT Austin, and campuses in the Alamo Community College District. This also allows students at BCHS to take "dual credit" courses and receive college credit hours per course at San Antonio College and Northwest Vista College. Some of these include Principles of Real Estate, AP Calculus, and English 4 Honors.
Local corporate partnerships
BCHS has partnerships with various companies in and around San Antonio. Originally the curriculum was created by leaders within these corporations. Some of the partners include Wells Fargo, Security Service Federal Credit Union, American Funds, and Washington Mutual. With the help of these companies' contributions, organizations such as the Academy of Finance and BPA are available to the students. Some classes would not be available without the funds generated from these corporations; an example is the Banking and Financial Systems junior course needed for AOF.
Academies
BCHS has two academies that only Business Careers students may take part in, though they are not required for graduation.
They include the Academy of Finance and the Academy of Travel and Tourism; both academies are sponsored by the National Academy Foundation (NAF). These academies give students a more focused view of the field of study they may pursue and, as such, have their own unique set of career and technology courses that must be completed to finish the program.
Academy of Finance
The Academy of Finance provides students with the knowledge and skills to prepare them for work as an accountant or in other financial professions. AOF students are usually required to take specific courses and participate in an internship the summer before their senior year. Classes taken to enhance financial knowledge include, but are not limited to: Introduction to Business, BCIS 1, Business Communication, Accounting 1 and 2, Banking and Finance, International Business, Securities and Insurance, Financial Planning, E-Commerce, and the dual credit course with San Antonio College, Principles of Real Estate. The academy has many benefits, including a senior trip to New York City, a certificate of financial studies, and recognition at graduation with honor cords.
Cyber Academy
This third academy began as a pilot class in the 2006–07 school year with the inclusion of an Oracle programming class. Successful completion of the course may certify students in structured query language (SQL) programming.
Mentorship Program
In Business Careers' mentor program, students are paired with individuals in the workforce (notably in business and finance) who relate to them in personality and life goals. Students can learn from members of the community about real-world experiences and outside issues. The mentor program takes place at least once a month during lunchtime, when the students meet with their mentors to discuss experiences. Throughout the year, the mentor usually brings lunch for the mentee. The program ends each year with a banquet to thank the mentors for their time. This event takes place in May, usually at Oak Hills Country Club, where the student offers to pay for the mentor's meal in honor of their experiences. The mentor program aims to provide opportunities for students to gain exposure to new jobs and careers.
Annual job fair
BCHS hosts a job fair every year for students who have attained at least junior status in high school. Students in the Academy of Finance are usually required to participate. The job fair's main purpose is to find candidates for summer internships and possible co-op studies. Just like traditional job fairs, the BCHS job fair requires students to dress professionally and come prepared with resumes and ready for interviews. Companies such as KSAT 12, NISD, Washington Mutual, Security Service Federal Credit Union, Wells Fargo, HEB, and SeaWorld San Antonio usually participate in the job fair searching for possible interns.
Co-op classes
The co-op program allows students to go to school half a day and then work the other half while still earning high school credits. Usually, to be accepted into the program, students need to have already been hired at a job. Credits for the course are considered business credits.
UTSA Leadership Challenge
BCHS has many ties with local universities in Texas, one of which is UTSA. The program began at UTSA in 1992 and later teamed up with Business Careers, creating a program for high school seniors and college business students known as the "Leadership Challenge."
The program selects 16 students from Business Careers who have outstanding leadership potential and 16 students from UTSA who meet the requirements set forth by UTSA's standards. The program is meant to give these individuals a wider perspective as future leaders by exposing them to various places around San Antonio that are worth exploring. Field trips to places such as United Way, the Jewish Community Center, and the wilderness are crucial to the program. At other times, "speaker luncheons" are held, at which local business leaders talk about their experiences. The program usually lasts the entire school year.
Security Service Mobile Unit
At the time, the Security Service Mobile Unit was unique to only two Northside schools, the other being Clark HS. This "mobile bank" is typically available on Mondays and Wednesdays during lunch periods for transactions by students and teachers. Students may set up accounts and make transactions as if they were actually at a bank. Also unique is the use of student volunteers: each year Business Careers students are asked to volunteer to help run the mobile unit at school, and students may apply to volunteer at the beginning of each school year.
Dress Code / "Dress for Success" Days
"Dress for Success" days are a long-time tradition started in business high schools across the country. They provide the opportunity for students to dress in business attire, i.e. a long-sleeve button-up shirt with tie and slacks for males and various business dress for females. This change in dress and grooming began with an idea that the principal, Geri Berger, suggested during the 2005–06 school year. The original intent was to have students dress appropriately for school, so the tradition was created to prepare students for the demands of the workforce; in turn, students are approached and offered internships with business collaboratives or with alumni who run their own businesses. "Dress for Success" days are usually scheduled every other Tuesday, alternating weeks with the Academy of Finance shirt day. "Dress for Success" days are not required and are not graded; it is the student's choice to take part in the tradition, unless going on a field trip, in which case the student is required to "dress for success". Otherwise, students may wear anything they desire.
Grades and class rankings
Business Careers High School has been known for exemplary students. Grades at this magnet school are typically higher than at traditional high schools and, as such, the school is said to be competitive. To remain at BC, students are expected to maintain a "C" average or higher. Holmes and Business Careers keep their graduating class sizes separate from each other to recognize each school's individual potential, i.e. officially recognizing separate Valedictorians and Salutatorians. Recognizing a Valedictorian and Salutatorian from each school helps to distinguish between those who work hard at BC and those at Holmes. Holmes senior class sizes tend to vary from 240 to as many as 600 students depending on the year of enrollment, while Business Careers senior class sizes tend to vary from over 200 to as low as 70. The lower the class size, the more competitive the atmosphere tends to be, since a class of 70 makes the quartiles smaller and the top ten percent only 7 students. Still, the school recognizes top ten students at graduation with just as much emphasis as a traditional high school.
A separate "combined" ranking is given to students at Business Careers in their senior year to recognize higher-potential students, since small class sizes can make some students look worse than others; this is not an official class rank and is used for informational purposes only. What makes the school competitive is its honors program, which it shares with Holmes High School. Advanced Placement (AP) subjects offered include Mathematics, English, Social Studies, Science, Fine Arts, International Language, and Computer Science. Honors classes include the same. Such courses add weighted points to students' GPA (grade point average): Honors courses are awarded 5 extra points and AP courses 8 extra points, though these weighted grades are not shown on a student's transcript and are only reflected in the student's GPA.
Graduation requirements
Graduation requirements at Business Careers High School are no different from those at any other high school in the state of Texas, but they include an additional course of study in business. A BCHS student's schedule emphasizes math, computer science, and business courses. Additionally, students learn from business professionals through regular field trips to actual business settings. To earn a Business Careers High School diploma, students must meet the requirements set forth by the Texas Education Agency, earn eight additional business credits (two per grade level), and pass the TAKS (Texas Assessment of Knowledge and Skills).
Graduation
Upon graduation, students receive their high school diploma. Students graduating from BC usually wear gold caps and gowns, as opposed to the traditional green for Holmes graduates. BC students may be recognized for one or more of the following:
Honor students
Business Careers High School, like most high schools in Texas, uses a Latin honors system for graduates. At graduation, students are given honors based on their final cumulative grade point average. GPA is not rounded to the nearest whole number, so students must try harder to achieve the desired results. The awards are as follows:
National Honor Society
As with most schools in the country, Business Careers (Holmes) has its own chapter of the National Honor Society. Membership is based on scholarship, character, leadership, and service. Currently, the minimum GPA to join the NHS is 89.0000. Should a member's GPA drop to 88.9999 or below at any time, he or she will be placed on probation to bring it up; otherwise that student will be dropped from the program. Those inducted who successfully complete the program's requirements (community service hours, aid in the fundraisers, etc.) receive recognition in the graduation program along with the honor of wearing the NHS collar.
Academy of Finance
Students who are members of the AOF receive special recognition in the graduation program along with the ability to wear honor cords, which may be a different shade of green (usually light green).
Top ten students
Students who rank in the top ten of their class receive special recognition at the graduation ceremony. Top ten students are usually given dark green honor cords to display and usually sit in the front row at graduation, while everyone else typically sits in alphabetical order regardless of class rank or any other merit.
Those graduating Valedictorian and Salutatorian sit on stage alongside the senior class president and Student Council president, as well as the principals, and receive the NISD medallion for the honor at graduation.
Valedictorians and Salutatorians
The title of Valedictorian is given to the individual with a numerical class rank of 1, and the title of Salutatorian to the individual with a numerical class rank of 2. These two individuals sit on stage at graduation, receive the NISD medallion recognizing them as Northside Independent School District scholars, and receive a plaque or framed award as proof of the honor. To be considered for these top two positions, the individuals must not only have the official numeric rank but must also maintain high standards of scholarship, leadership, attendance, and responsibility. An individual may be disqualified or removed from either position for failure to meet these standards. Though official ranks are given at the end of the sixth semester, the official Valedictorian and Salutatorian are not announced until after the seventh semester. Once announced, these individuals receive a notice for photos to be displayed on the NISD website in late May, briefly telling about them and their future plans. To date, no ties for these positions have occurred, but should a case arise in the future, the administration will deal with it accordingly; the odds of individuals receiving exactly the same GPA down to the fourth decimal are very slim.
Wireless laptop initiative
In the 2006–07 school year, to further educate its students, BCHS began a new initiative of issuing laptops to each student attending the school. BCHS was the first high school in the greater San Antonio area to have wireless communication. The one-to-one laptop initiative offers all students and teachers at BCHS continuous access to a wide range of software, electronic documents, the Internet, and other digital resources for teaching and learning. Every student was issued a Gateway laptop to carry as his or her own for the school year, and the laptops were incorporated into the curriculum at BCHS. Each student was required to carry the laptop to and from class every day to complete daily assignments. Also new was the creation of what was known as "Net storage", an Internet database through which students could exchange assignments with teachers and vice versa. Net storage is accessible from any computer as long as the student has their password and username. Each year, new courses at BCHS will be created and the curriculum will become increasingly dependent on the laptops to achieve success and, ultimately, greater knowledge of technology. The slogan for the school with its new initiative is "Taking the lead with laptops." In the 2007–2008 school year, Net storage was replaced with S-files, which serves the same purpose but is easier for students to learn and use.
Programs (including those with Holmes)
Business Careers offers numerous programs, including Business Professionals of America (BPA) and NAWBO (National Association of Women Business Owners). Business Careers High School, since it is connected to Oliver Wendell Holmes High School, has a widely known Academic Decathlon team. The team has achieved strong results in the past few years: at the state competition, it has placed 3rd ('04/'05), 2nd ('05/'06) and 5th ('06/'07).
Another successful program is the Oliver Wendell Holmes High School Band, which has won numerous awards in marching and concert competitions.
Northside School of Innovation, Technology, and Entrepreneurship - N-SITE
On October 24, 2018, the official Business Careers High School Twitter page announced that in the fall of 2019 Business Careers High School would be discontinued and replaced by the Northside School of Innovation, Technology, and Entrepreneurship.
N-SITE Curriculum
N-SITE has three main academies to choose from at the conclusion of freshman year: the Academy of Entrepreneurship, the Academy of Computer Programming, and the Academy of Cyber Security.
Academy of Entrepreneurship
In the Academy of Entrepreneurship, students are expected to take Virtual Business and Social Media Marketing in their sophomore year and an Entrepreneurship course in their junior year.
Academy of Computer Programming
In the Academy of Computer Programming, students are expected to take Computer Programming 1 in their sophomore year and Computer Programming 2 in their junior year.
Academy of Cyber Security
In the Academy of Cyber Security, students are expected to take CISCO 1 in their sophomore year and CISCO 2 in their junior year.
Requirements
All students are required to take Principles of Information Technology and Principles of Business, Marketing, and Finance in their freshman year. Additionally, all students are required to take the Virtual Enterprises Capstone Course in their senior year.
Notable alumni
Darold Williamson, Olympic Gold Medallist
References
External links
Official Site of Northside Independent School District
Official Site of Business Careers High School
Official Site of Holmes High School
Official Site of the National Academy Foundation
Official Site of the UTSA Center for Professional Excellence (the home of Leadership Challenge)
Official Site of the Texas Academic Decathlon
Official Site of the Holmes Husky Band
Educational institutions established in 1991
High schools in San Antonio
Public high schools in Bexar County, Texas
Northside Independent School District high schools
Magnet schools in Texas
1991 establishments in Texas
8453590
https://en.wikipedia.org/wiki/IBM%20storage
IBM storage
The IBM Storage product portfolio includes disk, flash, tape and NAS storage products, storage software, and services. IBM's approach is to focus on data management.
Software
IBM Spectrum Storage
The IBM Spectrum Storage portfolio can centrally manage more than 300 different storage devices and yottabytes of data.
IBM Spectrum Accelerate
The functionality of Spectrum Accelerate is based on the IBM XIV, a high-end disk storage system. IBM Spectrum Accelerate and XIV run the same base software stack and interoperate with features such as management, remote replication and volume mobility.
IBM Spectrum Scale
IBM Spectrum Scale is software-defined storage for cloud and analytics. The product is widely used in both commercial and academic environments and has a history going back to the mid-1990s; it was known as GPFS before IBM re-branded all storage products in 2015.
IBM Spectrum Virtualize
IBM Spectrum Virtualize is a block storage virtualization system. Because the IBM Storwize V7000 uses SVC code, it can also be used to perform storage virtualization in exactly the same way as SVC. Since mid-2012 it has offered real-time compression with no performance impact, reducing disk utilization by up to 80%. SVC can be configured in a stretched cluster mode, with automatic failover between two datacenters, and can include SSDs that the Easy Tier software uses to perform sub-LUN automatic tiering.
IBM Spectrum Control
IBM Spectrum Control provides infrastructure management for virtualized, cloud and software-defined storage.
IBM Spectrum Protect
IBM Spectrum Protect is a progression of the Tivoli Storage Manager product.
IBM Spectrum Archive
IBM Spectrum Archive allows users to run any application designed for disk files against tape data without concern for the fact that the data is physically stored on tape. IBM offers four options:
IBM LTFS Single Drive Edition - access and manage data on a standalone tape drive as if the data were on disk
IBM LTFS Library Edition - access and manage data on single or multiple cartridges in a tape library
IBM LTFS Storage Manager - manage both online and offline files in IBM tape libraries
IBM LTFS Enterprise Edition - run applications designed for disk files from tape storage
IBM SmartCloud Storage Access
IBM SmartCloud Storage Access is a software application designed to create a private cloud storage service on existing storage devices. The software can be configured to allow users self-service, Internet-based access for account creation, storage provisioning and file management. The software offers simple management with monitoring and reporting capabilities, including storage usage by user and group definitions.
Active Cloud Engine
The Active Cloud Engine (ACE) is an advanced form of multiple-site replication. ACE is designed to allow different types of cloud implementations to exchange data dynamically and to extend the SONAS capability of a single, centrally managed namespace to a truly distributed, geographically dispersed, global namespace.
IBM Easy Tier
IBM Easy Tier is designed to automate data placement throughout the disk pool to improve the efficiency and performance of the storage system. Easy Tier relocates data (at the extent level) across up to three drive tiers automatically and without disruption to applications. IBM Easy Tier is available on the DS8000, Storwize (V7000, V7000 Unified, V5000 and V3700 lines) and SAN Volume Controller.
Current hardware
After 2019 IBM dropped the HDD- and SSD-based storage server series, and all current lines provide only flash-oriented or tape-oriented solutions.
Flash storage
IBM FlashSystem
IBM FlashSystem offers a range of dedicated "all-flash" storage systems (built around flash modules rather than commodity SSDs) based on an Intel x86 platform. IBM acquired flash storage system maker Texas Memory Systems in 2012. In April 2013, IBM announced a plan for a $1 billion investment in flash storage research and development, and the product line-up was renewed in 2014 with the announcement of the FlashSystem 840 and FlashSystem V840. IBM has been refreshing those systems and adding new capabilities every year. Earlier FlashSystem models were offered only in a 1U size; the current lineup contains rack-mountable systems in 1U, 2U or 6U form factors, as well as a cabinet-size solution based on 6U modules. In 2017 the FlashSystem brand replaced the XIV brand, and in 2020 FlashSystem replaced the Storwize brand.
IBM Data Engine for NoSQL - an integrated black-box device combining an IBM PowerLinux server with FlashSystem modules attached as a non-volatile memory extension (not as storage). The integrated system offers large-capacity NoSQL services based on pre-loaded Redis, Cassandra and Neo4j, with in-memory instances of up to 57 TB. Compared to a clustered in-memory implementation, the Data Engine for NoSQL consumes a fraction of the power and rack footprint while delivering similar performance by keeping relevant (hashing) data structures in fast memory. Use cases include scalable web shops, gaming, genomics, geolocation, catalogs, hash tables and cluster caches like memcached.
DS servers
The DS series is an IBM Power-based storage series that offers specialized advanced functions optimized for IBM Power Systems and IBM Z servers. This line was earlier known as the System Storage DS series, and before that the TotalStorage DS series; current models have gradually dropped the "System Storage" naming in favor of simple line names (DS#### for flash systems, TS#### for tape storage). Currently the DS series contains only the DS8000 sub-line.
DS8000 series
The DS8000 line was formerly offered only as an assembled cabinet-size solution, but the current line-up includes a half-rack mountable model. The DS8000 can also use self-encrypting drives for every drive tier to help secure data at rest.
Tape and virtual tape systems
TS libraries and servers
Like the similar DS storage series, the tape system lines were earlier known as the System Storage TS series, and before that the TotalStorage TS series, and are based on IBM POWER controllers.
TS7000 series
A mainframe-oriented virtual tape library series. The TS7700 line was released in 20## as System Storage 7700 and updated in 201#, 2016, 2018 and 2020. Like the DS8000 series, current models can be offered as an assembled rack-size solution or as a half-rack rack-mountable system.
TS4500
General purpose tape library series.
TS4300
TS2900
TS readers
TS22#0 series
TS11#0 series
Withdrawn hardware - x86 lines
PureData servers
The PureData line was introduced in 2012 to replace the Netezza line.
Flash storage
IBM DeepFlash
DeepFlash 150 - an ultra-high-density flash system based on a SanDisk InfiniFlash IF100 drawer (holding up to 0.5 PB of flash capacity in 3U of rack space) and IBM Spectrum Scale software. It is directly attached via SAS to a maximum of 8 servers used as an application cluster, or deployed as an integrated device running SDS storage management software. Its design point is the lowest price per reliable capacity; in contrast, for the lowest price per IOPS or the best latency per investment, storage built around FlashSystem modules is the suggested alternative. Introduced in 2016.
DeepFlash Elastic Storage Server - an integrated device combining one or two DeepFlash 150 drawers with IBM Spectrum Scale software for exascale storage repositories with analytics capabilities (Hadoop, CCTV, analytics archive, media server, etc.). The DeepFlash-ESS can be clustered non-disruptively with existing IBM Elastic Storage Servers, up to a theoretical limit of 8,000 clustered devices. It features file (NFS, SMB), object (Swift, S3) and Hadoop transparent access. Spectrum Scale offers automated data placement and lifecycle management from memory to flash to disk to tape, besides geographically distributed caching and replication.
Other flash storage capabilities
High IOPS PCIe Adapters - PCIe card adapters for former IBM System x servers, offering capacities up to 2.4 TB. Moved to Lenovo.
HDD/SSD storage - For entry and midrange workloads
IBM XIV
The IBM XIV Storage System was configured as a cabinet-size solution and designed to work well in cloud and virtualized environments. The last XIV Gen3 model offered 2, 3, 4 or 6 TB drives, providing up to 485 TB of usable capacity per rack. SSD caching (available as an option) adds up to 12 TB of management-free high-performance data caching capability to the entire array. The system can also connect to external storage via Fibre Channel (8 Gbit/s) and iSCSI (1 or 10 Gbit/s). The XIV line was replaced by the IBM FlashSystem line.
IBM Storwize family
The Storwize family of storage controllers shares its software with the IBM SAN Volume Controller and offers the same functionality with few exceptions. Storwize systems are capable of external virtualization and are oriented toward technology migration and investment protection for aging systems. Storwize advanced caching, free-of-charge Easy Tier (automatic data placement) and automatic hotspot elimination help give previous-generation storage systems a second life. Modern virtualization functions like inline real-time compression for data on external systems can help delay capacity repurchase for several years.
Storwize V7000 series - announced in 2010, a compact (2U rack-mount enclosure) virtualizing storage system that inherits IBM SAN Volume Controller (SVC) functionality.
Storwize V7000 Gen1 - can attach to storage clients via FCP (8 and 16 Gbit/s), FCoE or iSCSI (1 or 10 Gbit/s) protocols and can use Real-time Compression to reduce disk space usage by up to 80 percent.
Storwize V7000 Gen2 and Gen2 turbo - each a technology upgrade with increased throughput and drive support: 720 slots per single controller or 3,040 per clustered controller.
Storwize V7000F - designed for SSD-only operation.
Storwize V7000 Unified - combines two head units running IBM Storwize File Module Software with the IBM Storwize V7000 block storage system. It is described as unified storage because it simultaneously implements NAS protocols (such as SMB and NFS) and block storage. It leverages IBM Spectrum Scale software capabilities.
Storwize V7000 Gen3 - the last upgrade of the V7000 line before it was merged into FlashSystem in 2020.
Storwize V5000 series - announced in 2013, a mid-range virtualizing storage system offering many of the features of the V7000 in a 2U rack-mount enclosure.
Storwize V5000 - supports 6 Gbit SAS and 1 Gbit iSCSI host attachment and either 8 Gbit FC or 10 Gbit iSCSI/FCoE host attachment. The system can support up to 480 drives with nineteen expansion enclosures, and up to 960 drives in a two-way cluster configuration.
Storwize V5000 Gen2 - a technology upgrade with support for an increased number of drives. It is available as the V5010, V5020 and V5030, with mutual in-place upgrade capability.
Storwize V5000F - designed for SSD-only operation.
Storwize V5000 Gen3 - the last upgrade of the V5000 line before it was merged into FlashSystem in 2020.
Storwize V3700 - announced in 2012, an entry-level 2U system oriented toward the block storage needs of small and midsize businesses. This system offers data consolidation and sharing capabilities previously available only in more expensive systems.
Transparent Cloud Tiering for Swift- and S3-compatible object datastores can be used as a cold tier for incremental volume snapshots and volume archives without live production access. This allows keeping hourly time-machine copies or archiving VM images, including attached volumes, at a price point somewhat closer to tape media. Supported on-premise datastores include IBM Cloud Object Store (aka Cleversafe) and IBM Spectrum Scale object. Off-premise datastores would be popular S3-compatible cloud services like IBM Bluemix (aka Cleversafe cloud). Off-premise Transparent Cloud Tiering by default uses AES encryption, which is a licensed feature.
HDD/SSD storage - High density rack systems
IBM Storwize High-Density Expansion 5U92 - for Storwize V5000 Gen2, V7000 and SAN Volume Controller, attaching via 12 Gb SAS lanes. This high-density carrier hosts 92 hot-swappable large-form-factor drives in 5U of rack height. Use cases include general footprint reduction, active archives, streaming media applications, or big data warehouses. Peak performance figures are equivalent to four chained 2U Storwize 12 Gb SAS expansions, at an equal total number (and type) of drives.
Withdrawn hardware - POWER and early RISC lines
Flash storage
Some System Storage DS8000 Series (models DS8###F)
HDD/SSD storage
System Storage (2006-2019)
N7000 series
N6000 series
N3000 series
DS300 (iSCSI controller)
DS400 (FC attached controller, using SCSI drives)
DS3000 Series
DS3200
DS3400
DS3500
DS4000 Series
DS4500
DS4700
DS4800
DS5000 Series
DS5020
DS5100
DS5300
DS6000 Series
DS6800
Enterprise storage, with both FC and FICON host connection
PowerPC 750 dual-controller with 8 host ports and 8 drive ports
3U enclosure with 16 FC drive bays
Attached up to 128 drives using DS6000 expansion units (1750-EX1 and 1750-EX2)
DS8000 Series
DCS3700
DCS9550 (based on the DataDirect Networks S2A9550)
Expansions
EXP710 (2Gbit FC expansion drawer for DS4000 attachment)
EXP810 (4Gbit FC expansion drawer for DS4000 attachment)
TotalStorage (2001-2006)
DS4000 Series
DS4100 (FC attached controller, using SATA drives)
DS4200
DS4300
DS4300 Turbo
DS4400
FAStT Series (renamed to TotalStorage DS4000 Series in 2004)
EXP200 (1Gbit FC expansion drawer for FAStT attachment)
EXP500 (1Gbit FC expansion drawer for FAStT attachment)
FAStT100 (renamed to DS4100)
FAStT200
FAStT500
FAStT600 (renamed to DS4300)
FAStT600 Turbo (renamed to DS4300 Turbo)
FAStT700 (renamed to DS4400)
FAStT900 (renamed to DS4500)
Latest Enterprise Storage Server (renamed to TotalStorage DS8000 Series)
Expansions
EXP700 (2Gbit FC expansion drawer for DS4000 attachment)
EXP300 (SCSI Ultra160 expansion drawer for direct host attachment)
EXP400 (SCSI Ultra320 expansion drawer for direct host or DS400 attachment)
EXP100 (1Gbit FC expansion drawer for DS4000 attachment, using SATA disks)
Enterprise Storage Server (or ESS, or Shark; predecessor of the DS8000 Series)
STN6800
STN6500
SONAS
IBM Scale Out Network Attached Storage (SONAS) was the IBM enterprise x86-based storage platform based on GPFS technology, released in 2010 as a hardware product. This system implements NAS protocols over a large-scale global namespace. As of 2011, the system could scale out using commodity components to 30 balanced nodes and up to 21 PB of storage. The 2013 lineup was based on the DCS3700 storage line. GPFS gives the SONAS system built-in ILM, and tight integration with Tivoli Storage Manager helps move data to disk pools.
Tape and virtual tape systems
For enterprise workloads
TS4500 Tape Library
High-density tape library supporting Linear Tape-Open (LTO) 5 and 6 or TS1140 and TS1150 drives. Can scale up to 35.5 PB of native capacity with 3592 cartridges and up to 11.7 PB with LTO 6 cartridges. Supports up to 5.5 PB in 10 sq ft.
TS3500 Tape Library
Highly scalable tape library supporting Linear Tape-Open (LTO) or TS11x0 drives. Can scale up to 16 frames, 192 drives and over 20,000 cartridges per library string, or up to 2,700 drives per library complex.
Tape drives
TS1140 - tape drive that uses 3592 media.
TS1060 - LTO tape drive that uses LTO generation 6 technology, for use in TS3500 tape libraries.
For entry and midrange workloads
Tape libraries
TS3310 - expandable library with up to 18 LTO drives (409 cartridges maximum with expansion modules).
TS3200 - library with up to four LTO drives using half-height drive assemblies (48 cartridges), or up to two full-height drives.
TS3100 - library with up to two LTO drives using half-height drive assemblies (24 cartridges), or one full-height drive.
Tape drives
IBM System Storage TS2900 Tape Autoloader - designed for entry-level automation for backup and archiving in small-to-medium business environments. The TS2900 is available with IBM Linear Tape-Open (LTO) half-height SAS tape technology.
TS2360 - full-height external standalone or rack-mountable shelf unit with a native physical capacity of 2.5 TB. The IBM Ultrium 6 technology is designed to support media partitioning, IBM Linear Tape File System (LTFS) technology, encryption of data, and WORM cartridges.
TS2260 - half-height external standalone or rack-mountable shelf unit with a native physical capacity of 2.5 TB.
Virtual tape libraries
TS7620 ProtecTIER Deduplication Appliance - preconfigured repository that can be configured with either a Virtual Tape Library or a Symantec OpenStorage interface, with a capacity of up to 35 TB.
IBM Virtualization Engine TS7700 series - the TS7700 is a virtual tape library for System z (mainframe) that uses disk drives as a cache to accelerate backup operations. The design is intended to protect data while shortening backup windows. End-to-end encryption protects data in motion, on cache hard drives and on tape. The TS7740 and TS7720 are designed to speed up tape backups and restores by using a tiered hierarchy of disk and tape to make more efficient use of tape drives.
IBM System Storage TS7650G ProtecTIER Deduplication Gateway - designed to meet the disk-based data protection needs of the enterprise data center while reducing costs. The system offers inline deduplication performance and scalability up to 1 petabyte (PB) of physical storage capacity per system, which can provide up to 25 PB or more of backup storage capacity.
See also
List of IBM products
History of IBM magnetic disk drives
IBM XIV Storage System
IBM SAN Volume Controller
IBM Storwize family
IBM FlashSystem
IBM Tivoli Storage Manager
References
External links
46995531
https://en.wikipedia.org/wiki/Revionics
Revionics
Revionics (Revionics, Inc.) is an American software company that develops lifecycle price optimization software for retailers. The software is marketed via the software as a service (SaaS) model.
History
Revionics was founded in 2002 in Roseville, California. The company acquired Retail Optimization in July 2012 and SkuLoop in November 2012, and was ranked 79th in Deloitte's 2012 Technology Fast 500 rankings. In September 2013 the company raised $11.2 million in venture financing. In October 2013, Revionics moved its headquarters from Roseville to Austin, Texas. In December 2014, the company announced an investment from Goldman Sachs' private capital investing group, reported at $30 million. In December 2015 it announced the acquisition of Marketyze, based in Tel Aviv. In August 2020 it was acquired by Aptos, based in Atlanta.
References
Companies established in 2002
Software companies based in Texas
2002 establishments in California
Companies based in Austin, Texas
Software companies of the United States
8632926
https://en.wikipedia.org/wiki/Security%20bug
Security bug
A security bug or security defect is a software bug that can be exploited to gain unauthorized access or privileges on a computer system. Security bugs introduce security vulnerabilities by compromising one or more of:
Authentication of users and other entities
Authorization of access rights and privileges
Data confidentiality
Data integrity
Security bugs do not need to be identified or exploited to qualify as such, and they are assumed to be much more common than known vulnerabilities in almost any system.
Causes
Security bugs, like all other software bugs, stem from root causes that can generally be traced to absent or inadequate:
Software developer training
Use case analysis
Software engineering methodology
Quality assurance testing and other best practices
Taxonomy
Security bugs generally fall into a fairly small number of broad categories (an illustrative code example follows at the end of this article) that include:
Memory safety (e.g. buffer overflow and dangling pointer bugs)
Race condition
Secure input and output handling
Faulty use of an API
Improper use case handling
Improper exception handling
Resource leaks, often but not always due to improper exception handling
Preprocessing input strings before they are checked for being acceptable
Mitigation
See software security assurance.
See also
Computer security
Hacking: The Art of Exploitation Second Edition
IT risk
Threat (computer)
Vulnerability (computing)
Hardware bug
Secure coding
References
Further reading
Computer security
Software bugs
Software testing
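As a generic illustration of the "memory safety" category listed in the taxonomy above (an editor-added example, not drawn from any particular product or from this article's sources), the following C fragment contains a classic buffer overflow: input longer than the destination buffer overwrites adjacent stack memory, which an attacker can sometimes exploit to corrupt control flow. A bounded alternative is shown alongside it.

```c
/*
 * Illustrative example of a memory-safety security bug (buffer overflow).
 * Do not use the vulnerable pattern in real code.
 */
#include <stdio.h>
#include <string.h>

static void vulnerable(const char *input)
{
    char buffer[16];

    /* BUG: strcpy performs no bounds checking, so any input longer than
       15 characters overwrites memory adjacent to `buffer` on the stack. */
    strcpy(buffer, input);
    printf("Stored: %s\n", buffer);
}

static void safer(const char *input)
{
    char buffer[16];

    /* A bounded copy avoids the overflow; snprintf truncates the input
       and always NUL-terminates within the given size. */
    snprintf(buffer, sizeof(buffer), "%s", input);
    printf("Stored: %s\n", buffer);
}

int main(int argc, char **argv)
{
    const char *input = (argc > 1) ? argv[1] : "short";
    vulnerable(input);   /* exploitable when input is attacker-controlled */
    safer(input);
    return 0;
}
```

The same defect class underlies many historical worms that targeted widely deployed software, which is why memory safety heads most taxonomies of security bugs.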
24099648
https://en.wikipedia.org/wiki/Ira%20Fuchs
Ira Fuchs
Ira H. Fuchs (born December 1948) is an internationally known authority on technology innovation in higher education and a co-founder of BITNET, an important precursor of the Internet. He was inducted into the Internet Hall of Fame in 2017. Since 2012 he has been President of BITNET, LLC, a consulting firm specializing in online learning and other applications of technology in higher education.
Career
Ira Fuchs graduated from the Columbia University School of Engineering and Applied Sciences in 1969 with a B.S. (Applied Physics) and in 1976 with an M.S. (Computer Science and Electrical Engineering). From 1973, at the age of 24, until 1980 he served as the first Executive Director of the University Computer Center at The City University of New York (CUNY), and then as CUNY's Vice Chancellor of University Systems until 1985. With Greydon Freeman, Fuchs co-founded BITNET in 1981 by initially connecting CUNY and Yale University. In the mid-1980s BITNET connected millions of users from more than 1,400 institutions of higher education, government laboratories, and IBM's VNET network. It was the first academic computer network to connect the United States to Japan, Taiwan, Singapore, Israel, the USSR, and most of western Europe. Along with Daniel Oberst and Ricky Hernandez, Fuchs was co-inventor of LISTSERV, an electronic mailing list application. From 1984 until 1989 Fuchs was President of BITNET Inc., and from 1989 to 2003 he was President of the Corporation for Research and Educational Networking (CREN), a not-for-profit organization that operated the BITNET academic computer network as well as the CSNET network. From 1985 until 2000 Fuchs was vice president for Computing and Information Technology at Princeton University. In 1994, he was a co-founder of JSTOR, a not-for-profit organization dedicated to archiving and providing access to important scholarly journals; he served as the first Chief Scientist of JSTOR from 1994 to 2000. From 2000 until 2010 he was vice president and Program Officer for Research in Information Technology at The Andrew W. Mellon Foundation, where he directed the Foundation's grant making in the area of digital technologies that can be applied to academic and administrative use in colleges and universities, libraries, museums, and arts organizations. Open source software initiatives supported by the Andrew W. Mellon Foundation include Sakai, uPortal, Kuali, Sophie, Chandler, Zotero, Open Knowledge Initiative, Bamboo, CollectionSpace, ConservationSpace, DecaPod, Fedora, SIMILE, DSpace, FLUID, OpenCast, SEASR, Visual Understanding Environment, and the Open Library Environment (OLE). From 2010 until 2012 he was Executive Director of Next Generation Learning Challenges, where he was responsible for the development and day-to-day operations of the program, which provides grants, builds evidence, and develops an active community committed to identifying and scaling technology-enabled approaches that dramatically improve college readiness and completion. Fuchs is currently a Director/Trustee of The Seeing Eye, The Philadelphia Contributionship (the oldest property insurer in the US) and Ithaka Harbors Inc. He was also a Founding Trustee of JSTOR, USENIX and the Internet Society, and a former Trustee of Mills College, Sarah Lawrence College, Princeton University Press, the Open Source Applications Foundation, Princeton Public Library (Princeton, NJ) (Treasurer), and the Global Education Learning Community.
Selected publications "Network Information is Not Free", Scholarly Publishing: The Electronic Frontier, Robin P. Peek and Gregory B. Newby, editors, Cambridge, MA, The MIT Press, 1996 "Research Networks and Acceptable Use", Educom Bulletin, Vol. 23, No. 2/3, Summer/Fall 1988, pp. 43–48 Awards Internet Hall of Fame, inducted 2017 (Video) Indiana University's Thomas Hart Benton Mural Medallion, 2011 (Video) Educause – Excellence in Leadership 2010 (Award acknowledges leadership within higher education information technology) Educause – Excellence in Leadership 2000 (Highest professional award given to a CIO of an academic institution) Internet Innovator Award, Technology New Jersey Inc., 1999 References External links "How a Ham Radio Inspired the Internet", Internet Hall of Fame, August 2018 "Archimedes' Lever and Collaboration: An Interview with Ira Fuchs" by Richard N. Katz, Educause, March/April 2001 CUNY Matters, September 2006, Page 8 "Needed: an 'Educore' to Aid Collaboration", Chronicle of Higher Education, September 2004, Volume 51, Issue 5, Page B19 "Collaboration for a Positive-Sum Outcome: An Interview with Ira H. Fuchs" by Christopher J. Mackie, Educause Review, Volume 46, Number 3, May/June 2011 Princeton Packet Magazine, October 2015 Encyclopedia.com, February 2021 Entrepreneur's Handbook, April 2021 Web Masters Episode #32, April 2021 Living people 1948 births American computer scientists Andrew W. Mellon Foundation Columbia School of Engineering and Applied Science alumni
24246419
https://en.wikipedia.org/wiki/Global%20Mapper
Global Mapper
Global Mapper is a geographic information system (GIS) software package currently developed by Blue Marble Geographics that runs on Microsoft Windows. The GIS software competes with ESRI, GeoMedia, Manifold System, and MapInfo GIS products. Global Mapper handles vector, raster, and elevation data, and provides viewing, conversion, and other general GIS features. Global Mapper has an active user community with a mailing list and online forums.
History
In 1995 the USGS needed a Windows viewer for its data products, so it developed the dlgv32 application for viewing its DLG (Digital Line Graph) vector data products. Between 1995 and 1998 the dlgv32 application was expanded to include support for viewing other USGS data products, including DRG (topographic maps), DEM (digital elevation model), SDTS-DLG, and SDTS-DEM data products. The development process is described in detail in the USGS paper titled 'A Programming Exercise'. In 1998 the USGS released the source code for dlgv32 v3.7 to the public domain. In 2001, the source code for dlgv32 was further developed by a private individual into the commercial product dlgv32 Pro v4.0 and offered for sale via the internet. Later that same year the product was renamed Global Mapper and became a commercial product of the company Global Mapper Software LLC. The USGS was distributing a version of the software under the name dlgv32 Pro (Global Mapper). On November 2, 2011, Blue Marble Geographics announced at its annual user conference that it had purchased Global Mapper LLC.
Global Mapper Releases
Since the initial commercialization of the Global Mapper product in 2001, there have been yearly major product releases and numerous intermediate point releases adding functionality to the software. A mobile version of Global Mapper, Global Mapper Mobile, was released in 2016.
References
External links
Global Mapper website
GIS companies
GIS software
23095336
https://en.wikipedia.org/wiki/AES%20instruction%20set
AES instruction set
An Advanced Encryption Standard instruction set is now integrated into many processors. The purpose of the instruction set is to improve the speed and security of applications performing encryption and decryption using the Advanced Encryption Standard (AES). The extension is typically implemented as instructions that each perform a single round of AES, together with a special variant for the last round, which differs slightly from the other rounds. The side-channel attack surface of AES is reduced when it is implemented in an instruction set, compared to when AES is implemented in software only.
x86 architecture processors
AES-NI (Intel Advanced Encryption Standard New Instructions) was the first major implementation. AES-NI is an extension to the x86 instruction set architecture for microprocessors from Intel and AMD, proposed by Intel in March 2008.
Instructions
The extension adds six instructions: AESENC and AESENCLAST (one round of encryption and the final encryption round), AESDEC and AESDECLAST (the corresponding decryption rounds), AESIMC (the Inverse MixColumns operation), and AESKEYGENASSIST (key-expansion assistance).
Intel
The following Intel processors support the AES-NI instruction set:
Westmere based processors, specifically:
Westmere-EP (a.k.a. Gulftown Xeon 5600-series DP server model) processors
Clarkdale processors (except Core i3, Pentium and Celeron)
Arrandale processors (except Celeron, Pentium, Core i3, Core i5-4XXM)
Sandy Bridge processors:
Desktop: all except Pentium, Celeron, Core i3
Mobile: all Core i7 and Core i5. Several vendors have shipped BIOS configurations with the extension disabled; a BIOS update is required to enable them.
Ivy Bridge processors
All i5, i7, Xeon and i3-2115C only
Haswell processors (all except i3-4000m, Pentium and Celeron)
Broadwell processors (all except Pentium and Celeron)
Silvermont/Airmont processors (all except Bay Trail-D and Bay Trail-M)
Goldmont (and later) processors
Skylake (and later) processors
AMD
Several AMD processors support AES instructions:
Jaguar processors and newer
Puma processors and newer
"Heavy Equipment" processors
Bulldozer processors
Piledriver processors
Steamroller processors
Excavator processors and newer
Zen (and later) based processors
Hardware acceleration in other architectures
AES support with unprivileged processor instructions is also available in the latest SPARC processors (T3, T4, T5, M5, and forward) and in the latest ARM processors. The SPARC T4 processor, introduced in 2011, has user-level instructions implementing AES rounds. These instructions are in addition to higher-level encryption commands. The ARMv8-A processor architecture, announced in 2011, including the ARM Cortex-A53 and A57 (but not previous v7 processors like the Cortex-A5, 7, 8, 9, 11, 15), also has user-level instructions which implement AES rounds.
Supporting x86 CPUs
VIA x86 CPUs, AMD Geode, and Marvell Kirkwood (ARM, mv_cesa in Linux) use driver-based accelerated AES handling instead. (See Crypto API (Linux).) The following chips, while supporting AES hardware acceleration, do not support AES-NI:
AMD Geode LX processors
VIA, using VIA PadLock
VIA C3 Nehemiah C5P (Eden-N) processors
VIA C7 Esther C5J processors
ARM architecture
Programming information is available in ARM Architecture Reference Manual ARMv8, for ARMv8-A architecture profile (Section A2.3 "The Armv8 Cryptographic Extension").
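For the x86 AES-NI instructions described above, compilers expose the instructions as C intrinsics. The following is a minimal sketch of encrypting one 16-byte block with AES-128; it assumes the 11 round keys have already been expanded (for example with _mm_aeskeygenassist_si128), and the function name is invented for illustration. Build with an AES-capable compiler flag such as -maes.

/* One AES-128 block encryption using AES-NI intrinsics. */
#include <wmmintrin.h>   /* AES-NI intrinsics */
#include <stdint.h>

void aes128_encrypt_block(const uint8_t in[16], uint8_t out[16],
                          const __m128i round_keys[11])
{
    __m128i state = _mm_loadu_si128((const __m128i *)in);

    state = _mm_xor_si128(state, round_keys[0]);             /* initial AddRoundKey */
    for (int round = 1; round <= 9; round++)
        state = _mm_aesenc_si128(state, round_keys[round]);  /* one full round per AESENC */
    state = _mm_aesenclast_si128(state, round_keys[10]);     /* final round (no MixColumns) */

    _mm_storeu_si128((__m128i *)out, state);
}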
ARMv8-A architecture
ARM cryptographic extensions are optionally supported on ARM Cortex-A30/50/70 cores
Cryptographic hardware accelerators/engines
Allwinner A10, A20, A30, A31, A80, A83T, H3 and A64 using Security System
Broadcom BCM5801/BCM5805/BCM5820 using Security Processor
NXP Semiconductors i.MX6 onwards
Qualcomm Snapdragon 805 onwards
Rockchip RK30xx series onwards
Samsung Exynos 3 series onwards
RISC-V architecture
Whilst the RISC-V architecture doesn't include AES-specific instructions, a number of RISC-V chips include integrated AES co-processors. Examples include:
The dual-core 64-bit RISC-V Sipeed-M1 supports AES and SHA256.
The RISC-V-based ESP32-C (as well as the Xtensa-based ESP32) supports AES, SHA, RSA, RNG, HMAC, digital signature, and XTS 128 for flash.
Bouffalo Labs BL602/604 32-bit RISC-V supports various AES and SHA variants.
POWER architecture
Since Power ISA v2.07, the instructions vcipher and vcipherlast implement one round of AES directly.
IBM z/Architecture
IBM z9 or later mainframe processors support AES as single-opcode (KM, KMC) AES ECB/CBC instructions via IBM's CryptoExpress hardware. These single-instruction AES versions are therefore easier to use than Intel NI ones, but may not be extended to implement other algorithms based on AES round functions (such as the Whirlpool and Grøstl hash functions).
Other architectures
Atmel XMEGA (on-chip accelerator with parallel execution, not an instruction)
SPARC T3 and later processors have hardware support for several cryptographic algorithms, including AES.
Cavium Octeon MIPS: all Cavium Octeon MIPS-based processors have hardware support for several cryptographic algorithms, including AES, using special coprocessor 3 instructions.
Performance
In AES-NI Performance Analyzed, Patrick Schmid and Achim Roos found "impressive results from a handful of applications already optimized to take advantage of Intel's AES-NI capability". A performance analysis using the Crypto++ security library showed throughput improving from approximately 28.0 cycles per byte to 3.5 cycles per byte with AES/GCM, versus a Pentium 4 with no acceleration.
Supporting software
Most modern compilers can emit AES instructions. Much security and cryptography software supports the AES instruction set, including the following notable core infrastructure:
Apple's FileVault 2 full-disk encryption in macOS 10.10+
NonStop SSH2, NonStop cF SSL Library and BackBox VTC Software in HPE Tandem NonStop OS L-series
Cryptography API: Next Generation (CNG) (requires Windows 7)
Linux's Crypto API
Java 7 HotSpot
Network Security Services (NSS) version 3.13 and above (used by Firefox and Google Chrome)
Solaris Cryptographic Framework on Solaris 10 onwards
FreeBSD's OpenCrypto API (aesni(4) driver)
OpenSSL 1.0.1 and above
GnuTLS
Libsodium
VeraCrypt
Go programming language
BitLocker
A fringe use of the AES instruction set involves using it on block ciphers with a similarly structured S-box, using affine isomorphism to convert between the two. SM4 and Camellia have been accelerated using AES-NI.
See also
Advanced Vector Extensions (AVX)
CLMUL instruction set
FMA instruction set (FMA3, FMA4)
RDRAND
Notes
References
External links
Intel Advanced Encryption Standard Instructions (AES-NI)
AES instruction set whitepaper (2.93 MiB, PDF) from Intel
X86 architecture
X86 instructions
Advanced Micro Devices technologies
Advanced Encryption Standard
30867211
https://en.wikipedia.org/wiki/Demetra%2B
Demetra+
JDemetra+ is a computer program for seasonal adjustment that was developed and published by Eurostat (European Commission). It supports the TRAMO&SEATS and X-12-ARIMA methods of adjustment.
Development
Governance
The Demetra+ project is governed by Eurostat. Unlike other software development carried out under an open source license, the Demetra+ project was not initiated by a community or a single developer, but started as an extension of the active role played by Eurostat (and in particular the SA Steering Group) in the promotion, development and maintenance of a statistical analysis software system. The SA Steering Group, which consists of a Eurostat-ECB high-level group of experts from NSIs and NCBs, had been promoting for several years the development of a freely available Demetra for seasonal adjustment to be used within the ESS. The SA Steering Group is also responsible for facilitating collaboration between separate organizations interested in the development of SA tools and has ultimate control over the whole project. Although the software itself will be made available under an open source license, participation in development is contingent upon the decision of the Steering Group. The development of the software has been outsourced to the Department of Statistics of the National Bank of Belgium (NBB). In addition, the User Testing Group has been set up, with the main task of supervising the implementation of the guidelines and user requirements. The User Testing Group is also responsible for issuing recommendations for new requirements and making decisions on the adoption or rejection of new requirements not in line with the project guidelines. The Demetra+ community has been established on the OSOR environment for reporting and exchange of experience between the members of the User Testing Group itself, as well as for communication with the development team at the NBB.
Extensions
Demetra+ allows developers to write implementations for: time series providers and browsers, a repository for the definitions of SA processing, storage (or further processing) of the results, diagnostics on the SA estimations, summary (reporting) of a complete SA processing, and data formatting (drag/drop and copy/paste).
Features
The technology (object-oriented components) underlying the toolkit has proved suitable for managing the complexity of seasonal adjustment algorithms and integrating the major well-known SA engines provided by the Bank of Spain and the USCB. In addition, it could easily be embedded in many different environments, allowing fast developments and extensions. In parallel with the adoption of the ESS guidelines on SA, the SASG launched a task force on the SA tools users' requirements (February–April 2008) in order to define the functional and non-functional requirements for a new SA tool, DEMETRA+. The role of this community is the common sharing and testing of the new tool DEMETRA+ developed by the NBB.
Description of menu buttons and their functionality
In the Workspace menu of the Main menu the user can create new workspaces, open an existing project in a new window, save the file, activate and deactivate the panels, open recently saved workspaces, and close an open project. The Tools menu is divided into three parts: Container for displaying data; Tool Window for charts and data transformation; and Options for diagnostic and output options that can be set by the user.
The Window menu offers the following functions: Floating, Tabbed, Tile vertically, and Tile horizontally for arranging all windows; Skinning for the graphical appearance of Demetra+; and Documents, which offers some additional options for organising windows.
Description of the workspace layout
The key parts of Demetra+ are: the browsers panel, which presents the available time series; the workspace panel, which shows information used or generated by the software; a central blank zone that will contain the actual analyses; and two auxiliary panels at the bottom of the application: TSProperties contains the current time series (from the browsers' panel) and Logs contains logging information.
Availability and system requirements
The software runs under the Microsoft Windows operating system, and is available for download.
Licensing
European Union Public License (EUPL)
The EUPL is the first European Free/Open Source Software (F/OSS) licence. It was created on the initiative of the European Commission. Following an intensive preparatory process and a public consultation, it was approved by the European Commission on 9 January 2007. The EUPL is available in 22 official languages of the European Union, all with identical legal value.
Notes and references
Introduction to the EUPL licence - X12 specifications - Demetra+ extensions - Development governance -
Interface to the software
JDemetra+ also has an R interface, developed in the package RJDemetra. It can be downloaded from its GitHub page: https://github.com/jdemetra/rjdemetra
Free mathematics software
Information technology organizations based in Europe
Time series software
16777474
https://en.wikipedia.org/wiki/Targeted%20advertising
Targeted advertising
Targeted advertising is a form of advertising, including online advertising, that is directed towards an audience with certain traits, based on the product or person the advertiser is promoting. These traits can either be demographic with a focus on race, economic status, sex, age, generation, level of education, income level, and employment, or psychographic focused on the consumer values, personality, attitude, opinion, lifestyle and interest. This focus can also entail behavioral variables, such as browser history, purchase history, and other recent online activities. Targeted advertising is concentrated in certain traits and consumers who are likely to have a strong preference. These individuals will receive messages instead of those who have no interest and whose preferences do not match a particular product's attributes. This eliminates waste. Traditional forms of advertising, including billboards, newspapers, magazines, and radio channels, are progressively becoming replaced by online advertisements. The Information and communication technology (ICT) space has transformed recently, resulting in targeted advertising stretching across all ICT technologies, such as web, IPTV, and mobile environments. In the next generation's advertising, the importance of targeted advertisements will radically increase, as it spreads across numerous ICT channels cohesively. Through the emergence of new online channels, the need for targeted advertising is increasing because companies aim to minimize wasted advertising by means of information technology. Most targeted new media advertising currently uses second-order proxies for targets, such as tracking online or mobile web activities of consumers, associating historical web page consumer demographics with new consumer web page access, using a search word as the basis of implied interest, or contextual advertising. Types Web services are continually generating new business ventures and revenue opportunities for internet corporations. Companies have rapidly developed technological capabilities that allow them to gather information about web users. By tracking and monitoring what websites users visit, internet service providers can directly show ads that are relative to the consumer's preferences. Most of today's websites are using these targeting technologies to track users' internet behavior and there is much debate over the privacy issues present. Search engine marketing Search engine marketing uses search engines to reach target audiences. For example, Google's Remarketing Campaigns are a type of targeted marketing where advertisers use the IP addresses of computers that have visited their websites to remarket their ad specifically to users who have previously been on their website whilst they browse websites that are a part of the Google display network, or when searching for keywords related to a product or service on the Google search engine. Dynamic remarketing can improve the targeted advertising as the ads are able to include the products or services that the consumers have previously viewed on the advertisers' website within the ads. Google Ads includes different platforms. The Search Network displays the ads on 'Google Search, other Google sites such as Maps and Shopping, and hundreds of non-Google search partner websites that show ads matched to search results'. 
'The Display Network includes a collection of Google websites (like Google Finance, Gmail, Blogger, and YouTube), partner sites, and mobile sites and apps that show adverts from Google Ads matched to the content on a given page.' These two kinds of advertising networks can be beneficial for each specific goal of the company, or type of company. For example, the search network can benefit a company with the goal of reaching consumers actively searching for a particular product or service. Other ways advertising campaigns are able to target the user is to use browser history and search history. For example, if the user types promotional pens into a search engine such as Google, ads for promotional pens will appear at the top of the page above the organic listings. These ads will be geo-targeted to the area of the user's IP address, showing the product or service in the local area or surrounding regions. The higher ad position is often rewarded to the ad having a higher quality score. The ad quality is affected by the 5 components of the quality score: The ad's expected click-through rate The quality of the landing page The ad/search relevance Geographic performance The targeted devices When ranked based on these criteria, it will affect the advertiser by improving ad auction eligibility, the actual cost per click (CPC), ad position, and ad position bid estimates; to summarise, the better the quality score, the better ad position, and lower costs. Google uses its display network to track what users are looking at and to gather information about them. When a user goes to a website that uses the Google display network, it will send a cookie to Google, showing information on the user, what he or she searched, where they are from, found by the IP address, and then builds a profile around them, allowing Google to easily target ads to the user more specifically. For example, if a user went onto promotional companies' websites often, that sell promotional pens, Google will gather data from the user such as age, gender, location, and other demographic information as well as information on the websites visited, the user will then be put into a category of promotional products, allowing Google to easily display ads on websites the user visits relating to promotional products. these types of adverts are also called behavioral advertisements as they track the website behavior of the user and display ads based on previous pages or searched terms. ("Examples Of Targeted Advertising") Social media targeting Social media targeting is a form of targeted advertising, that uses general targeting attributes such as geotargeting, behavioral targeting, socio-psychographic targeting, and gathers the information that consumers have provided on each social media platform. According to the media users' view history, customers who are interested in the criteria will be automatically targeted by the advertisements of certain products or service. For example, Facebook collects massive amounts of user data from surveillance infrastructure on its platforms. Information such as a user's likes, view history, and geographic location is leveraged to micro-target consumers with personalized products. Social media also creates profiles of the consumer and only needs to look at one place, the user's profile, to find all interests and 'likes'. E.g. Facebook lets advertisers target using broad characteristics like gender, age, and location. 
Furthermore, they allow more narrow targeting based on demographics, behavior, and interests (see a comprehensive list of Facebook's different types of targeting options). Television Advertisements can be targeted to specific consumers watching digital cable or over-the-top video. Targeting can be done according to age, gender, location, or personal interests in films, etc. Cable box addresses can be cross-referenced with information from data brokers like Acxiom, Equifax, and Experian, including information about marriage, education, criminal record, and credit history. Political campaigns may also match against public records such as party affiliation and which elections and party primaries the view has voted in. Mobile devices Since the early 2000s, advertising has been pervasive online and more recently in the mobile setting. Targeted advertising based on mobile devices allows more information about the consumer to be transmitted, not just their interests, but their information about their location and time. This allows advertisers to produce advertisements that could cater to their schedule and a more specific changing environment. Content and contextual targeting The most straightforward method of targeting is content/contextual targeting. This is when advertisers put ads in a specific place, based on the relative content present. Another name used is content-oriented advertising, as it is corresponding to the context being consumed. This targeting method can be used across different mediums, for example in an article online, purchasing homes would have an advert associated with this context, like an insurance ad. This is usually achieved through an ad matching system that analyses the contents on a page or finds keywords and presents a relevant advert, sometimes through pop-ups. Though sometimes the ad matching system can fail, as it can neglect to tell the difference between positive and negative correlations. This can result in placing contradictory adverts, which are not appropriate to the content. Technical targeting Technical targeting is associated with the user's own software or hardware status. The advertisement is altered depending on the user's available network bandwidth, for example, if a user is on their mobile phone that has limited connection, the ad delivery system will display a version of the ad that is smaller for a faster data transfer rate. Addressable advertising systems serve ads directly based on demographic, psychographic, or behavioral attributes associated with the consumer(s) exposed to the ad. These systems are always digital and must be addressable in that the endpoint which serves the ad (set-top box, website, or digital sign) must be capable of rendering an ad independently of any other endpoints based on consumer attributes specific to that endpoint at the time the ad is served. Addressable advertising systems, therefore, must use consumer traits associated with the endpoints as the basis for selecting and serving ads. Time targeting According to the Journal of Marketing, more than 1.8 billion clients spent a minimum of 118 minutes daily- via web-based networking media in 2016. Nearly 77% of these clients interact with the content through likes, commenting, and clicking on links related to content. With this astounding buyer trend, it is important for advertisers to choose the right time to schedule content, in order to maximize advertising efficiency. 
To determine what time of day is most effective for scheduling content, it is essential to know when the brain is most effective at retaining memory. Research in chronopsychology indicates that time of day affects diurnal variation in a person's working memory accessibility, and has found that inhibitory processes are activated to improve working memory effectiveness during times of low working memory accessibility. Working memory is known to be vital for language comprehension, learning, and reasoning, providing the capacity to store, retrieve, and process information quickly. For many people, working memory accessibility is highest when they get up toward the beginning of the day, lowest in mid-evening, and moderate at night. Sociodemographic targeting Sociodemographic targeting focuses on the characteristics of consumers. This includes their age, generation, gender, salary, and nationality. The idea is to target users specifically and to use this collected data, for example, targeting a male in the age bracket of 18–24. Facebook and other social media platforms use this form of targeting by showing advertisements relevant to the user's individual demographic on their account; this can show up in the form of banner ads, mobile ads, or commercial videos. Geographical and location-based targeting This type of advertising involves targeting different users based on their geographic location. IP addresses can signal the location of a user and can usually convey the location through ZIP codes. Locations are then stored for users in static profiles, so advertisers can easily target these individuals based on their geographic location. A location-based service (LBS) is a mobile information service that allows spatial and temporal data transmission and can be used to an advertiser's advantage. This data can be harnessed from applications on the device (mobile apps like Uber) that allow access to the location information. This type of targeted advertising focuses on localizing content; for example, a user could be prompted with options for activities in the area, such as places to eat, nearby shops, etc. Although building advertising on consumers' location-based services can improve the effectiveness of ad delivery, it can raise issues with the user's privacy. Behavioral targeting Behavioral targeting is centered around the activity/actions of users, and is more easily achieved on web pages. Information from browsing websites can be collected from data mining, which finds patterns in users' search history. Advertisers using this method believe it produces ads that will be more relevant to users, thus leading consumers to be more likely influenced by them. If a consumer was frequently searching for plane ticket prices, the targeting system would recognize this and start showing related adverts across unrelated websites, such as airfare deals on Facebook. Its advantage is that it can target individuals' interests, rather than target groups of people whose interests may vary. When a consumer visits a web site, the pages they visit, the amount of time they view each page, the links they click on, the searches they make, and the things that they interact with allow sites to collect that data, and other factors, to create a 'profile' that links to that visitor's web browser. As a result, site publishers can use this data to create defined audience segments based upon visitors that have similar profiles.
When visitors return to a specific site or a network of sites using the same web browser, those profiles can be used to allow marketers and advertisers to position their online ads and messaging in front of those visitors who exhibit a greater level of interest and intent for the products and services being offered. Behavioral targeting has emerged as one of the main technologies used to increase the efficiency and profits of digital marketing and advertisements, as media providers are able to provide individual users with highly relevant advertisements. On the theory that properly targeted ads and messaging will fetch more consumer interest, publishers can charge a premium for behaviorally targeted ads and marketers can achieve better response rates. Behavioral marketing can be used on its own or in conjunction with other forms of targeting. Many practitioners also refer to this process as "audience targeting". Major advantages of behavioral marketing are that it helps reach surfers with affinity, reach surfers that were not exposed to a media campaign, contact surfers close to conversion, and reconnect with prospects or customers. Onsite Behavioral targeting may also be applied to any online property on the premise that it either improves the visitor experience or benefits the online property, typically through increased conversion rates or increased spending levels. The early adopters of this technology/philosophy were editorial sites such as HotWired, online advertising using leading online ad servers, and retail or other e-commerce websites, which used it as a technique for increasing the relevance of product offers and promotions on a visitor-by-visitor basis. More recently, companies outside this traditional e-commerce marketplace have started to experiment with these emerging technologies. The typical approach to this starts by using web analytics or behavioral analytics to break down the range of all visitors into a number of discrete channels. Each channel is then analyzed and a virtual profile is created to deal with each channel. These profiles can be based around Personas that give the website operators a starting point in terms of deciding what content, navigation and layout to show to each of the different personas. The practical problem of delivering the profiles correctly is usually solved either by using a specialist content behavioral platform or through bespoke software development. Most platforms identify visitors by assigning a unique ID cookie to each visitor to the site, thereby allowing them to be tracked throughout their web journey; the platform then makes a rules-based decision about what content to serve. Self-learning onsite behavioral targeting systems will monitor visitor response to site content and learn what is most likely to generate a desired conversion event. Suitable content for each behavioral trait or pattern is often established using numerous simultaneous multivariate tests. Onsite behavioral targeting requires a relatively high level of traffic before statistical confidence levels can be reached regarding the probability of a particular offer generating a conversion from a user with a set behavioral profile. Some providers have been able to do so by leveraging their large user base, such as Yahoo!. Some providers use a rules-based approach, allowing administrators to set the content and offers shown to those with particular traits.
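As an illustration of such a rules-based approach, the C sketch below builds a visitor profile from tracked behavior and applies administrator-defined rules to pick an ad. All field names, thresholds, and ad names are invented for illustration and do not describe any particular vendor's system.

#include <stdio.h>

struct visitor_profile {
    const char *cookie_id;      /* unique ID assigned by the tracking cookie */
    int travel_page_views;      /* behavioral signals accumulated across visits */
    int sports_page_views;
    int age_bracket;            /* e.g. 0 = unknown, 1 = 18-24, 2 = 25-34 ... */
};

/* Return the ad to serve, applying administrator-defined rules in order. */
const char *select_ad(const struct visitor_profile *p)
{
    if (p->travel_page_views >= 3)
        return "airfare-deals-banner";      /* retarget frequent flight searchers */
    if (p->sports_page_views >= 5 && p->age_bracket == 1)
        return "sports-streaming-trial";    /* behavior combined with a demographic rule */
    return "run-of-network-default";        /* fallback: untargeted ad */
}

int main(void)
{
    struct visitor_profile visitor = { "example-cookie-id", 4, 1, 1 };
    printf("cookie %s -> serve %s\n", visitor.cookie_id, select_ad(&visitor));
    return 0;
}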
According to research, behavioral targeting provides little benefit at a huge privacy cost: when targeting for gender, the targeted guess is 42% accurate, which is less than a random guess. When targeting for gender and age the accuracy is 24%. Network Advertising networks use behavioral targeting in a different way than individual sites. Since they serve many advertisements across many different sites, they are able to build up a picture of the likely demographic makeup of internet users. Data from a visit to one website can be sent to many different companies, including Microsoft and Google subsidiaries, Facebook, Yahoo, many traffic-logging sites, and smaller ad firms. This data can sometimes be sent to more than 100 websites, and shared with business partners, advertisers, and other third parties for business purposes. The data is collected using cookies, web beacons and similar technologies, and/or third-party ad-serving software, to automatically collect information about site users and site activity. Some servers even record the page that referred you to them, websites you visit after them, which ads you see and which ads you click on. Online advertising uses cookies, a tool used specifically to identify users, as a means of delivering targeted advertising by monitoring the actions of a user on the website. For this purpose, the cookies used are called tracking cookies. An ad network company such as Google uses cookies to deliver advertisements adjusted to the interests of the user, control the number of times that the user sees an ad, and "measure" whether they are advertising a product that matches the customer's preferences. This data is collected without attaching people's names, address, email address or telephone number, but it may include device-identifying information such as the IP address, MAC address, cookie, or other device-specific unique alphanumeric ID of the computer; some stores may also create guest IDs to go along with the data. Cookies are used to control displayed ads and to track browsing activity and usage patterns on sites. This data is used by companies to infer people's age, gender, and possible purchase interests so that they can make customized ads that the user would be more likely to click on. An example would be a user seen on football sites, business sites, and male fashion sites. A reasonable guess would be to assume the user is male. Demographic analyses of individual sites provided either internally (user surveys) or externally (Comscore/Netratings) allow the networks to sell audiences rather than sites. Although advertising networks were used to sell this product, this was based on picking the sites where the audiences were. Behavioral targeting allows them to be slightly more specific about this. Research In the work titled An Economic Analysis of Online Advertising Using Behavioral Targeting, Chen and Stallaert (2014) study the economic implications when an online publisher engages in behavioral targeting. They consider that the publisher auctions off an advertising slot and is paid on a cost-per-click basis. Chen and Stallaert (2014) identify the factors that affect the publisher's revenue, the advertisers' payoffs, and social welfare. They show that revenue for the online publisher in some circumstances can double when behavioral targeting is used.
Increased revenue for the publisher is not guaranteed: in some cases, the prices of advertising and hence the publisher's revenue can be lower, depending on the degree of competition and the advertisers' valuations. They identify two effects associated with behavioral targeting: a competitive effect and a propensity effect. The relative strength of the two effects determines whether the publisher's revenue is positively or negatively affected. Chen and Stallaert (2014) also demonstrate that, although social welfare is increased and small advertisers are better off under behavioral targeting, the dominant advertiser might be worse off and reluctant to switch from traditional advertising. In 2006, BlueLithium (now Yahoo! Advertising), in a large online study, examined the effects of behaviorally targeted advertisements based on contextual content. The study used 400 million "impressions", or advertisements conveyed across behavioral and contextual borders. Specifically, nine behavioral categories (such as "shoppers" or "travelers") with over 10 million "impressions" were observed for patterns across the content. All measures for the study were taken in terms of click-through rates (CTR) and "action-through rates" (ATR), or conversions. So, for every impression that someone gets, the number of times they "click through" to it will contribute to CTR data, and every time they go through with or convert on the advertisement the user adds "action-through" data. Results from the study show that advertisers looking for traffic on their advertisements should focus on behavioral targeting in context. Likewise, if they are looking for conversions on the advertisements, behavioral targeting out of context is the most effective process. The data was helpful in determining an "across-the-board rule of thumb"; however, results fluctuated widely by content categories. Overall results from the researchers indicate that the effectiveness of behavioral targeting is dependent on the goals of the advertiser and the primary target market the advertiser is trying to reach. Privacy and security concerns Many online users and advocacy groups are concerned about privacy issues around this type of targeting, since targeted advertising requires aggregation of large amounts of personal data, including highly sensitive data (such as sexual orientation or sexual preferences, health issues, and location), which is then traded among hundreds of parties in the process of real-time bidding. Unknown to most people, personal data are exchanged without the consent of the owners. Essentially, it is an intrusive breach of privacy to profit from the unregulated exchange of personal data. At the same time, however, personal data, particularly data related to interests and habits, are an essential component of delivering online advertising, which is the financial support of many websites. This is a controversy that the behavioral targeting industry is trying to contain through education, advocacy and product constraints in order to keep all information non-personally identifiable or to obtain permission from end-users. AOL created animated cartoons in 2008 to explain to its users that their past actions may determine the content of ads they see in the future. Canadian academics at the University of Ottawa Canadian Internet Policy and Public Interest Clinic have recently demanded that the federal privacy commissioner investigate online profiling of Internet users for targeted advertising.
The European Commission (via commissioner Meglena Kuneva) has also raised a number of concerns related to online data collection (of personal data), profiling and behavioral targeting, and is looking for "enforcing existing regulation". In October 2009 it was reported that a recent survey carried out by University of Pennsylvania and the Berkeley Center for Law and Technology found that a large majority of US internet users rejected the use of behavioral advertising. Several research efforts by academics and others have demonstrated that data that is supposedly anonymized can be used to identify real individuals. In December 2010, online tracking firm Quantcast agreed to pay $2.4M to settle a class-action lawsuit for their use of 'zombie' cookies to track consumers. These zombie cookies, which were on partner sites such as MTV, Hulu, and ESPN, would re-generate to continue tracking the user even if they were deleted. Other uses of such technology include Facebook, and their use of the Facebook Beacon to track users across the internet, to later use for more targeted advertising. Tracking mechanisms without consumer consent are generally frowned upon; however, tracking of consumer behavior online or on mobile devices are key to digital advertising, which is the financial backbone to most of the internet. In March 2011, it was reported that the online ad industry would begin working with the Council of Better Business Bureaus to start policing itself as part of its program to monitor and regulate how marketers track consumers online, also known as behavioral advertising. Retargeting Retargeting is where advertisers use behavioral targeting to produce ads that follow users after users have looked at or purchased a particular item. An example of this is store catalogs, where stores subscribe customers to their email system after a purchase hoping that they draw attention to more items for continuous purchases. The main example of retargeting that has earned a reputation from most people is ads that follow users across the web, showing them the same items that they have looked at in the hope that they will purchase them. Retargeting is a very effective process; by analysing consumers activities with the brand they can address their consumers' behavior appropriately. Process Advertising provides advertisers with a direct line of communication to existing and prospective consumers. By using a combination of words and/or pictures the general aim of the advertisement is to act as a "medium of information" (David Ogilvy) making the means of delivery and to whom the information is delivered most important. Advertising should define how and when structural elements of advertisements influence receivers, knowing that all receivers are not the same and thus may not respond in a single, similar manner. Targeted advertising serves the purpose of placing particular advertisements before specific groups so as to reach consumers who would be interested in the information. Advertisers aim to reach consumers as efficiently as possible with the belief that it will result in a more effective campaign. By targeting, advertisers are able to identify when and where the ad should be positioned in order to achieve maximum profits. This requires an understanding of how customers' minds work (see also neuromarketing) so as to determine the best channel by which to communicate. 
Types of targeting include, but are not limited to, advertising based on demographics, psychographics, behavioral variables and contextual targeting. Behavioral advertising is the most common form of targeting used online. Internet cookies are sent back and forth between an internet server and the browser, which allows a user to be identified and their progress to be tracked. Cookies provide detail on what pages a consumer visits, the amount of time spent viewing each page, the links clicked on, and the searches and interactions made. From this information, the cookie issuer gathers an understanding of the user's browsing tendencies and interests, generating a profile. By analyzing the profile, advertisers are able to create defined audience segments based upon users who returned similar information and hence have similar profiles. Tailored advertising is then placed in front of the consumer based upon what organizations working on behalf of the advertisers assume are the interests of the consumer. These advertisements are formatted so as to appear on pages and in front of the users to whom they would most likely appeal, based on their profiles. For example, under behavioral targeting, if a user is known to have recently visited a number of automotive shopping and comparison sites based on the data recorded by cookies stored on the user's computer, the user can then be served automotive-related advertisements when visiting other sites. Behavioral advertising is reliant on data both wittingly and unwittingly provided by users and is made up of two different forms: one involving the delivery of advertising based on assessment of users' web movements; the second involving the examination of communication and information as it passes through the gateways of internet service providers. Demographic targeting was the first and most basic form of targeting used online. It involves segmenting an audience into more specific groups using parameters such as gender, age, ethnicity, annual income, parental status etc. All members of the group share a common trait. So, when an advertiser wishes to run a campaign aimed at a specific group of people, that campaign is intended only for the group that has the traits at which the campaign is targeted. Having finalized the advertiser's demographic target, a website or a website section is chosen as a medium because a large proportion of the targeted audience utilizes that form of media. Segmentation using psychographics is based on an individual's personality, values, interests and lifestyle. A recent study concerning what forms of media people use, conducted by the Entertainment Technology Center at the University of Southern California, the Hallmark Channel, and E-Poll Market Research, concludes that a better predictor of media usage is the user's lifestyle. Researchers concluded that while cohorts of these groups may have similar demographic profiles, they may have different attitudes and media usage habits. Psychographics can provide further insight by dividing an audience into specific groups using their personal traits. Acknowledging this, advertisers can begin to target customers, having recognized that factors other than age, for example, provide greater insight into the customer. Contextual advertising is a strategy to place advertisements on media vehicles, such as specific websites or print magazines, whose themes are relevant to the promoted products. Advertisers apply this strategy in order to narrow-target their audiences.
Advertisements are selected and served by automated systems based on the identity of the user and the displayed content of the media. The advertisements will be displayed across the user's different platforms and are chosen based on searches for keywords, appearing as either web page or pop-up ads. It is a form of targeted advertising in which the content of an ad is in direct correlation to the content of the webpage the user is viewing. The major psychographic segments Personality Every brand, service or product has a personality of its own: how it is viewed by the public and the community. Marketers create these personalities to match the personality traits of their target market. Marketers and advertisers create these personalities because when a consumer can relate to the characteristics of a brand, service or product they are more likely to feel connected towards the product and purchase it. Lifestyle Advertisers are aware that different people lead different lives, have different lifestyles and different wants and needs at different times in their consumers' lives, and thus individual differences can be compensated for. Advertisers who base their segmentation on psychographic characteristics promote their product as the solution to these wants and needs. Segmentation by lifestyle considers where the consumer is in their life cycle and which preferences are associated with that life stage. Opinions, attitudes, interests and hobbies Psychographic segmentation also includes opinions on religion, gender and politics, sporting and recreational activities, views on the environment and arts and cultural issues. The views that the market segments hold and the activities they participate in will have an impact on the products and services they purchase and will affect how they respond to the message. Alternatives to behavioral advertising and psychographic targeting include geographic targeting and demographic targeting. When advertisers want to efficiently reach as many consumers as possible, they use a six-step process. The first step is to identify the objectives; advertisers do this by setting benchmarks, identifying products or proposals, and identifying core values and strategic objectives. This step also includes listing and monitoring competitors' content and creating objectives for the next 12–18 months. The second step, understanding buyers, is all about identifying what types of buyers the advertiser wants to target and identifying the buying process for the consumers. The third step, identifying gaps, is key as it illustrates all of the gaps in the content and finds what is important for the buying process and the stages of the content. In the fourth step content is created; this is the stage where the key messages are identified and the quality benchmark is discussed. The fifth step, organizing distribution, is key to maximizing the potential of the content; channels can be social media, blogs or Google display networks. The last step is vital for an advertiser, as they need to measure the return on investment (ROI); there are multiple ways to measure performance, such as tracking web traffic, sales lead quality, and/or social media sharing. Alternatives to behavioral advertising include audience targeting, contextual targeting, and psychographic targeting. Effectiveness Targeting aims to improve the effectiveness of advertising and reduce the wastage created by sending advertising to consumers who are unlikely to purchase that product. Targeted advertising or improved targeting may lead to lower advertising costs and expenditures.
The effects of advertising on society and those targeted are all implicitly underpinned by consideration of whether advertising compromises autonomous choice. Those arguing for the ethical acceptability of advertising claim either that, because of the commercially competitive context of advertising, the consumer has a choice over what to accept and what to reject. Humans have the cognitive competence and are equipped with the necessary faculties to decide whether to be affected by adverts. Those arguing against note, for example, that advertising can make us buy things we do not want or that, as advertising is enmeshed in a capitalist system, it only presents choices based on consumerist-centered reality thus limiting the exposure to non-materialist lifestyles. Although the effects of target advertising are mainly focused on those targeted it also has an effect on those not targeted. Its unintended audiences often view an advertisement targeted at other groups and start forming judgments and decisions regarding the advertisement and even the brand and company behind the advertisement, these judgments may affect future consumer behavior. The Network Advertising Initiative conducted a study in 2009 measuring the pricing and effectiveness of targeted advertising. It revealed that targeted advertising: Secured an average of 2.7 times as much revenue per ad as non-targeted "run of network" advertising. Twice as effective at converting users who click on the ads into buyers However, other studies show that targeted advertising, at least by gender, is not effective. One of the major difficulties in measuring the economic efficiency of targeting, however, is being able to observe what would have happened in the absence of targeting since the users targeted by advertisers are more likely to convert than the general population. Farahat and Bailey exploit a large-scale natural experiment on Yahoo! allowing them to measure the true economic impact of targeted advertising on brand searches and clicks. They find, assuming the cost per 1000 ad impressions (CPM) is $1, that: The marginal cost of a brand-related search resulting from ads is $15.65 per search, but is only $1.69 per search from a targeted campaign. The marginal cost of a click is 72 cents, but only 16 cents from a targeted campaign. The variation in CTR lifts from targeted advertising campaigns is mostly determined by pre-existing brand interest. Research shows that Content marketing in 2015 generates 3 times as many leads as traditional outbound marketing, but costs 62% less showing how being able to advertise to targeted consumers is becoming the ideal way to advertise to the public. As other stats show how 86% of people skip television adverts and 44% of people ignore direct mail, which also displays how advertising to the wrong group of people can be a waste of resources. Benefits and disadvantages Benefits Proponents of targeted advertising argue that there are advantages for both consumers and advertisers: Consumers Targeted advertising benefits consumers because advertisers are able to effectively attract consumers by using their purchasing and browsing habits this enables ads to be more apparent and useful for customers. Having ads that are related to the interests of the consumers allow the message to be received in a direct manner through effective touchpoints. 
An example of how targeted advertising is beneficial to consumers is that if someone sees an ad targeted to them for something similar to an item they have previously viewed online and were interested in, they are more likely to buy it. Consumers can benefit from targeted advertising in the following ways: More effective delivery of the desired product or service directly to the consumer: having assumed the traits or interests of the consumer from their targeting, advertisements that will appeal to and engage the customer are used. More direct delivery of a message that relates to the consumer's interest: advertisements are delivered to the customer in a manner that is comfortable, whether it be jargon or a certain medium; the delivery of the message is part of the consumer's 'lifestyle'. Intelligence agencies Intelligence agencies worldwide can more easily, and without exposing their personnel to the risks of HUMINT, track targets at sensitive locations such as military bases or training camps by simply purchasing location data from commercial providers who collect it from mobile devices with geotargeting enabled that are used by the operatives present at these places. Advertiser The benefits to advertisers of targeted advertising are reduced resource costs and the creation of more effective ads that attract consumers with a strong appeal to these products. Targeted advertising reduces the cost of advertising by minimizing "wasted" advertisements shown to non-interested consumers. Targeted advertisements captivate the attention of the consumers they were aimed at, resulting in a higher return on investment for the company. Because behavioral advertising enables advertisers to more easily determine user preferences and purchasing habits, the ads will be more pertinent and useful for consumers. By creating a more efficient and effective manner of advertising to the consumer, an advertiser benefits greatly and in the following ways: More efficient campaign development: by having information about the consumer, an advertiser is able to make more concise decisions on how to best communicate with them. Better use of advertising dollars: a greater understanding of the targeted audience will allow an advertiser to achieve better results with an advertising campaign. Increased return on investment: targeted advertisements will yield higher results for lower costs. Using information from consumers can benefit the advertiser by developing a more efficient campaign; targeted advertising is proven to work both effectively and efficiently. Advertisers don't want to waste time and money advertising to the "wrong people". Through technological advances, the internet has allowed advertisers to target consumers beyond the capabilities of traditional media, and to target a significantly larger audience. The main advantage of using targeted advertising is how it can help minimize wasted advertising by using detailed information about the individuals for whom a product is intended. If consumers are shown ads that are targeted at them, it is more likely they will be interested and click on them. 'Know thy consumer' is a simple principle used by advertisers; when businesses know information about consumers, it can be easier to target them and get them to purchase the product. Some consumers do not mind if their information is used, and are more accepting of ads with easily accessible links. This is because they may appreciate adverts tailored to their preferences, rather than just generic ads.
They are more likely to be directed to products they want, and possibly purchase them, in return generating more income for the business advertising. Disadvantages Consumers Targeted advertising raises privacy concerns. Targeted advertising is performed by analyzing consumers' activities through online services such as HTTP cookies and data mining, both of which can be seen as detrimental to consumers' privacy. Marketers research consumers' online activity for targeted advertising campaigns like programmatic and SEO. Consumers' privacy concerns revolve around today's unprecedented tracking capabilities and whether to trust their trackers. Consumers may feel uncomfortable with sites knowing so much about their activity online. Targeted advertising aims to increase promotions' relevance to potential buyers, delivering ad campaign executions to specified consumers at critical stages in the buying decision process. This potentially limits a consumer's awareness of alternatives and reinforces selective exposure. Consumers may start avoiding certain sites and brands if they keep getting served the same advertisements as the consumer may feel like they are being watched too much or may start getting annoyed with certain brands. Due to the increased use of tracking cookies all over the web many sites now have cookie notices that pop up when a visitor lands on a site. The notice informs the visitor about the use of cookies, how they affect the visitor, and the visitor's options in regards to what information the cookies can obtain. Advertiser Targeting advertising is not a process performed overnight, it takes time and effort to analyze the behavior of consumers. This results in more expenses than the traditional advertising processes. As targeted advertising is seen more effective this is not always a disadvantage but there are cases where advertisers have not received the profit expected. Targeted advertising has a limited reach to consumers, advertisers are not always aware that consumers change their minds and purchases which will no longer mean ads are apparent to them. Another disadvantage is that while using cookies to track activity advertisers are unable to depict whether 1 or more consumers are using the same computer. This is apparent in family homes where multiple people from a broad age range are using the same device. Controversies Targeted advertising has raised controversies, most particularly towards the privacy rights and policies. With behavioral targeting focusing in on specific user actions such as site history, browsing history, and buying behavior, this has raised user concern that all activity is being tracked. Privacy International is a UK based registered charity that defends and promotes the right to privacy across the world. This organization is fighting in order to make Governments legislate in a way that protects the rights of the general public. According to them, from any ethical standpoint such interception of web traffic must be conditional on the basis of explicit and informed consent. And action must be taken where organizations can be shown to have acted unlawfully. A survey conducted in the United States by the Pew Internet & American Life Project between January 20 and February 19, 2012, revealed that most of Americans are not in favor of targeted advertising, seeing it as an invasion of privacy. Indeed, 68% of those surveyed said they are "not okay" with targeted advertising because they do not like having their online behavior tracked and analyzed. 
Another issue with targeted advertising is the lack of 'new' advertisements of goods or services. Since all ads are tailored to user preferences, no different products will be introduced to the consumer. Hence, in this case the consumer is at a loss, as they are not exposed to anything new. Advertisers concentrate their resources on the consumer, which can be very effective when done right. When advertising does not work well, consumers can find it creepy and start wondering how the advertiser learned the information about them. Consumers can have concerns over ads targeted at them which are basically too personal for comfort, feeling a need for control over their own data. In targeted advertising, privacy is a complicated issue due to the type of protected user information and the number of parties involved. The three main parties involved in online advertising are the advertiser, the publisher, and the network. People tend to want to keep their previously browsed websites private, although users' clickstreams are being transferred to advertisers who work with ad networks. The user's preferences and interests are visible through their clickstream, and their behavioral profile is generated. Many find this form of advertising concerning and see these tactics as manipulative and discriminatory. As a result, a number of methods have been introduced to avoid advertising. Internet users employing ad blockers are rapidly growing in numbers. The average global ad-blocking rate in early 2018 was estimated at 27 percent. Greece is at the top of the list, with more than 40% of internet users admitting to using ad-blocking software; among technical populations ad-blocking reaches 58%. See also Behavioral retargeting Behavioral targeting case law: In re DoubleClick FTC regulation of behavioral advertising Cross-device tracking Deradicalization Digital self-determination Digital traces Forensic profiling Internet manipulation Personalization Personalized marketing Reality mining Surveillance capitalism References Further reading Ahmad, K., & Begen, A. C. (2009). IPTV and video networks in the 2015 timeframe: The evolution to medianets. Communications Magazine, pp. 68–74. Retrieved from http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5350371 "Benefits Of Targeted Advertisements - Increase ROI With Targeted Ads". eReach Consulting. N.p., 2013. Web. 1 Apr. 2016. Constantinides, E. (2006). Journal of marketing management, vol 22, issue 3/4, pp. 407–438. Enschede, The Netherlands. Digital Advertising Alliance (DAA) Self-Regulatory Program | www.aboutads.info. (2016). Aboutads.info. Retrieved 29 March 2016, from http://www.aboutads.info/ Juels, A. (2011). Targeted advertising… and privacy too. Springer Berlin Heidelberg Konow, R., Tan, W., Loyola, L., Pereira, J., Baloian, N. (2010). Recommender system for contextual advertising in IPTV scenarios, pp. 617–622. Retrieved from http://allm.net/wp-content/uploads/2014/10/rd2010_02_CSCWD2010.pdf Kotler, P., Burton, S., Brown, L. & Armstrong, G. (2012). Marketing (9th ed.) Australia: Pearson Australia McCarthy, E.J. (1964). Basic marketing, a managerial approach. Homewood: Richard D. Irwin, Inc Star turn. (2000). The Economist. Retrieved 29 March 2016, from http://www.economist.com/node/330628 Stern, B. J., & Subramaniam, G. K. (2006). Method and system for user to user targeted advertising. U.S. Patent Application No. 11/455,561. 
Suli, J (2017) How To Use Facebook To Get Targeted Traffic "Use Remarketing To Reach Past Website Visitors And App Users - Adwords Help". Support.google.com. N.p., 2016. Web. 1 Apr. 2016. Waechter, S. (2010). Contextual advertising in online communication: an investigation of relationships between multiple content types on a webpage. Auckland University of Technology Advertising Advertising techniques Online advertising methods Marketing by target group Market segmentation Promotion and marketing communications Online advertising
2780
https://en.wikipedia.org/wiki/Atari%205200
Atari 5200
The Atari 5200 SuperSystem or simply Atari 5200 is a home video game console introduced in 1982 by Atari, Inc. as a higher-end complement for the popular Atari Video Computer System. The VCS was renamed to the Atari 2600 at the time of the 5200's launch. Created to compete with the Intellivision, the 5200 wound up as a direct competitor of the ColecoVision shortly after its release. While the Coleco system shipped with the first home version of Nintendo's Donkey Kong, the 5200 included the 1978 arcade game Super Breakout, which had already appeared on the Atari 8-bit family and Atari VCS in 1979 and 1981 respectively. The CPU and the graphics and sound hardware are almost identical to those of the Atari 8-bit computers, although software is not directly compatible between the two systems. The 5200's controllers have an analog joystick and a numeric keypad along with start, pause, and reset buttons. The 360-degree non-centering joystick was touted as offering more control than the eight-way Atari CX40 joystick of the 2600, but was a focal point for criticism. On May 21, 1984, during a press conference at which the Atari 7800 was introduced, company executives revealed that the 5200 had been discontinued after just two years on the market. Total sales of the 5200 were reportedly in excess of 1 million units, far short of its predecessor's sales of over 30 million. Hardware Much of the technology in the Atari 8-bit family of home computer systems was originally developed for a second-generation games console intended to replace the 2600. However, as the system was reaching completion, the personal computer revolution was starting with the release of machines like the Commodore PET, TRS-80 and Apple II. These machines had less advanced hardware than the new Atari technology, but sold for much higher prices with associated higher profit margins. Atari's management decided to enter this market, and the technology was repackaged into the Atari 400 and 800. The chipset used in these machines was created with the mindset that the 2600 would likely be obsolete by the 1980 time frame. Atari later decided to re-enter the games market with a design that closely matched their original 1978 specifications. In its prototype stage, the Atari 5200 was originally called the "Atari Video System X – Advanced Video Computer System", and was codenamed "Pam" after a female employee at Atari, Inc. It is also rumored that PAM actually stood for "Personal Arcade Machine", as the majority of games for the system ended up being arcade conversions. Actual working Atari Video System X machines, whose hardware is 100% identical to that of the Atari 5200, do exist but are extremely rare. The initial 1982 release of the system featured four controller ports, where nearly all other systems of the day had only one or two ports. The 5200 also featured a new style of controller with an analog joystick, numeric keypad, two fire buttons on each side of the controller and game function keys for Start, Pause, and Reset. The 5200 also featured the innovation of the first automatic TV switchbox, allowing it to automatically switch from regular TV viewing to the game system signal when the system was activated. Previous RF adapters required the user to slide a switch on the adapter by hand. The RF box was also where the power supply connected, in a unique dual power/television signal setup similar to the RCA Studio II's. A single cable coming out of the 5200 plugged into the switch box and carried both electricity and the television signal. 
The 1983 revision of the Atari 5200 has two controller ports instead of four, and a change back to the more conventional separate power supply and standard non-autoswitching RF switch. It also has changes in the cartridge port address lines to allow for the Atari 2600 adapter released that year. While the adapter was only made to work on the two-port version, modifications can be made to the four-port version to make it line-compatible. In fact, towards the end of the four-port model's production run, there were a limited number of consoles produced which included these modifications. These consoles can be identified by an asterisk in their serial numbers. At one point following the 5200's release, Atari planned a smaller, cost-reduced version of the Atari 5200, which removed the controller storage bin. Code-named the "Atari 5100" (a.k.a. "Atari 5200 Jr."), only a few fully working prototype 5100s were made before the project was canceled. Controllers The controller prototypes used in the electrical development lab employed a yoke-and-gimbal mechanism that came from an RC airplane controller kit. The design of the analog joystick, which used a weak rubber boot rather than springs to provide centering, proved to be ungainly and unreliable. The controllers quickly became the Achilles' heel of the system due to the combination of an overly complex mechanical design and a very low-cost internal flex circuit system. Another major flaw of the controllers was that the design did not translate into linear acceleration from the center through the arc of the stick travel. The controllers did, however, include a pause button, a unique feature at the time. Various third-party replacement joysticks were also released, including those made by Wico. Atari Inc. released the Pro-Line Trak-Ball controller for the system, which was used primarily for gaming titles such as Centipede and Missile Command. A paddle controller and an updated self-centering version of the original controller were also in development, but never made it to market. Games were shipped with plastic card overlays that snapped in over the keypad. The card would indicate which game functions, such as changing the view or vehicle speed, were assigned to each key. The primary controller was ranked the 10th-worst video game controller by IGN editor Craig Harris. An editor for Next Generation said that their non-centering joysticks "rendered many games nearly unplayable". Internal differences from 8-bit computers David H. Ahl in 1983 described the Atari 5200 as "a 400 computer in disguise". Its internal design is a tweaked version of the Atari 8-bit family using the ANTIC, POKEY, and GTIA coprocessors. Software designed for one does not run on the other, but source code can be mechanically converted unless it uses computer-specific features. Antic magazine reported in 1984 that "the similarities grossly outweigh the differences, so that a 5200 program can be developed and almost entirely debugged [on an Atari 8-bit computer] before testing on a 5200". John J. Anderson of Creative Computing alluded to the incompatibility being intentional, caused by Atari's console division removing 8-bit compatibility so as not to lose control to the rival computer division. Besides the 5200's lack of a keyboard, the differences are: The Atari computer 10 KB operating system is replaced with a simpler 2 KB version, of which 1 KB is the built-in character set. Some hardware registers, such as those of the GTIA and POKEY chips, are at different memory locations. 
The purpose of some registers is slightly different on the 5200. The 5200's analog joysticks appear as pairs of paddles to the hardware, which requires different input handling from the digital joystick input on the Atari computers. In 1987, Atari Corporation released the XE Game System console, which is a repackaged 65XE (from 1985) with a detachable keyboard that can run home computer titles directly, unlike the 5200. Anderson wrote in 1984 that Atari could have released a console compatible with computer software in 1981. Reception The Atari 5200 did not fare well commercially compared to its predecessor, the Atari 2600. While it touted superior graphics to the 2600 and Mattel's Intellivision, the system was initially incompatible with the 2600's expansive library of games, and some market analysts have speculated that this hurt its sales, especially since an Atari 2600 cartridge adapter had been released for the Intellivision II. (A revised two-port model was released in 1983, along with a game adapter that allowed gamers to play all 2600 games.) This lack of new games was due in part to a lack of funding, with Atari continuing to develop most of its games for the saturated 2600 market. Many of the 5200's games appeared simply as updated versions of 2600 titles, which failed to excite consumers. Its pack-in game, Super Breakout, was criticized for not doing enough to demonstrate the system's capabilities. This gave the ColecoVision a significant advantage, as its pack-in, Donkey Kong, delivered a more authentic arcade experience than any previous game cartridge. In its list of the top 25 game consoles of all time, IGN claimed that the main reason for the 5200's market failure was the technological superiority of its competitor, while other sources maintain that the two consoles are roughly equivalent in power. The 5200 received much criticism for the "sloppy" design of its non-centering analog controllers. Anderson described the controllers as "absolutely atrocious". David H. Ahl of Creative Computing Video & Arcade Games said in 1983 that the "Atari 5200 is, dare I say it, Atari's answer to Intellivision, Colecovision, and the Astrocade", describing the console as a "true mass market" version of the Atari 8-bit computers despite the software incompatibility. He criticized the joystick's imprecise control but said that "it is at least as good as many other controllers", and wondered why Super Breakout was the pack-in game when it did not use the 5200's improved graphics. Technical specifications CPU: Custom MOS Technology 6502C @ 1.79 MHz (not a 65C02) Graphics chips: ANTIC and GTIA Support hardware: 3 custom VLSI chips Screen resolution: 14 modes: six text modes (8×8, 4×8, and 8×10 character matrices supported), eight graphics modes including 80 pixels per line (16 color), 160 pixels per line (4 color), 320 pixels per line (2 color), variable height and width up to overscan 384×240 pixels Color palette: 128 (16 hues, 8 luma) or 256 (16 hues, 16 luma) Colors on screen: 2 (320 pixels per line) to 16 (80 pixels per line); up to 23 colors per line with player/missile and playfield priority control mixing. Register values can be changed at every scanline using ANTIC display list interrupts, allowing up to 256 (16 hues, 16 luma) to be displayed at once, with up to 16 per scanline. Sprites: 4 8-pixel-wide sprites, 4 2-pixel-wide sprites; height of each is either 128 or 256 pixels; 1 color per sprite Scrolling: Coarse and fine scrolling horizontally and vertically. 
(Horizontal coarse scroll in 4-, 8-, or 16-pixel/color clock increments, and vertically by mode line height of 2, 4, 8, or 16 scan lines.) (Or horizontal fine scroll of 0 to 3, 7, or 15 single-pixel/color clock increments and then a 4-, 8-, or 16-pixel/color clock increment coarse scroll; and vertical fine scroll of 0 to 1, 3, 7, or 15 scan line increments and then a 2-, 4-, 8-, or 16-scan-line increment coarse scroll.) Sound: 4-channel PSG sound via POKEY sound chip, which also handles keyboard scanning, serial I/O, high-resolution interrupt-capable timers (single-cycle accurate), and random number generation. RAM: 16 KB ROM: 2 KB on-board BIOS for system startup and interrupt routing; 32 KB ROM window for standard game cartridges, expandable using bank switching techniques. Dimensions: 13" × 15" × 4.25" Games See also List of Atari 5200 emulators Video game crash of 1983 References External links AtariAge – Comprehensive Atari 5200 database and information Atari Museum 5200 Super System section 5200 Home video game consoles Second-generation video game consoles Products introduced in 1982
1305947
https://en.wikipedia.org/wiki/3D%20printing
3D printing
3D printing, or additive manufacturing, is the construction of a three-dimensional object from a CAD model or a digital 3D model. The term "3D printing" can refer to a variety of processes in which material is deposited, joined or solidified under computer control to create a three-dimensional object, with material being added together (such as plastics, liquids or powder grains being fused together), typically layer by layer. In the 1980s, 3D printing techniques were considered suitable only for the production of functional or aesthetic prototypes, and a more appropriate term for the technology at the time was rapid prototyping. Since then, the precision, repeatability, and material range of 3D printing have increased to the point that some 3D printing processes are considered viable as an industrial-production technology, whereby the term additive manufacturing can be used synonymously with 3D printing. One of the key advantages of 3D printing is the ability to produce very complex shapes or geometries that would be otherwise impossible to construct by hand, including hollow parts or parts with internal truss structures to reduce weight. Fused deposition modeling (FDM), which uses a continuous filament of a thermoplastic material, is the most common 3D printing process in use. Terminology The umbrella term additive manufacturing (AM) gained popularity in the 2000s, inspired by the theme of material being added together (in any of various ways). In contrast, the term subtractive manufacturing appeared as a retronym for the large family of machining processes with material removal as their common theme. The term 3D printing still referred only to the polymer technologies in most minds, and the term AM was more likely to be used in metalworking and end-use part production contexts than among polymer, inkjet, or stereolithography enthusiasts. Inkjet was the least familiar technology, even though it had been invented in 1950, and was poorly understood because of its complex nature. The earliest inkjets were used as recorders and not printers. As late as the 1970s the term recorder was associated with inkjet. Continuous inkjet later evolved into on-demand or drop-on-demand inkjet. Inkjets were single-nozzle at the start; they may now have as many as thousands of nozzles for printing in each pass over a surface. By the early 2010s, the terms 3D printing and additive manufacturing evolved senses in which they were alternate umbrella terms for additive technologies, one being used in popular language by consumer-maker communities and the media, and the other used more formally by industrial end-use part producers, machine manufacturers, and global technical standards organizations. Until recently, the term 3D printing has been associated with machines low in price or in capability. 3D printing and additive manufacturing reflect that the technologies share the theme of material addition or joining throughout a 3D work envelope under automated control. Peter Zelinski, the editor-in-chief of Additive Manufacturing magazine, pointed out in 2017 that the terms are still often synonymous in casual usage, but some manufacturing industry experts are trying to make a distinction whereby additive manufacturing comprises 3D printing plus other technologies or other aspects of a manufacturing process. 
Other terms that have been used as synonyms or hypernyms have included desktop manufacturing, rapid manufacturing (as the logical production-level successor to rapid prototyping), and on-demand manufacturing (which echoes on-demand printing in the 2D sense of printing). Such application of the adjectives rapid and on-demand to the noun manufacturing was novel in the 2000s and reveals the prevailing mental model of the long industrial era in which almost all production manufacturing involved long lead times for laborious tooling development. Today, the term subtractive has not replaced the term machining, instead complementing it when a term that covers any removal method is needed. Agile tooling is the use of modular means to design tooling that is produced by additive manufacturing or 3D printing methods to enable quick prototyping and responses to tooling and fixture needs. Agile tooling uses a cost-effective and high-quality method to quickly respond to customer and market needs, and it can be used in hydro-forming, stamping, injection molding and other manufacturing processes. History 1940s and 1950s The general concept of and procedure to be used in 3D printing was first described by Murray Leinster in his 1945 short story Things Pass By: "But this constructor is both efficient and flexible. I feed magnetronic plastics — the stuff they make houses and ships of nowadays — into this moving arm. It makes drawings in the air following drawings it scans with photo-cells. But plastic comes out of the end of the drawing arm and hardens as it comes ... following drawings only" (M. Leinster, Things Pass By, in The Earth In Peril, D. Wollheim ed., Ace Books 1957, USA, List of Ace SF double titles D-205, p. 25; story copyright 1945 by Standard Magazines Inc.). It was also described by Raymond F. Jones in his story "Tools of the Trade," published in the November 1950 issue of Astounding Science Fiction magazine. He referred to it as a "molecular spray" in that story. 1970s In 1971, Johannes F Gottwald patented the Liquid Metal Recorder, a continuous inkjet metal material device to form a removable metal fabrication on a reusable surface for immediate use or salvaged for printing again by remelting. This appears to be the first patent describing 3D printing with rapid prototyping and controlled on-demand manufacturing of patterns. The patent states "As used herein the term printing is not intended in a limited sense but includes writing or other symbols, character or pattern formation with an ink. The term ink as used in is intended to include not only dye or pigment-containing materials, but any flowable substance or composition suited for application to the surface for forming symbols, characters, or patterns of intelligence by marking. The preferred ink is of a Hot melt type. The range of commercially available ink compositions which could meet the requirements of the invention are not known at the present time. However, satisfactory printing according to the invention has been achieved with the conductive metal alloy as ink." "But in terms of material requirements for such large and continuous displays, if consumed at theretofore known rates, but increased in proportion to increase in size, the high cost would severely limit any widespread enjoyment of a process or apparatus satisfying the foregoing objects." "It is therefore an additional object of the invention to minimize use to materials in a process of the indicated class." 
"It is a further object of the invention that materials employed in such a process be salvaged for reuse." "According to another aspect of the invention, a combination for writing and the like comprises a carrier for displaying an intelligence pattern and an arrangement for removing the pattern from the carrier." In 1974, David E. H. Jones laid out the concept of 3D printing in his regular column Ariadne in the journal New Scientist. 1980s Early additive manufacturing equipment and materials were developed in the 1980s. In April 1980, Hideo Kodama of Nagoya Municipal Industrial Research Institute invented two additive methods for fabricating three-dimensional plastic models with photo-hardening thermoset polymer, where the UV exposure area is controlled by a mask pattern or a scanning fiber transmitter. He filed a patent for this XYZ plotter, which was published on 10 November 1981. (JP S56-144478). His research results as journal papers were published in April and November in 1981.Hideo Kodama, "Automatic method for fabricating a three-dimensional plastic model with photo-hardening polymer," Review of Scientific Instruments, Vol. 52, No. 11, pp. 1770โ€“73, November 1981 However, there was no reaction to the series of his publications. His device was not highly evaluated in the laboratory and his boss did not show any interest. His research budget was just 60,000 yen or $545 a year. Acquiring the patent rights for the XYZ plotter was abandoned, and the project was terminated. A Patent US 4323756, Method of Fabricating Articles by Sequential Deposition, Raytheon Technologies Corp granted 6 April 1982 using hundreds or thousands of 'layers' of powdered metal and a laser energy source is an early reference to forming "layers" and the fabrication of articles on a substrate. On 2 July 1984, American entrepreneur Bill Masters filed a patent for his Computer Automated Manufacturing Process and System (US 4665492). This filing is on record at the USPTO as the first 3D printing patent in history; it was the first of three patents belonging to Masters that laid the foundation for the 3D printing systems used today. On 16 July 1984, Alain Le Mรฉhautรฉ, Olivier de Witte, and Jean Claude Andrรฉ filed their patent for the stereolithography process. The application of the French inventors was abandoned by the French General Electric Company (now Alcatel-Alsthom) and CILAS (The Laser Consortium). The claimed reason was "for lack of business perspective". In 1983, Robert Howard started R.H. Research, later named Howtek, Inc. in Feb 1984 to develop a color inkjet 2D printer, Pixelmaster, commercialized in 1986, using Thermoplastic (hot-melt) plastic ink. A team was put together, 6 members from Exxon Office Systems, Danbury Systems Division, an inkjet printer startup and some members of Howtek, Inc group who became popular figures in the 3D printing industry. One Howtek member, Richard Helinski (patent US5136515A, Method and Means for constructing three-dimensional articles by particle deposition, application 11/07/1989 granted 8/04/1992) formed a New Hampshire company C.A.D-Cast, Inc, name later changed to Visual Impact Corporation (VIC) on 8/22/1991. A prototype of the VIC 3D printer for this company is available with a video presentation showing a 3D model printed with a single nozzle inkjet. 
Another employee, Herbert Menhennett, formed a New Hampshire company, HM Research, in 1991 and introduced the Howtek, Inc, inkjet technology and thermoplastic materials to Royden Sanders of SDI and Bill Masters of Ballistic Particle Manufacturing (BPM), where he worked for a number of years. Both BPM 3D printers and SPI 3D printers use Howtek, Inc style inkjets and Howtek, Inc style materials. Royden Sanders licensed the Helinski patent prior to manufacturing the Modelmaker 6 Pro at Sanders Prototype, Inc (SPI) in 1993. James K. McMahon, who was hired by Howtek, Inc to help develop the inkjet, later worked at Sanders Prototype and now operates Layer Grown Model Technology, a 3D service provider specializing in Howtek single-nozzle inkjet and SDI printer support. James K. McMahon worked with Steven Zoltan, the 1972 drop-on-demand inkjet inventor, at Exxon, and has a 1978 patent that expanded the understanding of single-nozzle design inkjets (Alpha jets) and helped perfect the Howtek, Inc hot-melt inkjets. This Howtek hot-melt thermoplastic technology is popular with metal investment casting, especially in the 3D printing jewelry industry. Sanders' (SDI's) first Modelmaker 6 Pro customer was Hitchner Corporation's Metal Casting Technology, Inc in Milford, NH, a mile from the SDI facility, casting golf clubs and auto engine parts from late 1993 to 1995. On 8 August 1984, a patent, US4575330, assigned to UVP, Inc. and later assigned to Chuck Hull of 3D Systems Corporation, was filed, his own patent for a stereolithography fabrication system, in which individual laminae or layers are added by curing photopolymers with impinging radiation, particle bombardment, chemical reaction or just ultraviolet light lasers. Hull defined the process as a "system for generating three-dimensional objects by creating a cross-sectional pattern of the object to be formed". Hull's contribution was the STL (stereolithography) file format and the digital slicing and infill strategies common to many processes today. In 1986, Charles "Chuck" Hull was granted a patent for this system, and his company, 3D Systems Corporation, was formed and released the first commercial 3D printer, the SLA-1, later in 1987 or 1988. The technology used by most 3D printers to date, especially hobbyist and consumer-oriented models, is fused deposition modeling, a special application of plastic extrusion, developed in 1988 by S. Scott Crump and commercialized by his company Stratasys, which marketed its first FDM machine in 1992. Owning a 3D printer in the 1980s cost upwards of $300,000 ($650,000 in 2016 dollars). 1990s AM processes for metal sintering or melting (such as selective laser sintering, direct metal laser sintering, and selective laser melting) usually went by their own individual names in the 1980s and 1990s. At the time, all metalworking was done by processes that are now called non-additive (casting, fabrication, stamping, and machining); although plenty of automation was applied to those technologies (such as by robot welding and CNC), the idea of a tool or head moving through a 3D work envelope transforming a mass of raw material into a desired shape with a toolpath was associated in metalworking only with processes that removed metal (rather than adding it), such as CNC milling, CNC EDM, and many others. But the automated techniques that added metal, which would later be called additive manufacturing, were beginning to challenge that assumption. 
By the mid-1990s, new techniques for material deposition were developed at Stanford and Carnegie Mellon University, including microcasting and sprayed materials. Sacrificial and support materials had also become more common, enabling new object geometries. The term 3D printing originally referred to a powder bed process employing standard and custom inkjet print heads, developed at MIT by Emanuel Sachs in 1993 and commercialized by Soligen Technologies, Extrude Hone Corporation, and Z Corporation. The year 1993 also saw the start of an inkjet 3D printer company initially named Sanders Prototype, Inc and later named Solidscape, introducing a high-precision polymer jet fabrication system with soluble support structures (categorized as a "dot-on-dot" technique). In 1995 the Fraunhofer Society developed the selective laser melting process. 2000s Fused deposition modeling (FDM) printing process patents expired in 2009. 2010s As the various additive processes matured, it became clear that soon metal removal would no longer be the only metalworking process done through a tool or head moving through a 3D work envelope, transforming a mass of raw material into a desired shape layer by layer. The 2010s were the first decade in which metal end-use parts such as engine brackets and large nuts would be grown (either before or instead of machining) in job production rather than obligately being machined from bar stock or plate. It is still the case that casting, fabrication, stamping, and machining are more prevalent than additive manufacturing in metalworking, but AM is now beginning to make significant inroads, and with the advantages of design for additive manufacturing, it is clear to engineers that much more is to come. One place where AM is making significant inroads is the aviation industry. With nearly 3.8 billion air travelers in 2016, the demand for fuel-efficient and easily produced jet engines has never been higher. For large OEMs (original equipment manufacturers) like Pratt and Whitney (PW) and General Electric (GE), this means looking towards AM as a way to reduce cost, reduce the number of nonconforming parts, reduce weight in the engines to increase fuel efficiency, and find new, highly complex shapes that would not be feasible with older manufacturing methods. One example of AM integration with aerospace was in 2016, when Airbus was delivered the first of GE's LEAP engines. This engine has integrated 3D-printed fuel nozzles, giving a reduction in parts from 20 to 1, a 25% weight reduction, and reduced assembly times. A fuel nozzle is the perfect inroad for additive manufacturing in a jet engine, since it allows for optimized design of the complex internals and it is a low-stress, non-rotating part. Similarly, in 2015, PW delivered their first AM parts in the PurePower PW1500G to Bombardier. Sticking to low-stress, non-rotating parts, PW selected the compressor stators and synch ring brackets to roll out this new manufacturing technology for the first time. While AM is still playing a small role in the total number of parts in the jet engine manufacturing process, the return on investment can already be seen by the reduction in parts, the rapid production capabilities and the "optimized design in terms of performance and cost". As technology matured, several authors had begun to speculate that 3D printing could aid in sustainable development in the developing world. 
In 2012, Filabot developed a system for closing the loop with plastic, allowing any FDM or FFF 3D printer to print with a wider range of plastics. In 2014, Benjamin S. Cook and Manos M. Tentzeris demonstrated the first multi-material, vertically integrated printed electronics additive manufacturing platform (VIPRE), which enabled 3D printing of functional electronics operating up to 40 GHz. As the price of printers started to drop, people interested in this technology had more access and freedom to make what they wanted. The price as of 2014 was still high, with the cost being over $2,000, yet this still allowed hobbyists an entrance into printing outside of production and industry methods. The term "3D printing" originally referred to a process that deposits a binder material onto a powder bed with inkjet printer heads layer by layer. More recently, the popular vernacular has started using the term to encompass a wider variety of additive-manufacturing techniques such as electron-beam additive manufacturing and selective laser melting. The United States and global technical standards use the official term additive manufacturing for this broader sense. The most commonly used 3D printing process (46%) is a material extrusion technique called fused deposition modeling, or FDM. While FDM technology was invented after the other two most popular technologies, stereolithography (SLA) and selective laser sintering (SLS), FDM is typically the most inexpensive of the three by a large margin, which contributes to the popularity of the process. 2020s As of 2020, 3D printers have reached the level of quality and price that allows most people to enter the world of 3D printing. In 2020, decent-quality printers could be found for less than US$200 for entry-level machines. These more affordable printers are usually fused deposition modeling (FDM) printers. In November 2021 a British patient named Steve Verze received the world's first fully 3D-printed prosthetic eye from the Moorfields Eye Hospital in London. General principles Modeling 3D printable models may be created with a computer-aided design (CAD) package, via a 3D scanner, or by a plain digital camera and photogrammetry software. 3D printed models created with CAD result in relatively fewer errors than other methods. Errors in 3D printable models can be identified and corrected before printing. The manual modeling process of preparing geometric data for 3D computer graphics is similar to plastic arts such as sculpting. 3D scanning is a process of collecting digital data on the shape and appearance of a real object, and creating a digital model based on it. CAD models can be saved in the stereolithography file format (STL), a de facto CAD file format for additive manufacturing that stores data based on triangulations of the surface of CAD models. STL is not tailored for additive manufacturing because it generates large file sizes of topology-optimized parts and lattice structures due to the large number of surfaces involved. A newer CAD file format, the Additive Manufacturing File format (AMF), was introduced in 2011 to solve this problem. It stores information using curved triangulations. Printing Before printing a 3D model from an STL file, it must first be examined for errors. Most CAD applications produce errors in output STL files of the following types: holes, face normals, self-intersections, noise shells, manifold errors, and overhang issues. A step in the STL generation known as "repair" fixes such problems in the original model. 
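Because the slicing step described in the next paragraph assumes a clean, watertight mesh, checking the raw triangle data for such defects is a common first step. The following is a minimal, illustrative Python sketch, not a real repair tool: it parses a binary STL file (the file name model.stl is hypothetical) and reports boundary edges, which indicate holes, and non-manifold edges; real slicers and mesh-repair utilities perform far more extensive checks and corrections.

import struct
from collections import Counter

def read_binary_stl(path):
    # Parse a binary STL: 80-byte header, uint32 triangle count, then
    # 50 bytes per triangle (normal vector, three vertices, attribute count).
    triangles = []
    with open(path, "rb") as f:
        f.read(80)  # header is ignored
        (count,) = struct.unpack("<I", f.read(4))
        for _ in range(count):
            data = struct.unpack("<12fH", f.read(50))
            vertices = tuple(tuple(data[3 + 3 * i:6 + 3 * i]) for i in range(3))
            triangles.append(vertices)
    return triangles

def edge_report(triangles):
    # In a watertight, manifold mesh every edge is shared by exactly two
    # triangles. STL repeats shared vertices verbatim, so exact tuple
    # comparison is usually adequate for this rough check.
    edge_counts = Counter()
    for a, b, c in triangles:
        for p, q in ((a, b), (b, c), (c, a)):
            edge_counts[tuple(sorted((p, q)))] += 1
    holes = sum(1 for n in edge_counts.values() if n == 1)
    non_manifold = sum(1 for n in edge_counts.values() if n > 2)
    return holes, non_manifold

if __name__ == "__main__":
    tris = read_binary_stl("model.stl")  # hypothetical input file
    holes, non_manifold = edge_report(tris)
    print(f"{len(tris)} triangles, {holes} boundary edges, {non_manifold} non-manifold edges")

Once the mesh passes such checks, a slicer would intersect each layer plane with the triangles and emit the motion commands (G-code) described below.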
STLs that have been produced from a model obtained through 3D scanning often have more of these errors, as 3D scanning is often achieved by point-to-point acquisition/mapping and 3D reconstruction often includes errors. Once completed, the STL file needs to be processed by a piece of software called a "slicer", which converts the model into a series of thin layers and produces a G-code file containing instructions tailored to a specific type of 3D printer (FDM printers). This G-code file can then be printed with 3D printing client software (which loads the G-code and uses it to instruct the 3D printer during the 3D printing process). Printer resolution describes layer thickness and X–Y resolution in dots per inch (dpi) or micrometers (µm). Typical layer thickness is around , although some machines can print layers as thin as . X–Y resolution is comparable to that of laser printers. The particles (3D dots) are around in diameter. For that printer resolution, specifying a mesh resolution of and a chord length generates an optimal STL output file for a given model input file. Specifying higher resolution results in larger files without an increase in print quality. Construction of a model with contemporary methods can take anywhere from several hours to several days, depending on the method used and the size and complexity of the model. Additive systems can typically reduce this time to a few hours, although it varies widely depending on the type of machine used and the size and number of models being produced simultaneously. Finishing Though the printer-produced resolution is sufficient for many applications, greater accuracy can be achieved by printing a slightly oversized version of the desired object in standard resolution and then removing material using a higher-resolution subtractive process. The layered structure of all additive manufacturing processes leads inevitably to a stair-stepping effect on part surfaces which are curved or tilted with respect to the building platform. The effects strongly depend on the orientation of a part surface inside the building process. Some printable polymers, such as ABS, allow the surface finish to be smoothed and improved using chemical vapor processes based on acetone or similar solvents. Some additive manufacturing techniques are capable of using multiple materials in the course of constructing parts. These techniques are able to print in multiple colors and color combinations simultaneously, and would not necessarily require painting. Some printing techniques require internal supports to be built for overhanging features during construction. These supports must be mechanically removed or dissolved upon completion of the print. All of the commercialized metal 3D printers involve cutting the metal component off the metal substrate after deposition. A new process for GMAW 3D printing allows for substrate surface modifications to remove aluminum or steel. Materials Traditionally, 3D printing focused on polymers for printing, due to the ease of manufacturing and handling polymeric materials. However, the method has rapidly evolved to not only print various polymers but also metals and ceramics, making 3D printing a versatile option for manufacturing. Layer-by-layer fabrication of three-dimensional physical models is a modern concept that "stems from the ever-growing CAD industry, more specifically the solid modeling side of CAD. 
Before solid modeling was introduced in the late 1980s, three-dimensional models were created with wire frames and surfaces." but in all cases the layers of materials are controlled by the printer and the material properties. The three-dimensional material layer is controlled by the deposition rate as set by the printer operator and stored in a computer file. The earliest printed patented material was a hot-melt type ink for printing patterns using a heated metal alloy (see the 1970s history above). Charles Hull filed the first patent on August 8, 1984, to use a UV-cured acrylic resin using a UV masked light source at UVP Corp to build a simple model. The SLA-1 was the first SL product, announced by 3D Systems at the Autofact Exposition in Detroit in November 1987. The SLA-1 Beta shipped in January 1988 to Baxter Healthcare, Pratt and Whitney, General Motors and AMP. The first production SLA-1 shipped to Precision Castparts in April 1988. The UV resin material changed over quickly to an epoxy-based resin. In both cases, SLA-1 models needed UV oven curing after being rinsed in a solvent cleaner to remove uncured boundary resin. A Post Cure Apparatus (PCA) was sold with all systems. The early resin printers required a blade to move fresh resin over the model on each layer. The layer thickness was 0.006 inches, and the HeCd laser model of the SLA-1 was 12 watts and swept across the surface at 30 inches per second. UVP was acquired by 3D Systems in January 1990. A review of the history shows that a number of materials (resins, plastic powder, plastic filament and hot-melt plastic ink) were used in the 1980s for patents in the rapid prototyping field. Masked-lamp UV-cured resin was also introduced by Cubital's Itzchak Pomerantz in the Solider 5600, laser-sintered thermoplastic powders by Carl Deckard (DTM), and adhesive-laser-cut paper (LOM) stacked to form objects by Michael Feygin, before 3D Systems made its first announcement. Scott Crump was also working with extruded "melted" plastic filament modeling (FDM). Drop deposition had been patented by William E Masters a week after Charles Hull's patent in 1984, but he had to discover the thermoplastic inkjets, introduced by the Visual Impact Corporation 3D printer in 1992 using inkjets from Howtek, Inc., before he formed BPM to bring out his own 3D printer product in 1994. Multi-material 3D printing Efforts to achieve multi-material 3D printing range from enhanced FDM-like processes like VoxelJet to novel voxel-based printing technologies like Layered Assembly. A drawback of many existing 3D printing technologies is that they only allow one material to be printed at a time, limiting many potential applications which require the integration of different materials in the same object. Multi-material 3D printing solves this problem by allowing objects of complex and heterogeneous arrangements of materials to be manufactured using a single printer. Here, a material must be specified for each voxel (or 3D printing pixel element) inside the final object volume. The process can be fraught with complications, however, due to the isolated and monolithic algorithms. Some commercial devices have sought to solve these issues, such as building a Spec2Fab translator, but the progress is still very limited. Nonetheless, in the medical industry, a concept of 3D printed pills and vaccines has been presented. With this new concept, multiple medications can be combined, which will decrease many risks. 
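As an illustration of the per-voxel material specification described above, here is a minimal Python sketch (NumPy is an assumed dependency, and the grid size, material indices, and geometry are invented purely for illustration): it fills a small voxel grid with integer material IDs for an outer shell and an inner core, the kind of volumetric description that a multi-material printer, or a translator such as the Spec2Fab approach mentioned above, would then map to physical materials.

import numpy as np

# Hypothetical example: a 20x20x20 voxel volume in which every voxel is
# assigned a material index (0 = empty, 1 = rigid shell, 2 = soft core).
N = 20
materials = np.zeros((N, N, N), dtype=np.uint8)

# Fill a solid cube with material 1 (shell) ...
materials[2:18, 2:18, 2:18] = 1
# ... then replace its interior with material 2 (core), leaving a 2-voxel shell.
materials[4:16, 4:16, 4:16] = 2

# A multi-material printer driver would map each index to a printable material.
for idx, name in enumerate(["empty", "rigid shell", "soft core"]):
    print(f"material {idx} ({name}): {np.count_nonzero(materials == idx)} voxels")

In practice the voxel grid is derived from a CAD model and a material-assignment rule rather than hand-coded slices, but the underlying data structure is the same.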
With more and more applications of multi-material 3D printing, the costs of daily life and high technology development will become inevitably lower. Metallographic materials for 3D printing are also being researched. By classifying each material, CIMP-3D can systematically perform 3D printing with multiple materials. 4D Printing Using 3D printing and multi-material structures in additive manufacturing has allowed for the design and creation of what is called 4D printing. 4D printing is an additive manufacturing process in which the printed object changes shape with time, temperature, or some other type of stimulation. 4D printing allows for the creation of dynamic structures with adjustable shapes, properties or functionality. The smart/stimulus-responsive materials that are created using 4D printing can be activated to create calculated responses such as self-assembly, self-repair, multi-functionality, reconfiguration and shape shifting. This allows for customized printing of shape-changing and shape-memory materials. 4D printing has the potential to find new applications and uses for materials (plastics, composites, metals, etc.) and will create new alloys and composites that were not viable before. The versatility of this technology and these materials can lead to advances in multiple fields of industry, including the space, commercial and medical fields. The repeatability, precision, and material range for 4D printing must increase to allow the process to become more practical throughout these industries. To become a viable industrial production option, 4D printing must overcome a couple of challenges. The challenges of 4D printing include the fact that the microstructures of these printed smart materials must be close to or better than those of parts obtained through traditional machining processes. New and customizable materials need to be developed that have the ability to consistently respond to varying external stimuli and change to their desired shape. There is also a need to design new software for the various technique types of 4D printing. The 4D printing software will need to take into consideration the base smart material, printing technique, and structural and geometric requirements of the design. Processes and printers There are many different branded additive manufacturing processes that can be grouped into seven categories: vat photopolymerization, material jetting, binder jetting, powder bed fusion, material extrusion, directed energy deposition, and sheet lamination. The main differences between processes are in the way layers are deposited to create parts and in the materials that are used. Each method has its own advantages and drawbacks, which is why some companies offer a choice of powder and polymer for the material used to build the object. Others sometimes use standard, off-the-shelf business paper as the build material to produce a durable prototype. The main considerations in choosing a machine are generally speed, the costs of the 3D printer and of the printed prototype, the choice and cost of materials, and color capabilities. Printers that work directly with metals are generally expensive. However, less expensive printers can be used to make a mold, which is then used to make metal parts. ISO/ASTM52900-15 defines seven categories of Additive Manufacturing (AM) processes within its meaning: binder jetting, directed energy deposition, material extrusion, material jetting, powder bed fusion, sheet lamination, and vat photopolymerization. 
The first process where three-dimensional material was deposited to form an object was material jetting or, as it was originally called, particle deposition. Particle deposition by inkjet first started with continuous inkjet technology (CIT) in the 1950s and later with drop-on-demand inkjet technology in the 1970s, using hot-melt inks. Wax inks were the first three-dimensional materials jetted, and later low-temperature alloy metal was jetted with CIT. Wax and thermoplastic hot-melts were jetted next by DOD. Objects were very small and started with text characters and numerals for signage; an object must have form and be able to be handled. Wax characters tumbled off paper documents and inspired a Liquid Metal Recorder patent to make metal characters for signage in 1971. Thermoplastic color inks (CMYK) were printed with layers of each color to form the first digitally formed layered objects in 1984. The idea of investment casting with solid-ink jetted images or patterns in 1984 led to the first patent to form articles from particle deposition in 1989, issued in 1992. Some methods melt or soften the material to produce the layers. In fused filament fabrication, also known as fused deposition modeling (FDM), the model or part is produced by extruding small beads or streams of material which harden immediately to form layers. A filament of thermoplastic, metal wire, or other material is fed into an extrusion nozzle head (3D printer extruder), which heats the material and turns the flow on and off. FDM is somewhat restricted in the variation of shapes that may be fabricated. Another technique fuses parts of the layer and then moves upward in the working area, adding another layer of granules and repeating the process until the piece has built up. This process uses the unfused media to support overhangs and thin walls in the part being produced, which reduces the need for temporary auxiliary supports for the piece. Recently, FFF/FDM has expanded to 3D printing directly from pellets to avoid the conversion to filament. This process is called fused particle fabrication (FPF) (or fused granular fabrication, FGF) and has the potential to use more recycled materials. Powder bed fusion techniques, or PBF, include several processes such as DMLS, SLS, SLM, MJF and EBM. Powder bed fusion processes can be used with an array of materials, and their flexibility allows for geometrically complex structures, making it a go-to choice for many 3D printing projects. These techniques include selective laser sintering, with both metals and polymers, and direct metal laser sintering. Selective laser melting does not use sintering for the fusion of powder granules but will completely melt the powder using a high-energy laser to create fully dense materials in a layer-wise method that has mechanical properties similar to those of conventionally manufactured metals. Electron beam melting is a similar type of additive manufacturing technology for metal parts (e.g. titanium alloys). EBM manufactures parts by melting metal powder layer by layer with an electron beam in a high vacuum. Another method consists of an inkjet 3D printing system, which creates the model one layer at a time by spreading a layer of powder (plaster or resins) and printing a binder in the cross-section of the part using an inkjet-like process. With laminated object manufacturing, thin layers are cut to shape and joined together. 
In addition to the previously mentioned methods, HP has developed Multi Jet Fusion (MJF), which is a powder-based technique, though no lasers are involved. An inkjet array applies fusing and detailing agents which are then combined by heating to create a solid layer. Other methods cure liquid materials using different sophisticated technologies, such as stereolithography. Photopolymerization is primarily used in stereolithography to produce a solid part from a liquid. Inkjet printer systems like the Objet PolyJet system spray photopolymer materials onto a build tray in ultra-thin layers (between 16 and 30 µm) until the part is completed. Each photopolymer layer is cured with UV light after it is jetted, producing fully cured models that can be handled and used immediately, without post-curing. Ultra-small features can be made with the 3D micro-fabrication technique used in multiphoton photopolymerisation. Due to the nonlinear nature of photo excitation, the gel is cured to a solid only in the places where the laser was focused, while the remaining gel is then washed away. Feature sizes of under 100 nm are easily produced, as well as complex structures with moving and interlocked parts. Yet another approach uses a synthetic resin that is solidified using LEDs. In mask-image-projection-based stereolithography, a 3D digital model is sliced by a set of horizontal planes. Each slice is converted into a two-dimensional mask image. The mask image is then projected onto a photocurable liquid resin surface and light is projected onto the resin to cure it in the shape of the layer. Continuous liquid interface production begins with a pool of liquid photopolymer resin. Part of the pool bottom is transparent to ultraviolet light (the "window"), which causes the resin to solidify. The object rises slowly enough to allow resin to flow under and maintain contact with the bottom of the object. In powder-fed directed-energy deposition, a high-power laser is used to melt metal powder supplied to the focus of the laser beam. The powder-fed directed energy process is similar to selective laser sintering, but the metal powder is applied only where material is being added to the part at that moment. Additive manufacturing systems on the market have ranged from $99 to $500,000 in price and are employed in industries including aerospace, architecture, automotive, defense, and medical replacements, among many others. For example, General Electric uses high-end 3D printers to build parts for turbines. Many of these systems are used for rapid prototyping, before mass production methods are employed. Higher education has proven to be a major buyer of desktop and professional 3D printers, which industry experts generally view as a positive indicator. Libraries around the world have also become locations to house smaller 3D printers for educational and community access. Several projects and companies are making efforts to develop affordable 3D printers for home desktop use. Much of this work has been driven by and targeted at DIY/Maker/enthusiast/early adopter communities, with additional ties to the academic and hacker communities. Computed axial lithography is a method for 3D printing based on computerised tomography scans to create prints in photo-curable resin. It was developed by a collaboration between the University of California, Berkeley, and Lawrence Livermore National Laboratory. 
Unlike other methods of 3D printing, it does not build models through depositing layers of material like fused deposition modelling and stereolithography; instead it creates objects using a series of 2D images projected onto a cylinder of resin. It is notable for its ability to build an object much more quickly than other methods using resins and for the ability to embed objects within the prints. Liquid additive manufacturing (LAM) is a 3D printing technique which deposits a liquid or highly viscous material (e.g. liquid silicone rubber) onto a build surface to create an object, which then is vulcanised using heat to harden it. The process was originally created by Adrian Bowyer and was then built upon by German RepRap. Applications 3D printing or additive manufacturing has been used in manufacturing, medical, industry and sociocultural sectors (e.g. cultural heritage) to create successful commercial technology. More recently, 3D printing has also been used in the humanitarian and development sector to produce a range of medical items, prosthetics, spares and repairs. The earliest application of additive manufacturing was on the toolroom end of the manufacturing spectrum. For example, rapid prototyping was one of the earliest additive variants, and its mission was to reduce the lead time and cost of developing prototypes of new parts and devices, which was earlier only done with subtractive toolroom methods such as CNC milling, turning, and precision grinding. In the 2010s, additive manufacturing entered production to a much greater extent. Food industry Additive manufacturing of food is being developed by squeezing out food, layer by layer, into three-dimensional objects. A large variety of foods are appropriate candidates, such as chocolate and candy, and flat foods such as crackers, pasta, and pizza. NASA is looking into the technology in order to create 3D printed food to limit food waste and to make food that is designed to fit an astronaut's dietary needs. In 2018, Italian bioengineer Giuseppe Scionti developed a technology allowing the production of fibrous plant-based meat analogues using a custom 3D bioprinter, mimicking meat texture and nutritional values. Fashion industry 3D printing has entered the world of clothing, with fashion designers experimenting with 3D-printed bikinis, shoes, and dresses. In commercial production, Nike used 3D printing to prototype and manufacture the 2012 Vapor Laser Talon football shoe for players of American football, and New Balance has 3D-manufactured custom-fit shoes for athletes. 3D printing has come to the point where companies are printing consumer-grade eyewear with on-demand custom fit and styling (although they cannot print the lenses). On-demand customization of glasses is possible with rapid prototyping. Vanessa Friedman, fashion director and chief fashion critic at The New York Times, says 3D printing will have a significant value for fashion companies down the road, especially if it transforms into a print-it-yourself tool for shoppers. "There's real sense that this is not going to happen anytime soon," she says, "but it will happen, and it will create dramatic change in how we think both about intellectual property and how things are in the supply chain." She adds: "Certainly some of the fabrications that brands can use will be dramatically changed by technology." 
Transportation industry In cars, trucks, and aircraft, additive manufacturing is beginning to transform both (1) unibody and fuselage design and production and (2) powertrain design and production. For example: In early 2014, Swedish supercar manufacturer Koenigsegg announced the One:1, a supercar that utilizes many components that were 3D printed. Urbee is the name of the first car in the world manufactured using 3D printing technology (its bodywork and car windows were "printed"). In 2014, Local Motors debuted Strati, a functioning vehicle that was entirely 3D printed using ABS plastic and carbon fiber, except the powertrain. In May 2015, Airbus announced that its new Airbus A350 XWB included over 1000 components manufactured by 3D printing. In 2015, a Royal Air Force Eurofighter Typhoon fighter jet flew with printed parts. The United States Air Force has begun to work with 3D printers, and the Israeli Air Force has also purchased a 3D printer to print spare parts. In 2017, GE Aviation revealed that it had used design for additive manufacturing to create a helicopter engine with 16 parts instead of 900, with great potential impact on reducing the complexity of supply chains. Firearm industry AM's impact on firearms involves two dimensions: new manufacturing methods for established companies, and new possibilities for the making of do-it-yourself firearms. In 2012, the US-based group Defense Distributed disclosed plans to design a working plastic 3D printed firearm "that could be downloaded and reproduced by anybody with a 3D printer." After Defense Distributed released their plans, questions were raised regarding the effects that 3D printing and widespread consumer-level CNC machining may have on gun control effectiveness. Moreover, armour design strategies can be enhanced by taking inspiration from nature, and additive manufacturing makes it easy to prototype such designs. Health sector Surgical uses of 3D printing-centric therapies have a history beginning in the mid-1990s with anatomical modeling for bony reconstructive surgery planning. Patient-matched implants were a natural extension of this work, leading to truly personalized implants that fit one unique individual. Virtual planning of surgery and guidance using 3D printed, personalized instruments have been applied to many areas of surgery, including total joint replacement and craniomaxillofacial reconstruction, with great success. One example of this is the bioresorbable tracheal splint to treat newborns with tracheobronchomalacia, developed at the University of Michigan. The use of additive manufacturing for serialized production of orthopedic implants (metals) is also increasing due to the ability to efficiently create porous surface structures that facilitate osseointegration. The hearing aid and dental industries are expected to be the biggest areas of future development using custom 3D printing technology. In March 2014, surgeons in Swansea used 3D printed parts to rebuild the face of a motorcyclist who had been seriously injured in a road accident. In May 2018, 3D printing was used for a kidney transplant to save a three-year-old boy. 3D bio-printing technology has been studied by biotechnology firms and academia for possible use in tissue engineering applications in which organs and body parts are built using inkjet printing techniques.
In this process, layers of living cells are deposited onto a gel medium or sugar matrix and slowly built up to form three-dimensional structures including vascular systems. Recently, a heart-on-chip has been created which matches properties of cells. Thermal degradation during 3D printing of resorbable polymers, same as in surgical sutures, has been studied, and parameters can be adjusted to minimize the degradation during processing. Soft pliable scaffold structures for cell cultures can be printed. In 3D printing, computer-simulated microstructures are commonly used to fabricate objects with spatially varying properties. This is achieved by dividing the volume of the desired object into smaller subcells using computer aided simulation tools and then filling these cells with appropriate microstructures during fabrication. Several different candidate structures with similar behaviours are checked against each other and the object is fabricated when an optimal set of structures are found. Advanced topology optimization methods are used to ensure the compatibility of structures in adjacent cells. This flexible approach to 3D fabrication is widely used across various disciplines from biomedical sciences where they are used to create complex bone structures and human tissue to robotics where they are used in the creation of soft robots with movable parts. 3D printing also finds its uses more and more in design and fabrication of laboratory apparatuses. 3D printing has also been employed by researchers in the pharmaceutical field. During the last few years there's been a surge in academic interest regarding drug delivery with the aid of AM techniques. This technology offers a unique way for materials to be utilized in novel formulations. AM manufacturing allows for the usage of materials and compounds in the development of formulations, in ways that are not possible with conventional/traditional techniques in the pharmaceutical field, e.g. tableting, cast-molding, etc. Moreover, one of the major advantages of 3D printing, especially in the case of Fused Deposition Modelling (FDM), is the personalization of the dosage form that can be achieved, thus, targeting the patient's specific needs. In the not-so-distant future, 3D printers are expected to reach hospitals and pharmacies in order to provide on demand production of personalized formulations according to the patients' needs. In 2018, 3D printing technology was used for the first time to create a matrix for cell immobilization in fermentation. Propionic acid production by Propionibacterium acidipropionici immobilized on 3D-printed nylon beads was chosen as a model study. It was shown that those 3D-printed beads were capable of promoting high density cell attachment and propionic acid production, which could be adapted to other fermentation bioprocesses. In 2005, academic journals had begun to report on the possible artistic applications of 3D printing technology. , domestic 3D printing was reaching a consumer audience beyond hobbyists and enthusiasts. Off the shelf machines were increasingly capable of producing practical household applications, for example, ornamental objects. Some practical examples include a working clock and gears printed for home woodworking machines among other purposes. Web sites associated with home 3D printing tended to include backscratchers, coat hooks, door knobs, etc. 
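Returning to the pharmaceutical application discussed above: one appeal of FDM-based dosage forms is that the dose can be adjusted simply by printing more or less drug-loaded material. The sketch below shows the underlying arithmetic; every number (dose, drug loading, filament diameter, blend density) is a hypothetical value chosen only to illustrate the idea, not data from any study or product.

```python
import math

def filament_length_for_dose(dose_mg, drug_load_frac, filament_diameter_mm, density_g_cm3):
    """Length of drug-loaded filament (mm) needed to deliver a target dose.

    dose_mg              -- target drug dose in milligrams
    drug_load_frac       -- mass fraction of drug in the filament (e.g. 0.10 for 10% w/w)
    filament_diameter_mm -- filament diameter in millimetres
    density_g_cm3        -- density of the drug/polymer blend in g/cm^3
    """
    mass_needed_mg = dose_mg / drug_load_frac            # total printed mass
    volume_mm3 = mass_needed_mg / density_g_cm3          # 1 g/cm^3 equals 1 mg/mm^3
    cross_section_mm2 = math.pi * (filament_diameter_mm / 2) ** 2
    return volume_mm3 / cross_section_mm2

# Hypothetical example: 5 mg dose, 10% w/w drug loading,
# 1.75 mm filament, blend density 1.2 g/cm^3.
length_mm = filament_length_for_dose(5, 0.10, 1.75, 1.2)
print(f"Deposit roughly {length_mm:.0f} mm of filament for this dose.")
```

In practice the printer would realise this as a tablet geometry of the corresponding volume rather than a raw length of extrudate, but the dose-to-volume mapping is the personalization step the text refers to.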
Education sector 3D printing, and open source 3D printers in particular, are the latest technology making inroads into the classroom. (Grujović, N., Radović, M., Kanjevac, V., Borota, J., Grujović, G., & Divac, D. (September 2011). "3D printing technology in education environment." In 34th International Conference on Production Engineering (pp. 29–30).) Some authors have claimed that 3D printers offer an unprecedented "revolution" in STEM education. The evidence for such claims comes both from the low-cost ability for rapid prototyping in the classroom by students and from the fabrication of low-cost, high-quality scientific equipment from open hardware designs, forming open-source labs. Future applications for 3D printing might include creating open-source scientific equipment. Cultural heritage and museum-based digital twin In the last several years 3D printing has been used intensively in the cultural heritage field for preservation, restoration and dissemination purposes. Many European and North American museums have purchased 3D printers and actively recreate missing pieces of their relics and archaeological monuments such as Tiwanaku in Bolivia. The Metropolitan Museum of Art and the British Museum have started using their 3D printers to create museum souvenirs that are available in the museum shops. Other museums, like the National Museum of Military History and Varna Historical Museum, have gone further and sell, through the online platform Threeding, digital models of their artifacts, created using Artec 3D scanners, in a 3D printing-friendly file format, which everyone can 3D print at home. The application of 3D printing for the representation of architectural assets has many challenges. In 2018, the structure of Iran National Bank was traditionally surveyed and modelled in computer graphics (CG) software (Cinema4D) and was optimised for 3D printing. The team tested the technique for the construction of the part and it was successful. After testing the procedure, the modellers reconstructed the structure in Cinema4D and exported the front part of the model to Netfabb. The entrance of the building was chosen due to the 3D printing limitations and the budget of the project for producing the maquette. 3D printing was only one of the capabilities enabled by the produced 3D model of the bank, but due to the project's limited scope, the team did not continue modelling for the virtual representation or other applications. In 2021, Parsinejad et al. comprehensively compared the hand surveying method for 3D reconstruction ready for 3D printing with digital recording (adoption of the photogrammetry method). Recent other applications 3D printed soft actuators are a growing application of 3D printing technology. These soft actuators are being developed to deal with soft structures and organs, especially in biomedical sectors and where the interaction between human and robot is inevitable. The majority of the existing soft actuators are fabricated by conventional methods that require manual fabrication of devices, post-processing/assembly, and lengthy iterations until maturity of the fabrication is achieved. To avoid the tedious and time-consuming aspects of the current fabrication processes, researchers are exploring more appropriate manufacturing approaches for the effective fabrication of soft actuators.
Thus, 3D printed soft actuators are introduced to revolutionise the design and fabrication of soft actuators with custom geometrical, functional, and control properties in a faster and less expensive way. They also enable incorporation of all actuator components into a single structure, eliminating the need to use external joints, adhesives, and fasteners. Circuit board manufacturing involves multiple steps which include imaging, drilling, plating, soldermask coating, nomenclature printing and surface finishes. These steps include many chemicals such as harsh solvents and acids. 3D printing circuit boards removes the need for many of these steps while still producing complex designs. Polymer ink is used to create the layers of the build, while silver polymer is used for creating the traces and holes used to allow electricity to flow. Current circuit board manufacturing can be a tedious process depending on the design. Specified materials are gathered and sent into inner layer processing where images are printed, developed and etched. The etched cores are typically punched to add lamination tooling. The cores are then prepared for lamination. The stack-up, the buildup of a circuit board, is built and sent into lamination where the layers are bonded. The boards are then measured and drilled. Many steps may differ from this stage; however, for simple designs the material goes through a plating process to plate the holes and surface. The outer image is then printed, developed and etched. After the image is defined, the material must be coated with soldermask for later soldering. Nomenclature is then added so components can be identified later. Then the surface finish is added. The boards are routed out of panel form into their singular or array form and then electrically tested. Aside from the paperwork which must be completed to prove the boards meet specifications, the boards are then packed and shipped. The benefits of 3D printing would be that the final outline is defined from the beginning, no imaging, punching or lamination is required, and electrical connections are made with the silver polymer, which eliminates drilling and plating. The final paperwork would also be greatly reduced due to the lack of materials required to build the circuit board. Complex designs which may take weeks to complete through normal processing can be 3D printed, greatly reducing manufacturing time. During the COVID-19 pandemic, 3D printers were used to supplement the strained supply of PPE through volunteers using their personally owned printers to produce various pieces of personal protective equipment (e.g. frames for face shields). As of 2021 and the years leading up to it, 3D printing has become both an industrial tool and a consumer product. With the price of certain 3D printers becoming ever cheaper and their quality constantly increasing, many people have picked up the hobby of 3D printing. As of current estimates, there are over 2 million people around the world who have purchased a 3D printer for hobby use. Legal aspects Intellectual property 3D printing has existed for decades within certain manufacturing industries where many legal regimes, including patents, industrial design rights, copyrights, and trademarks, may apply. However, there is not much jurisprudence to say how these laws will apply if 3D printers become mainstream and individuals or hobbyist communities begin manufacturing items for personal use, for non-profit distribution, or for sale.
Any of the mentioned legal regimes may prohibit the distribution of the designs used in 3D printing, or the distribution or sale of the printed item. To be allowed to do these things, where an active intellectual property was involved, a person would have to contact the owner and ask for a licence, which may come with conditions and a price. However, many patent, design and copyright laws contain a standard limitation or exception for 'private', 'non-commercial' use of inventions, designs or works of art protected under intellectual property (IP). That standard limitation or exception may leave such private, non-commercial uses outside the scope of IP rights. Patents cover inventions including processes, machines, manufacturing, and compositions of matter and have a finite duration which varies between countries, but generally 20 years from the date of application. Therefore, if a type of wheel is patented, printing, using, or selling such a wheel could be an infringement of the patent. Copyright covers an expression in a tangible, fixed medium and often lasts for the life of the author plus 70ย years thereafter. If someone makes a statue, they may have a copyright mark on the appearance of that statue, so if someone sees that statue, they cannot then distribute designs to print an identical or similar statue. When a feature has both artistic (copyrightable) and functional (patentable) merits, when the question has appeared in US court, the courts have often held the feature is not copyrightable unless it can be separated from the functional aspects of the item. In other countries the law and the courts may apply a different approach allowing, for example, the design of a useful device to be registered (as a whole) as an industrial design on the understanding that, in case of unauthorized copying, only the non-functional features may be claimed under design law whereas any technical features could only be claimed if covered by a valid patent. Gun legislation and administration The US Department of Homeland Security and the Joint Regional Intelligence Center released a memo stating that "significant advances in three-dimensional (3D) printing capabilities, availability of free digital 3D printable files for firearms components, and difficulty regulating file sharing may present public safety risks from unqualified gun seekers who obtain or manufacture 3D printed guns" and that "proposed legislation to ban 3D printing of weapons may deter, but cannot completely prevent, their production. Even if the practice is prohibited by new legislation, online distribution of these 3D printable files will be as difficult to control as any other illegally traded music, movie or software files." Currently, it is not prohibited by law to manufacture firearms for personal use in the United States, as long as the firearm is not produced with the intent to be sold or transferred, and meets a few basic requirements. A license is required to manufacture firearms for sale or distribution. The law prohibits a person from assembling a nonโ€“sporting semiautomatic rifle or shotgun from 10 or more imported parts, as well as firearms that cannot be detected by metal detectors or xโ€“ray machines. In addition, the making of an NFA firearm requires a tax payment and advance approval by ATF. Attempting to restrict the distribution of gun plans via the Internet has been likened to the futility of preventing the widespread distribution of DeCSS, which enabled DVD ripping. 
After the US government had Defense Distributed take down the plans, they were still widely available via the Pirate Bay and other file sharing sites. Downloads of the plans from the UK, Germany, Spain, and Brazil were heavy. Some US legislators have proposed regulations on 3D printers to prevent them from being used for printing guns. 3D printing advocates have suggested that such regulations would be futile, could cripple the 3D printing industry, and could infringe on free speech rights, with early pioneer of 3D printing Professor Hod Lipson suggesting that gunpowder could be controlled instead. Internationally, where gun controls are generally stricter than in the United States, some commentators have said the impact may be more strongly felt since alternative firearms are not as easily obtainable. Officials in the United Kingdom have noted that producing a 3D printed gun would be illegal under their gun control laws. Europol stated that criminals have access to other sources of weapons but noted that as technology improves, the risks of an effect would increase. Aerospace regulation In the United States, the FAA has anticipated a desire to use additive manufacturing techniques and has been considering how best to regulate this process. The FAA has jurisdiction over such fabrication because all aircraft parts must be made under FAA production approval or under other FAA regulatory categories. In December 2016, the FAA approved the production of a 3D printed fuel nozzle for the GE LEAP engine. Aviation attorney Jason Dickstein has suggested that additive manufacturing is merely a production method, and should be regulated like any other production method. He has suggested that the FAA's focus should be on guidance to explain compliance, rather than on changing the existing rules, and that existing regulations and guidance permit a company "to develop a robust quality system that adequately reflects regulatory needs for quality assurance." Health and safety Research on the health and safety concerns of 3D printing is new and in development due to the recent proliferation of 3D printing devices. In 2017, the European Agency for Safety and Health at Work has published a discussion paper on the processes and materials involved in 3D printing, potential implications of this technology for occupational safety and health and avenues for controlling potential hazards. Impact Additive manufacturing, starting with today's infancy period, requires manufacturing firms to be flexible, ever-improving users of all available technologies to remain competitive. Advocates of additive manufacturing also predict that this arc of technological development will counter globalization, as end users will do much of their own manufacturing rather than engage in trade to buy products from other people and corporations. The real integration of the newer additive technologies into commercial production, however, is more a matter of complementing traditional subtractive methods rather than displacing them entirely. The futurologist Jeremy Rifkin claimed that 3D printing signals the beginning of a third industrial revolution, succeeding the production line assembly that dominated manufacturing starting in the late 19th century. Social change Since the 1950s, a number of writers and social commentators have speculated in some depth about the social and cultural changes that might result from the advent of commercially affordable additive manufacturing technology. 
In recent years, 3D printing is creating significant impact in the humanitarian and development sector. Its potential to facilitate distributed manufacturing is resulting in supply chain and logistics benefits, by reducing the need for transportation, warehousing and wastage. Furthermore, social and economic development is being advanced through the creation of local production economies. Others have suggested that as more and more 3D printers start to enter people's homes, the conventional relationship between the home and the workplace might get further eroded. Likewise, it has also been suggested that, as it becomes easier for businesses to transmit designs for new objects around the globe, so the need for high-speed freight services might also become less. Finally, given the ease with which certain objects can now be replicated, it remains to be seen whether changes will be made to current copyright legislation so as to protect intellectual property rights with the new technology widely available. As 3D printers became more accessible to consumers, online social platforms have developed to support the community. This includes websites that allow users to access information such as how to build a 3D printer, as well as social forums that discuss how to improve 3D print quality and discuss 3D printing news, as well as social media websites that are dedicated to share 3D models. RepRap is a wiki based website that was created to hold all information on 3d printing, and has developed into a community that aims to bring 3D printing to everyone. Furthermore, there are other sites such as Pinshape, Thingiverse and MyMiniFactory, which were created initially to allow users to post 3D files for anyone to print, allowing for decreased transaction cost of sharing 3D files. These websites have allowed greater social interaction between users, creating communities dedicated to 3D printing. Some call attention to the conjunction of Commons-based peer production with 3D printing and other low-cost manufacturing techniques. The self-reinforced fantasy of a system of eternal growth can be overcome with the development of economies of scope, and here, society can play an important role contributing to the raising of the whole productive structure to a higher plateau of more sustainable and customized productivity. Further, it is true that many issues, problems, and threats arise due to the democratization of the means of production, and especially regarding the physical ones. For instance, the recyclability of advanced nanomaterials is still questioned; weapons manufacturing could become easier; not to mention the implications for counterfeiting and on intellectual property. It might be maintained that in contrast to the industrial paradigm whose competitive dynamics were about economies of scale, Commons-based peer production 3D printing could develop economies of scope. While the advantages of scale rest on cheap global transportation, the economies of scope share infrastructure costs (intangible and tangible productive resources), taking advantage of the capabilities of the fabrication tools. And following Neil Gershenfeld in that "some of the least developed parts of the world need some of the most advanced technologies," Commons-based peer production and 3D printing may offer the necessary tools for thinking globally but acting locally in response to certain needs. Larry Summers wrote about the "devastating consequences" of 3D printing and other technologies (robots, artificial intelligence, etc.) 
for those who perform routine tasks. In his view, "already there are more American men on disability insurance than doing production work in manufacturing. And the trends are all in the wrong direction, particularly for the less skilled, as the capacity of capital embodying artificial intelligence to replace white-collar as well as blue-collar work will increase rapidly in the years ahead." Summers recommends more vigorous cooperative efforts to address the "myriad devices" (e.g., tax havens, bank secrecy, money laundering, and regulatory arbitrage) enabling the holders of great wealth to avoid paying income and estate taxes, and to make it more difficult to accumulate great fortunes without requiring "great social contributions" in return, including: more vigorous enforcement of anti-monopoly laws, reductions in "excessive" protection for intellectual property, greater encouragement of profit-sharing schemes that may benefit workers and give them a stake in wealth accumulation, strengthening of collective bargaining arrangements, improvements in corporate governance, strengthening of financial regulation to eliminate subsidies to financial activity, easing of land-use restrictions that may cause the real estate of the rich to keep rising in value, better training for young people and retraining for displaced workers, and increased public and private investment in infrastructure development, e.g. in energy production and transportation. Michael Spence wrote that "Now comes a ... powerful, wave of digital technology that is replacing labor in increasingly complex tasks. This process of labor substitution and disintermediation has been underway for some time in service sectors – think of ATMs, online banking, enterprise resource planning, customer relationship management, mobile payment systems, and much more. This revolution is spreading to the production of goods, where robots and 3D printing are displacing labor." In his view, the vast majority of the cost of digital technologies comes at the start, in the design of hardware (e.g. 3D printers) and, more important, in creating the software that enables machines to carry out various tasks. "Once this is achieved, the marginal cost of the hardware is relatively low (and declines as scale rises), and the marginal cost of replicating the software is essentially zero. With a huge potential global market to amortize the upfront fixed costs of design and testing, the incentives to invest [in digital technologies] are compelling." Spence believes that, unlike prior digital technologies, which drove firms to deploy underutilized pools of valuable labor around the world, the motivating force in the current wave of digital technologies "is cost reduction via the replacement of labor." For example, as the cost of 3D printing technology declines, it is "easy to imagine" that production may become "extremely" local and customized. Moreover, production may occur in response to actual demand, not anticipated or forecast demand. Spence believes that labor, no matter how inexpensive, will become a less important asset for growth and employment expansion, with labor-intensive, process-oriented manufacturing becoming less effective, and that re-localization will appear in both developed and developing countries. In his view, production will not disappear, but it will be less labor-intensive, and all countries will eventually need to rebuild their growth models around digital technologies and the human capital supporting their deployment and expansion.
Spence writes that "the world we are entering is one in which the most powerful global flows will be ideas and digital capital, not goods, services, and traditional capital. Adapting to this will require shifts in mindsets, policies, investments (especially in human capital), and quite possibly models of employment and distribution." Naomi Wu regards the usage of 3D printing in the Chinese classroom (where rote memorization is standard) to teach design principles and creativity as the most exciting recent development of the technology, and more generally regards 3D printing as being the next desktop publishing revolution. Environmental change The growth of additive manufacturing could have a large impact on the environment. As opposed to traditional manufacturing, for instance, in which pieces are cut from larger blocks of material, additive manufacturing creates products layer-by-layer and prints only relevant parts, wasting much less material and thus wasting less energy in producing the raw materials needed. By making only the bare structural necessities of products, additive manufacturing also could make a profound contribution to lightweighting, reducing the energy consumption and greenhouse gas emissions of vehicles and other forms of transportation. A case study on an airplane component made using additive manufacturing, for example, found that the component's use saves 63% of relevant energy and carbon dioxide emissions over the course of the product's lifetime. In addition, previous life-cycle assessment of additive manufacturing has estimated that adopting the technology could further lower carbon dioxide emissions since 3D printing creates localized production, and products would not need to be transported long distances to reach their final destination. Continuing to adopt additive manufacturing does pose some environmental downsides, however. Despite additive manufacturing reducing waste from the subtractive manufacturing process by up to 90%, the additive manufacturing process creates other forms of waste such as non-recyclable material (metal) powders. Additive manufacturing has not yet reached its theoretical material efficiency potential of 97%, but it may get closer as the technology continues to increase productivity. Some large FDM printers which melt High-density polyethylene (HDPE) pellets may also accept sufficiently clean recycled material such as chipped milk bottles. In addition these printers can use shredded material coming from faulty builds or unsuccessful prototype versions thus reducing overall project wastage and materials handling and storage. The concept has been explored in the RecycleBot. See also 3D modeling 3D scanning 3D printing marketplace 3D bioprinting 3D food printing 3D Manufacturing Format 3D printing speed 3D Systems Additive Manufacturing File Format Actuator AstroPrint Cloud manufacturing Computer numeric control Delta robot Fraunhofer Competence Field Additive Manufacturing Fusion3 Laser cutting Limbitless Solutions List of 3D printer manufacturers List of common 3D test models List of emerging technologies List of notable 3D printed weapons and parts Magnetically assisted slip casting MakerBot Industries Milling center Organ-on-a-chip Robocasting Self-replicating machine Ultimaker Volumetric printing References Further reading Wright, Paul K. (2001). 21st Century Manufacturing. New Jersey: Prentice-Hall Inc. "3D printing: a new industrial revolution โ€“ Safety and health at work โ€“ EU-OSHA". osha.europa.eu''. Retrieved 28 July 2017. 
External links Computer printers DIY culture Industrial design Industrial processes 1981 introductions 1981 in technology Computer-related introductions in 1981 Articles containing video clips Emerging technologies Open-source hardware
25213
https://en.wikipedia.org/wiki/QWERTY
QWERTY
QWERTY is a keyboard layout for Latin-script alphabets. The name comes from the order of the first six keys on the top left letter row of the keyboard. The QWERTY design is based on a layout created for the Sholes and Glidden typewriter and sold to E. Remington and Sons in 1873. It became popular with the success of the Remington No. 2 of 1878, and remains in ubiquitous use. History The QWERTY layout was devised and created in the early 1870s by Christopher Latham Sholes, a newspaper editor and printer who lived in Kenosha, Wisconsin. In October 1867, Sholes filed a patent application for his early writing machine he developed with the assistance of his friends Carlos Glidden and Samuel W. Soulé. The first model constructed by Sholes used a piano-like keyboard with two rows of characters arranged alphabetically as shown below:
- 3 5 7 9 N O P Q R S T U V W X Y Z
2 4 6 8 . A B C D E F G H I J K L M
Sholes struggled for the next five years to perfect his invention, making many trial-and-error rearrangements of the original machine's alphabetical key arrangement. The study of bigram (letter-pair) frequency by educator Amos Densmore, brother of the financial backer James Densmore, is believed to have influenced the array of letters, but the contribution was later called into question. Others suggest instead that the letter groupings evolved from telegraph operators' feedback. In November 1868 he changed the arrangement of the latter half of the alphabet, O to Z, right-to-left. In April 1870 he arrived at a four-row, upper case keyboard approaching the modern QWERTY standard, moving six vowel letters, A, E, I, O, U, and Y, to the upper row as follows:
2 3 4 5 6 7 8 9 -
A E I . ? Y U O ,
B C D F G H J K L M
Z X W V T S R Q P N
In 1873 Sholes's backer, James Densmore, successfully sold the manufacturing rights for the Sholes & Glidden Type-Writer to E. Remington and Sons. The keyboard layout was finalized within a few months by Remington's mechanics and was ultimately presented:
2 3 4 5 6 7 8 9 - ,
Q W E . T Y I U O P
Z S D F G H J K L M
A X & C V B N ? ; R
After they purchased the device, Remington made several adjustments, creating a keyboard with essentially the modern QWERTY layout. These adjustments included placing the "R" key in the place previously allotted to the period key. Apocryphal claims that this change was made to let salesmen impress customers by pecking out the brand name "TYPE WRITER QUOTE" from one keyboard row are not formally substantiated. Vestiges of the original alphabetical layout remained in the "home row" sequence DFGHJKL. The modern layout is:
1 2 3 4 5 6 7 8 9 0 - =
Q W E R T Y U I O P [ ] \
A S D F G H J K L ; '
Z X C V B N M , . /
The QWERTY layout became popular with the success of the Remington No. 2 of 1878, the first typewriter to include both upper and lower case letters, using a shift key. One popular but unverified explanation for the QWERTY arrangement is that it was designed to reduce the likelihood of internal clashing of typebars by placing commonly used combinations of letters farther from each other inside the machine. Differences from modern layout Substituting characters The QWERTY layout depicted in Sholes's 1878 patent is slightly different from the modern layout, most notably in the absence of the numerals 0 and 1, with each of the remaining numerals shifted one position to the left of their modern counterparts.
The letter M is located at the end of the third row to the right of the letter L rather than on the fourth row to the right of the N, the letters X and C are reversed, and most punctuation marks are in different positions or are missing entirely. 0 and 1 were omitted to simplify the design and reduce the manufacturing and maintenance costs; they were chosen specifically because they were "redundant" and could be recreated using other keys. Typists who learned on these machines learned the habit of using the uppercase letter I (or lowercase letter L) for the digit one, and the uppercase O for the zero. The 0 key was added and standardized in its modern position early in the history of the typewriter, but the 1 and exclamation point were left off some typewriter keyboards into the 1970s. Combined characters In early designs, some characters were produced by printing two symbols with the carriage in the same position. For instance, the exclamation point, which shares a key with the numeral 1 on post-mechanical keyboards, could be reproduced by using a three-stroke combination of an apostrophe, a backspace, and a period. A semicolon (;) was produced by printing a comma (,) over a colon (:). As the backspace key is slow in simple mechanical typewriters (the carriage was heavy and optimized to move in the opposite direction), a more professional approach was to block the carriage by pressing and holding the space bar while printing all characters that needed to be in a shared position. To make this possible, the carriage was designed to advance forward only after releasing the space bar. In the era of mechanical typewriters, combined characters such as รฉ and รต were created by the use of dead keys for the diacritics (โ€ฒ, ~), which did not move the paper forward. Thus the โ€ฒ and e would be printed at the same location on the paper, creating รฉ. Contemporary alternatives There were no particular technological requirements for the QWERTY layout, since at the time there were ways to make a typewriter without the "up-stroke" typebar mechanism that had required it to be devised. Not only were there rival machines with "down-stroke" and "frontstroke" positions that gave a visible printing point, the problem of typebar clashes could be circumvented completely: examples include Thomas Edison's 1872 electric print-wheel device which later became the basis for Teletype machines; Lucien Stephen Crandall's typewriter (the second to come onto the American market) whose type was arranged on a cylindrical sleeve; the Hammond typewriter of 1887 which used a semi-circular "type-shuttle" of hardened rubber (later light metal); and the Blickensderfer typewriter of 1893 which used a type wheel. The early Blickensderfer's "Ideal" keyboard was also non-QWERTY, instead having the sequence "DHIATENSOR" in the home row, these 10 letters being capable of composing 70% of the words in the English language. Properties Alternating hands while typing is a desirable trait in a keyboard design. While one hand types a letter, the other hand can prepare to type the next letter, making the process faster and more efficient. In the QWERTY layout many more words can be spelled using only the left hand than the right hand. In fact, thousands of English words can be spelled using only the left hand, while only a couple of hundred words can be typed using only the right hand (the three most frequent letters in the English language, ETA, are all typed with the left hand). 
In addition, more typing strokes are done with the left hand in the QWERTY layout. This is helpful for left-handed people but disadvantageous for right-handed people. Contrary to popular belief, the QWERTY layout was not designed to slow the typist down, but rather to speed up typing. Indeed, there is evidence that, aside from the issue of jamming, placing often-used keys farther apart increases typing speed, because it encourages alternation between the hands. (On the other hand, in the German keyboard the Z has been moved between the T and the U to help type the frequent digraphs TZ and ZU in that language.) Almost every word in the English language contains at least one vowel letter, but on the QWERTY keyboard only the vowel letter "A" is on the home row, which requires the typist's fingers to leave the home row for most words. A feature much less commented-on than the order of the keys is that the keys do not form a rectangular grid, but rather each column slants diagonally. This is because of the mechanical linkages โ€“ each key is attached to a lever, and hence the offset prevents the levers from running into each other โ€“ and has been retained in most electronic keyboards. Some keyboards, such as the Kinesis or TypeMatrix, retain the QWERTY layout but arrange the keys in vertical columns, to reduce unnecessary lateral finger motion. Computer keyboards The first computer terminals such as the Teletype were typewriters that could produce and be controlled by various computer codes. These used the QWERTY layouts and added keys such as escape (ESC) which had special meanings to computers. Later keyboards added function keys and arrow keys. Since the standardization of PC-compatible computers and Windows after the 1980s, most full-sized computer keyboards have followed this standard (see drawing at right). This layout has a separate numeric keypad for data entry at the right, 12 function keys across the top, and a cursor section to the right and center with keys for Insert, Delete, Home, End, Page Up, and Page Down with cursor arrows in an inverted-T shape. Diacritical marks QWERTY was designed for English, a language with accents ('diacritics') appearing only in a few words of foreign origin. The standard US keyboard has no provision for these at all; the need was later met by the so called "US-International" keyboard mapping, which uses "dead keys" to type accents without having to add more physical keys. (The same principle is used in the standard US keyboard layout for MacOS, but in a different way.) Most European (including UK) keyboards for PCs have an AltGr key ('Alternative Graphics' key, replaces the right Alt key) that enables easy access to the most common diacritics used in the territory where sold. For example, default keyboard mapping for the UK/Ireland keyboard has the diacritics used in Irish but these are rarely printed on the keys; but to type the accents used in Welsh and Scots Gaelic requires the use of a "UK Extended" keyboard mapping and the dead key or compose key method. This arrangement applies to Windows, ChromeOS and Linux; MacOS computers have different techniques. The US International and UK Extended mappings provide many of the diacritics needed for students of other European languages. Other keys and characters Specific language variants Minor changes to the arrangement are made for other languages. There are a large number of different keyboard layouts used for different languages written in Latin script. 
They can be divided into three main families according to where the , , , , and keys are placed on the keyboard. These are usually named after the first six letters, for example this QWERTY layout and the AZERTY layout. In this section you will also find keyboard layouts that include some additional symbols of other languages. But they are different from layouts that were designed with the goal to be usable for multiple languages (see Multilingual variants). The following sections give general descriptions of QWERTY keyboard variants along with details specific to certain operating systems. The emphasis is on Microsoft Windows. English Canada English-speaking Canadians have traditionally used the same keyboard layout as in the United States, unless they are in a position where they have to write French on a regular basis. French-speaking Canadians respectively have favoured the Canadian French keyboard layout (see French (Canada), below). The CSA keyboard is the official multilingual keyboard layout of Canada. United Kingdom The United Kingdom and Ireland use a keyboard layout based on the 48-key version defined in the (now withdrawn) British Standard BS 4822. It is very similar to that of the United States, but has an AltGr key and a larger Enter key, includes ยฃ and โ‚ฌ signs and some rarely used EBCDIC symbols (ยฌ, ยฆ), and uses different positions for the characters @, ", #, ~, \, and |. The BS 4822:1994 standard did not make any use of the AltGr key and lacked support for any non-ASCII characters other than ยฌ and ยฃ. It also assigned a key for the non-ASCII character broken bar (ยฆ), but lacked one for the far more commonly used ASCII character vertical bar (|). It also lacked support for various diacritics used in the Welsh alphabet, and the Scottish Gaelic alphabet; and also is missing the letter yogh, ศ, used very rarely in the Scots language. Therefore, various manufacturers have modified or extended the BS 4822 standard: The B00 key (left of Z), shifted, results in vertical bar (|) on some systems (e.g. Windows UK/Ireland keyboard layout and Linux/X11 UK/Ireland keyboard layout), rather than the broken bar (ยฆ) assigned by BS 4822 and provided in some systems (e.g. IBM OS/2 UK166 keyboard layout) The E00 key (left of 1) with AltGr provides either vertical bar (|) (OS/2's UK166 keyboard layout, Linux/X11 UK keyboard layout) or broken bar (ยฆ) (Microsoft Windows UK/Ireland keyboard layout) Support for the diacritics needed for Scots Gaelic and Welsh was added to Windows and Chrome OS using a "UK-extended" setting (see below); Linux and X-Windows systems have an explicit or redesignated compose key for this purpose. UK Apple keyboard The British version of the Apple Keyboard does not use the standard UK layout. Instead, some older versions have the US layout (see below) with a few differences: the sign is reached by and the sign by , the opposite to the US layout. The is also present and is typed with . Umlauts are reached by typing and then the vowel, and รŸ is reached by typing . Newer Apple "British" keyboards use a layout that is relatively unlike either the US or traditional UK keyboard. It uses an elongated return key, a shortened left with and in the newly created position, and in the upper left of the keyboard are and instead of the traditional EBCDIC codes. The middle-row key that fits inside the key has and . 
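The dead-key mechanism mentioned above (in the US-International and UK-Extended mappings, and in the Apple Option-key sequences) amounts to combining a base letter with a diacritic after the fact. A rough software analogue, offered only as an illustrative sketch and not as a description of any operating system's keyboard driver, is to append a Unicode combining mark to the base letter and normalise the result:

```python
import unicodedata

# Combining marks a dead key might apply (Unicode "combining" code points).
DEAD_KEYS = {
    "acute":      "\u0301",
    "grave":      "\u0300",
    "circumflex": "\u0302",
    "diaeresis":  "\u0308",
    "tilde":      "\u0303",
}

def compose(dead_key, base_letter):
    """Simulate dead key + letter: return the precomposed character if one exists."""
    return unicodedata.normalize("NFC", base_letter + DEAD_KEYS[dead_key])

print(compose("acute", "e"))       # é  - typical US-International dead-key result
print(compose("grave", "a"))       # à  - grave accent, as used in Scottish Gaelic
print(compose("diaeresis", "u"))   # ü  - umlaut via a dead key
print(compose("circumflex", "w"))  # ŵ  - circumflex needed for Welsh
```

If no precomposed character exists for a given combination, NFC simply leaves the base letter followed by the combining mark, which is also how some systems represent such sequences internally.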
United States The arrangement of the character input keys and the Shift keys contained in this layout is specified in the US national standard ANSI-INCITS 154-1988 (R1999) (formerly ANSI X3.154-1988 (R1999)), where this layout is called "ASCII keyboard". The complete US keyboard layout, as it is usually found, also contains the usual function keys in accordance with the international standard ISO/IEC 9995-2, although this is not explicitly required by the US American national standard. US keyboards are used not only in the United States, but also in many other English-speaking places, (except UK and Ireland), including India, Australia, Anglophone Canada, Hong Kong, New Zealand, South Africa, Malaysia, Singapore, Philippines, and Indonesia that uses the same 26-letter alphabets as English. In many other English-speaking jurisdictions (e.g., Canada, Australia, the Caribbean nations, Hong Kong, Malaysia, India, Pakistan, Bangladesh, Singapore, New Zealand, and South Africa), local spelling sometimes conforms more closely to British English usage, although these nations decided to use a US English keyboard layout. Until Windows 8 and later versions, when Microsoft separated the settings, this had the undesirable side effect of also setting the language to US English, rather than the local orthography. The US keyboard layout has a second Alt key instead of the AltGr key and does not use any dead keys; this makes it inefficient for all but a handful of languages. On the other hand, the US keyboard layout (or the similar UK layout) is occasionally used by programmers in countries where the keys for []{} are located in less convenient positions on the locally customary layout. On some keyboards the enter key is bigger than traditionally and takes up also a part of the line above, more or less the area of the traditional location of the backslash key (\). In these cases the backslash is located in alternative places. It can be situated one line above the default location, on the right of the equals sign key (=). Sometimes it is placed one line below its traditional situation, on the right of the apostrophe key (') (in these cases the enter key is narrower than usual on the line of its default location). It may also be two lines below its default situation on the right of a narrower than traditionally right shift key. A variant of this layout is used in Arabic-speaking countries. This variant has the | \ key to the left of Z, ~ ` key where the | \ key is in the usual layout, and the > < key where the ~ ` key is in the usual layout. Czech The typewriter came to the Czech-speaking area in the late 19th century, when it was part of Austria-Hungary where German was the dominant language of administration. Therefore, Czech typewriters have the QWERTZ layout. However, with the introduction of imported computers, especially since the 1990s, the QWERTY keyboard layout is frequently used for computer keyboards. The Czech QWERTY layout differs from QWERTZ in that the characters (e.g. @$& and others) missing from the Czech keyboard are accessible with AltGr on the same keys where they are located on an American keyboard. In Czech QWERTZ keyboards the positions of these characters accessed through AltGr differs. Danish Both the Danish and Norwegian keyboards include dedicated keys for the letters ร…/รฅ, ร†/รฆ and ร˜/รธ, but the placement is a little different, as the and keys are swapped on the Norwegian layout. 
(The Finnishโ€“Swedish keyboard is also largely similar to the Norwegian layout, but the and are replaced with and . On some systems, the Danish keyboard may allow typing ร–/รถ and ร„/รค by holding the or key while striking and , respectively.) Computers with Windows are commonly sold with ร–ร˜ร† and ร„ร†ร˜ printed on the two keys, allowing same computer hardware to be sold in Denmark, Finland, Norway and Sweden, with different operating system settings. Dutch (Netherlands) Though it is seldom used (most Dutch keyboards use US International layout), the Dutch layout uses QWERTY but has additions for the โ‚ฌ sign, the diaresis (ยจ), and the braces ({ }) as well as different locations for other symbols. An older version contained a single-stroke key for the Dutch character IJ/ij, which is usually typed by the combination of and . In the 1990s, there was a version with the now-obsolete florin sign (Dutch: guldenteken) for IBM PCs. In Flanders (the Dutch-speaking part of Belgium), "AZERTY" keyboards are used instead, due to influence from the French-speaking part of Belgium. See also #US-International in the Netherlands below. Estonian The keyboard layout used in Estonia is virtually the same as the Swedish layout. The main difference is that the and keys (to the right of ) are replaced with and respectively (the latter letter being the most distinguishing feature of the Estonian alphabet). Some special symbols and dead keys are also moved around. Faroese The same as the Danish layout with added (Eth), since the Faroe Islands are a self-governed part of the Kingdom of Denmark. French (Canada) This keyboard layout is commonly used in Canada by French-speaking Canadians. It is the most common layout for laptops and stand-alone keyboards aimed at the Francophone market. Unlike the AZERTY layout used in France and Belgium, it is a QWERTY layout and as such is also relatively commonly used by English speakers in the US and Canada (accustomed to using US standard QWERTY keyboards) for easy access to the accented letters found in some French loanwords. It can be used to type all accented French characters, as well as some from other languages, and serves all English functions as well. It is popular mainly because of its close similarity to the basic US keyboard commonly used by English-speaking Canadians and Americans, historical use of US-made typewriters by French-Canadians, and is the standard for keyboards in Quebec. It can also easily 'map' to or from a standard US QWERTY keyboard with the sole loss the guillemet/degree sign key. Its significant difference from the US standard is that the right Alt key is reconfigured as an AltGr key that gives easy access to a further range of characters (marked in blue and red on the keyboard image. Blue indicates an alternative character that will display as typed. Red indicates a dead key: the diacritic will be applied to the next vowel typed.) In some variants, the key names are translated to French: is or (short for Fixer/Verrouiller Majuscule, meaning Lock Uppercase). is . is . Greek The stress accents, indicated in red, are produced by pressing that key (or shifted key) followed by an appropriate vowel. Use of the "AltGr" key may produce the characters shown in blue. German Germany, Austria, Switzerland, Liechtenstein, and Luxembourg use QWERTZ layouts, where the letter Z is to the right of T. 
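As noted above, the QWERTZ layouts used in the German-speaking countries differ from QWERTY chiefly in that Z and Y trade places. The tiny sketch below, whose function name and sample strings are invented for the example, shows what happens when text is typed on one physical layout while the system expects the other:

```python
# Swap Y and Z (both cases) to mimic typing on a QWERTZ keyboard
# while the computer is set to a QWERTY mapping, or vice versa.
QWERTZ_SWAP = str.maketrans("yzYZ", "zyZY")

def swap_yz(text):
    return text.translate(QWERTZ_SWAP)

print(swap_yz("Zeitung"))     # -> "Yeitung": a QWERTZ typist's keystrokes under a QWERTY mapping
print(swap_yz("lazy zebra"))  # -> "layz yebra"
```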
Icelandic The Icelandic keyboard layout is different from the standard QWERTY keyboard because the Icelandic alphabet has some special letters, most of which it shares with the other Nordic countries: รž/รพ, ร/รฐ, ร†/รฆ, and ร–/รถ. (ร†/รฆ also occurs in Norwegian, Danish and Faroese, ร/รฐ in Faroese, and ร–/รถ in Swedish, Finnish and Estonian. In Norwegian ร–/รถ could be substituted for ร˜/ร˜ which is the same sound/letter and is widely understood). The letters ร/รก, ร/รฝ, รš/รบ, ร/รญ, and ร‰/รฉ are produced by first pressing the dead key and then the corresponding letter. The Nordic letters ร…/รฅ and ร„/รค can be produced by first pressing , located below the key, and (for ยจ) which also works for the non-Nordic รฟ, รœ/รผ, ร/รฏ, and ร‹/รซ. These letters are not used natively in Icelandic, but may have been implemented for ease of communication in other Nordic languages. Additional diacritics may be found behind the key: for ห‹ (grave accent) and for ห† (circumflex). Irish Microsoft Windows includes an Irish layout which supports acute accents with for the Irish language and grave accents with the dead key for Scottish Gaelic. The other Insular Celtic languages have their own layout. The UK or UK-Extended layout is also frequently used. Italian Braces (right above square brackets and shown in purple) are given with both AltGr and Shift pressed. The tilde (~) and backquote (`) characters are not present on the Italian keyboard layout (with Linux, they are available by pressing ++, and ++; Windows might not recognise these keybindings). When using Microsoft Windows, the standard Italian keyboard layout does not allow one to write 100% correct Italian language, since it lacks capital accented vowels, and in particular the รˆ key. The common workaround is writing E' (E followed by an apostrophe) instead, or relying on the auto-correction feature of several word processors when available. It is possible to obtain the รˆ symbol in MS Windows by typing + . Mac users, however, can write the correct accented character by pressing + + or, in the usual Mac way, by pressing the correct key for the accent (in this case + ) and subsequently pressing the wanted letter (in this case + ). Linux users can also write it by pressing the key with enabled. There is an alternate layout, which differs only in disposition of characters accessible through , and includes the tilde and the curly brackets. It is commonly used in IBM keyboards. Italian typewriters often have the QZERTY layout instead. The Italian-speaking part of Switzerland uses the QWERTZ keyboard. Latvian Although rarely used, a keyboard layout specifically designed for the Latvian language called ลชGJRMV exists. The Latvian QWERTY keyboard layout is most commonly used; its layout is the same as Latin ones, but with a dead key, which allows entering special characters (ฤฤฤ“ฤฃฤซฤทฤผล†รตล—ลกลซลพ). The most common dead key is the apostrophe ('), which is followed by Alt+Gr (Windows default for Latvian layout). Some prefer using the tick (`). Lithuanian Where in standard QWERTY the number row is located, you find in Lithuanian QWERTY: ฤ„, ฤŒ, ฤ˜, ฤ–, ฤฎ, ล , ลฒ, ลช, ลฝ, instead of their counterparts 1, 2, 3, 4, 5, 6, 7, 8, =. If you still want to use the numbers of the mentioned 'number row', you can create them in combination with the -key. Aside from these changes the keyboard is standard QWERTY. Besides QWERTY, the ฤ„ลฝERTY layout without the adjustment of the number row is used. 
Maltese The Maltese language uses Unicode (UTF-8) to display the Maltese diacritics: ฤ‹ ฤŠ; ฤก ฤ ; ฤง ฤฆ; ลผ ลป (together with ร  ร€; รจ รˆ; รฌ รŒ; รฒ ร’; รน ร™). There are two standard keyboard layouts for Maltese, according to "MSA 100:2002 Maltese Keyboard Standard"; one of 47 keys and one of 48 keys. The 48-key layout is the most popular. Norwegian The Norwegian languages use the same letters as Danish, but the Norwegian keyboard differs from the Danish layout regarding the placement of the , and (backslash) keys. On the Danish keyboard, the and are swapped. The Swedish keyboard is also similar to the Norwegian layout, but and are replaced with and . On some systems, the Norwegian keyboard may allow typing ร–/รถ and ร„/รค by holding the or key while striking and , respectively. There is also an alternative keyboard layout called Norwegian with Sรกmi, which allows for easier input of the characters required to write various Sรกmi languages. All the Sรกmi characters are accessed through the key. On Macintosh computers, the Norwegian and Norwegian extended keyboard layouts have a slightly different placement for some of the symbols obtained with the help of the or keys. Notably, the $ sign is accessed with and ยข with . Furthermore, the frequently used @ is placed between and . Polish Most typewriters use a QWERTZ keyboard with Polish letters (with diacritical marks) accessed directly (officially approved as "Typist's keyboard", , Polish Standard PN-87), which is mainly ignored in Poland as impractical (custom-made keyboards, e.g., those in the public sector as well as some Apple computers, present an exception to this paradigm); the "Polish programmer's" () layout has become the de facto standard, used on virtually all computers sold on the Polish market. Most computer keyboards in Poland are laid out according to the standard US visual and functional layout. Polish diacritics are accessed by using the AltGr key with a corresponding similar letter from the base Latin alphabet. Normal capitalization rules apply with respect to Shift and Caps Lock keys. For example, to enter "ลน", one can type Shift+AltGr+X with Caps Lock off, or turn on Caps Lock and type AltGr+X. Both ANSI and ISO mechanical layouts are common sights, and even some non-standard mechanical layouts are in use. ANSI is often preferred, as the additional key provides no additional function, at least in Microsoft Windows where it duplicates the backslash key, while taking space from the Shift key. Many keyboards do not label AltGr as such, leaving the Alt marking as in the US layout - the right Alt key nevertheless functions as AltGr in this layout, causing possible confusion when keyboard shortcuts with the Alt key are required (these usually work only with the left Alt) and causing the key to be commonly referred to as right Alt (). However, keyboards with AltGr marking are available and it is also officially used by Microsoft when depicting the layout. Also, on MS Windows, the tilde character "~" (Shift+`) acts as a dead key to type Polish letters (with diacritical marks) thus, to obtain an "ล", one may press Shift+` followed by L. The tilde character is obtained with (Shift+`) then space. In Linux-based systems, the euro symbol is typically mapped to Alt+5 instead of Alt+U, the tilde acts as a normal key, and several accented letters from other European languages are accessible through combinations with left Alt. Polish letters are also accessible by using the compose key. 
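The Polish programmer's layout described above maps each diacritic letter onto AltGr plus a visually similar base letter, with ź placed on AltGr+X because AltGr+Z is already taken by ż. A small sketch of that mapping follows; it restates the convention described in the text as a lookup table and is not an excerpt from any keyboard driver:

```python
# "Polish programmer's" layout: AltGr + base letter -> letter with diacritic.
ALTGR_MAP = {
    "a": "ą", "c": "ć", "e": "ę", "l": "ł", "n": "ń",
    "o": "ó", "s": "ś", "z": "ż",
    "x": "ź",  # ź sits on X because Z already yields ż
}

def altgr(ch, shift=False):
    """Return the character produced by AltGr (+Shift) + ch, or ch itself if unmapped."""
    out = ALTGR_MAP.get(ch.lower(), ch)
    return out.upper() if shift or ch.isupper() else out

print(altgr("z"))               # ż
print(altgr("x", shift=True))   # Ź  (Shift+AltGr+X, as in the example above)
```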
Software keyboards on touchscreen devices usually make the Polish diacritics available as one of the alternatives which show up after long-pressing the corresponding Latin letter. However, modern predictive text and autocorrection algorithms largely mitigate the need to type them directly on such devices. There has also been an unofficial expanded Polish keyboard layout since 2021, based on the layout of the Polish Mazovia computers of the 1980s and greatly expanded to cover all Latin diacritical signs, Greek signs, mathematical signs, IPA signs, typographical signs and symbols, and the "zł" sign for the Polish currency, much like the current German expanded layout E1; it is available in two versions, QWERTZ and QWERTY. Portuguese Brazil The Brazilian computer keyboard layout is specified in the ABNT NBR 10346 variant 2 (alphanumeric portion) and 10347 (numeric portion) standards. Essentially, the Brazilian keyboard contains dead keys for five variants of diacritics in use in the language; the letter Ç, the only application of the cedilla in Portuguese, has its own key. In some keyboard layouts the + combination produces the ₢ character (Unicode 0x20A2), the symbol for the old currency cruzeiro, a symbol that is not used in practice (the common abbreviation in the eighties and nineties used to be Cr$). The cent sign ¢ is accessible via +, but is not commonly used for the centavo, the subunit of previous currencies as well as of the current real, which itself is represented by R$. The euro sign € is not standardized in this layout. The masculine and feminine ordinals ª and º are accessible via combinations. The section sign § (Unicode U+00A7), in Portuguese called parágrafo, is nowadays practically only used to denote sections of laws. Variant 2 of the Brazilian keyboard, the only one which gained general acceptance (MS Windows treats both variants as the same layout), has a unique mechanical layout, combining some features of the ISO 9995-3 and the JIS keyboards in order to fit 12 keys between the left and right Shift keys (compared to the American standard of 10 and the international standard of 11). Its modern, IBM PS/2-based variations are thus known as 107-key keyboards, while the original PS/2 variation had 104 keys. Variant 1, never widely adopted, was based on the ISO 9995-2 keyboards. To make this layout usable with keyboards that have only 11 keys in the last row, the rightmost key (/?°) has its functions replicated across the +, +, and + combinations. Portugal Essentially, the Portuguese keyboard contains dead keys for five variants of diacritics; the letter Ç, the only application of the cedilla in Portuguese, has its own key, but there is also a dedicated key for the ordinal indicators and another for quotation marks. The + combination for producing the euro sign € (Unicode 0x20AC) has become standard. On some QWERTY keyboards the key labels are translated, but the majority are labelled in English. During the 20th century, a different keyboard layout, HCESAR, was in widespread use in Portugal. Romanian (in Romania and Moldova) The current Romanian National Standard SR 13392:2004 establishes two layouts for Romanian keyboards: a "primary" one and a "secondary" one. The "primary" layout is intended for traditional users who have learned how to type with older, Microsoft-style implementations of the Romanian keyboard. The "secondary" layout is mainly used by programmers as it does not contradict the physical arrangement of keys on a US-style keyboard.
The "secondary" arrangement is used as the default Romanian layout by Linux distributions, as defined in the "X Keyboard Configuration Database". There are four Romanian-specific characters that are incorrectly implemented in versions of Microsoft Windows until Vista came out: ศ˜ (U+0218, S with comma), incorrectly implemented as ลž (U+015E, S with cedilla) ศ™ (U+0219, s with comma), incorrectly implemented as ลŸ (U+015F, s with cedilla) ศš (U+021A, T with comma), incorrectly implemented as ลข (U+0162, T with cedilla) ศ› (U+021B, t with comma), incorrectly implemented as ลฃ (U+0163, t with cedilla) The cedilla-versions of the characters do not exist in the Romanian language (they came to be used due to a historic bug). The UCS now says that encoding this was a mistake because it messed up Romanian data and the letters with cedilla and the letters with comma are the same letter with a different style. Since Romanian hardware keyboards are not widely available, Cristian Secarฤƒ has created a driver that allows Romanian characters to be generated with a US-style keyboard in all versions of Windows prior to Vista through the use of the AltGr key modifier. Windows Vista and newer versions include the correct diacritical signs in the default Romanian Keyboard layout. This layout has the Z and Y keys mapped like in English layouts and also includes characters like the 'at' (@) and dollar ($) signs, among others. The older cedilla-version layout is still included albeit as the 'Legacy' layout. Slovak In Slovakia, similarly to the Czech Republic, both QWERTZ and QWERTY keyboard layouts are used. QWERTZ is the default keyboard layout for Slovak in Microsoft Windows. Spanish Spain The Spanish keyboard layout is used to write in Spanish and in other languages of Spain such as Catalan, Basque, Galician, Aragonese, Asturian and Occitan. It includes ร‘ for Spanish, Asturian and Galician, the acute accent, the diaeresis, the inverted question and exclamation marks (ยฟ, ยก), the superscripted o and a (ยบ, ยช) for writing abbreviated ordinal numbers in masculine and feminine in Spanish and Galician, and finally, some characters required only for typing Catalan and Occitan, namely ร‡, the grave accent and the interpunct ( / , used in lยทl, nยทh, sยทh; located at Shift-3). It can also be used to write other international characters, such as those using a circumflex accent (used in French and Portuguese among others) or a tilde (used in both Spanish and Portuguese), which are available as dead keys. However, it lacks two characters used in Asturian: แธค and แธถ (historically, general support for these two has been poor โ€“ they aren't present in the ISO 8859-1 character encoding standard, or any other ISO/IEC 8859 standard). Several alternative distributions, based on this one or created from scratch, have been created to address this issue (see the Other original layouts and layout design software section for more information). On most keyboards, โ‚ฌ is marked as Alt Gr + E and not Alt Gr + 5 as shown in the image. However, in some keyboards, โ‚ฌ is found marked twice. An alternative version exists, supporting all of ISO 8859-1. Spanish keyboards are usually labelled in Spanish instead of English, its abbreviations being: On some keyboards, the c-cedilla key (ร‡) is located one or two lines above, rather than on the right of, the acute accent key (ยด). 
In some cases it is placed on the right of the plus sign key (+), while in other keyboards it is situated on the right of the inverted exclamation mark key (ยก). Latin America, officially known as Spanish Latinamerican sort The Latin American Spanish keyboard layout is used throughout Mexico, Central and South America. Before its design, Latin American vendors had been selling the Spanish (Spain) layout as default. Its most obvious difference from the Spanish (Spain) layout is the lack of a ร‡ key; on Microsoft Windows it lacks a tilde (~) dead key, whereas on Linux systems the dead tilde can be optionally enabled. This is not a problem when typing in Spanish, but it is rather problematic when typing in Portuguese, which can be an issue in countries with large commercial ties to Brazil (Argentina, Uruguay and Paraguay). Normally "Bloq Mayรบs" is used instead of "Caps Lock", and "Intro" instead of "Enter". Swedish The central characteristics of the Swedish keyboard are the three additional letters ร…/รฅ, ร„/รค, and ร–/รถ. The same visual layout is also in use in Finland and Estonia, as the letters ร„/รค and ร–/รถ are shared with the Swedish language, and even ร…/รฅ is needed by Swedish-speaking Finns. However, the Finnish multilingual keyboard adds new letters and punctuation to the functional layout. The Norwegian keyboard largely resembles the Swedish layout, but the and are replaced with and . The Danish keyboard is also similar, but it has the and swapped. On some systems, the Swedish or Finnish keyboard may allow typing ร˜/รธ and ร†/รฆ by holding the or key while striking and , respectively. The Swedish with Sรกmi keyboard allows typing not only ร˜/รธ and ร†/รฆ, but even the letters required to write various Sรกmi languages. This keyboard has the same function for all the keys engraved on the regular Swedish keyboard, and the additional letters are available through the key. On Macintosh computers, the Swedish and Swedish Pro keyboards differ somewhat from the image shown above, especially as regards the characters available using the or keys. (on the upper row) produces the ยฐ sign, and produces the โ‚ฌ sign. The digit keys produce ยฉ@ยฃ$โˆžยง|[]โ‰ˆ with and ยก"ยฅยขโ€ฐยถ\{}โ‰  with . On Linux systems, the Swedish keyboard may also give access to additional characters as follows: first row: ยถยก@ยฃ$โ‚ฌยฅ{[]}\ยฑ and ยพยนยฒยณยผยขโ…รทยซยปยฐยฟยฌ second row: @ล‚โ‚ฌยฎรพโ†โ†“โ†’ล“รพ"~ and ฮฉลยขยฎรžยฅโ†‘ฤฑล’รžยฐห‡ third row: ยชรŸรฐฤ‘ล‹ฤงjฤธล‚รธรฆยด and ยบยงรยชลŠฤฆJ&ลร˜ร†ร— fourth row: |ยซยปยฉ""nยตยธยทฬฃ and ยฆ<>ยฉโ€˜โ€™Nยบห›ห™ห™ Several of these characters function as dead keys. Turkish Today the majority of Turkish keyboards are based on QWERTY (the so-called Q-keyboard layout), although there is also the older F-keyboard layout specifically designed for the language. Vietnamese The Vietnamese keyboard layout is an extended Latin QWERTY layout. The letters ฤ‚, ร‚, รŠ, and ร” are found on what would be the number keys โ€“ on the US English keyboard, with โ€“ producing the tonal marks (grave accent, hook, tilde, acute accent and dot below, in that order), producing ฤ, producing the ฤ‘แป“ng sign (โ‚ซ) when not shifted, and brackets () producing ฦฏ and ฦ . Multilingual variants Multilingual keyboard layouts, unlike the default layouts supplied for one language and market, try to make it possible for the user to type in any of several languages using the same number of keys. 
Mostly this is done by adding a further virtual layer in addition to the -key by means of (or 'right ' reused as such), which contains a further repertoire of symbols and diacritics used by the desired languages. This section also tries to arrange the layouts in ascending order by the number of possible languages and not chronologically according to the Latin alphabet as usual. United Kingdom (Extended) Layout Windows From Windows XP SP2 onwards, Microsoft has included a variant of the British QWERTY keyboard (the "United Kingdom Extended" keyboard layout) that can additionally generate several diacritical marks. This supports input on a standard physical UK keyboard for many languages without changing positions of frequently used keys, which is useful when working with text in Welsh, Scottish Gaelic and Irish โ€” languages native to parts of the UK (Wales, parts of Scotland and Northern Ireland respectively). In this layout, the grave accent key () becomes, as it also does in the US International layout, a dead key modifying the character generated by the next key pressed. The apostrophe, double-quote, tilde and circumflex (caret) keys are not changed, becoming dead keys only when 'shifted' with . Additional precomposed characters are also obtained by shifting the 'normal' key using the key. The extended keyboard is software installed from the Windows control panel, and the extended characters are not normally engraved on keyboards. The UK Extended keyboard uses mostly the AltGr key to add diacritics to the letters a, e, i, n, o, u, w and y (the last two being used in Welsh) as appropriate for each character, as well as to their capitals. Pressing the key and then a character that does not take the specific diacritic produces the behaviour of a standard keyboard. The key presses followed by spacebar generate a stand-alone mark.: grave accents (e.g. ร , รจ, etc.) needed for Scots Gaelic are generated by pressing the grave accent (or 'backtick') key , which is a dead key, then the letter. Thus produces ร . acute accents (e.g. รก) needed for Irish are generated by pressing the key together with the letter (or acting as a dead key combination followed by the letter). Thus produces รก; produces ร. (Some programs use the combination of and a letter for other functions, in which case the method must be used to generate acute accents). the circumflex diacritic needed for Welsh may be added by , acting as a dead key combination, followed by the letter. Thus then produces รข, then produces the letter ลต. Some other languages commonly studied in the UK and Ireland are also supported to some extent: diaeresis or umlaut (e.g. รค, รซ, รถ, etc.) is generated by a dead key combination , then the letter. Thus produces รค. tilde (e.g. รฃ, รฑ, รต, etc., as used in Spanish and Portuguese) is generated by dead key combination , then the letter. Thus produces รฃ. cedilla (e.g. รง) under c is generated by , and the capital letter (ร‡) is produced by The and letter method used for acutes and cedillas does not work for applications which assign shortcut menu functions to these key combinations. These combinations are intended to be mnemonic and designed to be easy to remember: the circumflex accent (e.g. รข) is similar to the free-standing circumflex (caret) (^), printed above the key; the diaeresis/umlaut (e.g. รถ) is visually similar to the double-quote (") above on the UK keyboard; the tilde (~) is printed on the same key as the . 
The UK Extended layout is almost entirely transparent to users familiar with the UK layout. A machine with the extended layout behaves exactly as with the standard UK, except for the rarely used grave accent key. This makes this layout suitable for a machine for shared or public use by a user population in which some use the extended functions. Despite being created for multilingual users, UK-Extended in Windows does have some gaps โ€” there are many languages that it cannot cope with, including Romanian and Turkish, and all languages with different character sets, such as Greek and Russian. It also does not cater for thorn (รพ, รž) in Old English, the รŸ in German, the ล“ in French, nor for the รฅ, รฆ, รธ, รฐ, รพ in Nordic languages. Chrome OS The UK Extended layout (a Chrome OS extension) provides all the same combinations as with Windows, but adds many more symbols and dead keys via AltGr. Notes: Dotted circle (โ—Œ) is used here to indicate a dead key. The key is the only one that acts as a free-standing dead key and thus does not respond as shown on the key-cap. All others are invoked by AltGr. (ยฐ) is a degree sign; (ยบ) is a masculine ordinal indicator Dead keys produces grave accents (e.g., ) ( produces a standalone grave sign). (release) produces diaeresis accents (e.g., ) (release) produces circumflex accents (e.g., ) (release) produces (mainly) comma diacritic or cedilla below the letter e.g., (release) produces a hook (diacritic) on vowels (e.g., ) AltGr+[ same as AltGr+2 AltGr+] same as AltGr+# (release) produces overrings (e.g., ) (release) produces macrons (e.g., ) (release) produces mainly horn (diacritic)s (e.g., ) (release) produces an adjacent horn (e.g., ) (release) produces acute accents (e.g., ) (release) produces double acute accents on some letters (e.g., ) that exist in Unicode as pre-composed characters (release) produces acute accents (e.g., ) (release) produces caron (haฤek) diacritics (e.g., ) (release) produces tilde diacritics (e.g., ) (release) produces inverted breve diacritics (e.g., ) (release) produces mainly underdots (e.g., ) (release) produces mainly overdots (e.g., ) Finally, any arbitrary Unicode glyph can be produced given its hexadecimal code point: , release, then the hex value, then or . For example (release) produces the Ethiopic syllable SEE, แˆด. US-International Windows and Linux An alternative layout uses the physical US keyboard to type diacritics in some operating systems (including Windows). This is the US-International layout setting, which uses the right key as an key to support many additional characters directly as an additional shift key. (Since many smaller keyboards don't have a right- key, Windows also allows + to be used as a substitute for .) This layout also uses keys , , , and as dead keys to generate characters with diacritics by pressing the appropriate key, then the letter on the keyboard. The international keyboard is a software setting installed from the Windows control panel or similar; the additional functions (shown in blue) may or may not be engraved on the keyboard, but are always functional. It can be used to type most major languages from Western Europe: Afrikaans, Danish, Dutch, English, Faroese, Finnish, French, German, Icelandic, Irish, Italian, Norwegian, Portuguese, Scottish Gaelic, Spanish, and Swedish. 
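One detail of the Chrome OS extension described above is worth spelling out: entering an arbitrary Unicode glyph as hexadecimal digits is simply a base-16 conversion followed by a code-point-to-character lookup. A minimal illustration (the example values follow the article's own ሴ example and the euro sign):

def hex_entry(hex_digits: str) -> str:
    """Turn a hexadecimal Unicode code point, as typed after the dead key,
    into the corresponding character."""
    return chr(int(hex_digits, 16))

print(hex_entry("1234"))  # ሴ, the Ethiopic syllable SEE mentioned above
print(hex_entry("20AC"))  # €, the euro sign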
Some less common western and central European languages (such as Welsh, Maltese, Czech and Hungarian), are not fully supported by the US-International keyboard layout because of their use of additional diacritics or precomposed characters. A diacritic key is activated by pressing and releasing it, then pressing the letter that requires the diacritic. After the two strokes, the single character with diacritics is generated. Note that only certain letters, such as vowels and "n", can have diacritics in this way. To generate the symbols ', `, ", ^ and ~, when the following character is capable of having a diacritic, press the after the key. Characters with diacritics can be typed with the following combinations: + vowel โ†’ vowel with acute accent, e.g., โ†’ รฉ + vowel โ†’ vowel with grave accent, e.g., โ†’ รจ + vowel โ†’ vowel with diaeresis (or umlaut), e.g., โ†’ รซ + vowel โ†’ vowel with circumflex accent, e.g., โ†’ รช + , or โ†’ letter with tilde, e.g. โ†’ รฑ, โ†’ รต + โ†’ รง (Windows) or ฤ‡ (X11) The US-International layout is not entirely transparent to users familiar with the conventional US layout; when using a machine with the international layout setting active, the commonly used single- and double-quote keys and the less commonly used grave accent, tilde, and circumflex (caret) keys are dead keys and thus behave unconventionally. This could be disconcerting on a machine for shared or public use. There are also alternative US-International mappings, whereby modifier keys such as shift and alt are used, and the keys for the characters with diacritics are in different places from their unmodified counterparts. For example, the right-Alt key may be remapped as an AltGr modifier key or as a compose key and the dead key function deactivated, so that they (the ASCII quotation marks and circumflex symbol) can be typed normally with a single keystroke. US-International in the Netherlands The standard keyboard layout in the Netherlands is US-International, as it provides easy access to diacritics on common UK- or US-like keyboards. The Dutch layout is historical, and keyboards with this layout are rarely used. Many US keyboards sold do not have the extra US-International characters or engraved on the keys, although the euro sign (using ) always is; nevertheless, the keys work as expected even if not marked. Apple International English Keyboard There are three kinds of Apple Keyboards for English: the United States, the United Kingdom and International English. The International English version features the same changes as the United Kingdom version, only without substituting for the symbol on , and as well lacking visual indication for the symbol on (although this shortcut is present with all Apple QWERTY layouts). Differences from the US layout are: The key is located on the left of the key, and the key is located on the right of the key. The key is added on the left of the key. The left key is shortened and the key has the shape of inverted L. Canadian Multilingual Standard The Canadian Multilingual Standard keyboard layout is used by some Canadians. Though the caret (^) is missing, it is easily inserted by typing the circumflex accent followed by a space. Finnish multilingual The visual layout used in Finland is basically the same as the Swedish layout. 
This is practical, as Finnish and Swedish share the special characters Ä/ä and Ö/ö, and while the Swedish Å/å is unnecessary for writing Finnish, it is needed by Swedish-speaking Finns and for writing Swedish family names, which are common. As of 2008, there is a new standard for the Finnish multilingual keyboard layout, developed as part of a localization project by CSC. All the engravings of the traditional Finnish–Swedish visual layout have been retained, so there is no need to change the hardware, but the functionality has been extended considerably, as additional characters (e.g., Æ/æ, Ə/ə, Ʒ/ʒ) are available through the key, as well as dead keys, which allow typing a wide variety of letters with diacritics (e.g., Ç/ç, Ǥ/ǥ, Ǯ/ǯ). Based on the Latin letter repertory included in the Multilingual European Subset No. 2 (MES-2) of the Unicode standard, the layout has three main objectives. First, it provides for easy entering of text in both Finnish and Swedish, the two official languages of Finland, using the familiar keyboard layout but adding some advanced punctuation options, such as dashes, typographical quotation marks, and the non-breaking space (NBSP). Second, it is designed to offer an indirect but intuitive way to enter the special letters and diacritics needed by the other three Nordic national languages (Danish, Norwegian and Icelandic) as well as the regional and minority languages (Northern Sámi, Southern Sámi, Lule Sámi, Inari Sámi, Skolt Sámi, the Romani language as spoken in Finland, Faroese, Kalaallisut also known as Greenlandic, and German). As a third objective, it allows for relatively easy entry of names in particular (of persons, places or products) in a variety of European languages using a more or less extended Latin alphabet, such as the official languages of the European Union (excluding Bulgarian and Greek). Some letters, like Ł/ł needed for Slavic languages, are accessed by a special "overstrike" key combination acting like a dead key. However, the Romanian letters Ș/ș and Ț/ț (S/s and T/t with comma below) are not supported; the presumption is that Ş/ş and Ţ/ţ (with cedilla) suffice as surrogates. EurKEY EurKEY is a multilingual keyboard layout intended for Europeans, programmers and translators. It uses the US-standard QWERTY layout as its base and adds a third and fourth layer, available through the key and +. These additional layers provide support for many Western European languages, special characters, the Greek alphabet (via dead keys), and many common mathematical symbols. Unlike most of the other QWERTY layouts, which are formal standards for a country or region, EurKEY is not an EU, EFTA or any national standard. To address the ergonomics issues of QWERTY, a Colemak-DH version, EurKEY Colemak-DH, was also developed following the EurKEY design principles. Alternatives Several alternatives to QWERTY have been developed over the years, claimed by their designers and users to be more efficient, intuitive, and ergonomic. Nevertheless, none have seen widespread adoption, partly due to the sheer dominance of available keyboards and training. Although some studies have suggested that some of these may allow for faster typing speeds, many other studies have failed to do so, and many of the studies claiming improved typing speeds were severely methodologically flawed or deliberately biased, such as the studies administered by August Dvorak himself before and after World War II.
Economists Stan Liebowitz and Stephen Margolis have noted that rigorous studies are inconclusive as to whether they actually offer any real benefits, and some studies on keyboard layout have suggested that, for a skilled typist, layout is largely irrelevant – even randomized and alphabetical keyboards allow for typing speeds similar to those on QWERTY and Dvorak keyboards – and that switching costs always outweigh the benefits of further training with a keyboard layout a person has already learned. The most widely used such alternative is the Dvorak keyboard layout; another alternative is Colemak, which is based partly on QWERTY and is claimed to be easier for an existing QWERTY typist to learn while offering several supposed optimisations. Most modern computer operating systems support these and other alternative mappings with appropriate special mode settings, with some modern operating systems allowing the user to map their keyboard in any way they like, but few keyboards are made with keys labeled according to any other standard. Comparison to other keyboard input systems Comparisons have been made between Dvorak, Colemak, QWERTY, and other keyboard input systems, namely stenotype or its electronic implementations. However, stenotype is a fundamentally different system, which relies on phonetics and simultaneous key presses or chords. Although shorthand (or 'stenography') has long been known as a faster and more accurate typing system, adoption has been limited, possibly due to the historically high cost of equipment, the steeper initial learning curve, and low awareness of the benefits within primary education and in the general public. The first typed shorthand machines appeared around 1830, with English versions gaining popularity in the early 1900s. Modern electronic stenotype machines or programs produce output in written language, which provides an experience similar to other keyboard setups that immediately produce legible work. Half QWERTY A half QWERTY keyboard is a combination of an alpha-numeric keypad and a QWERTY keypad, designed for mobile phones. In a half QWERTY keyboard, two characters share the same key, which reduces the number of keys and increases the surface area of each key, useful for mobile phones that have little space for keys. It means that 'Q' and 'W' share the same key and the user must press the key once to type 'Q' and twice to type 'W'. See also AZERTY HCESAR QWERTZ JCUKEN Colemak Keyboard Dvorak keyboard layout KALQ keyboard split-screen touchscreen thumb-typing Android-only 2013 beta Keyboard monument Maltron keyboard Path dependence Repetitive strain injury Text entry interface Thumb keyboard Touch typing Velotype Virtual keyboard WASD References Informational notes Citations External links Article on QWERTY and Path Dependence from EH.NET's Encyclopedia QWERTY Keyboard History QWERTY Keyboard in Mobiles Android phones with QWERTY keyboards 1873 introductions American inventions Computer keyboard types Latin-script keyboard layouts
20349510
https://en.wikipedia.org/wiki/2009%20Rose%20Bowl
2009 Rose Bowl
The 2009 Rose Bowl, the 95th edition of the annual game, was a college football bowl game played on Thursday, January 1, 2009 at the same-named stadium in Pasadena, California. Because of sponsorship by Citi, the first game in the 2009 edition of the Bowl Championship Series was officially titled the "Rose Bowl Game presented by citi". The contest was televised on ABC with a radio broadcast on ESPN Radio beginning at 4:30 PM US EST with kickoff at 5:10 PM. Ticket prices for all seats in the Rose Bowl were listed at $145. The Rose Bowl Game was a contractual sell-out, with 64,500 tickets allocated to the participating teams and conferences. The remaining tickets went to the Tournament of Roses members, sponsors, City of Pasadena residents, and the general public. Scoring 24 unanswered points in the second quarter, the Pacific-10 Conference Champion University of Southern California Trojans defeated the Big Ten Conference co-champion, the Pennsylvania State University Nittany Lions, 38-24, for their third consecutive Rose Bowl victory (in their fourth consecutive appearance, having lost the 2006 BCS title game to the Texas Longhorns). The victory gave the Trojans their 24th Rose Bowl championship, the most by any team in the country. Quarterback Mark Sanchez scored five touchdowns, one rushing and four passing. Prior to the game, the Pac-10 conference had a 4-0 record in bowl games this season with wins by Arizona, Cal, Oregon, and Oregon State. The Trojan win gave the Pac-10 a perfect five out of five games, which was the only perfect conference bowl record of the season. The Big Ten conference had last won a Rose Bowl game in the 1999 season; this streak ended when Ohio State beat Oregon in the 2010 Rose Bowl. Teams The teams participating in the Rose Bowl Game were announced on Sunday, December 7, by the Pasadena Tournament of Roses football committee. Big Ten co-champions, Penn State, coached by Joe Paterno, were picked to play against Southern California, the champions of the Pac-10, coached by Pete Carroll. Penn State earned its bid via a head-to-head tiebreaker, beating Ohio State, 13โ€“6 in Columbus, Ohio, on October 25, 2008. The Men of Troy earned their way in by defeating UCLA 28-7 on December 6, 2008. USC was designated as the home team, wearing dark jerseys and using the east bench on game day. The 2009 game marked the first time since 2004 Rose Bowl that the traditional teamsโ€”the champions of the Big Ten and the Pac-10โ€”squared off at the Rose Bowl, because at least one league champion played in the BCS Championship in each the previous four years. In the 2005 Rose Bowl, Big Ten champion Michigan met Texas, as USC played Oklahoma in the designated BCS Championship Game that year, the Orange Bowl. The following year, USC met Texas in the game, which was designated as that year's BCS National Championship contest; Big Ten champion Penn State played in the Orange Bowl against Florida State. For the 2007 and 2008 games, the runners-up of the Big Ten were sent to this game, since the champion, Ohio State, participated in the newly established separate BCS Championship Game: the Wolverines played in 2007 and Illinois would do so the following year; both teams played (and lost to) the Trojans. Each team lost just one game during the 2008 regular season. Penn State was defeated by Iowa (24-23) and Southern California lost to Oregon State (27-21). USC and Penn State had faced two common opponents in the regular season. 
Both teams defeated Ohio State, and Penn State beat Oregon State prior to the Beavers' defeat of the Trojans. Penn State had appeared in the Rose Bowl twice before, losing to the Trojans 14–3 in their only previous meeting in "The Granddaddy Of Them All", and beating 1994 Pac-10 champion Oregon 38–20; the latter game capped an unbeaten season in which Penn State finished #2 in both major polls. The Trojans have played in the Rose Bowl more times than any other team and made their fourth consecutive appearance in 2009. The two teams have faced each other eight times, with each team winning four games. The Kickoff Classic XVIII on August 27, 2000, in Giants Stadium at East Rutherford, New Jersey, was the last time they met: the Trojans defeated the Nittany Lions, 29–5. Scoring summary First quarter USC – Williams, D. 27-yard pass from Sanchez, Mark (Buehler, David kick), PSU 0 - USC 7 PSU – Clark, Daryll 9-yard run (Kelly, Kevin kick), PSU 7 - USC 7 Second quarter USC – Sanchez, Mark 6-yard run (Buehler, David kick), PSU 7 - USC 14 USC – Buehler, David 30-yard field goal, PSU 7 - USC 17 USC – Johnson, Ronald 19-yard pass from Sanchez, Mark (Buehler, David kick), PSU 7 - USC 24 USC – Gable, C.J. 20-yard pass from Sanchez, Mark (Buehler, David kick), PSU 7 - USC 31 Third quarter No scoring Fourth quarter PSU – Williams, D. 2-yard pass from Clark, Daryll (Kelly, Kevin kick), PSU 14 - USC 31 USC – Johnson, Ronald 45-yard pass from Sanchez, Mark (Buehler, David kick), PSU 14 - USC 38 PSU – Kelly, Kevin 25-yard field goal, PSU 17 - USC 38 PSU – Norwood, Jordan 9-yard pass from Clark, Daryll (Kelly, Kevin kick), PSU 24 - USC 38 Game notes Mark Sanchez became the third quarterback to pass for more than 400 yards in a Rose Bowl Game, with 413 yards. The others were Wisconsin's Ron Vander Kelen (401 yards, 1963) and Oregon's Danny O'Neil (456 yards, 1995). Sanchez set a Rose Bowl record for completion percentage, at 80%. USC is the only team in history to have won three straight Rose Bowl games. The Lathrop K. Leishman Trophy honoring the 2009 Champion was created by Tiffany. Penn State was the only team in 2008 to score more than 7 points against USC in the second half. This marked USC head coach Pete Carroll's fifth appearance and Penn State head coach Joe Paterno's second appearance in the Rose Bowl Game. BCS Commissioners took "appropriate responsive actions" against Penn State for two media-access contract violations: failure to give pre-game interviews to ABC broadcasters and to provide post-game locker room access. Joe Paterno explained that he did not want the attention to be taken away from Pete Carroll, knowing that the questions would focus on his health. This would be the last appearance for either team in the Rose Bowl until they met in a rematch in the 2017 Rose Bowl. References Rose Bowl Rose Bowl Game Penn State Nittany Lions football bowl games USC Trojans football bowl games Rose Bowl Rose Bowl 21st century in Pasadena, California
21116845
https://en.wikipedia.org/wiki/Conficker
Conficker
Conficker, also known as Downup, Downadup and Kido, is a computer worm targeting the Microsoft Windows operating system that was first detected in November 2008. It uses flaws in Windows OS software and dictionary attacks on administrator passwords to propagate while forming a botnet, and has been unusually difficult to counter because of its combined use of many advanced malware techniques. The Conficker worm infected millions of computers including government, business and home computers in over 190 countries, making it the largest known computer worm infection since the 2003 Welchia. Despite its wide propagation, the worm did not do much damage, perhaps because its authors โ€“ believed to have been Ukrainian citizens โ€“ did not dare use it because of the attention it drew. Four men were arrested, and one pled guilty and was sentenced to 4 years in prison. Prevalence Estimates of the number of infected computers were difficult because the virus changed its propagation and update strategy from version to version. In January 2009, the estimated number of infected computers ranged from almost 9 million to 15 million. Microsoft has reported the total number of infected computers detected by its antimalware products has remained steady at around 1.7 million from mid-2010 to mid-2011. By mid-2015, the total number of infections had dropped to about 400,000, and it was estimated to be 500,000 in 2019. History Name The origin of the name Conficker is thought to be a combination of the English term "configure" and the German pejorative term Ficker (engl. fucker). Microsoft analyst Joshua Phillips gives an alternative interpretation of the name, describing it as a rearrangement of portions of the domain name trafficconverter.biz (with the letter k, not found in the domain name, added as in "trafficker", to avoid a "soft" c sound) which was used by early versions of Conficker to download updates. Discovery The first variant of Conficker, discovered in early November 2008, propagated through the Internet by exploiting a vulnerability in a network service (MS08-067) on Windows 2000, Windows XP, Windows Vista, Windows Server 2003, Windows Server 2008, and Windows Server 2008 R2 Beta. While Windows 7 may have been affected by this vulnerability, the Windows 7 Beta was not publicly available until January 2009. Although Microsoft released an emergency out-of-band patch on October 23, 2008 to close the vulnerability, a large number of Windows PCs (estimated at 30%) remained unpatched as late as January 2009. A second variant of the virus, discovered in December 2008, added the ability to propagate over LANs through removable media and network shares. Researchers believe that these were decisive factors in allowing the virus to propagate quickly. Impact in Europe Intramar, the French Navy computer network, was infected with Conficker on 15 January 2009. The network was subsequently quarantined, forcing aircraft at several airbases to be grounded because their flight plans could not be downloaded. The United Kingdom Ministry of Defence reported that some of its major systems and desktops were infected. The virus had spread across administrative offices, NavyStar/N* desktops aboard various Royal Navy warships and Royal Navy submarines, and hospitals across the city of Sheffield reported infection of over 800 computers. On 2 February 2009, the Bundeswehr, the unified armed forces of Germany, reported that about one hundred of its computers were infected. 
An infection of Manchester City Council's IT system caused an estimated ยฃ1.5m worth of disruption in February 2009. The use of USB flash drives was banned, as this was believed to be the vector for the initial infection. A memo from the Director of the UK Parliamentary ICT service informed the users of the House of Commons on 24 March 2009 that it had been infected with the virus. The memo, which was subsequently leaked, called for users to avoid connecting any unauthorised equipment to the network. In January 2010, the Greater Manchester Police computer network was infected, leading to its disconnection for three days from the Police National Computer as a precautionary measure; during that time, officers had to ask other forces to run routine checks on vehicles and people. Operation Although almost all of the advanced malware techniques used by Conficker have seen past use or are well known to researchers, the virus's combined use of so many has made it unusually difficult to eradicate. The virus's unknown authors are also believed to be tracking anti-malware efforts from network operators and law enforcement and have regularly released new variants to close the virus's own vulnerabilities. Five variants of the Conficker virus are known and have been dubbed Conficker A, B, C, D and E. They were discovered 21 November 2008, 29 December 2008, 20 February 2009, 4 March 2009 and 7 April 2009, respectively. The Conficker Working Group uses namings of A, B, B++, C, and E for the same variants respectively. This means that (CWG) B++ is equivalent to (MSFT) C and (CWG) C is equivalent to (MSFT) D. Initial infection Variants A, B, C and E exploit a vulnerability in the Server Service on Windows computers, in which an already-infected source computer uses a specially-crafted RPC request to force a buffer overflow and execute shellcode on the target computer. On the source computer, the virus runs an HTTP server on a port between 1024 and 10000; the target shellcode connects back to this HTTP server to download a copy of the virus in DLL form, which it then attaches to svchost.exe. Variants B and later may attach instead to a running services.exe or Windows Explorer process. Attaching to those processes might be detected by the application trust feature of an installed firewall. Variants B and C can remotely execute copies of themselves through the ADMIN$ share on computers visible over NetBIOS. If the share is password-protected, a dictionary attack is attempted, potentially generating large amounts of network traffic and tripping user account lockout policies. Variants B and C place a copy of their DLL form in the recycle.bin of any attached removable media (such as USB flash drives), from which they can then infect new hosts through the Windows AutoRun mechanism using a manipulated autorun.inf. To start itself at system boot, the virus saves a copy of its DLL form to a random filename in the Windows system or system32 folder, then adds registry keys to have svchost.exe invoke that DLL as an invisible network service. Payload propagation The virus has several mechanisms for pushing or pulling executable payloads over the network. These payloads are used by the virus to update itself to newer variants, and to install additional malware. Variant A generates a list of 250 domain names every day across five TLDs. The domain names are generated from a pseudo-random number generator (PRNG) seeded with the current date to ensure that every copy of the virus generates the same names each day. 
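The date-seeded rendezvous-domain scheme described above is what allowed defenders to act preemptively: anyone who reimplements the generator for a given date knows which names the worm will try. A rough and deliberately generic sketch of the idea follows; this is not Conficker's actual algorithm, TLD set or name length, and the parameters are illustrative assumptions only. It shows why a registry or researcher can pre-compute and reserve or sinkhole the day's candidate list before any infected host contacts it.

import datetime
import random

# Hypothetical parameters for illustration only; the real worm used its own
# PRNG, name lengths and TLD sets, which are not reproduced here.
TLDS = [".example", ".test", ".invalid"]
NAMES_PER_DAY = 10

def daily_domains(day: datetime.date) -> list[str]:
    """Deterministically derive a candidate domain list from the date alone."""
    rng = random.Random(day.toordinal())  # same date -> same list on every host
    domains = []
    for _ in range(NAMES_PER_DAY):
        length = rng.randint(5, 10)
        name = "".join(rng.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(length))
        domains.append(name + rng.choice(TLDS))
    return domains

# Defenders can run the same function ahead of time and block or register
# the names before the malware ever attempts to contact them.
print(daily_domains(datetime.date(2009, 4, 1)))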
The virus then attempts an HTTP connection to each domain name in turn, expecting from any of them a signed payload. Variant B increases the number of TLDs to eight, and has a generator tweaked to produce domain names disjoint from those of A. To counter the virus's use of pseudorandom domain names, the Internet Corporation for Assigned Names and Numbers (ICANN) and several TLD registries began a coordinated barring of transfers and registrations for these domains in February 2009. Variant D counters this by generating daily a pool of 50,000 domains across 110 TLDs, from which it randomly chooses 500 to attempt for that day. The generated domain names were also shortened from 8–11 to 4–9 characters to make them more difficult to detect with heuristics. This new pull mechanism (which was disabled until April 1, 2009) is unlikely to propagate payloads to more than 1% of infected hosts per day, but is expected to function as a seeding mechanism for the virus's peer-to-peer network. The shorter generated names, however, are expected to collide with 150–200 existing domains per day, potentially causing a distributed denial-of-service attack (DDoS) on sites serving those domains. However, the large number of generated domains, and the fact that not every domain is contacted on a given day, will probably prevent DDoS situations. Variant C creates a named pipe, over which it can push URLs for downloadable payloads to other infected hosts on a local area network. Variants B, C and E perform in-memory patches to NetBIOS-related DLLs to close MS08-067 and watch for re-infection attempts through the same vulnerability. Re-infection from more recent versions of Conficker is allowed through, effectively turning the vulnerability into a propagation backdoor. Variants D and E create an ad-hoc peer-to-peer network to push and pull payloads over the wider Internet. This aspect of the virus is heavily obfuscated in code and not fully understood, but has been observed to use large-scale UDP scanning to build up a peer list of infected hosts and TCP for subsequent transfers of signed payloads. To make analysis more difficult, port numbers for connections are hashed from the IP address of each peer. Armoring To prevent payloads from being hijacked, variant A payloads are first SHA-1-hashed and RC4-encrypted with the 512-bit hash as a key. The hash is then RSA-signed with a 1024-bit private key. The payload is unpacked and executed only if its signature verifies with a public key embedded in the virus. Variants B and later use MD6 as their hash function and increase the size of the RSA key to 4096 bits. Conficker B adopted MD6 mere months after it was first published; six weeks after a weakness was discovered in an early version of the algorithm and a new version was published, Conficker upgraded to the new MD6. Self-defense The DLL form of the virus is protected against deletion by setting its ownership to "SYSTEM", which prevents its deletion even if the user has administrator privileges. The virus stores a backup copy of this DLL disguised as a .jpg image in the Internet Explorer cache of the user network services. Variant C of the virus resets System Restore points and disables a number of system services such as Windows Automatic Update, Windows Security Center, Windows Defender and Windows Error Reporting. Processes matching a predefined list of antiviral, diagnostic or system patching tools are watched for and terminated.
An in-memory patch is also applied to the system resolver DLL to block lookups of hostnames related to antivirus software vendors and the Windows Update service. End action Variant E of the virus was the first to use its base of infected computers for an ulterior purpose. It downloads and installs, from a web server hosted in Ukraine, two additional payloads: Waledac, a spambot otherwise known to propagate through e-mail attachments. Waledac operates similarly to the 2008 Storm worm and is believed to be written by the same authors. SpyProtect 2009, a scareware rogue antivirus product. Symptoms Symptoms of a Conficker infection include: Account lockout policies being reset automatically. Certain Microsoft Windows services such as Automatic Updates, Background Intelligent Transfer Service (BITS), Windows Defender and Windows Error Reporting disabled. Domain controllers responding slowly to client requests. Congestion on local area networks (ARP flood as consequence of network scan). Web sites related to antivirus software or the Windows Update service becoming inaccessible. User accounts locked out. Response On 12 February 2009, Microsoft announced the formation of an industry group to collaboratively counter Conficker. The group, which has since been informally dubbed the Conficker Cabal, includes Microsoft, Afilias, ICANN, Neustar, Verisign, China Internet Network Information Center, Public Internet Registry, Global Domains International, M1D Global, America Online, Symantec, F-Secure, ISC, researchers from Georgia Tech, The Shadowserver Foundation, Arbor Networks, and Support Intelligence. From Microsoft On 13 February 2009, Microsoft offered a $USD250,000 reward for information leading to the arrest and conviction of the individuals behind the creation and/or distribution of Conficker. From registries ICANN has sought preemptive barring of domain transfers and registrations from all TLD registries affected by the virus's domain generator. Those which have taken action include: On 13 March 2009, NIC Chile, the .cl ccTLD registry, blocked all the domain names informed by the Conficker Working Group and reviewed a hundred already registered from the worm list. On 24 March 2009, CIRA, the Canadian Internet Registration Authority, locked all previously-unregistered .ca domain names expected to be generated by the virus over the next 12 months. On 27 March 2009, NIC-Panama, the .pa ccTLD registry, blocked all the domain names informed by the Conficker Working Group. On 30 March 2009, SWITCH, the Swiss ccTLD registry, announced it was "taking action to protect internet addresses with the endings .ch and .li from the Conficker computer worm." On 31 March 2009, NASK, the Polish ccTLD registry, locked over 7,000 .pl domains expected to be generated by the virus over the following five weeks. NASK has also warned that worm traffic may unintentionally inflict a DDoS attack to legitimate domains which happen to be in the generated set. On 2 April 2009, Island Networks, the ccTLD registry for Guernsey and Jersey, confirmed after investigations and liaison with the IANA that no .gg or .je names were in the set of names generated by the virus. By mid-April 2009 all domain names generated by Conficker A had been successfully locked or preemptively registered, rendering its update mechanism ineffective. 
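The "Armoring" scheme described earlier (hash the payload, sign the hash, and execute only if the signature verifies against a public key embedded in the binary) is the same hash-then-sign pattern used by legitimate software updaters. A minimal sketch of the verification side follows; the library choice (the third-party 'cryptography' package), hash and padding parameters are assumptions of this example, not Conficker's actual parameters.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

def accept_update(public_key, payload: bytes, signature: bytes) -> bool:
    """Accept a downloaded payload only if it verifies against the embedded
    public key; otherwise reject it (fail closed)."""
    try:
        # Generic modern RSA hash-then-sign verification.
        public_key.verify(signature, payload, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# Demonstration with a freshly generated key pair standing in for the key
# embedded in the binary and the distributor's private key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
payload = b"update blob"
signature = private_key.sign(payload, padding.PKCS1v15(), hashes.SHA256())
print(accept_update(private_key.public_key(), payload, signature))         # True
print(accept_update(private_key.public_key(), payload + b"x", signature))  # False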
Origin Working group members stated at the 2009 Black Hat Briefings that Ukraine is the probable origin of the virus, but declined to reveal further technical discoveries about the virus's internals to avoid tipping off its authors. An initial variant of Conficker did not infect systems with Ukrainian IP addresses or with Ukrainian keyboard layouts. The payload of Conficker.E was downloaded from a host in Ukraine. In 2015, Phil Porras, Vinod Yegneswaran and Hassan Saidi – who were the first to detect and reverse-engineer Conficker – wrote in the Journal of Sensitive Cyber Research and Engineering, a classified, peer-reviewed U.S. government cybersecurity publication, that they tracked the malware to a group of Ukrainian cybercriminals. Porras et al. believed that the criminals abandoned Conficker after it had spread much more widely than they assumed it would, reasoning that any attempt to use it would draw too much attention from law enforcement worldwide. This explanation is widely accepted in the cybersecurity field. In 2011, working with the FBI, Ukrainian police arrested three Ukrainians in relation to Conficker, but there are no records of them being prosecuted or convicted. A Swede, Mikael Sallnert, was sentenced to 48 months in prison in the U.S. after a guilty plea. Removal and detection Because the virus's files are locked against deletion while the system is running, manual or automatic removal has to be performed during the boot process or from an external system. Deleting any existing backup copy is a crucial step. Microsoft released a removal guide for the virus, and recommended using the current release of its Windows Malicious Software Removal Tool to remove the virus, then applying the patch to prevent re-infection. Newer versions of Windows are immune to Conficker. Third-party software Many third-party anti-virus software vendors have released detection updates to their products and claim to be able to remove the worm. The malware has evolved in response to common removal software, so while some removal tools may remove or at least disable some variants, other variants remain active or, even worse, report a false positive to the removal software and become active again with the next reboot. Automated remote detection On 27 March 2009, Felix Leder and Tillmann Werner from the Honeynet Project discovered that Conficker-infected hosts have a detectable signature when scanned remotely. The peer-to-peer command protocol used by variants D and E of the virus has since been partially reverse-engineered, allowing researchers to imitate the virus network's command packets and positively identify infected computers en masse. Signature updates for a number of network scanning applications are now available. It can also be detected in passive mode by sniffing broadcast domains for repeating ARP requests. US CERT The United States Computer Emergency Readiness Team (US-CERT) recommends disabling AutoRun to prevent Variant B of the virus from spreading through removable media. Prior to the release of Microsoft knowledge base article KB967715, US-CERT described Microsoft's guidelines on disabling Autorun as being "not fully effective" and provided a workaround for disabling it more effectively. US-CERT has also made a network-based tool for detecting Conficker-infected hosts available to federal and state agencies.
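The passive detection approach mentioned above (watching a broadcast domain for hosts that issue unusually many ARP requests, a side effect of aggressive network scanning) can be approximated with a packet-sniffing library. A rough sketch assuming the third-party scapy package, root privileges, and a threshold chosen arbitrarily for illustration; dedicated scanners such as the Honeynet Project tools use far more specific signatures.

from collections import Counter

from scapy.all import ARP, sniff  # assumes scapy is installed and run with privileges

arp_requests = Counter()

def note_arp(packet) -> None:
    """Count ARP who-has requests per source IP address."""
    if packet.haslayer(ARP) and packet[ARP].op == 1:  # op 1 = who-has (request)
        arp_requests[packet[ARP].psrc] += 1

# Listen on the local broadcast domain for a while, then report noisy hosts.
sniff(filter="arp", prn=note_arp, store=False, timeout=60)

THRESHOLD = 100  # arbitrary illustrative cut-off, not a calibrated signature
for source, count in arp_requests.most_common():
    if count >= THRESHOLD:
        print(f"{source} sent {count} ARP requests in 60s - possible scanner")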
See also Botnet Timeline of notable computer viruses and worms Bot herder Network Access Protection Zombie (computer science) Malware References External links Conficker Working Group Conficker Working Group -- Lessons Learned Conficker Eye Chart Worm: The First Digital World War by Mark Bowden (2011; ); "The 'Worm' That Could Bring Down The Internet", author interview (audio and transcript), Fresh Air on NPR, September 27, 2011; preliminarily covered by Bowden in Atlantic magazine article "The Enemy Within" (June 2010). Computer worms Hacking in the 2000s
13234008
https://en.wikipedia.org/wiki/Ultimate%20Defender
Ultimate Defender
Ultimate Defender is a rogue antivirus program published by Nous-Tech Solutions Ltd. The program is considered malware due to its difficult uninstallation and deceptive operation. Operation The program may be obtained via free download. Once installed, it purports to search a user's computer for viruses and other spyware. During installation, however, other files are installed onto the computer. These files are then detected by the software and listed as critical threats requiring immediate removal. Users are prompted to upgrade to a full version of the software, which would be capable of removing the threat - despite the fact that the threat was installed with the software. In addition to its purported operation as an antivirus program, the software floods users with multiple false security warnings about threats to the security of files on the computer, overwhelming pop-up blockers such as Norton Antivirus. The software may also alter the operation of screen savers, desktop, and desktop icons, directing users to the software's homepage. Removal The software is related to Ultimate fixer and Ultimate Cleaner, in that the software is extremely difficult to remove once installed. Companies such as Symantec provide detailed instructions for removal of the software, as some commercial anti-spyware programs may be unable to remove the software automatically. External links References Rogue software Scareware
28290656
https://en.wikipedia.org/wiki/TUPAS
TUPAS
TUPAS is a strong digital authentication method created by the Federation of Finnish Financial Services. TUPAS identification is a de facto standard for digital identification in Finland. It is used by all major Finnish banks including Aktia, Osuuspankki, Nordea, Danske Bank and S-Pankki (formerly Tapiola). TUPAS is also used by the Finnish government for logging into Kansaneläkelaitos services and the Finnish Tax Administration site vero.fi. The phasing out of TUPAS began in 2016. The final deadline to shut down the identification services was in September 2019, but all banks continued providing the service past this date. Traficom issued a warning that monetary penalties would be collected from services that had not shut down by the end of November 2019, and ultimately warned that services using TUPAS for strong authentication would be shut down. TUPAS was based on the Finnish law on strong electronic identification and digital signatures. The law requires strong identification methods to include at least two of the following three factors: something one knows, such as a password; something one possesses, such as a chip card; or something unique to the person, such as a fingerprint. Commonly, identification is done using a password and a list of single-use passcodes or a passcode device. TUPAS was operated by the Finnish banks, and required service providers to negotiate contracts and perform integrations with each separate bank they dealt with. As no real competition existed, TUPAS authentication was expensive for service providers. The eIDAS regulations provided the government with the opportunity to open up eID services to market competition. To that end, the Finnish authorities established the Finnish Trust Network (FTN), a framework that allows strong authentication service brokers to resell eID solutions in Finland using a single standardised service contract. These eID brokers act as intermediaries between the identity providers (banks and telecom operators) and online service providers, which enables them to operate as 'one-stop-shop' resellers of eIDs, as well as giving them the capacity to manage contracts and technical integrations. This new competitive environment has removed the main obstacles to developing strong identification services by: Capping transaction costs between the bank and eID broker Eliminating administrative hurdles, with a single contract serving all Finnish banks Streamlining integration, with only one standard technical interface required Traficom has recommended that organizations use an eID broker instead of connecting directly to eID providers. References External links http://www.fkl.fi/en/themes/e-services/tupas/Pages/default.aspx Computer access control Banking technology Communications in Finland
1721496
https://en.wikipedia.org/wiki/Free%20and%20open-source%20software
Free and open-source software
Free and open-source software (FOSS) is software that is both free software and open-source software where anyone is freely licensed to use, copy, study, and change the software in any way, and the source code is openly shared so that people are encouraged to voluntarily improve the design of the software. This is in contrast to proprietary software, where the software is under restrictive copyright licensing and the source code is usually hidden from the users. FOSS maintains the software user's civil liberty rights (see the Four Essential Freedoms, below). Other benefits of using FOSS can include decreased software costs, increased security and stability (especially in regard to malware), protecting privacy, education, and giving users more control over their own hardware. Free and open-source operating systems such as Linux and descendants of BSD are widely utilized today, powering millions of servers, desktops, smartphones (e.g., Android), and other devices. Free-software licenses and open-source licenses are used by many software packages. The free-software movement and the open-source software movement are online social movements behind widespread production and adoption of FOSS, with the former preferring to use the terms FLOSS or free/libre. Overview "Free and open-source software" (FOSS) is an umbrella term for software that is simultaneously considered both free software and open-source software. FOSS (free and open-source software) allows the user to inspect the source code and provides a high level of control of the software's functions compared to proprietary software. The term "free software" does not refer to the monetary cost of the software at all, but rather whether the license maintains the software user's civil liberties ("freeโ€ as in โ€œfree speech,โ€ not as in โ€œfree beerโ€). There are a number of related terms and abbreviations for free and open-source software (FOSS or F/OSS), or free/libre and open-source software (FLOSS or F/LOSS is preferred by FSF over FOSS, while free or free/libre is their preferred term). Although there is almost a complete overlap between free-software licenses and open-source-software licenses, there is a strong philosophical disagreement between the advocates of these two positions. The terminology of FOSS or "Free and Open-source software" was created to be a neutral on these philosophical disagreements between the FSF and OSI and have a single unified term that could refer to both concepts. Free software Richard Stallman's Free Software Definition, adopted by the Free Software Foundation (FSF), defines free software as a matter of liberty not price, and it upholds the Four Essential Freedoms. The earliest-known publication of the definition of his free-software idea was in the February 1986 edition of the FSF's now-discontinued GNU's Bulletin publication. The canonical source for the document is in the philosophy section of the GNU Project website. As of August 2017, it is published in 40 languages. Four essential freedoms of Free Software To meet the definition of "free software", the FSF requires the software's licensing respect the civil liberties / human rights of what the FSF calls the software user's "Four Essential Freedoms". The freedom to run the program as you wish, for any purpose (freedom 0). The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this. 
The freedom to redistribute copies so you can help others (freedom 2). The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this. Open source The Open Source Definition is used by the Open Source Initiative (OSI) to determine whether a software license qualifies for the organization's insignia for open-source software. The definition was based on the Debian Free Software Guidelines, written and adapted primarily by Bruce Perens. Perens did not base his writing on the Four Essential Freedoms of free software from the Free Software Foundation, which were only later available on the web. Perens subsequently stated that he felt Eric Raymond's promotion of open-source unfairly overshadowed the Free Software Foundation's efforts and reaffirmed his support for free software. In the 2000s, he spoke about open source again. History From the 1950s and on through the 1980s, it was common for computer users to have the source code for all programs they used, and the permission and ability to modify it for their own use. Software, including source code, was commonly shared by individuals who used computers, often as public domain software (note that FOSS is not the same as public domain software, as public domain software carries no copyright at all). Most companies had a business model based on hardware sales, and provided or bundled software with hardware, free of charge. By the late 1960s, the prevailing business model around software was changing. A growing and evolving software industry was competing with the hardware manufacturer's bundled software products; rather than funding software development from hardware revenue, these new companies were selling software directly. Leased machines required software support while providing no revenue for software, and some customers who were able to better meet their own needs did not want the costs of software bundled with hardware product costs. In United States v. IBM, filed January 17, 1969, the government charged that bundled software was anticompetitive. While some software was still being provided without monetary cost and license restriction, there was a growing amount of software that was available only at a monetary cost and under restrictive licensing. In the 1970s and early 1980s, some parts of the software industry began using technical measures (such as distributing only binary copies of computer programs) to prevent computer users from being able to use reverse engineering techniques to study and customize software they had paid for. In 1980, copyright law was extended to computer programs in the United States; previously, computer programs could be considered ideas, procedures, methods, systems, and processes, which are not copyrightable. Closed-source software was uncommon until the mid-1970s to the 1980s; in 1983, for example, IBM implemented an "object code only" policy and no longer distributed source code. In 1983, Richard Stallman, longtime member of the hacker community at the MIT Artificial Intelligence Laboratory, announced the GNU project, saying that he had become frustrated with the effects of the change in culture of the computer industry and its users. Software development for the GNU operating system began in January 1984, and the Free Software Foundation (FSF) was founded in October 1985. 
An article outlining the project and its goals was published in March 1985, titled the GNU Manifesto. The manifesto included significant explanation of the GNU philosophy, Free Software Definition and "copyleft" ideas. The FSF takes the position that the fundamental issue Free software addresses is an ethical one: to ensure software users can exercise what it calls "The Four Essential Freedoms". The Linux kernel, created by Linus Torvalds, was released as freely modifiable source code in 1991. Initially, Linux was not released under either a Free software or an Open-source software license. However, with version 0.12 in February 1992, he relicensed the project under the GNU General Public License. FreeBSD and NetBSD (both derived from 386BSD) were released as Free software when the USL v. BSDi lawsuit was settled out of court in 1993. OpenBSD forked from NetBSD in 1995. Also in 1995, the Apache HTTP Server, commonly referred to as Apache, was released under the Apache License 1.0. In 1997, Eric Raymond published The Cathedral and the Bazaar, a reflective analysis of the hacker community and Free software principles. The paper received significant attention in early 1998, and was one factor in motivating Netscape Communications Corporation to release their popular Netscape Communicator Internet suite as Free software. This code is today better known as Mozilla Firefox and Thunderbird. Netscape's act prompted Raymond and others to look into how to bring the FSF's Free software ideas and perceived benefits to the commercial software industry. They concluded that FSF's social activism was not appealing to companies like Netscape, and looked for a way to rebrand the Free software movement to emphasize the business potential of sharing and collaborating on software source code. The new name they chose was "Open-source", and quickly Bruce Perens, publisher Tim O'Reilly, Linus Torvalds, and others signed on to the rebranding. The Open Source Initiative was founded in February 1998 to encourage the use of the new term and evangelize open-source principles. While the Open Source Initiative sought to encourage the use of the new term and evangelize the principles it adhered to, commercial software vendors found themselves increasingly threatened by the concept of freely distributed software and universal access to an application's source code. A Microsoft executive publicly stated in 2001 that "Open-source is an intellectual property destroyer. I can't imagine something that could be worse than this for the software business and the intellectual-property business." This view summarizes the initial response to FOSS by some software corporations. For many years, FOSS played a niche role outside of the mainstream of private software development. However, the success of FOSS operating systems such as Linux and BSD, and of companies based on FOSS such as Red Hat, has changed the software industry's attitude, and there has been a dramatic shift in the corporate philosophy concerning its development. Usage FOSS benefits over proprietary software Personal control, customizability and freedom Users of FOSS benefit from the Four Essential Freedoms to make unrestricted use of, and to study, copy, modify, and redistribute such software with or without modification. 
If they would like to change the functionality of software, they can bring about changes to the code and, if they wish, distribute such modified versions of the software or often, depending on the software's decision-making model and its other users, even push or request such changes to be made via updates to the original software. Privacy and security Manufacturers of proprietary, closed-source software are sometimes pressured into building backdoors or other covert, undesired features into their software. Instead of having to trust software vendors, users of FOSS can inspect and verify the source code themselves and can put trust in a community of volunteers and users. As proprietary code is typically hidden from public view, only the vendors themselves and hackers may be aware of any vulnerabilities in it, while FOSS involves as many people as possible in exposing bugs quickly. Low costs or no costs FOSS is often free of charge, although donations are often encouraged. This also allows users to better test and compare software. Quality, collaboration and efficiency FOSS allows for better collaboration among various parties and individuals with the goal of developing the most efficient software for its users or use-cases, while proprietary software is typically meant to generate profits. Furthermore, in many cases more organizations and individuals contribute to such projects than to proprietary software. It has been shown that technical superiority is typically the primary reason why companies choose open source software. Drawbacks compared to proprietary software Security and user-support According to Linus's law, the more people who can see and test a set of code, the more likely any flaws will be caught and fixed quickly. However, this does not guarantee a high level of participation. Having a group of full-time professionals behind a commercial product can in some cases be superior to FOSS. Furthermore, publicized source code might make it easier for hackers to find vulnerabilities in it and write exploits. This however assumes that such malicious hackers are more effective than white hat hackers who responsibly disclose or help fix the vulnerabilities, that no code leaks or exfiltrations occur, and that reverse engineering of proprietary code is a significant hindrance for malicious hackers. Hardware and software compatibility Sometimes, FOSS is not compatible with proprietary hardware or specific software. This is often due to manufacturers obstructing FOSS, such as by not disclosing the interfaces or other specifications needed for members of the FOSS movement to write drivers for their hardware - for instance, because they wish customers to run only their own proprietary software or because they might benefit from partnerships. Bugs and missing features While FOSS can be superior to proprietary equivalents in terms of software features and stability, in many cases FOSS has more unfixed bugs and missing features when compared to similar commercial software. This varies per case and usually depends on the level of interest and participation in a FOSS project. Furthermore, unlike with typical commercial software, missing features and bugfixes can be implemented by any party that has the relevant motivation, time and skill to do so. Less guarantee of development There is often less certainty of FOSS projects gaining the required resources and participation for continued development than commercial software backed by companies. 
However, companies also often abandon projects for being unprofitable, yet large companies may rely on, and hence co-develop, open source software. Missing applications As FOSS operating systems such as the distributions of Linux have a lower market share among end users, there are also fewer applications available. Adoption by governments Adoption by supranational unions and international organizations In 2017, the European Commission stated that "EU institutions should become open source software users themselves, even more than they already are" and listed open source software as one of the nine key drivers of innovation, together with big data, mobility, cloud computing and the internet of things. Production Issues and incidents GPLv3 controversy While copyright is the primary legal mechanism that FOSS authors use to ensure license compliance for their software, other mechanisms such as legislation, patents, and trademarks have implications as well. In response to legal issues with patents and the Digital Millennium Copyright Act (DMCA), the Free Software Foundation released version 3 of its GNU General Public License (GNU GPLv3) in 2007, which explicitly addressed the DMCA and patent rights. After the development of the GNU GPLv3 in 2007, the FSF (as the copyright holder of many pieces of the GNU system) updated many of the GNU programs' licenses from GPLv2 to GPLv3. On the other hand, the adoption of the new GPL version was heavily discussed in the FOSS ecosystem, and several projects decided against upgrading. For instance, the Linux kernel, the BusyBox project, AdvFS, Blender, and the VLC media player decided against adopting the GPLv3. Apple, a user of GCC and a heavy user of both DRM and patents, switched the compiler in its Xcode IDE from GCC to Clang, which is another FOSS compiler but is under a permissive license. LWN speculated that Apple was motivated partly by a desire to avoid GPLv3. The Samba project also switched to GPLv3, so Apple replaced Samba in their software suite with a closed-source, proprietary alternative. Skewed prioritization, ineffectiveness and egoism of developers Leemhuis criticizes the priorities of skilled developers who, instead of fixing issues in already popular open-source applications and desktop environments, create new, mostly redundant software to gain fame and fortune. He also criticizes notebook manufacturers for optimizing their own products only privately or creating workarounds instead of helping fix the actual causes of the many issues with Linux on notebooks, such as unnecessary power consumption. Commercial ownership of open-source software Mergers have affected major open-source software. Sun Microsystems (Sun) acquired MySQL AB, owner of the popular open-source MySQL database, in 2008. Oracle in turn purchased Sun in January 2010, acquiring their copyrights, patents, and trademarks. Thus, Oracle became the owner of both the most popular proprietary database and the most popular open-source database. Oracle's attempts to commercialize the open-source MySQL database have raised concerns in the FOSS community. Partly in response to uncertainty about the future of MySQL, the FOSS community forked the project into new database systems outside of Oracle's control. These include MariaDB, Percona, and Drizzle. All of these have distinct names; they are distinct projects and cannot use the trademarked name MySQL. Legal cases Oracle v. 
Google In August 2010, Oracle sued Google, claiming that its use of Java in Android infringed on Oracle's copyrights and patents. In May 2012, the trial judge determined that Google did not infringe on Oracle's patents and ruled that the structure of the Java APIs used by Google was not copyrightable. The jury found that Google infringed a small number of copied files, but the parties stipulated that Google would pay no damages. Oracle appealed to the Federal Circuit, and Google filed a cross-appeal on the literal copying claim. As part/driver of a new socio-economic model By defying ownership regulations in the construction and use of information, a key area of contemporary growth, the Free/Open Source Software (FOSS) movement counters neoliberalism and privatization in general. By realizing the historical potential of an "economy of abundance" for the new digital world, FOSS may lay down a plan for political resistance or show the way towards a potential transformation of capitalism. According to Yochai Benkler, Jack N. and Lillian R. Berkman Professor for Entrepreneurial Legal Studies at Harvard Law School, free software is the most visible part of a new economy of commons-based peer production of information, knowledge, and culture. As examples, he cites a variety of FOSS projects, including both free software and open-source. See also FLOSS Manuals FLOSS Weekly Free software community Free software license Graphics hardware and FOSS List of free and open source software packages List of formerly proprietary software Open-source license Outline of free software Notes References Sources Further reading Software licenses
6767612
https://en.wikipedia.org/wiki/Carleton%20School%20of%20Information%20Technology
Carleton School of Information Technology
Carleton School of Information Technology (CSIT) is part of the Faculty of Engineering and Design at Carleton University. CSIT, together with Algonquin College, offers a Bachelor of Information Technology degree in one of four undergraduate programs: Information Resource Management, Interactive Multimedia and Design, Network Technology, and Optical Systems and Sensors. History In 2000, the Ontario government awarded funding for classrooms and laboratory space in the newly built Azrieli Pavilion, as well as for computing and laboratory equipment. This funding was awarded according to the provincial government's call to address the double cohort with joint programs between community colleges and universities. In 2003, the School opened under the directorship of Dr. Ben Gianni, offering two unique programs (Network Technology, and Interactive Multimedia and Design) in partnership with Algonquin College, whereby students graduate in four years with both an advanced college diploma from Algonquin College, and a Bachelor of Information Technology from Carleton University. In September 2005, Dr. Dorina Petriu took over directorship of the School. In September 2006, CSIT acquired a motion capture 3D development system and a renderfarm. In July 2009, Dr. Anthony Whitehead took over directorship of the School. Directorship 2003 - 2005: Dr. Ben Gianni 2005 - 2009: Dr. Dorina Petriu 2009 - 2017: Dr. Anthony Whitehead 2016 - 2017: Dr. Chris Smelser (Acting Director) 2017 - Present: Dr. Chris Joslin Information Resource Management Program Students in the Information Resource Management Program develop a strong background in the cataloging, indexing, storage, presentation, analysis, and manipulation of all kinds of digital data. This program was introduced in Fall 2015. Interactive Multimedia and Design Program Students in the Interactive Multimedia and Design program develop a background in a wide range of topics such as: web development, game development, animation, 2D and 3D graphics, programming, audio and video, graphic design, and general design as it relates to each of these areas of study. Network Technology Program Students in the Network Technology program develop a strong background in topics related to networking, programming and computer science. Optical Systems and Sensors Program Students in the Optical Systems and Sensors program (formerly Photonics and Laser Technology) develop a strong background in light-based technologies, from optical communication to computer vision, as well as from laser technology to remote sensing. This program was introduced in Fall 2012. Research CSIT faculty members are active in the following research areas: Networking Network Security and Information Assurance Network Architecture and Applications Multimedia Augmented Reality and Displays 3D Video Image/Video Processing Collaborative Virtual Environment Systems Virtual Reality Systems (immersive and interactive) Media Adaptation Media Compression (3D/2D, Video) Multimedia Session Mobility Medical pre/intra/post Operative Interfaces & Displays In-car Displays and Multimodal Interaction Urban and Architectural Planning Systems Social User Interfaces Social Agents See also Interactive Multimedia Bachelor of Information Technology External links Carleton School of Information Technology Carleton BIT Program References Carleton University
11167982
https://en.wikipedia.org/wiki/Windows%20Task%20Scheduler
Windows Task Scheduler
Task Scheduler (formerly Scheduled Tasks) is a job scheduler in Microsoft Windows that launches computer programs or scripts at pre-defined times or after specified time intervals. Microsoft introduced this component in Microsoft Plus! for Windows 95 as System Agent. Its core component is an eponymous Windows service. The Windows Task Scheduler infrastructure is the basis for the Windows PowerShell scheduled jobs feature introduced with PowerShell v3. Task Scheduler can be compared to cron or anacron on Unix-like operating systems. This service should not be confused with the scheduler, which is a core component of the OS kernel that allocates CPU resources to processes already running. Versions Task Scheduler 1.0 Task Scheduler 1.0 is included with Windows NT 4.0 (with Internet Explorer 4.0 or later), Windows 2000, Windows XP and Windows Server 2003. It runs as a Windows Service, and the task definitions and schedules are stored in binary .job files. Tasks are managed directly by manipulating the .job files. Each task corresponds to a single action. On Windows 95 (with Internet Explorer 4.0 or later), Windows 98 and Windows Me, the Task Scheduler runs as an ordinary program, mstask.exe. It also displays a status icon in the notification area on Windows 95 and Windows 98 and runs as a hidden service on Windows Me, but can be made to show a tray icon. Computer programs and scripts can access the service through six COM interfaces. Microsoft provides a scheduling agent DLL, a sample VBScript and a configuration file to automate Task Scheduler. In addition to the graphical user interface for Task Scheduler in Control Panel, Windows provides two command-line tools for managing scheduled tasks: at.exe (deprecated) and schtasks.exe. However, at.exe cannot access tasks created or modified by Control Panel or schtasks.exe. Also, tasks created with at.exe are not interactive by default; interactivity needs to be explicitly requested. The binary ".job" files which the AT command produces are stored in the %WINDIR%\Tasks directory. Task Scheduler 2.0 Task Scheduler 2.0 was introduced with Windows Vista and included in Windows Server 2008 as well. The redesigned Task Scheduler user interface is now based on the Microsoft Management Console. In addition to running tasks at scheduled times or specified intervals, Task Scheduler 2.0 also supports calendar and event-based triggers, such as starting a task when a particular event is logged to the event log, or when a combination of events has occurred. Also, several tasks that are triggered by the same event can be configured to run either simultaneously or in a pre-determined chained sequence of a series of actions, instead of having to create multiple scheduled tasks. Tasks can also be configured to run based on system status such as being idle for a pre-configured amount of time, on startup, logoff, or only during or for a specified time. XPath expressions can be used to filter events from the Windows Event Log. Tasks can also be delayed for a specified time after the triggering event has occurred, or repeat until some other event occurs. Actions that need to be done if a task fails can also be configured. The actions that can be taken in response to triggers, both event-based and time-based, include not only launching applications but also a number of custom actions. Task Scheduler includes a number of built-in actions spanning a number of applications, including sending an e-mail, showing a message box, or firing a COM handler when the task is triggered. 
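As a rough, hypothetical sketch (not drawn from this article), the following Python snippet drives the schtasks.exe command-line tool described above through the standard subprocess module; the task name, program and schedule used here are made up purely for illustration.

```python
# Hypothetical sketch: creating and inspecting a scheduled task by calling
# the schtasks.exe command-line tool from Python (Windows only). The task
# name, program and schedule below are invented for this example.
import subprocess

TASK_NAME = r"\Demo\DailyNotepad"   # hypothetical task name


def create_daily_task():
    """Register a task that launches Notepad every day at 09:00."""
    subprocess.run(
        ["schtasks", "/Create",
         "/TN", TASK_NAME,          # task name
         "/TR", "notepad.exe",      # program to run (the task's action)
         "/SC", "DAILY",            # schedule type
         "/ST", "09:00",            # start time
         "/F"],                     # overwrite the task if it already exists
        check=True,
    )


def query_task():
    """Print the task's current definition and status."""
    result = subprocess.run(
        ["schtasks", "/Query", "/TN", TASK_NAME, "/V", "/FO", "LIST"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)


if __name__ == "__main__":
    create_daily_task()
    query_task()
```

Depending on the task's settings, creating it may require elevated privileges; the same operations can equally be performed interactively through the Task Scheduler user interface.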
Custom actions can also be specified using the Task Scheduler API. Task Scheduler keeps a history log of all execution details of all the tasks. Windows Vista uses Task Scheduler 2.0 to run various system-level tasks; consequently, the Task Scheduler service can no longer be disabled (except with a simple registry tweak). Task Scheduler 2.0 exposes an API to allow computer programs and scripts to create tasks. It consists of 42 COM interfaces. The Windows API does not, however, include a managed wrapper for Task Scheduler, though an open-source implementation exists. The job files for Task Scheduler 2.0 are XML-based, and are human-readable, conforming to the Task Scheduler Schema. Other features New security features, including using Credential Manager to store passwords for tasks on workgroup computers and using Active Directory for task credentials on domain-joined computers so that they cannot be retrieved easily. Also, scheduled tasks are executed in their own session, instead of the same session as system services or the current user. Ability to wake up a machine remotely or using the BIOS timer from sleep or hibernation to execute a scheduled task or run a previously scheduled task after a machine gets turned on. Ability to attach tasks to events directly from the Event Viewer. Tasks The Task Scheduler service works by managing Tasks; Task refers to the action (or actions) taken in response to trigger(s). A task is defined by associating a set of actions, which can include launching an application or taking some custom-defined action, to a set of triggers, which can either be time-based or event-based. In addition, a task also can contain metadata that defines how the actions will be executed, such as the security context the task will run in. Tasks are serialized to .job files and are stored in the special folder titled Task Folder, organized in subdirectories. Programmatically, the task folder is accessed using the ITaskFolder interface or the TaskFolder scripting object and individual tasks using the IRegisteredTask interface or RegisteredTask object. Column 'Last Result' The Last Result column displays a completion code. The common codes for scheduled tasks are: 0 or 0x0: The operation completed successfully. 1 or 0x1: Incorrect function called or unknown function called. 2 or 0x2: File not found. 10 or 0xa: The environment is incorrect. 0x00041300: Task is ready to run at its next scheduled time. 0x00041301: The task is currently running. 0x00041302: The task has been disabled. 0x00041303: The task has not yet run. 0x00041304: There are no more runs scheduled for this task. 0x00041305: One or more of the properties that are needed to run this task have not been set. 0x00041306: The last run of the task was terminated by the user. 0x00041307: Either the task has no triggers or the existing triggers are disabled or not set. 0x00041308: Event triggers do not have set run times. 0x80010002: Call was canceled by the message filter. 0x80041309: A task's trigger is not found. 0x8004130A: One or more of the properties required to run this task have not been set. 0x8004130B: There is no running instance of the task. 0x8004130C: The Task Scheduler service is not installed on this computer. 0x8004130D: The task object could not be opened. 0x8004130E: The object is either an invalid task object or is not a task object. 0x8004130F: No account information could be found in the Task Scheduler security database for the task indicated. 0x80041310: Unable to establish existence of the account specified. 
0x80041311: Corruption was detected in the Task Scheduler security database 0x80041312: Task Scheduler security services are available only on Windows NT. 0x80041313: The task object version is either unsupported or invalid. 0x80041314: The task has been configured with an unsupported combination of account settings and run time options. 0x80041315: The Task Scheduler Service is not running. 0x80041316: The task XML contains an unexpected node. 0x80041317: The task XML contains an element or attribute from an unexpected namespace. 0x80041318: The task XML contains a value which is incorrectly formatted or out of range. 0x80041319: The task XML is missing a required element or attribute. 0x8004131A: The task XML is malformed. 0x0004131B: The task is registered, but not all specified triggers will start the task. 0x0004131C: The task is registered, but may fail to start. Batch logon privilege needs to be enabled for the task principal. 0x8004131D: The task XML contains too many nodes of the same type. 0x8004131E: The task cannot be started after the trigger end boundary. 0x8004131F: An instance of this task is already running. 0x80041320: The task will not run because the user is not logged on. 0x80041321: The task image is corrupt or has been tampered with. 0x80041322: The Task Scheduler service is not available. 0x80041323: The Task Scheduler service is too busy to handle your request. Please try again later. 0x80041324: The Task Scheduler service attempted to run the task, but the task did not run due to one of the constraints in the task definition. 0x00041325: The Task Scheduler service has asked the task to run. 0x80041326: The task is disabled. 0x80041327: The task has properties that are not compatible with earlier versions of Windows. 0x80041328: The task settings do not allow the task to start on demand. 0xC000013A: The application terminated as a result of a CTRL+C. 0xC0000142: The application failed to initialize properly. Bugs On Windows 2000 and Windows XP, when a computer is prepared for disk imaging with the sysprep utility, it cannot run tasks configured to run in the context of the SYSTEM account. Sysprep changes the security identifier (SID) to avoid duplication but does not update scheduled tasks to use the new SID. Consequently, the affected tasks fail to run. There is no solution for this problem but one may reschedule the affected tasks to work around the issue. On Windows Vista or Windows Server 2008, the next execution time displayed in Task Scheduler may be wrong. Microsoft issued a hotfix to remedy this issue. See also cron, job scheduler for Unix-like operating systems References Further reading External links Task Scheduler on MSDN The Log File in the Task Scheduler May Be Incorrectly Formatted and Difficult to Read - Unable to Delete Text in the Task Scheduler Log File Task Scheduler Service Does Not Start Scheduled Program Does Not Start in Task Scheduler - Cannot Disable Task Scheduler Windows administration Windows services 1995 software
9940136
https://en.wikipedia.org/wiki/T%20S%20Narayanaswami%20College%20of%20Arts%20and%20Science
T S Narayanaswami College of Arts and Science
T S Narayanaswami College of Arts and Science is a co-educational institution of higher learning affiliated to the University of Madras and founded by The India Cements Educational Society. The institute, offering both under-graduate and post graduate degree courses, is situated in a picturesque campus at Navalur, off Old Mahabalipuram Road, in Chennai, Tamil Nadu, India. History The India Cements Limited (ICL), one of India's largest cement companies, has promoted liberal education in arts, cultural, science, technology and sports. ICL has been running educational institutions like Bala Vidyalayas and Higher Secondary Schools at Sankarnagar, Sankagiri, Dalavoi, Chilamkur and Yerraguntla (A.P) and a Polytechnic at Sankarnagar, helping a nearly 7000 strong student community during the past four decades. With a view to extend its vista in the area of higher education, ICL established a society by name The India Cements Educational Society (Regd.) in 1994. To begin with, the agency promoted this self-financed co-educational college of arts and science in memory of the founder of ICL, Shri T S Narayanaswami, during the academic year 1996–97. Sanction from the Government of Tamil Nadu (G.O.No. 488 dt. 25.7.96) and affiliation from the University of Madras were obtained for running the college from the academic year 1996–97. Courses offered Under-graduation courses (3 years duration) B.Sc Computer Science (Bachelor's degree in computer science) B.Sc Biochemistry (Bachelor's degree in biochemistry) B.C.A (Bachelor's degree in computer applications) B.Com (Bachelor's degree in commerce) B.B.A (Bachelor's degree in business administration) Post graduation courses (2 years duration) M.Sc IT (Master's degree in Information Technology) M.Sc Biochemistry (Master's degree in biochemistry) M.Com (Master's degree in commerce) Integrated courses (5 years duration) M.Sc Computer Science and Technology (Master's degree in computer science and technology) External links T S Narayanaswami College website Arts and Science colleges in Chennai Colleges affiliated to University of Madras
29191543
https://en.wikipedia.org/wiki/Motion%20%28surveillance%20software%29
Motion (surveillance software)
Motion is a free-software CCTV application developed for Linux. It uses video4linux and its output can be JPEG files, Netpbm files, or MPEG video sequences. It is strictly command-line driven and can run as a daemon with a rather small footprint and low CPU usage. It is operated mainly via config files, though the end video streams can be viewed from a web browser. It can also call user-configurable "triggers" when certain events occur. Starting with version 4.2, the motion daemon supports encryption with Transport Layer Security. See also ZoneMinder Closed-circuit television (CCTV) References External links An Introduction to Video Surveillance with 'Motion': small tutorial for Debian users Surveillance Linux software Free software programmed in C Video surveillance Software using the GPL license
2056516
https://en.wikipedia.org/wiki/OpenCV
OpenCV
OpenCV (Open Source Computer Vision Library) is a library of programming functions mainly aimed at real-time computer vision. Originally developed by Intel, it was later supported by Willow Garage, then Itseez (which was later acquired by Intel). The library is cross-platform and free for use under the open-source Apache 2 License. Since 2011, OpenCV has featured GPU acceleration for real-time operations. History Officially launched in 1999, the OpenCV project was initially an Intel Research initiative to advance CPU-intensive applications, part of a series of projects including real-time ray tracing and 3D display walls. The main contributors to the project included a number of optimization experts in Intel Russia, as well as Intel's Performance Library Team. In the early days of OpenCV, the goals of the project were described as: Advance vision research by providing not only open but also optimized code for basic vision infrastructure. No more reinventing the wheel. Disseminate vision knowledge by providing a common infrastructure that developers could build on, so that code would be more readily readable and transferable. Advance vision-based commercial applications by making portable, performance-optimized code available for free – with a license that did not require code to be open or free itself. The first alpha version of OpenCV was released to the public at the IEEE Conference on Computer Vision and Pattern Recognition in 2000, and five betas were released between 2001 and 2005. The first 1.0 version was released in 2006. A version 1.1 "pre-release" was released in October 2008. The second major release of OpenCV was in October 2009. OpenCV 2 includes major changes to the C++ interface, aiming at easier, more type-safe patterns, new functions, and better implementations for existing ones in terms of performance (especially on multi-core systems). Official releases now occur every six months, and development is now done by an independent Russian team supported by commercial corporations. In August 2012, support for OpenCV was taken over by a non-profit foundation, OpenCV.org, which maintains a developer and user site. In May 2016, Intel signed an agreement to acquire Itseez, a leading developer of OpenCV. In July 2020, OpenCV announced and began a Kickstarter campaign for the OpenCV AI Kit, a series of hardware modules and additions to OpenCV supporting Spatial AI. Applications OpenCV's application areas include: 2D and 3D feature toolkits Egomotion estimation Facial recognition system Gesture recognition Human–computer interaction (HCI) Mobile robotics Motion understanding Object detection Segmentation and recognition Stereopsis stereo vision: depth perception from 2 cameras Structure from motion (SFM) Motion tracking Augmented reality To support some of the above areas, OpenCV includes a statistical machine learning library that contains: Boosting Decision tree learning Gradient boosting trees Expectation-maximization algorithm k-nearest neighbor algorithm Naive Bayes classifier Artificial neural networks Random forest Support vector machine (SVM) Deep neural networks (DNN) Programming language OpenCV is written in C++ and its primary interface is in C++, but it still retains a less comprehensive though extensive older C interface. All of the new developments and algorithms appear in the C++ interface. There are bindings in Python, Java and MATLAB/OCTAVE. The API for these interfaces can be found in the online documentation. 
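A minimal sketch of what the Python bindings look like in practice is given below; it assumes the opencv-python package (which provides the cv2 module) is installed, and the file names are hypothetical.

```python
# Minimal sketch of the OpenCV Python bindings (cv2 module); the file
# names used here are hypothetical and exist only for illustration.
import cv2

# Load an image from disk (returns a NumPy array in BGR channel order).
image = cv2.imread("example.jpg")
if image is None:
    raise FileNotFoundError("example.jpg could not be read")

# Convert to grayscale and detect edges with the Canny algorithm.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, threshold1=100, threshold2=200)

# Save the result; cv2.imwrite infers the output format from the extension.
cv2.imwrite("example_edges.png", edges)
```

The same call sequence maps closely onto the primary C++ interface, where the equivalent functions live in the cv namespace.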
Wrappers in several programming languages have been developed to encourage adoption by a wider audience. In version 3.4, JavaScript bindings for a selected subset of OpenCV functions were released as OpenCV.js, to be used for web platforms. Hardware acceleration If the library finds Intel's Integrated Performance Primitives on the system, it will use these proprietary optimized routines to accelerate itself. A CUDA-based GPU interface has been in progress since September 2010. An OpenCL-based GPU interface has been in progress since October 2012; documentation for version 2.4.13.3 can be found at docs.opencv.org. OS support OpenCV runs on the following desktop operating systems: Windows, Linux, macOS, FreeBSD, NetBSD, OpenBSD. OpenCV runs on the following mobile operating systems: Android, iOS, Maemo, BlackBerry 10. The user can get official releases from SourceForge or take the latest sources from GitHub. OpenCV uses CMake. See also AForge.NET, a computer vision library for the Common Language Runtime (.NET Framework and Mono). ROS (Robot Operating System). OpenCV is used as the primary vision package in ROS. VXL, an alternative library written in C++. CVIPtools, a complete GUI-based computer-vision and image-processing software environment, with C function libraries, a COM-based DLL, along with two utility programs for algorithm development and batch processing. OpenNN, an open-source neural networks library written in C++. List of free and open source software packages References C++ libraries Computer vision software Gesture recognition Software using the Apache license
68614188
https://en.wikipedia.org/wiki/CodeMonkey%20%28software%29
CodeMonkey (software)
CodeMonkey is an educational computer coding environment that allows beginners to learn computer programming concepts and languages. CodeMonkey is intended for students ages 6–14. Students learn coding in languages such as Python, Blockly and CoffeeScript, as well as the fundamentals of computer science and math. The software was first released in 2014, and was originally developed by Jonathan Schor, Ido Schor and Yishai Pinchover, supported by the Center for Educational Technology in Israel. Development history The CodeMonkey software program, in the form of a game for children, was developed by three software engineers from Haifa, Israel: the brothers Jonathan and Ido Schor and Yishai Pinchover. The trio set up a start-up company, CodeMonkey Studios Ltd., supported by the Center for Educational Technology. The game was launched in May 2014 and is currently available in 23 languages. The company has offices in Israel and the USA. Since 2014, CodeMonkey has launched several additional programming tools in the form of games, including Coding Adventure, Game Builder, Dodo Does Math, Banana Tales, CodeMonkey Jr. and Beaver Achiever. In 2018, the software company was acquired by TAL Education Group, a Chinese holding company, but remained active as an independent subsidiary, retaining its software development team. In June 2020, CodeMonkey joined the UNESCO distance learning initiative and offered free courses for all schools that were forced to close during the Covid-19 lockdown. Overview and functionality The game does not require prior programming experience and is intended for children from the age of 6. It allows the user to take their first steps in programming but also progresses to more advanced topics. The teaching method is experiential, in accordance with the principles of game-based learning: the children control animal figures and direct them to collect bananas, overcoming various obstacles. One of the salient features of the game is that it requires writing actual textual code, as opposed to games that represent commands using graphical blocks. Supported language The programming languages are Python and CoffeeScript, chosen mostly for their friendly syntax. Some games like CodeMonkey Jr. and Beaver Achiever rely on block-based coding using Blockly. Integration of the game in schools The games are intended for individual use and for educational classrooms and have been selectively adopted by schools and school centers in several countries, including Israel, the USA, the UK, China, India and Bhutan. CodeMonkey was also integrated into the Israeli Cyber Championship for Elementary Schools (Skillz Olympics) and a high school software program also called Skillz, where CodeMonkey games are part of a coding competition for young students. See also Educational programming language References Computer science education Educational programming languages Pedagogic integrated development environments
6987474
https://en.wikipedia.org/wiki/RNAD%20Coulport
RNAD Coulport
Royal Naval Armaments Depot Coulport, shortened to RNAD Coulport, on Loch Long in Argyll, Scotland, is the storage and loading facility for the nuclear warheads of the United Kingdom's Trident programme. The base, near the village of Coulport, has up to 16 reinforced concrete bunkers built into the hillside on the eastern shore of Loch Long. It is the last depot in Britain to retain the "RNAD" designation, indicating a Royal Naval Armaments Depot. The depot was established during the Cold War as the storage, maintenance and loading facility for Polaris nuclear weapons. Today, Coulport is mainly used for handling Trident warheads. Two docks are located on the shoreline at the foot of the hill. There, weapons are loaded onto nuclear submarines before they go on patrol and unloaded before they return to base at nearby Faslane. An older jetty is known as the Polaris Jetty, while the newer, covered Explosive Handling Jetty (EHJ) is used for handling Trident warheads. History Coulport had originally been best known as a summer holiday retreat for wealthy Glaswegians, with its most notable feature being Kibble Palace, now relocated to the Glasgow Botanic Gardens. It is the site of the farm of Duchlage (historically spelt Duchlass). The Nassau Agreement was signed in December 1962, and the Polaris Sales Agreement was signed in April 1963. Initial construction took place between 1963, when Faslane was chosen as the new Polaris base, and 1968, when the first Polaris boat began its patrol. Safety considerations required that the armament maintenance and the storage facility have its own berth and be at least from the main facility whilst operational considerations dictated that the two facilities should be within an hour's sailing time. Coulport, on the opposite peninsula, met both of these requirements. The Trident Works Programme at Coulport and Faslane, co-ordinated by the Property Services Agency, took 13 years to complete. Planning work at Coulport began in 1982, and the estimated final cost for the entire programme, at 1994 prices, was approximately £1.9 billion. This made it the second most expensive procurement project in the UK after the Channel Tunnel project. Prior to the 2014 Scottish independence referendum, the implications of a potential vote for Scottish independence from the United Kingdom for the Coulport and Faslane bases were extensively discussed in the media, as it was unclear if any submarine base in England, Wales or Northern Ireland could house the Coulport silos. However, since the Scottish electorate voted against independence, the area along with the rest of Scotland remained UK territory and so the bases, and the equipment housed there, were unaffected. A covered floating dry dock for the Trident submarines was built at Hunterston in Ayrshire, and floated to RNAD Coulport where it has been situated since 1993. This Explosive Handling Jetty is one of the world's largest floating concrete structures. Sister depot at Kings Bay, Georgia The UK's Polaris system was fully serviced at Coulport, but the Trident missiles are randomly selected from the large US stockpile at the Trident Refit Facility at the Naval Submarine Base Kings Bay, Georgia. The missiles are not owned outright by the UK, which has "mingled asset" ownership rights to 58 missiles from a pool shared with the US Navy. The Trident warheads are designed and manufactured by the Atomic Weapons Establishment at Aldermaston, Berkshire, England, and are owned by the UK government. 
Site management RNAD Coulport is owned by the Ministry of Defence (MOD) and is one of four Atomic Weapons Establishment (AWE) sites. Under a fifteen-year contract agreed in 2012, AWE plc, Babcock and Lockheed Martin UK Strategic Systems, together known as the ABL Alliance, manage and operate Coulport, although the Royal Navy's Naval Base Commander Clyde retains overall control and responsibility for security and activities. The site is regulated by the Office for Nuclear Regulation and Defence Nuclear Safety Regulator. Safety Exercise Bowline is the annual test of the emergency response routines to a nuclear weapon accident at Coulport. It is conducted by the Office for Nuclear Regulation. In 2011 the test failed as "a number of command and control aspects of the exercise were not considered to have been adequately demonstrated". The exercise was repeated later in the year and recorded "a marked improvement" and that "the agreed objectives and associated success criteria of the 'Command and Control' aspects were met." Transport of Trident nuclear warheads by road The main logistical movement of nuclear weapons in the United Kingdom is between the Atomic Weapons Establishment in Berkshire and RNAD Coulport in Argyll, in both directions. Because the warheads need to be constantly refurbished, batches are shuttled by road convoy several times a year. Convoys use staging posts and crew change locations during this journey. The Truck Cargo Heavy Duty (TCHD) carriers containing the weapons are escorted in a convoy of MoD vehicles commanded by a Ministry of Defence Police (MDP) Chief Inspector. The crew, of up to 50 people, includes a first aid team, fire crew and personnel equipped to monitor for radiological hazards. The convoy maintains contact by radio and telephone with Task Control, MDP Central Control Room, Wethersfield, Essex, which monitors its movement, and with the civil police forces in the affected areas. Police forces are notified at least 24 hours in advance of a convoy being routed through their area; this enables them to advise the convoy about any local traffic problems. Police forces may advise fire brigades of the presence of the convoy if it is moving into the vicinity of a fire brigade operation. Details of nuclear warhead convoys are kept secret by the UK government and the MoD who operate a "Neither Confirm Nor Deny" policy on informing the public regarding convoys. Evidence given by the Nuclear Information Campaign to the Defence Select Committee (based on figures from campaign group Nukewatch UK for 2000 to 2006) give the number of convoys as ranging from two to six return journeys per year from Aldermaston to Coulport. Estimates of the warhead numbers transported during this period are that 88 were moved from Aldermaston to Coulport while 120 were returned, indicating a withdrawal of between 30 and 50 warheads leaving an operational stockpile of between 170 and 150 warheads. In the event of a nuclear accident the SSC would activate the MoD's Nuclear Accident Response Organisation and would alert the local police constabulary immediately. The responsibility for these operations rests with the Director Nuclear Movements & Nuclear Accident Response Group. Protesters regularly try to stop the convoy and climb onto the TCHDs. The MDP are trained on a regular basis to counter any protest. MDP motorcyclists and traffic car officers make arrests and then hand over responsibility to the local police force. 
See also 1958 US–UK Mutual Defence Agreement British replacement of the Trident system DM Glen Douglas Nuclear weapons and the United Kingdom Polaris Sales Agreement Scottish Campaign for Nuclear Disarmament Special Relationship UGM-27 Polaris References Further reading External links Official page at the Royal Navy website Floating Trident Submarine Dry Dock at Coulport 1963 establishments in Scotland Ammunition dumps in Scotland Buildings and structures completed in 1968 Buildings and structures in Argyll and Bute Royal Navy bases in Scotland Nuclear bunkers in the United Kingdom Nuclear stockpile stewardship United Kingdom nuclear command and control Ports and harbours of Scotland Royal Navy shore establishments Scottish coast Royal Navy submarine bases Trident (UK nuclear programme) United Kingdom–United States relations Polaris (UK nuclear programme) Drydocks
51373803
https://en.wikipedia.org/wiki/1984%20Troy%20State%20Trojans%20football%20team
1984 Troy State Trojans football team
The 1984 Troy State Trojans football team represented Troy State University during the 1984 NCAA Division II football season, and completed the 64th season of Trojans football. The Trojans played their home games at Veterans Memorial Stadium in Troy, Alabama. The 1984 team came off a 7–4 record from the previous season. The 1984 team was led by coach Chan Gailey. The team finished the regular season with a 9–1 record and made the NCAA Division II playoffs. The Trojans defeated the North Dakota State Bison 18–17 in the National Championship Game en route to the program's first NCAA Division II Football Championship and second overall national championship. The National Championship Game The title game between Troy and North Dakota State proved to be a good one, as it pitted the #3-ranked team against the #1-ranked team. The game was shown nationally on ESPN. The game was close throughout, with both teams playing good defense in a low-scoring affair. Troy State trailed 17–15 late in the contest and, with 1:30 remaining in the game, Carey Christensen returned to lead the Trojans on their final drive to try and win the game. Starting from its own 10-yard line, Troy State eventually reached the NDSU 32-yard line. With the clock running and the Trojans out of time-outs, the Trojans and freshman kicker Ted Clem took the field with eight seconds remaining as the clock ticked down. Just as time was about to expire, the Trojans got the snap off, and Ted Clem hit a 50-yard field goal to give Troy the lead and the win over North Dakota State. Schedule References Troy Troy Trojans football seasons NCAA Division II Football Champions Gulf South Conference football champion seasons Troy State
1472922
https://en.wikipedia.org/wiki/List%20of%20Acorn%20Electron%20games
List of Acorn Electron games
Following is a list of Acorn Electron games, with original publishers. 0–9 3D Bomb Alley (Software Invasion) 3D Dotty (Blue Ribbon) 3D Maze (IJK) 3D Tankzone (Dynabyte) 737 Flight Simulator (Salamander) 747 (Doctorsoft) 747 Flight Simulator (DACC Limited) 767 Advanced Flight Simulator (Flightdeck) A Abyss (Cases) Aces High (Oasis) Acheton (Topologika) Adventure (Micro Power) Adventureland (Adventure International) The Adventures of Buckaroo Banzai (Adventure International) Adventurous English (Highlight) Airline (Cases) Alien Break In (Romik) Alien Dropout (Superior Software) Alphatron (Tynesoft) Anarchy Zone (Atlantis) Arcade Soccer (4th Dimension) Arcadians (Acornsoft) Arena 3000 (Microdeal) Arrow of Death part 1 (Adventure Soft) Arrow of Death part 2 (Adventure Soft) Astro Plumber (Blue Ribbon) Atom Smasher (Romik) Auf Wiedersehen, Pet (Tynesoft) Avon (Topologika) B Ballistix (Superior Software/Acornsoft) Balloon Buster (Blue Ribbon) Bandits at 3 O'Clock (Micro Power) Bar Billiards (Blue Ribbon) Barbarian: The Ultimate Warrior (Superior Software/Acornsoft) Barbarian II: The Dungeon of Drax (Superior Software/Acornsoft) Baron (Superior Software/Acornsoft) Battle 1917 (Cases) Battlefields (BBCSoft) Battlezone 2000 (MC Lothlorien) Battlezone Six (Kansas) Beach-Head (U.S. Gold) Bed Bugs (Optima Software) Beebtreck (Software For All) The Big KO (Tynesoft) Birdie Barrage (CDS Software) Birds of Prey (Romik) Birdstrike (Firebird) Blagger (Alligata) Blast! (Audiogenic) Blitzkrieg (Software Invasion) Blockbusters (Macsen) Blockbusters Gold Run (Macsen) Blockbusters Question Master (Macsen) Bobby Charlton Soccer (DACC Limited) Boffin (Addictive Games) Bomber Baron (Optyx) Bonecruncher (Superior Software/Acornsoft) The Boss (Peaksoft) Boulder Dash (Tynesoft) Bouncing Bombs (Tynesoft) Boxer (Acornsoft) Bozo the Brave (Tynesoft) Braz (Livewire) Breakthrough (Audiogenic) Brian Clough's Football Fortunes (CDS Software) Brian Jacks Superstar Challenge (Martech) Bridge Challenge (Livewire) Bridge Master (J Keyne) Bruce's Play Your Cards Right (Britannia) Buffalo Bill's Rodeo Games (Tynesoft) Bug Blaster (Alligata) Bug Eyes (Icon) Bug Eyes 2 (Audiogenic) Bugs (Virgin Games) Bullseye (Macsen) Bumble Bee (Micro Power) Bun Fun (Squirrel) Business Games (Acornsoft) By Fair Means or Foul (Superior Software/Acornsoft) C Camelot (Superior Software/Blue Ribbon) Castle Assault (Blue Ribbon) Castle Blackstar (SCR Adventures) Castle of Riddles (Acornsoft) Castles & Clowns (Macmillan) Caterpillar (IJK) Caterpillar (Romik) Caveman (Kansas) Caveman Capers (Icon) Centibug (Superior Software) Chess (Acornsoft) Chess (Micro Power) Chess (Superior Software) Chip Buster (Software Invasion) Chuckie Egg (A&F Software) Circus (Adventure Soft) Circus Games (Tynesoft) Citadel (Superior Software) City Defence (Bug-Byte) Clogger (Impact Software) Codename: Droid (Superior Software/Acornsoft) Colossus Bridge 4 (CDS Software) Colossus Chess 4 (CDS Software) Combat Lynx (Durell Software) Commando (Elite) Commonwealth Games (Tynesoft) Condition Red (Blue Ribbon) Confuzion (Incentive) Contact Bridge (Alligata) Cops 'n' Robbers (Atlantis) Corn Cropper (Cases) Corporate Climber (Dynabyte) Cosmic Camouflage (Superior Software/Acornsoft) The Count (Adventure International) Countdown to Doom (Topologika) Counter Attack (OIC) Crack-Up (Atlantis) Crazee Rider (Superior Software/Acornsoft) Crazy Er*Bert (Alternative Software) Crazy Tracer (Acornsoft) Creepy Cave (Atlantis) Cricket (Bug-Byte) Croaker (Micro Power) Crown 
Jewels (Alligata) Crystal Castles (U.S. Gold) Custard Pie Fight (Comsoft) Cybertron Mission (Micro Power) Cyborg Warriors (Superior Software/Acornsoft) Cylon Attack (A&F Software) Cylon Invasion (Tynesoft) D Dallas (Cases) Danger UXB (Micro Power) Daredevil Dennis (Visions) Darts (Blue Ribbon) Dead or Alive (Alternative Software) Deathstar (Superior Software) Despatch Rider (Audiogenic) Diamond Mine (MRM) Diamond Mine II (Blue Ribbon) Diamond Pete (Alligata) Dogfight - For Aces Only (Slogger Systems) Dominoes (Blue Ribbon) Drain Mania (Icon) Draughts (Superior Software) Draughts & Reversi (Acornsoft) Dream Time (Heyley) Dunjunz (Bug-Byte) E E-Type (4th Dimension) Eddie Kidd Jump Challenge (Martech) Egghead In Space (Cronosoft) Electron Invaders (Micro Power) Elite (Acornsoft) Elixir (Superior Software/Acornsoft) Empire (Shards Software) Enigma (Brainbox) Enthar Seven (Robico) Er*Bert (Microbyte) Escape from Moonbase Alpha (Micro Power) Escape from Pulsar 7 (Adventure Soft) Evening Star (Hewson Consultants) Exile (Superior Software/Acornsoft) F Fantasia Diamond (Hewson Consultants) The Feasibility Experiment (Adventure Soft) Felix and the Fruit Monsters (Micro Power) Felix in the Factory (Micro Power) Felix Meets the Evil Weevils (Micro Power) Fighter Pilot (Kansas) Firebug (Acornsoft) Firetrack (Superior Software/Acornsoft) Firienwood (MP Software) First Moves Chess (Longman) Five-A-Side Socca (IJK) Flight Path 737 (Anirog) Football Manager (Addictive Games) Footballer of the Year (Gremlin) Frak! (Aardvark) Frankenstein 2000 (Icon) Free Fall (Acornsoft) Frenzy (Micro Power) Froot Raid (Audiogenic) Fruit Catcher (Livewire) Fruit Machine (Alligata) Fruit Machine (Superior Software) Fruit Machine (Doctorsoft) Fruit Machine Simulator (Codemasters) Fun School 1 (Database Educational Software) Fun School 2 (Database Educational Software) Future Shock (Tynesoft) G Galactic Commander (Micro Power) Galactic Patrol (Mastertronic) Galaforce (Superior Software) Galaforce 2 (Superior Software/Blue Ribbon) Galaxy Wars (Bug-Byte) Gatecrasher (Quicksilva) Gauntlet (Micro Power) Geoff Capes Strong Man (Martech) Ghost Town (Adventure International) Ghouls (Micro Power) Gisburne's Castle (Martech) Go (Acornsoft) Goal! (Tynesoft) The Golden Baton (Adventure Soft) The Golden Figurine (Atlantis) The Golden Voyage (Adventure International) Golf (Blue Ribbon) Golf (Yes!) 
Gorph (Doctorsoft) Graham Gooch's Match Cricket (Alternative Software) Graham Gooch's Test Cricket (Audiogenic) The Great Wall (Artic Computing) Gremlins: The Adventure (Adventure Soft) Grid Iron (Top Ten) Grid Iron 2 (Alternative Software) Guardian (Alligata) Gunfighter (Atlantis) Gunsmoke (Software Invasion) Gyroscope (Melbourne House) H The Hacker (Firebird) Hampstead (Melbourne House) Hard Hat Harry (Retro Software) Hareraiser (Haresoft) Harlequin (Kansas) Heathrow ATC (Hewson Consultants) Hell Hole (Alligata) Helter Skelter (Audiogenic) Hercules (The Power House) Hezarin (Topologika) Hi Q Quiz (Blue Ribbon) Hobgoblin (Atlantis) Hobgoblin 2 (Atlantis) Holed Out (4th Dimension) Holed Out Extra Courses 1 (4th Dimension) Holed Out Extra Courses 2 (4th Dimension) Hopper (Acornsoft) Horoscopes (Third Program) Horse Race (Dynabyte) Hostages (Superior Software/Acornsoft) House of Horrors (Kayess) Hunchback (Ocean) Hunkidory (Bug-Byte) Hyper Viper (Retro Software) Hyperball (Superior Software/Acornsoft) Hyperdrive (IJK) I Ian Botham's Test Match (Tynesoft) Icarus (Mandarin) Ice Hockey (Bug-Byte) Imogen (Superior Software/Acornsoft) Impact (Audiogenic) Impossible Mission (U.S. Gold) Indoor Soccer (Alternative Software) Indoor Sports (Tynesoft) Inertia (4th Dimension) Intergalactic Trader (Micro Power) Inu (MRJ) Invaders (IJK) Invaders (Superior Software) J Jack Attack (Bug-Byte) Jet-Boot Jack (English Software) Jet Power Jack (Micro Power) Jet Set Willy (Tynesoft) Jet Set Willy II (Tynesoft) Joe Blade (Players) Joe Blade 2 (Players) Joey (Blue Ribbon) Johnny Reb (MC Lothlorien) Jump Jet (Anirog) Jungle Jive (Virgin Games) Jungle Journey (Retro Software) Junior Maths Pack (Micro Power) K Kamakazi (A&F Software) Kane (Mastertronic) Karate Combat (Superior Software) Kastle (Tynesoft) Kayleth (U.S. 
Gold) Ket Trilogy (Incentive) Killapede (Players) Killer Gorilla (Micro Power) Killer Gorilla 2 (Superior Software/Acornsoft) Kingdom of Hamil (Topologika) Kissin' Cousins (English Software) Know Your Own Psi-Q (Mirrorsoft) Kourtyard (Go-Dax) L Laser Reflex (Talent Computer Systems) Last Days of Doom (Topologika) Last Ninja (Superior Software/Acornsoft) Last Ninja 2 (Superior Software/Acornsoft) Last of the Free (Audiogenic) League Challenge (Atlantis) Lemming Syndrome (Dynabyte) Licence to Kill (Alternative Software) The Life of Repton (Superior Software/Acornsoft) The Living Body (Martech) Locomotion (BBCSoft) Loony Loco (Kansas) Loopz (Audiogenic) Lunar Rescue (Alligata) M Magic Mushrooms (Acornsoft) Mango (Blue Ribbon) Maniac Mower (Kansas) Master Break (Superior Software/Acornsoft) Maze (Acornsoft) Mazezam (Retro Software) Megaforce (Tynesoft) Mendips Stone (Dee-Kay) Merry Xmas Santa (Icon) Meteors (Acornsoft) Mexico '86 (Qualsoft) Micro Olympics (Database Software) Microball (Alternative Software) Mikie (Imagine) Millionaire (Incentive) The Mine (Micro Power) Mined Out (Quicksilva) Mineshaft (Durell Software) Missile Control (Gemini) Monkey Nuts (Bug-Byte) Monsters (Acornsoft) Moon Buggy (Kansas) Moon Raider (Micro Power) Mouse Trap (Tynesoft) Mr Wiz (Superior Software) Munchman (Kansas) Murdac (Topologika) Mystery Fun House (Adventure International) N Network (Superior Software/Acornsoft) Night Strike (Alternative Software) Night World (Alligata) Nightmare Maze (Blue Ribbon) O Omega Orb (Audiogenic) Omega Probe (Optima Software) One Last Game (Bevan Technology) Orbital (Impact Software) Osprey (Bourne) Overdrive (Superior Software) Oxbridge (Tynesoft) P Palace of Magic (Superior Software/Acornsoft) Pandemonium (Superior Software/Acornsoft) Panik! (Atlantis) Paperboy (Elite) Paras (MC Lothlorien) Paul Daniels Magic Show (Acornsoft) Pedro (Imagine) Peg Leg (IJK) Pengi (Visions) Pengywn (Postern) Percy Penguin (Superior Software) Perplexity (Superior Software/Acornsoft) Perseus and Andromeda (Adventure Soft) Pettigrew's Diary (Shards Software) Phantom (Tynesoft) Phantom Combat (Doctorsoft) Pharaoh's Tomb (A&F Software) Philosopher's Quest (Acornsoft) Pinball (Microbyte) Pinball Arcade (Kansas) Pipe Mania (Empire Interactive) Pipeline (Superior Software/Acornsoft) Pirate Adventure (Adventure International) Plan B (Bug-Byte) Plan B2 (Bug-Byte) Planetoid (Acornsoft) Playbox (Comsoft) Plunder (Cases) Podd (ASK/Acornsoft) Poker (Duckworth) Pool (Dynabyte) Positron (Micro Power) Predator (Superior Software/Acornsoft) Pro Boxing Simulator (Codemasters) Pro Golf (Atlantis) Psycastria (Audiogenic) Psycastria 2 (Audiogenic) Pyramid of Doom (Adventure International) Q Qbix (Alligata) Quest (Superior Software/Acornsoft) Quest for Freedom (IJK) A Question of Sport (Superior Software/Acornsoft) Questprobe featuring The Human Torch and The Thing (Adventure International) Questprobe: The Incredible Hulk (Adventure International) Questprobe: Spiderman (Adventure International) Qwak (Superior Software/Acornsoft) R Ransack (Audiogenic) Ravage (Blue Ribbon) Ravenskull (Superior Software) Rebel Planet (U.S.
Gold) Red Coats (MC Lothlorien) Repton (Superior Software) Repton 2 (Superior Software) Repton 3 (Superior Software/Acornsoft) Repton Around the World (Superior Software/Acornsoft) Repton Infinity (Superior Software/Acornsoft) Repton: The Lost Realms (Retro Software) Repton Thru Time (Superior Software/Acornsoft) Return of R2 (Blue Ribbon) Return to Doom (Topologika) Revenge of Zor (Kansas) Reversi (Microbyte) Reversi (Kansas) Reversi (Superior Software) Ricochet (Superior Software/Acornsoft) Rig Attack (Tynesoft) Rik the Roadie (Alternative Software) Robin of Sherwood (Adventure Soft) Roboto (Bug-Byte) Robotron: 2084 (Atarisoft) Roman Empire (MC Lothlorien) Round Ones (Alternative Software) Row of Four (Software For All) RTC Birmingham (Dee-Kay) RTC Crewe (Dee-Kay) RTC Doncaster (Dee-Kay) Rubble Trouble (Micro Power) S Saigon (Tynesoft) Santa's Delivery (Tynesoft) Saracoid (Audiogenic) SAS Commander (Comsoft) Savage Island part 1 (Adventure International) Savage Island part 2 (Adventure International) Savage Pond (Starcade) Screwball (Blue Ribbon) Sea Wolf (Optima Software) Secret Mission (Adventure International) Serpent's Lair (Comsoft) Shanghai Warriors (Players) Shark (Audiogenic) Shark Attack (Romik) Shedmaster Bounds Green (Dee-Kay) Shedmaster Finsbury Park (Dee-Kay) Shuffle (Budgie) Sim (CSM / Viper) SimCity (Superior Software/Acornsoft) Skirmish (Go-Dax) Sky Hawk (Bug-Byte) Smash and Grab (Superior Software) Snake (Kansas) Snapper (Acornsoft) Snapple Hopper (Macmillan) Snooker (Acornsoft) Snooker (Visions) Soccer Boss (Alternative Software) Soccer Supremo (Qualsoft) Sooty's Fun With Numbers (Friendly Learning) Sorcerer of Claymorgue Castle (Adventure International) South Devon Hydraulics (Dee-Kay) Southern Belle (Hewson Consultants) Space Agent Zelda (Audiogenic) Space Caverns (Tynesoft) Space Ranger (Audiogenic) Space Shuttle (Microdeal) Space Station Alpha (Icon) Space Trek (Dimax) Spaceman Sid (English Software) Special Operations (MC Lothlorien) Spectipede (Mastertronic) Spellbinder (Superior Software/Acornsoft) Sphere of Destiny (Audiogenic) Sphere of Destiny 2 (Audiogenic) Sphinx Adventure (Acornsoft) Spitfire 40 (Mirrorsoft) Spooksville (Blue Ribbon) Sporting Triangles (CDS Software) Spy Snatcher (Topologika) Spy vs.
Spy (Tynesoft) Spycat (Superior Software/Acornsoft) Squeekaliser (Bug-Byte) Stairway to Hell (Software Invasion) Star Drifter (Firebird) Star Force Seven (Bug-Byte) Star Maze 2 (Mastertronic) Star Wars (Domark) Starport (Superior Software/Acornsoft) Starship Command (Acornsoft) Steve Davis Snooker (CDS Software) Stix (Supersoft) Stock Car (Micro Power) The Stolen Lamp (MC Lothlorien) Storm Cycle (Atlantis) Stranded (Superior Software) Strange Odyssey (Adventure International) Stratobomber (IJK) Strike Force Harrier (Mirrorsoft) Strip Poker II Plus (Anco Software) Stryker's Run (Superior Software/Acornsoft) Subway Vigilante (Players) Summer Olympiad (Tynesoft) Super Hangman (IJK) Super Fruit (Simonsoft) Super Golf (Squirrel) Super Gran: The Adventure (Tynesoft) Super Pool (Software Invasion) Superior Soccer (Superior Software/Acornsoft) Superman: The Game (First Star/Prism Leisure) Superman: The Man of Steel (Tynesoft) Survivors (Atlantis) Swag (Micro Power) Swoop (Micro Power) Syncron (Superior Software) T Tactic (Superior Software - Unreleased) Tales of the Arabian Nights (Interceptor Micros) Tank Attack (CDS Software) The Taroda Scheme (Heyley) Tarzan (Martech) Tarzan Boy (Alligata) Tempest (Superior Software) Templeton (Bug-Byte) Ten Little Indians (Adventure Soft) Tennis (Bug-Byte) Terrormolinos (Melbourne House) Test Match (CRL) Tetris (Mirrorsoft) Thai Boxing (Anco Software) Thrust (Superior Software) Thunderstruck (Audiogenic) Thunderstruck 2 (Audiogenic) The Time Machine (Adventure Soft) The Times Computer Crosswords Jubilee Puzzles (The Times) The Times Computer Crosswords Volume 1 (The Times) Tomcat (Players) Tops and Tails (Macmillan) Traditional Games (Gemini) Trafalgar (Squirrel) Trapper (Blue Ribbon) Treasure Hunt (Macsen) Trek II (Tynesoft) Twin Kingdom Valley (Bug-Byte) U Uggie's Garden (Superior Software - Unreleased) UKPM (IJK) Ultron (Icon) Uranians (Bug-Byte) US Drag Racing (Tynesoft) V Vegas Jackpot (Mastertronic) Vertigo (Superior Software/Acornsoft) Video Card Arcade (Blue Ribbon) Video Classics (Firebird) Video Pinball (Alternative Software) Video's Revenge (Budgie) Vindaloo (Tynesoft) Voodoo Castle (Adventure International) Vortex (Software Invasion) W Walk the Plank (Mastertronic) War at Sea (Betasoft) Warehouse (Top Ten) Warp 1 (Icon) Waterloo (MC Lothlorien) Waxworks (Adventure Soft) The Way of the Exploding Fist (Melbourne House) Web War (Artic Computing) Weenies (Cronosoft) Weetabix vs the Titchies (Romik) West (Talent Computer Systems) Wet Zone (Tynesoft) Whist Challenge (Livewire) White Knight Mk 11 (BBCSoft) White Magic (4th Dimension) White Magic 2 (4th Dimension) Whoopsy (Shards Software) Winter Olympiad '88 (Tynesoft) Winter Olympic (Tynesoft) The Wizard of Akyrz (Adventure Soft) Wizzy's Mansion (Audiogenic) Woks (Artic Computing) Wongo (Icon) X Xadomy (Brassington) Xanagrams (Postern) XOR (Logotron) Y Yie Ar Kung-Fu (Imagine) Yie Ar Kung-Fu II (Imagine) Z Zalaga (Aardvark) Zany Kong Junior (Superior Software) Zenon (Impact Software) Ziggy (Audiogenic) Zorakk the Conqueror (Icon) See also Lists of video games Acorn Electron
15773661
https://en.wikipedia.org/wiki/Federal%20Service%20for%20Supervision%20of%20Communications%2C%20Information%20Technology%20and%20Mass%20Media
Federal Service for Supervision of Communications, Information Technology and Mass Media
The Federal Service for Supervision of Communications, Information Technology and Mass Media () or Roskomnadzor () is the Russian federal executive agency responsible for monitoring, controlling and censoring Russian mass media. Its areas of responsibility include electronic media, mass communications, information technology and telecommunications, supervising compliance with the law, protecting the confidentiality of personal data being processed, and organizing the work of the radio-frequency service. History The Federal Service for Supervision in the Sphere of Telecom, Information Technologies and Mass Communications was re-established in May 2008. Resolution number 419, "On Federal Service for Supervision in the Sphere of Telecom, Information Technologies and Mass Communications", was adopted on February 6, 2008. In March 2007 the authority – then a subdivision of the Cultural Ministry of Russia called the Russian Federal Surveillance Service for Compliance with the Legislation in Mass Media and Cultural Heritage Protection (Rosokhrankultura) – warned the Kommersant newspaper that it should not mention the National Bolshevik Party on its pages, as the party had been denied official registration. In 2019, media criticized the service's choice of experts who perform analysis of referred publications to assess their compliance with regulations. A number of experts recruited by Roskomnadzor are associated with pseudo-scientific and sectarian movements, including HIV-deniers, ultra-conservative, anti-vaccination and alternative "medicine" activists. Three such experts – Anna Volkova, Tatyana Simonova and Elena Shabalina – assessed the lyrics of the popular rapper Egor Kreed, in which they found "mutagenic effect", "satanic influence" and "psychological warfare". Also in 2019, Roskomnadzor published the first iteration of the "list of information resources who had in the past been spreading unreliable information", including a number of social media groups and media websites, mostly accused of incorrectly reporting on a single incident in Dzerzhinsk in June 2019. After the nationwide January 2021 street protests, the agency fined seven social media companies for not removing pro-Navalny videos. "Facebook, Instagram, Twitter, TikTok, VKontakte, Odnoklassniki and YouTube will be fined for non-compliance with requirements to prevent the dissemination of calls to minors to participate in unauthorized rallies," Roskomnadzor said in a statement published on its website. Service tasks Roskomnadzor is a federal executive body responsible for control, censorship and supervision in the field of media, including electronic media and mass communications; for information technology and communications; for supervising compliance of personal data processing with the legislation of the Russian Federation on personal data; and for co-ordinating the activities of the radio-frequency service. It is the authorized federal executive body for the protection of the rights of personal data subjects. It is also the body administering Russian Internet censorship filters. It also designs and implements procedures for the Russian autonomous Internet subnetwork, such as maintaining an inventory of Russian Autonomous Systems and alternative DNS root servers in the Russian National Domain Name System, and it controls local ISP interconnects and Internet exchanges.
The main goal is to keep the Russian autonomous Internet subnetwork accessible even after disconnection or isolation from the global Internet (the Sovereign Internet Law). Enforcement actions On April 5, 2013, a spokesperson for Roskomnadzor confirmed that Wikipedia had been blacklisted over the article 'Cannabis smoking' (Курение каннабиса) on Russian Wikipedia. On March 31, 2013 The New York Times reported that Russia was beginning 'Selectively Blocking [the] Internet'. In 2014, during the Crimea Crisis, Roskomnadzor had a number of websites criticising Russian policy in Ukraine blocked, including the blog of Alexei Navalny, Kasparov.ru and . Also, on June 22, 2016 Amazon Web Services was entirely blocked for a couple of hours because of a poker app. GitHub In October 2014 GitHub was blocked for a short time. On December 2 GitHub was blocked again over some satirical notes describing "methods of suicide", which caused major tensions among Russian software developers. It was unblocked on December 4, and GitHub set up a special page dedicated to Roskomnadzor-related issues. All content was and remains available on non-Russian networks. Russian Wikipedia On August 18, 2015, an article in Russian Wikipedia about charas (Чарас (наркотическое вещество)) was blacklisted by Roskomnadzor as containing propaganda on narcotics. The article was then rewritten from scratch using UN materials and textbooks, but on August 24 it was included in the list of forbidden materials sent to Russian Internet providers. As Wikipedia uses the HTTPS protocol to encrypt traffic, effectively the entire site, with all language versions of Wikipedia, was blocked in Russia on the night of August 25. Adult content In September 2016, the adult websites Pornhub and YouPorn were blocked by Roskomnadzor as containing adult pornographic content. The watchdog said that it was not in the market and that demography is not a commodity. The Daily Stormer In 2017, the neo-Nazi website The Daily Stormer was briefly moved to a Russian domain, but Roskomnadzor subsequently acted to remove its access, and the site then moved to the dark web. Telegram On April 16, 2018, Roskomnadzor ordered Russian ISPs to block access to the instant messenger Telegram, as the company refused to hand over the encryption keys for users' chats to Russian authorities. The information watchdog applied the method of mass IP blocks, hitting major hosting providers such as Amazon and disrupting hundreds of Russian internet services. Roskomnadzor had to abandon this approach, but failed to implement any other means to stop Russian users from accessing Telegram. In the end, Roskomnadzor and other government structures set up their own channels in the "outlawed" app. In mid-2020 Roskomnadzor officially gave up on trying to block Telegram. Twitter On 10 March 2021, Roskomnadzor started to "slow down" Twitter for users in Russia, attributing the decision to the platform's failure to remove illegal content. This action occasionally caused Russia's key websites, including Roskomnadzor itself, to stop working. It also led to malfunctions of major commercial services, such as the Qiwi payment system, and blocked some users from accessing Yandex, Google, and YouTube. In addition, along with Twitter, Roskomnadzor throttled access to numerous websites whose domain names contain "t.co", one of Twitter's domains, hitting no fewer than 48,000 hosts.
That affected GitHub, Russia Today, Reddit, Microsoft, Google, Dropbox, Steam, and others. See also Censorship in Russia Federal Agency on Press and Mass Communications of the Russian Federation (Rospechat) Information privacy Internet censorship in Russia References Further reading How Russians Are Outsmarting Internet Censorship - Global Voices Advocacy Federal Service for Monitoring Compliance with Cultural Heritage Protection Law External links Politics of Russia Censorship in Russia 2008 establishments in Russia Government agencies established in 2008 Information operations and warfare Communications authorities
19949974
https://en.wikipedia.org/wiki/CCG%20Profiles
CCG Profiles
CCG Profiles is software for designing joinery constructions for the window and door industry. History The first version was released in 1995 – named Alumin – as software for the design and calculation of aluminium constructions for Windows. In 1999 the software was renamed Profiles and redesigned in order to calculate PVC and timber constructions as well. Capabilities Profiles covers a large part of the activity of companies engaged in the manufacturing of aluminium, PVC and timber joinery. Basic components: drawing, offering, optimization, materials, store. Design With Profiles, constructions with varying levels of complexity can be designed: windows, doors, balcony doors, hanging facades, commercial shop fronts, constructions with an unlimited number of wings, sliding constructions, arches, trapezoids, etc. At every stage of the working process, CCG Profiles allows the construction parameters to be changed: dimensions, roundings, bevels, beam positions, and the width of the wings. Materials The software automatically calculates all materials required for the manufacturing process: profiles, accessories, hardware, glazing; window-sills, rolling blinds, insect screens; and the arc length and bending radius of arches. CCG Profiles generates a wide range of reports, including: cost price, specification, bill of materials, cutting scheme, glass-packs, arcs, hardware, and accessories. Pricing CCG Profiles uses user-defined formulae to calculate the total cost, discounts, and services. Databases The software works with all aluminium, PVC and timber profile systems and places no limit on the databases that can be used. Profiles has built-in sample databases of some profile systems – Altest, Blick, Etem, Europa, Exalco, Kommerling, Profilink, Thyssen, Trocal, Veka, Winhouse, etc. Administration Settings and restrictions for working with Profiles are included in a separate program module, Admin. It allows users to select a working language, select a database system, configure export to automatic circular saws, and manage import/export capabilities. Reviews and awards At the 62nd International Fair Plovdiv in 2006, the program CCG Profiles was awarded a gold medal. Articles about the program were published in the Bulgarian magazine AMS Aspects and the Serbian magazine Aluminium & PVC magazin. According to unofficial data, CCG Profiles is one of the most popular software packages for the window and door industry in Bulgaria, and a large number of companies – manufacturers and suppliers of profile systems (Etem, Blick, Veka, Weiss Profil, Exalco, Profilink, Altest, Profilko, Roplasto) – offer the product to their customers. References External links http://www.ccg-bg.com – Official site Proprietary software
20626312
https://en.wikipedia.org/wiki/1979%20USC%20Trojans%20football%20team
1979 USC Trojans football team
The 1979 USC Trojans football team represented the University of Southern California (USC) in the 1979 NCAA Division I-A football season. In their fourth year under head coach John Robinson, the Trojans compiled an 11–0–1 record (6–0–1 against conference opponents), won the Pacific-10 Conference (Pac-10) championship, and outscored their opponents by a combined total of 389 to 171. The team was ranked #2 in both the final AP Poll and the final UPI Coaches Poll. Quarterback Paul McDonald led the team in passing, completing 164 of 264 passes for 2,223 yards with 18 touchdowns and six interceptions. Charles White led the team in rushing with 332 carries for 2,050 yards and 19 touchdowns. Dan Garcia led the team in receiving with 29 catches for 492 yards and three touchdowns. The team was named national champion by the College Football Researchers Association, an NCAA-designated major selector. Schedule Season summary at Texas Tech at Oregon State Paul McDonald completed eight of nine passes for 108 yards and two touchdowns in just one half of action while Charles White watched from the sidelines with an injured shoulder. McDonald led the Trojans to touchdowns on their first five possessions before he and the rest of USC starters sat for the second half. Minnesota at LSU Washington State Stanford Charles White 32 rushes, 221 yards at Notre Dame at California Arizona at Washington vs. UCLA Rose Bowl (vs. Ohio State) Charles White 39 rushes, 247 yards Personnel 1979 Team Players in the NFL Marcus Allen Chip Banks Hoby Brenner Joey Browner Brad Budde Steve Busick Ray Butler Dennis Johnson Myron Lapka Ronnie Lott Jeff Fisher Chris Foote Roy Foster Bruce Matthews Paul McDonald Larry McGrew Don Mosebar Anthony Muñoz Eric Scoggins Dennis Smith Keith Van Horne Charles White Awards and honors Brad Budde, Lombardi Award Charles White, Heisman Trophy Charles White, Maxwell Award Charles White, Walter Camp Award References USC USC Trojans football seasons Pac-12 Conference football champion seasons Rose Bowl champion seasons College football undefeated seasons USC Trojans football
5398006
https://en.wikipedia.org/wiki/PulseAudio
PulseAudio
PulseAudio is a network-capable sound server program distributed via the freedesktop.org project. It runs mainly on Linux, various BSD distributions such as FreeBSD and OpenBSD, macOS, as well as Illumos distributions and the Solaris operating system. PulseAudio is free and open-source software, and is licensed under the terms of the LGPL-2.1-or-later. It was created in 2004 under the name Polypaudio but was renamed in 2006 to PulseAudio. History Microsoft Windows was previously supported via MinGW (an implementation of the GNU toolchain, which includes various tools such as GCC and binutils). The Windows port has not been updated since 2011, however. Software architecture In broad terms ALSA is a kernel subsystem that provides the sound hardware driver, and PulseAudio is the interface engine between applications and ALSA. However, its use is not mandatory and audio can still be played and mixed together without PulseAudio. PulseAudio acts as a sound server, where a background process accepting sound input from one or more sources (processes, capture devices, etc.) is created. The background process then redirects these sound sources to one or more sinks (sound cards, remote network PulseAudio servers, or other processes). One of the goals of PulseAudio is to reroute all sound streams through it, including those from processes that attempt to directly access the hardware (like legacy OSS applications). PulseAudio achieves this by providing adapters to applications using other audio systems, like aRts and ESD. In a typical installation scenario under Linux, the user configures ALSA to use a virtual device provided by PulseAudio. Thus, applications using ALSA will output sound to PulseAudio, which then uses ALSA itself to access the real sound card. PulseAudio also provides its own native interface to applications that want to support PulseAudio directly, as well as a legacy interface for ESD applications, making it suitable as a drop-in replacement for ESD. For OSS applications, PulseAudio provides the padsp utility, which replaces device files such as /dev/dsp, tricking the applications into believing that they have exclusive control over the sound card. In reality, their output is rerouted through PulseAudio. libcanberra libcanberra is an abstract API for desktop event sounds and a total replacement for the "PulseAudio sample cache API": Complies with the XDG Sound Theme and Naming Specifications. Defines a simple abstract interface for playing event sounds. Interfaces with ALSA through libasound. Has a back-end to PulseAudio. libSydney libSydney is a total replacement for the "PulseAudio streaming API", and plans have been made for libSydney to eventually become the only audio API used in PulseAudio. Features The main PulseAudio features include: Per-application volume controls. An extensible plugin architecture with support for loadable modules. Compatibility with many popular audio applications. Support for multiple audio sources and sinks. A zero-copy memory architecture for processor resource efficiency. Ability to discover other computers using PulseAudio on the local network and play sound through their speakers directly. Ability to change which output device applications use to play sound through while they are playing sound (Applications do not need to support this, PulseAudio is capable of doing this without applications detecting that it has happened) A command-line interface with scripting capabilities. A sound daemon with command line reconfiguration capabilities. 
Built-in sample conversion and resampling capabilities. The ability to combine multiple sound cards into one. The ability to synchronize multiple playback streams. Bluetooth audio device support with dynamic detection capabilities. The ability to enable system wide equalization. Adoption PulseAudio first appeared for regular users in Fedora Linux, starting with version 8, then was adopted by major Linux distributions such as Ubuntu, Debian, Mandriva Linux, and openSUSE. There is support for PulseAudio in the GNOME project, and also in KDE, as it is integrated into Plasma Workspaces, adding support to Phonon (the KDE multimedia framework) and KMix (the integrated mixer application) as well as a "Speaker Setup" GUI to aid the configuration of multi-channel speakers. PulseAudio is also available in the Illumos distribution OpenIndiana, and enabled by default in its MATE desktop environment. Various Linux-based mobile devices, including Nokia N900, Nokia N9 and the Palm Pre use PulseAudio. Tizen, an open-source mobile operating system, which is a project of the Linux Foundation and is governed by a Technical Steering Group (TSG) composed of Intel and Samsung, uses PulseAudio. Problems during adoption phase The PortAudio API was incompatible with PulseAudio's design and needed to be modified. Almost all packages using OSS and many of the packages using ALSA needed to be modified to support PulseAudio. Further development of the glitch-free audio feature required a complete rewrite of the PulseAudio core, and also changes to the ALSA API and internals were needed. When first adopted by distributions, PulseAudio developer Lennart Poettering (also the creator of systemd) described it as "the software that currently breaks your audio". Poettering later claimed that "Ubuntu didn't exactly do a stellar job. They didn't do their homework" in adopting PulseAudio for Ubuntu "Hardy Heron" (8.04), a problem that was improved with subsequent Ubuntu releases. However, in October 2009, Poettering reported that he was still not happy with Ubuntu's integration of PulseAudio. Interaction with old sound components by particular software: Certain programs, such as Adobe Flash for Linux, caused instability in PulseAudio. Newer implementations of Flash plugins do not require the conflicting elements, and as a result Flash and PulseAudio are now compatible. Early management of buffer over/underruns: Earlier versions of PulseAudio sometimes started to distort the processed audio due to incorrect handling of buffer over/underruns. For headphone users, the potential for noise-induced hearing loss due to extremely loud volumes in the event of a misbehaving application. Related software Other sound servers JACK is a sound server that provides real-time, low latency (i.e. 5 milliseconds or less) audio performance and, since JACK2, supports efficient load balancing by utilizing symmetric multiprocessing; that is, the load of all audio clients can be distributed among several processors. JACK is the preferred sound server for professional audio applications such as Ardour, ReZound, and LinuxSampler; multiple free audio-production distributions use it as the default audio server. It is possible for JACK and PulseAudio to coexist: while JACK is running, PulseAudio can automatically connect itself as a JACK client, allowing PulseAudio clients to make and record sound at the same time as JACK clients. PipeWire is an audio and video server that "aims to support the use cases currently handled by both PulseAudio and Jack". 
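Whichever server ultimately owns the hardware, an application that supports PulseAudio directly talks to it through the native interface described under Software architecture. The sketch below is a minimal, illustrative client using the synchronous "simple" API (libpulse-simple); it is not taken from the PulseAudio documentation, and the application and stream names are arbitrary. It plays one second of a sine tone through whatever sink the server routes it to.

/* Minimal playback client for PulseAudio's synchronous "simple" API.
 * Typical build: gcc tone.c -o tone $(pkg-config --cflags --libs libpulse-simple) -lm */
#include <pulse/simple.h>
#include <pulse/error.h>
#include <math.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Describe the stream: signed 16-bit little-endian samples, 44.1 kHz, mono. */
    pa_sample_spec spec = { .format = PA_SAMPLE_S16LE, .rate = 44100, .channels = 1 };
    int error = 0;

    /* Connect to the default server and default sink as a playback stream. */
    pa_simple *s = pa_simple_new(NULL, "tone-demo", PA_STREAM_PLAYBACK, NULL,
                                 "sine wave", &spec, NULL, NULL, &error);
    if (!s) {
        fprintf(stderr, "pa_simple_new() failed: %s\n", pa_strerror(error));
        return 1;
    }

    /* One second of a 440 Hz sine wave at modest amplitude. */
    int16_t buf[44100];
    for (int i = 0; i < 44100; i++)
        buf[i] = (int16_t)(0.3 * 32767.0 * sin(2.0 * M_PI * 440.0 * i / 44100.0));

    /* Hand the samples to the daemon; mixing, resampling and routing happen there. */
    if (pa_simple_write(s, buf, sizeof(buf), &error) < 0)
        fprintf(stderr, "pa_simple_write() failed: %s\n", pa_strerror(error));

    pa_simple_drain(s, &error);  /* block until everything has been played */
    pa_simple_free(s);
    return 0;
}

The asynchronous API offers far more control (latency management, stream moving, per-stream volume), but even this simple API shows the sound-server model: the client only hands sample data to the daemon, which performs the mixing, resampling and routing described above.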
General audio infrastructures Before JACK and PulseAudio, sound on these systems was managed by multi-purpose integrated audio solutions. These solutions do not fully cover the mixing and sound streaming process, but they are still used by JACK and PulseAudio to send the final audio stream to the sound card. ALSA provides a software mixer called dmix, which was developed prior to PulseAudio. This is available on almost all Linux distributions and is a simpler PCM audio mixing solution. It does not provide the advanced features (such as timer-based scheduling and network audio) of PulseAudio. On the other hand, ALSA offers, when combined with corresponding sound cards and software, low latencies. OSS was the original sound system used in Linux and other Unix operating systems, but was deprecated after the 2.5 Linux kernel. Proprietary development was continued by 4Front Technologies, who in July 2007 released sources for OSS under CDDL-1.0 for OpenSolaris and under GPL-2.0-only for Linux. The modern implementation, Open Sound System v4, provides software mixing, resampling, and changing of the volume on a per-application basis; in contrast to PulseAudio, these features are implemented within the kernel. PulseAudio support in OpenIndiana and other illumos distributions relies on the in-kernel OSS implementation ("Boomer"). See also PortAudio Comparison of free software for audio List of Linux audio software References External links 2004 software Audio libraries Audio software for Linux Free audio software Free software programmed in C Linux APIs Collabora Software using the LGPL license
27571814
https://en.wikipedia.org/wiki/Mark%20d%27Inverno
Mark d'Inverno
Mark d'Inverno (born 29 August 1965) is a British computer scientist, currently a professor of Computer Science at Goldsmiths, University of London, in south-east London, England. Biography d'Inverno studied for an MA in Mathematics and an MSc in Computation at St Catherine's College, Oxford. He was awarded a PhD from University College London in artificial intelligence. For four years between 2007 and 2011, d'Inverno was head of the Department of Computing, which has championed interdisciplinary research and teaching around computers and creativity for nearly a decade. He has published over 100 works, including books and journal and conference articles, and has led recent research projects in a diverse range of fields relating to computer science, including multi-agent systems, systems biology, art, design, and music. He is currently the principal investigator or co-investigator on a range of projects, including designing systems for sharing online cultural experiences, connecting communities through video orchestration, and building online communities of music practice. In 2011/12, d'Inverno took a research sabbatical shared between the Artificial Intelligence Research Institute in Barcelona, Spain, and Sony Computer Science Laboratory in Paris, France. Musical activities d'Inverno is a jazz pianist and composer. Over the last few decades he has led a variety of bands in a range of musical genres (e.g., the Mark d'Inverno Quintet); his album Joy received a number of favourable reviews. He has played in London at venues including the National Theatre. Personal life d'Inverno was an original trustee and the first chairman of the charity Safe Ground in 1994. In more recent years the charity has developed a range of courses originally devised by prisoners, including Family Man and Father's Inside, which have been delivered in a large number of UK prisons. Mark d'Inverno has been captain of the Weekenders Cricket Club for 11 years; the club was founded by the actor Clive Swift, with the writer Christopher Douglas as its long-serving secretary. d'Inverno is partner to the theatre and opera director Melly Still. See also AgentSpeak, an agent-oriented programming language Distributed multi-agent reasoning system (dMARS), a platform for intelligent agents Selected books and papers J. McCormack and M. d'Inverno, Computers and Creativity, Springer, 2012. M. d'Inverno and M. Luck, Understanding Agent Systems, 2nd edition, Springer, 2004. Mark d'Inverno and Michael Luck, Creativity through Autonomy and Interaction, Cognitive Computing, 2012. Mark d'Inverno, Michael Luck, Pablo Noriega, Juan Rodriguez-Aguilar and Carles Sierra, A framework for communication in open systems, Artificial Intelligence, 186:38–94, 2012. Ben Fields, Kurt Jacobson, Christophe Rhodes, Mark d'Inverno, Mark Sandler and Michael Casey, Analysis and Exploitation of Musician Social Networks for Recommendation and Discovery, IEEE Transactions on Multimedia, 13(4): 674–686, 2011. Jon Bird, Mark d'Inverno and Jane Prophet, Net Work: An Interactive Artwork Designed Using an Interdisciplinary Collaborative Approach, Special Issue on Computational Models of Creativity in the Arts, Journal of Digital Creativity, 18(1), 1123, 2007. Mark d'Inverno, Michael Luck, Michael Georgeff, David Kinny and Michael Wooldridge, The dMARS architecture: A specification of the distributed multi-agent reasoning system, Autonomous Agents and Multi-Agent Systems, 9(1–2):5–53, 2004.
Jon McCormack and Mark d'Inverno, Why does Computing matter to Creativity?, in Jon McCormack and Mark d'Inverno (eds.), Springer, 2012. Mark d'Inverno, Neil Theise and Jane Prophet, Mathematical modelling of stem cells: a complexity primer for the stem cell biologist, in Christopher Potten, Jim Watson, Robert Clarke, and Andrew Renehan, editors, Tissue Stem Cells: Biology and Applications, pages 1–15, Taylor and Francis, 2008. Michael O. Jewell, Christophe Rhodes, and Mark d'Inverno, Querying Improvised Music: Do You Sound Like Yourself? 11th International Society for Music Information Retrieval Conference (ISMIR 2010), pages 483–488, 2010. References External links Mark d'Inverno home page 1965 births Living people Alumni of St Catherine's College, Oxford Members of the Department of Computer Science, University of Oxford Alumni of University College London British computer scientists Formal methods people Computer science writers Academics of the University of Westminster Academics of Goldsmiths, University of London English jazz pianists English jazz composers Male jazz composers British male pianists 21st-century pianists 21st-century British male musicians
4475882
https://en.wikipedia.org/wiki/MikroMikko
MikroMikko
MikroMikko was a Finnish line of microcomputers released by Nokia Corporation's computer division Nokia Data from 1981 through 1987. MikroMikko was Nokia Data's attempt to enter the business computer market. They were especially designed for good ergonomics. History The first model in the line, MikroMikko 1, was released on 29 September 1981, 48 days after IBM introduced its Personal Computer. The launch date of MikroMikko 1 is the name day of Mikko in the Finnish almanac. The MikroMikko line was manufactured in a factory in the Kilo district of Espoo, Finland, where computers had been produced since the 1960s. Nokia later bought the computer division of the Swedish telecommunications company Ericsson. During Finland's economic depression in the early 1990s, Nokia streamlined many of its operations and sold many of its less profitable divisions to concentrate on its key competence of telecommunications. Nokia's personal computer division was sold to the British computer company ICL (International Computers Limited) in 1991, which later became part of Fujitsu. However, ICL and later Fujitsu retained the MikroMikko trademark in Finland. Internationally the MikroMikko line was marketed by Fujitsu under the trademark ErgoPro. Fujitsu later transferred its personal computer operations to Fujitsu Siemens Computers, which shut down its only factory in Espoo at the end of March 2000, thus ending large-scale PC manufacturing in the country. Models MikroMikko 1 M6 Processor: Intel 8085, 2 MHz 64 kB RAM, 4 kB ROM Display: 80×24 character text mode, the 25th row was used as a status row. Graphics resolutions 160×75 and 800×375 pixels, refresh rate 50 Hz Two 640 kB 5.25" floppy drives (other models might only have one drive) Optional 5 MB hard disk (stock in model M7) Connectors: two RS-232s, display, printer, keyboard Software: Nokia CP/M 2.2 operating system, Microsoft Basic, editor, assembler and debugger Cost: 30,000 mk in 1984 MikroMikko 2 Released in 1983 Processor: Intel 80186 Partly MS-DOS compatible, used Nokia's own version of MS-DOS 2.x MikroMikko 3 Released in 1986 PC/AT compatible Processor: 6 or 8 MHz Intel 80286 Hercules monitor Six extension card slots Mouse Cost: 47,950 mk MikroMikko 3 TT Team workstation, released in spring 1987 Processor: 8 MHz Intel 80286 1 MB RAM Two extension card slots One or two 3.5" 720 kB floppy drives Optional 20 MB hard disk MS-DOS 3.2 operating system Cost: with one floppy drive 21,500 mk, two drives 23,000 mk, one floppy drive and 20 MB hard disk 25,900 mk MikroMikko 3 TT M125 Processor: 33 MHz Intel 80386DX 4 MB RAM 1.44 MB 3.5" floppy drive 40 MB hard disk Connectors: display, keyboard, mouse, RS-232 serial port, Centronics printer port Software: MS-DOS 5.0 operating system Laptop computers MikroMikko 4m310 MikroMikko N3/25x Tiimi workgroup system The "Tiimi" workgroup system was a local area network consisting of MikroMikko workstations and servers, popular in the late 1980s. The servers were MPS-10s or MikroMikko models 2 and 3. The workstations were MikroMikko 3TT and PääteMikko computers. At least SQL/DMS database software and NOSS document manager software were available. References External links Old-Computers.com – MikroMikko page Nokia Personal computers Home computers Computer-related introductions in 1981
21168501
https://en.wikipedia.org/wiki/Software%20requirements
Software requirements
The requirements for a system are the descriptions of what the system should do, the service or services that it provides, and the constraints on its operation. The IEEE Standard Glossary of Software Engineering Terminology defines a requirement as: A condition or capability needed by a user to solve a problem or achieve an objective. A condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document. A documented representation of a condition or capability as in 1 or 2. The activities related to working with software requirements can broadly be broken down into elicitation, analysis, specification, and management. Note that the term software requirements is also used in software release notes to describe which dependent software packages are required for a piece of software to be built, installed, or used. Elicitation Elicitation is the gathering and discovery of requirements from stakeholders and other sources. A variety of techniques can be used such as joint application design (JAD) sessions, interviews, document analysis, focus groups, etc. Elicitation is the first step of requirements development. Analysis Analysis is the logical breakdown that proceeds from elicitation. Analysis involves reaching a richer and more precise understanding of each requirement and representing sets of requirements in multiple, complementary ways. Requirements triage, or prioritization of requirements, is another activity which often follows analysis. This is related to the planning phase of Agile software development (e.g. through planning poker), although the practice may differ depending on the context and nature of the project, the requirements, and the product or service being built. Specification Specification involves representing and storing the collected requirements knowledge in a persistent and well-organized fashion that facilitates effective communication and change management. Use cases, user stories, functional requirements, and visual analysis models are popular choices for requirements specification. Validation Validation involves techniques to confirm that the correct set of requirements has been specified to build a solution that satisfies the project's business objectives. Management Requirements change during projects and there are often many of them. Management of this change becomes paramount to ensuring that the correct software is built for the stakeholders. Tool support for Requirements Engineering Tools for Requirements Elicitation, Analysis and Validation Taking into account that these activities may involve artifacts such as observation reports (user observation), questionnaires (interviews, surveys and polls), use cases, and user stories; activities such as requirement workshops (charrettes), brainstorming, mind mapping, and role-playing; and even prototyping; software products providing some or all of these capabilities can be used to help achieve these tasks. At least one author explicitly advocates mind mapping tools such as FreeMind and, alternatively, specification-by-example tools such as Concordion. Additionally, the ideas and statements resulting from these activities may be gathered and organized with wikis and other collaboration tools such as Trello. The features actually implemented and standards compliance vary from product to product.
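To make the "specification by example" idea mentioned above more concrete: the technique records worked examples agreed with stakeholders and turns them into automated checks. The sketch below is only an illustration of that idea; it is not Concordion's actual format (Concordion instruments HTML pages paired with fixture classes), and the requirement number and discount rule are invented for the example.

/* Specification by example, in miniature: each row is one concrete example
 * elicited from stakeholders for a hypothetical requirement R-12:
 * "Orders of 100.00 or more receive a 10% discount." */
#include <assert.h>
#include <stdio.h>

static double discounted_total(double order_total) {
    return (order_total >= 100.0) ? order_total * 0.90 : order_total;
}

struct example { double order_total; double expected; };

int main(void) {
    struct example examples[] = {
        {  99.99,  99.99 },   /* just below the threshold: no discount */
        { 100.00,  90.00 },   /* at the threshold: discount applies    */
        { 250.00, 225.00 },   /* well above the threshold              */
    };
    for (size_t i = 0; i < sizeof(examples) / sizeof(examples[0]); i++) {
        double got = discounted_total(examples[i].order_total);
        /* The examples double as acceptance tests for the requirement. */
        assert(got > examples[i].expected - 0.001 && got < examples[i].expected + 0.001);
    }
    puts("All examples for requirement R-12 pass.");
    return 0;
}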
Tools for Requirements Specification A software requirements specification (SRS) document might be created using a tool as general as a word processor or an electronic spreadsheet, but there are several specialized tools to carry out this activity. Some of these tools can import, edit, export and publish SRS documents. They may or may not help the user follow standards such as IEEE 29148-2011 to compose the requirements according to some structure. Likewise, the tool may or may not use a standard such as ReqIF to import or export requirements, or may not allow these exchanges at all. Tools for Requirements Document Verification Tools of this kind verify whether there are any errors in a requirements document according to some expected structure or standard. Tools for Requirements Comparison Tools of this kind compare two requirement sets according to some expected document structure and standard. Tools for Requirements Merge and Update Tools of this kind allow requirement documents to be merged and updated. Tools for Requirements Traceability Tools of this kind allow requirements to be traced forward to other artifacts such as models and source code (forward traceability), or backwards to earlier artifacts such as business rules and constraints (backwards traceability). Tools for Model-Based Software or Systems Requirement Engineering Model-based systems engineering (MBSE) is the formalised application of modelling to support system requirements, design, analysis, measurement, verification and validation activities beginning in the conceptual design phase and continuing throughout development and later lifecycle phases. It is also possible to take a model-based approach for some stages of requirements engineering and a more traditional one for others; many combinations are possible. The level of formality and complexity depends on the underlying methodology involved (for instance, i* is much more formal than SysML, and even more formal than UML). Tools for general Requirements Engineering Tools in this category may provide some mix of the capabilities mentioned previously and others such as requirement configuration management and collaboration. The features actually implemented and standards compliance vary from product to product. There are even more capable or general tools that support other stages and activities. They are classified as ALM tools. See also Requirement Requirements engineering Software requirements specification (SRS) Comprehensive & Robust Requirements Specification Process List of requirements engineering tools Non-functional requirement Performance requirements which are covered by Software performance testing Safety requirements Security requirements References Further reading Burek, Paul (2008). Creating clear project requirements differentiating "what" from "how". Conference Paper. Requirements Management, Business Analysis, Scope Management. Koopman, Philip (2020). Embedded Software Requirements. Fall Lectures. IEEE Xplore Search. "Software Requirements".
4401160
https://en.wikipedia.org/wiki/Zfone
Zfone
Zfone is software for secure voice communication over the Internet (VoIP), using the ZRTP protocol. It was created by Phil Zimmermann, the creator of the PGP encryption software. Zfone works on top of existing SIP and RTP programs and should work with any SIP- and RTP-compliant VoIP program. Zfone turns many existing VoIP clients into secure phones. It runs in the Internet Protocol stack on any Windows XP, Mac OS X, or Linux PC, intercepts and filters all the VoIP packets as they go in and out of the machine, and secures the call on the fly. A variety of different software VoIP clients can be used to make a VoIP call. The Zfone software detects when the call starts, initiates a cryptographic key agreement between the two parties, and then encrypts and decrypts the voice packets on the fly. It has its own separate GUI, telling the user whether the call is secure. Zfone describes itself to end-users as a "bump on the wire" between the VoIP client and the Internet, which acts upon the protocol stack. Zfone's libZRTP SDK libraries are released under either the Affero General Public License (AGPL) or a commercial license. Note that only the libZRTP SDK libraries are provided under the AGPL. The parts of Zfone that are not part of the libZRTP SDK libraries are not licensed under the AGPL or any other open source license. Although the source code of those components is published for peer review, they remain proprietary. The Zfone proprietary license also contains a time bomb provision. It appears that Zfone development has stagnated, however, as the most recent version was released on 22 March 2009. In addition, since 29 January 2011 it has not been possible to download Zfone from the developer's website, as the download server has gone offline. Platforms and specification Availability – Mac OS X, Linux, and Windows as compiled programs as well as an SDK. Encryption standards – Based on ZRTP, which uses 128- or 256-bit AES together with a 3072-bit key exchange system and voice-based verification to prevent man-in-the-middle attacks. ZRTP protocol – Published as an IETF : "ZRTP: Media Path Key Agreement for Unicast Secure RTP" VoIP clients – Zfone has been tested with the following VoIP clients: X-Lite, Gizmo5, XMeeting, Google Talk VoIP client, and SJphone. See also Comparison of VoIP software Secure telephone References External links Zfone home page CNET News: E-mail security hero takes on VoIP 'Wired.com' article April 03 2006 VoIP software Cryptographic software Internet privacy software Software using the GNU AGPL license
18495481
https://en.wikipedia.org/wiki/Agenda%20VR3
Agenda VR3
The Agenda VR3 was the name of the first "pure Linux" Personal Digital Assistant (PDA), released in May 2001 by Agenda Computing, Inc. of Irvine, California. The Linux Documentation Project considers the VR3 to be a "true Linux PDA" because the manufacturers installed Linux-based operating systems on them by default. History The VR3 was unveiled at LinuxWorld Conference and Expo in August 2000 by Agenda Computing, which was at the time "a wholly owned subsidiary of the publicly traded electronics manufacturing giant, Kessel International Holdings, based in Hong Kong." A developer model, the VR3d, was available by December. By late 2001, the VR3's price dropped from $249 to $119 at some US retailers, which caused some to wonder whether the promised VR5 (a color handheld) was to be released, or Agenda Computing was closing shop. In April 2002, after the demise of Agenda Computing, the Softfield Vr3 became available from Softfield Technologies of Toronto, Ontario, Canada. As of July 2008, the device is still available from SoftField. Hardware The VR3 was 4.5"x3.0"x0.8". It included a 2.25"x3.25", 160x240 pixel, monochrome, backlit LCD touchscreen. It utilized a 66MHz MIPS CPU with 8MB of RAM and 16MB of built-in Flash memory for storage. For input, it included push buttons for actions (such as Page-Up and Down, and Left and Right), stylus-activated power on/off, on-screen hard buttons for launching applications and a built-in microphone jack. It also included a notification buzzer, an LED notification light, an IrDA port and an RS232 port. It was powered by two AAA batteries, and connected to PCs via an RS232 cable, or a docking station that the cable connected to. Both contained a button for activating sync software. Software The VR3 came with a 2.4.0 version of the Linux kernel, XFree86, the Rxvt terminal emulator, the Bash shell, and a user interface based on the FLTK GUI library. It included on-screen keyboard and handwriting recognition software, a number of personal information management (PIM) applications (including an expense tracker, e-mail, to-do list, contacts list, and schedule), games, and other tools. It is possible to telnet, FTP and make remote X connections to the device. Numerous applications were created by third-party developers, with the Agenda Software Repository listing nearly 200 titles by the end of 2003. References The Agenda VR3: Real Linux in a PDA at O'Reilly Media's linuxdevcenter.com Personal digital assistants
68466
https://en.wikipedia.org/wiki/Gamma%20correction
Gamma correction
Gamma correction or gamma is a nonlinear operation used to encode and decode luminance or tristimulus values in video or still image systems. Gamma correction is, in the simplest cases, defined by the following power-law expression: Vout = A·Vin^γ, where the non-negative real input value Vin is raised to the power γ and multiplied by the constant A to get the output value Vout. In the common case of A = 1, inputs and outputs are typically in the range 0–1. A gamma value γ < 1 is sometimes called an encoding gamma, and the process of encoding with this compressive power-law nonlinearity is called gamma compression; conversely a gamma value γ > 1 is called a decoding gamma, and the application of the expansive power-law nonlinearity is called gamma expansion. Explanation Gamma encoding of images is used to optimize the usage of bits when encoding an image, or bandwidth used to transport an image, by taking advantage of the non-linear manner in which humans perceive light and color. The human perception of brightness (lightness), under common illumination conditions (neither pitch black nor blindingly bright), follows an approximate power function (note: no relation to the gamma function), with greater sensitivity to relative differences between darker tones than between lighter tones, consistent with the Stevens power law for brightness perception. If images are not gamma-encoded, they allocate too many bits or too much bandwidth to highlights that humans cannot differentiate, and too few bits or too little bandwidth to shadow values that humans are sensitive to and would require more bits/bandwidth to maintain the same visual quality. Gamma encoding of floating-point images is not required (and may be counterproductive), because the floating-point format already provides a piecewise linear approximation of a logarithmic curve. Although gamma encoding was developed originally to compensate for the input–output characteristic of cathode ray tube (CRT) displays, that is not its main purpose or advantage in modern systems. In CRT displays, the light intensity varies nonlinearly with the electron-gun voltage. Altering the input signal by gamma compression can cancel this nonlinearity, such that the output picture has the intended luminance. However, the gamma characteristics of the display device do not play a role in the gamma encoding of images and video – they need gamma encoding to maximize the visual quality of the signal, regardless of the gamma characteristics of the display device. The similarity of CRT physics to the inverse of gamma encoding needed for video transmission was a combination of coincidence and engineering, which simplified the electronics in early television sets. Photographic film has a much greater ability to record fine differences in shade than can be reproduced on photographic paper. Similarly, most video screens are not capable of displaying the range of brightnesses (dynamic range) that can be captured by typical electronic cameras. For this reason, considerable artistic effort is invested in choosing the reduced form in which the original image should be presented. The gamma correction, or contrast selection, is part of the photographic repertoire used to adjust the reproduced image. Analogously, digital cameras record light using electronic sensors that usually respond linearly. In the process of rendering linear raw data to conventional RGB data (e.g. for storage into JPEG image format), color space transformations and rendering transformations will be performed.
In particular, almost all standard RGB color spaces and file formats use a non-linear encoding (a gamma compression) of the intended intensities of the primary colors of the photographic reproduction; in addition, the intended reproduction is almost always nonlinearly related to the measured scene intensities, via a tone reproduction nonlinearity. Generalized gamma The concept of gamma can be applied to any nonlinear relationship. For the power-law relationship Vout = Vin^γ, the curve on a log–log plot is a straight line, with slope everywhere equal to gamma (slope is represented here by the derivative operator): γ = d(log Vout) / d(log Vin). That is, gamma can be visualized as the slope of the input–output curve when plotted on logarithmic axes. For a power-law curve, this slope is constant, but the idea can be extended to any type of curve, in which case gamma (strictly speaking, "point gamma") is defined as the slope of the curve in any particular region. Film photography When a photographic film is exposed to light, the result of the exposure can be represented on a graph showing log of exposure on the horizontal axis, and density, or negative log of transmittance, on the vertical axis. For a given film formulation and processing method, this curve is its characteristic or Hurter–Driffield curve. Since both axes use logarithmic units, the slope of the linear section of the curve is called the gamma of the film. Negative film typically has a gamma less than 1; positive film (slide film, reversal film) typically has a gamma with absolute value greater than 1. Microsoft Windows, Mac, sRGB and TV/video standard gammas Analog TV Output to CRT-based television receivers and monitors does not usually require further gamma correction. The standard video signals that are transmitted or stored in image files incorporate gamma compression matching the gamma expansion of the CRT (although it is not the exact inverse). For television signals, gamma values are fixed and defined by the analog video standards. CCIR Systems M and N, associated with NTSC color, use gamma 2.2; the rest (systems B/G, H, I, D/K, K1 and L), associated with PAL or SECAM color, use gamma 2.8. Computer displays In most computer display systems, images are encoded with a gamma of about 0.45 and decoded with the reciprocal gamma of 2.2. A notable exception, until the release of Mac OS X 10.6 (Snow Leopard) in September 2009, were Macintosh computers, which encoded with a gamma of 0.55 and decoded with a gamma of 1.8. In any case, binary data in still image files (such as JPEG) are explicitly encoded (that is, they carry gamma-encoded values, not linear intensities), as are motion picture files (such as MPEG). The system can optionally further manage both cases, through color management, if a better match to the output device gamma is required. The sRGB color space standard used with most cameras, PCs, and printers does not use a simple power-law nonlinearity as above, but has a decoding gamma value near 2.2 over much of its range, as shown in the plot to the right. Below a compressed value of 0.04045 or a linear intensity of 0.00313, the curve is linear (encoded value proportional to intensity), so γ = 1. The dashed black curve behind the red curve is a standard power-law curve, for comparison. Gamma correction in computers is used, for example, to display a gamma = 1.8 Apple picture correctly on a gamma = 2.2 PC monitor by changing the image gamma. Another usage is equalizing of the individual color-channel gammas to correct for monitor discrepancies.
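The sRGB curve just described can be written as a short pair of conversion routines. The following sketch uses the usual sRGB constants (a linear segment with slope 12.92 below roughly 0.0031 linear / 0.04045 encoded, joined to a power segment with exponent 2.4); inputs are assumed to be normalised to the 0–1 range.

/* sRGB transfer functions: a linear toe joined to a 2.4-exponent power segment,
 * giving an effective gamma of roughly 2.2 over most of the range. */
#include <math.h>
#include <stdio.h>

double srgb_decode(double v) {            /* encoded value -> linear intensity */
    return (v <= 0.04045) ? v / 12.92
                          : pow((v + 0.055) / 1.055, 2.4);
}

double srgb_encode(double lin) {          /* linear intensity -> encoded value */
    return (lin <= 0.0031308) ? 12.92 * lin
                              : 1.055 * pow(lin, 1.0 / 2.4) - 0.055;
}

int main(void) {
    /* A mid-range encoded value of 0.5 corresponds to only about 21% linear intensity. */
    printf("decode(0.5) = %.4f\n", srgb_decode(0.5));
    printf("encode(0.5) = %.4f\n", srgb_encode(0.5));
    return 0;
}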
Gamma meta information
Some picture formats allow an image's intended gamma (of transformations between encoded image samples and light output) to be stored as metadata, facilitating automatic gamma correction as long as the display system's exponent is known. The PNG specification includes the gAMA chunk for this purpose, and with formats such as JPEG and TIFF the Exif Gamma tag can be used. These features have historically caused problems, especially on the web. There is no numerical value of gamma that matches the "show the 8-bit numbers unchanged" method used for JPG, GIF, HTML, and CSS colors, so the PNG would not match. In addition, much of the image authoring software would write incorrect gamma values such as 1.0. This situation has since improved, as major browsers such as Google Chrome (and all other Chromium-based browsers) and Mozilla Firefox either ignore the gamma setting entirely, or ignore it when set to known wrong values.
Power law for video display
A gamma characteristic is a power-law relationship that approximates the relationship between the encoded luma in a television system and the actual desired image luminance. With this nonlinear relationship, equal steps in encoded luminance correspond roughly to subjectively equal steps in brightness. Ebner and Fairchild used an exponent of 0.43 to convert linear intensity into lightness (luma) for neutrals; the reciprocal, approximately 2.33 (quite close to the 2.2 figure cited for a typical display subsystem), was found to provide approximately optimal perceptual encoding of grays. The following illustration shows the difference between a scale with linearly-increasing encoded luminance signal (linear gamma-compressed luma input) and a scale with linearly-increasing intensity scale (linear luminance output). On most displays (those with a gamma of about 2.2), one can observe that the linear-intensity scale has a large jump in perceived brightness between the intensity values 0.0 and 0.1, while the steps at the higher end of the scale are hardly perceptible. The gamma-encoded scale, which has a nonlinearly-increasing intensity, will show much more even steps in perceived brightness.
A cathode ray tube (CRT), for example, converts a video signal to light in a nonlinear way, because the electron gun's intensity (brightness) as a function of applied video voltage is nonlinear. The light intensity I is related to the source voltage Vs according to I ∝ Vs^γ, where γ is the Greek letter gamma. For a CRT, the gamma that relates brightness to voltage is usually in the range 2.35 to 2.55; video look-up tables in computers usually adjust the system gamma to the range 1.8 to 2.2, which is in the region that makes a uniform encoding difference give approximately uniform perceptual brightness difference, as illustrated in the diagram at the top of this section. For simplicity, consider the example of a monochrome CRT. In this case, when a video signal of 0.5 (representing a mid-gray) is fed to the display, the intensity or brightness is about 0.22 (resulting in a mid-gray, about 22% of the intensity of white). Pure black (0.0) and pure white (1.0) are the only shades that are unaffected by gamma. To compensate for this effect, the inverse transfer function (gamma correction) is sometimes applied to the video signal so that the end-to-end response is linear. In other words, the transmitted signal is deliberately distorted so that, after it has been distorted again by the display device, the viewer sees the correct brightness.
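A quick numeric check of the monochrome CRT example above, assuming an ideal power-law display with an exponent of 2.2 (the function names are illustrative, not part of any API): a signal of 0.5 is displayed at roughly 22% intensity, and pre-distorting the signal with the reciprocal exponent restores a linear end-to-end response.

DISPLAY_GAMMA = 2.2  # assumed CRT-like display exponent

def display(signal):
    # Light output of the display for a normalized input signal: I = Vs**gamma.
    return signal ** DISPLAY_GAMMA

def gamma_correct(signal):
    # Pre-distort the signal with the inverse exponent before transmission.
    return signal ** (1 / DISPLAY_GAMMA)

print(display(0.5))                 # ~0.218: a mid signal is shown at about 22% intensity
print(display(gamma_correct(0.5)))  # ~0.5: the corrected signal gives a linear end-to-end response
print(display(0.0), display(1.0))   # 0.0 and 1.0: black and white are unaffected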
The inverse of the CRT transfer function above is Vc ∝ Vs^(1/γ), where Vc is the corrected voltage, and Vs is the source voltage, for example, from an image sensor that converts photocharge linearly to a voltage. In our CRT example, 1/γ is 1/2.2 ≈ 0.45. A color CRT receives three video signals (red, green, and blue) and in general each color has its own value of gamma, denoted γR, γG or γB. However, in simple display systems, a single value of γ is used for all three colors. Other display devices have different values of gamma: for example, a Game Boy Advance display has a gamma between 3 and 4 depending on lighting conditions. In LCDs such as those on laptop computers, the relation between the signal voltage Vs and the intensity I is very nonlinear and cannot be described with a single gamma value. However, such displays apply a correction onto the signal voltage in order to approximately get a standard behavior. In NTSC television recording, γ = 2.2.
The power-law function, or its inverse, has a slope of infinity at zero. This leads to problems in converting from and to a gamma colorspace. For this reason most formally defined colorspaces such as sRGB will define a straight-line segment near zero and add raising x + K (where K is a constant) to a power so the curve has continuous slope. This straight line does not represent what the CRT does, but does make the rest of the curve more closely match the effect of ambient light on the CRT. In such expressions the exponent is not the gamma; for instance, the sRGB function uses a power of 2.4 in it, but more closely resembles a power-law function with an exponent of 2.2, without a linear portion.
Methods to perform display gamma correction in computing
Up to four elements can be manipulated in order to achieve gamma encoding to correct the image to be shown on a typical 2.2- or 1.8-gamma computer display:
The pixel's intensity values in a given image file; that is, the binary pixel values are stored in the file in such a way that they represent the light intensity via gamma-compressed values instead of a linear encoding. This is done systematically with digital video files (such as those in a DVD movie), in order to minimize the gamma-decoding step while playing, and maximize image quality for the given storage. Similarly, pixel values in standard image file formats are usually gamma-compensated, either for sRGB gamma (or equivalent, an approximation of the gamma of about 2.2 typical of legacy monitors), or according to some gamma specified by metadata such as an ICC profile. If the encoding gamma does not match the reproduction system's gamma, further correction may be done, either on display or to create a modified image file with a different profile.
The rendering software writes gamma-encoded pixel binary values directly to the video memory (when highcolor/truecolor modes are used) or in the CLUT hardware registers (when indexed color modes are used) of the display adapter. They drive Digital-to-Analog Converters (DAC) which output the proportional voltages to the display. For example, when using 24-bit RGB color (8 bits per channel), writing a value of 128 (the rounded midpoint of the 0–255 byte range) in video memory outputs a proportional voltage to the display, which is then shown darker than 50% intensity because of the monitor's nonlinear behavior. Alternatively, to achieve the intended 50% intensity, the rendering software can apply a gamma-encoded look-up table and write a value near 187 instead of 128.
Modern display adapters have dedicated calibrating CLUTs, which can be loaded once with the appropriate gamma-correction look-up table in order to modify the encoded signals digitally before the DACs that output voltages to the monitor. Setting up these tables to be correct is called hardware calibration. Some modern monitors allow the user to manipulate their gamma behavior (as if it were merely another brightness/contrast-like setting), encoding the input signals by themselves before they are displayed on screen. This is also a calibration by hardware technique but it is performed on the analog electric signals instead of remapping the digital values, as in the previous cases. In a correctly calibrated system, each component will have a specified gamma for its input and/or output encodings. Stages may change the gamma to correct for different requirements, and finally the output device will do gamma decoding or correction as needed, to get to a linear intensity domain. All the encoding and correction methods can be arbitrarily superimposed, without mutual knowledge of this fact among the different elements; if done incorrectly, these conversions can lead to highly distorted results, but if done correctly as dictated by standards and conventions will lead to a properly functioning system. In a typical system, for example from camera through JPEG file to display, the role of gamma correction will involve several cooperating parts. The camera encodes its rendered image into the JPEG file using one of the standard gamma values such as 2.2, for storage and transmission. The display computer may use a color management engine to convert to a different color space (such as older Macintosh's color space) before putting pixel values into its video memory. The monitor may do its own gamma correction to match the CRT gamma to that used by the video system. Coordinating the components via standard interfaces with default standard gamma values makes it possible to get such system properly configured. Simple monitor tests This procedure is useful for making a monitor display images approximately correctly, on systems in which profiles are not used (for example, the Firefox browser prior to version 3.0 and many others) or in systems that assume untagged source images are in the sRGB colorspace. In the test pattern, the intensity of each solid color bar is intended to be the average of the intensities in the surrounding striped dither; therefore, ideally, the solid areas and the dithers should appear equally bright in a system properly adjusted to the indicated gamma. Normally a graphics card has contrast and brightness control and a transmissive LCD monitor has contrast, brightness, and backlight control. Graphics card and monitor contrast and brightness have an influence on effective gamma, and should not be changed after gamma correction is completed. The top two bars of the test image help to set correct contrast and brightness values. There are eight three-digit numbers in each bar. A good monitor with proper calibration shows the six numbers on the right in both bars, a cheap monitor shows only four numbers. Given a desired display-system gamma, if the observer sees the same brightness in the checkered part and in the homogeneous part of every colored area, then the gamma correction is approximately correct. In many cases the gamma correction values for the primary colors are slightly different. Setting the color temperature or white point is the next step in monitor adjustment. 
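The calibration look-up tables described above and the dither-based test patches rest on the same inversion, code = 255 · x^(1/γ): the table tells the driver which code to write for a desired fraction x of full intensity, and a 50% black/white dither should visually match the solid patch whose code corresponds to x = 0.5. The sketch below assumes an ideal pure power-law display; real calibration tables are device-specific, so treat it only as an illustration.

def correction_code(x, gamma=2.2, max_code=255):
    # Code value that makes an ideal gamma-'gamma' display emit fraction x of full intensity.
    return round(max_code * (x ** (1 / gamma)))

# A full 8-bit correction look-up table, as a driver CLUT might hold it.
lut = [correction_code(i / 255) for i in range(256)]
print(lut[128])                         # ~186: the "write about 187 instead of 128" example

# The solid patch that should match a 50% black/white dither.
print(correction_code(0.5))             # ~186 on a gamma-2.2 display
print(correction_code(0.5, gamma=1.8))  # about 173-174 on a gamma-1.8 display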
Before gamma correction the desired gamma and color temperature should be set using the monitor controls. Using the controls for gamma, contrast and brightness, the gamma correction on a LCD can only be done for one specific vertical viewing angle, which implies one specific horizontal line on the monitor, at one specific brightness and contrast level. An ICC profile allows to adjust the monitor for several brightness levels. The quality (and price) of the monitor determines how much deviation of this operating point still gives a satisfactory gamma correction. Twisted nematic (TN) displays with 6-bit color depth per primary color have lowest quality. In-plane switching (IPS) displays with typically 8-bit color depth are better. Good monitors have 10-bit color depth, have hardware color management and allow hardware calibration with a tristimulus colorimeter. Often a 6bit plus FRC panel is sold as 8bit and a 8bit plus FRC panel is sold as 10bit. FRC is no true replacement for more bits. The 24-bit and 32-bit color depth formats have 8 bits per primary color. With Microsoft Windows 7 and above the user can set the gamma correction through the display color calibration tool dccw.exe or other programs. These programs create an ICC profile file and load it as default. This makes color management easy. Increase the gamma slider in the dccw program until the last colored area, often the green color, has the same brightness in checkered and homogeneous area. Use the color balance or individual colors gamma correction sliders in the gamma correction programs to adjust the two other colors. Some old graphics card drivers do not load the color Look Up Table correctly after waking up from standby or hibernate mode and show wrong gamma. In this case update the graphics card driver. On some operating systems running the X Window System, one can set the gamma correction factor (applied to the existing gamma value) by issuing the command xgamma -gamma 0.9 for setting gamma correction factor to 0.9, and xgamma for querying current value of that factor (the default is 1.0). In macOS systems, the gamma and other related screen calibrations are made through the System Preferences. Scaling and blending The test image is only valid when displayed "raw", i.e. without scaling (1:1 pixel to screen) and color adjustment, on the screen. It does, however, also serve to point out another widespread problem in software: many programs perform scaling in a color space with gamma, instead of a physically-correct linear space. In a sRGB color space with an approximate gamma of 2.2, the image should show a "2.2" result at 50% size, if the zooming is done linearly. Jonas Berlin has created a "your scaling software sucks/rules" image based on the same principle. In addition to scaling, the problem also applies to other forms of downsampling (scaling down), such as chroma subsampling in JPEG's gamma-enabled Yโ€ฒCbCr. WebP solves this problem by calculating the chroma averages in linear space then converting back to a gamma-enabled space; an iterative solution is used for larger images. The same "sharp YUV" (formerly "smart YUV") code is used in sjpeg. Kornelski provides a simpler approximation by luma-based weighted average. Alpha compositing, color gradients, and 3D rendering are also affected by this issue. Paradoxically, when upsampling (scaling up) an image, the result processed in the "wrong" gamma-enabled space tends to be more aesthetically pleasing. 
This is because upscaling filters are tuned to minimize the ringing artifacts in a linear space, but human perception is non-linear and better approximated by gamma. An alternative way to trim the artifacts is using a sigmoidal light transfer function, a technique pioneered by GIMP's LoHalo filter and later adopted by madVR.
Terminology
The term intensity refers strictly to the amount of light that is emitted per unit of time and per unit of surface, in units of lux. Note, however, that in many fields of science this quantity is called luminous exitance, as opposed to luminous intensity, which is a different quantity. These distinctions, however, are largely irrelevant to gamma compression, which is applicable to any sort of normalized linear intensity-like scale. "Luminance" can mean several things even within the context of video and imaging: luminance is the photometric brightness of an object (in units of cd/m2), taking into account the wavelength-dependent sensitivity of the human eye (the photopic curve); relative luminance is the luminance relative to a white level, used in a color-space encoding; luma is the encoded video brightness signal, i.e., similar to the signal voltage Vs. One contrasts relative luminance in the sense of color (no gamma compression) with luma in the sense of video (with gamma compression), and denotes relative luminance by Y and luma by Y′, the prime symbol (′) denoting gamma compression. Note that luma is not directly calculated from luminance; it is the (somewhat arbitrary) weighted sum of gamma-compressed RGB components. Likewise, brightness is sometimes applied to various measures, including light levels, though it more properly applies to a subjective visual attribute.
Gamma correction is a type of power-law function whose exponent is the Greek letter gamma (γ). It should not be confused with the mathematical Gamma function. The lower-case gamma, γ, is a parameter of the former; the upper-case letter, Γ, is the name of (and symbol used for) the latter (as in Γ(x)). To use the word "function" in conjunction with gamma correction, one may avoid confusion by saying "generalized power-law function". Without context, a value labeled gamma might be either the encoding or the decoding value. Caution must be taken to correctly interpret the value as that to be applied-to-compensate or to be compensated-by-applying its inverse. In common parlance, on many occasions the decoding value (such as 2.2) is employed as if it were the encoding value, instead of its inverse (1/2.2 in this case), which is the real value that must be applied to encode gamma.
See also
Brightness BT.1886 Callier effect Color balance Color cast Color management Color grading Color temperature Contrast (vision) Luminance Luminance (video) Luminance (relative) Post-production Standard-dynamic-range video Telecine Tone mapping Transfer functions in imaging Video calibration software White point
References
External links
General information
PNG Specification; Version 1.0; 13. Appendix: Gamma Tutorial Rehabilitation of Gamma by Charles Poynton Frequently Asked Questions about Gamma CGSD – Gamma Correction Home Page by Computer Graphics Systems Development Corporation Stanford University CS 178 interactive Flash demo about gamma correction.
A Standard Default Color Space for the Internet – sRGB, defines and explains viewing gamma, camera gamma, CRT gamma, LUT gamma and display gamma Gamma error in picture scaling by Eric Brasseur WHAT EVERY CODER SHOULD KNOW ABOUT GAMMA by JOHN NOVAK Monitor gamma tools The Lagom LCD monitor test pages The Gamma adjustment page Monitor test pattern for correct gamma correction (by Norman Koren) QuickGamma Display technology Science of photography Power laws Photometry
26855273
https://en.wikipedia.org/wiki/1930%20Rose%20Bowl
1930 Rose Bowl
The 1930 Rose Bowl was the 16th Rose Bowl game, an American post-season college football game that was played on New Year's Day 1930 in Pasadena, California. It featured the Pittsburgh Panthers against the USC Trojans.
Scoring
First Quarter
USC – Edelson, 55-yard pass from Saunders (Shaver kick good)
USC – Erny Pinckert, 25-yard pass from Saunders (Shaver kick blocked)
Second Quarter
USC – Duffield, 1-yard run (Baker kick failed)
USC – Duffield, 1-yard run (Baker kick good)
Third Quarter
USC – Russ Saunders, 16-yard run (Shaver kick good)
Pitt – Walinchus, 28-yard pass from Baker (Parkinson kick good)
USC – Edelson, 39-yard pass from Saunders (Baker kick good)
Fourth Quarter
Pitt – Collins, 36-yard pass from Williams (Parkinson kick good)
USC – Wilcox, 57-yard pass from Duffield (Duffield dropkicked extra point)
Game notes
By losing to the Trojans, the Panthers gave up the most points since 1903.
References
Rose Bowl Rose Bowl Game Pittsburgh Panthers football bowl games USC Trojans football bowl games 1930 in sports in California January 1930 sports events
296007
https://en.wikipedia.org/wiki/Telephone%20number%20mapping
Telephone number mapping
Telephone number mapping is a system of unifying the international telephone number system of the public switched telephone network with the Internet addressing and identification name spaces. Internationally, telephone numbers are systematically organized by the E.164 standard, while the Internet uses the Domain Name System (DNS) for linking domain names to IP addresses and other resource information. Telephone number mapping systems provide facilities to determine applicable Internet communications servers responsible for servicing a given telephone number using DNS queries. The most prominent facility for telephone number mapping is the E.164 number to URI mapping (ENUM) standard. It uses special DNS record types to translate a telephone number into a Uniform Resource Identifier (URI) or IP address that can be used in Internet communications. Rationale Being able to dial telephone numbers the way customers have come to expect is considered crucial for the convergence of classic telephone service (PSTN) and Internet telephony (Voice over IP, VoIP), and for the development of new IP multimedia services. The problem of a single universal personal identifier for multiple communication services can be solved with different approaches. One simple approach is the Electronic Number Mapping System (ENUM), developed by the IETF, using existing E.164 telephone numbers, protocols and infrastructure to indirectly access different services available under a single personal identifier. ENUM also permits connecting the IP world to the telephone system in a seamless manner. System details For an ENUM subscriber to be able to activate and use the ENUM service, it needs to obtain three elements from a Registrar: A personal Uniform Resource Identifier (URI) to be used on the IP part of the network, as explained below. One E.164 regular personal telephone number associated with the personal URI, to be used on the PSTN part of the network. Authority to write their call forwarding/termination preferences in the NAPTR record accessible via the personal URI. This works as follows: (1) the Registrar provides the Subscriber (or Registrant) with a domain name, the URI, that will be used for accessing a DNS server to fetch a NAPTR record, (2) a personal E.164 telephone number (the ENUM number). The URI domain name of (1) is biunivocally associated (one-to-one mapped) to the subscriber E.164 ENUM number of (2). Finally (3) the NAPTR record corresponding to the subscriber URI contains the subscriber call forwarding/termination preferences. Therefore, if a calling party being at the PSTN network dials a called party ENUM number by touch typing the E.164 called party number, the number will be translated at the ENUM gateway into the corresponding URI. This URI will be used for looking-up and fetching the NAPTR record obtaining the called party wishes about how the call should be forwarded or terminated (either on IP or on PSTN terminations) โ€“ the so-called access information โ€“ which the registrant (the called party) has specified by writing his/her choice at the NAPTR record ("Naming Authority Pointer Resource Records" as defined in RFC 3403), such as e-mail addresses, a fax number, a personal website, a VoIP number, mobile telephone numbers, voice mail systems, IP-telephony addresses, web pages, GPS coordinates, call diversions or instant messaging. 
Alternatively, when the calling party is at the IP side, the User Agent (UA) piece of software of the dialler will allow to dial a E.164 number, but the dialler UA will convert it into a URI, to be used for looking-up at the ENUM gateway DNS and fetch the NAPTR record obtaining the called party wishes about how the call should be forwarded or terminated (again, either on IP or on PSTN terminations). Calling by using a new personal E.164 number (the ENUM number) to look-up at a database is therefore an indirect calling support service. The ITU ENUM allocates a specific zone, namely "e164.arpa" for use with ENUM E.164 numbers on the IP side of the network. RFC 6116 defines how any E.164 number, such as +1 555 42 42 can be transformed into a URI, by reversing the numbers, separating them with dots and adding the e164.arpa suffix thus: 2.4.2.4.5.5.5.1.e164.arpa The URI can then be used for obtaining the Internet Protocol addresses for services such as the Session Initiation Protocol (SIP) VoIP telephony. In the DNS, NAPTR records are used for setting the subscriber call forwarding/termination preferences. Therefore, the whole system can 'translate' E.164 addresses to SIP addresses. An example NAPTR record is: $ORIGIN 2.4.2.4.5.5.5.1.e164.arpa. IN NAPTR 100 10 "u" "E2U+sip" "!^.*$!sip:[email protected]!" . IN NAPTR 102 10 "u" "E2U+mailto" "!^.*$!mailto:[email protected]!" . This example specifies that if you want to use the "E2U+sip" service, you should use sip:[email protected] as the address. The regular expression can be used by a telephone company to easily assign addresses to all of its clients. For example, if your number is +15554242, your SIP address is sip:[email protected]; if your number is +15551234, your SIP address is sip:[email protected]. The following Figure illustrates how ENUM works by giving an example: Subscriber A sets out to call Subscriber B. The user agent of an ENUM-enabled subscriber terminal device, or a PBX, or a gateway, translates the request for the number +34 98 765 4321 in accordance with the rule described in RFC 6116 into the ENUM domain 1.2.3.4.5.6.7.8.9.4.3.e164.arpa. A request is sent to the DNS for the NAPTR record of the domain name 1.2.3.4.5.6.7.8.9.4.3.e164.arpa. The query returns a result set of NAPTR records, as per RFC 3403. In the example above, the response is an address that can be reached in the Internet using the VoIP protocol SIP per RFC 3261. The terminal application now sets up a communication link, and the call is routed via the Internet. The ENUM user does not notice anything of this reversal and the DNS database look-up, as this is done automatically behind the scenes using a user agent software in his PC or terminal, or at the PABX or Gateway. For instance, when the user types the telephone number in his web browser ENUM enabled agent and indicates what item of information he is looking for (email address, telephone number, web address, etc.) in the PC or terminal the number is converted to a domain name. This is sent to ENUM servers on the Internet, which send back the NAPTR records associated with the name. The access information and any priority indicated for them are stored in these. The user gets the requested address back on his PC or terminal. 
ENUM therefore in fact functions as a mechanism for translating a telephone number into a domain name with the requested address or number associated with it, but without the user viewing how this is done, just as he is currently unaware that he is using the DNS when he makes a connection with the Internet or what is going on at the telephone switch when he makes a call. Uses Call forwarding One way of doing call forwarding with ENUM is illustrated in the next figure. The caller uses the telephone to dial the number of another subscriber, which leads to an ENUM lookup (such as is provided by SIP Broker). The DNS responds to the caller by returning a list with NAPTR records for VoIP communication, telephone numbers and email addresses. Next, an attempt will be made, using the VoIP record from this list, to establish a connection with the subscriber. If the subscriber is not online, the next record selected will be that for a connection to a PSTN or mobile telephone. If this attempt fails too, a voice message will be sent to the subscriber via a listed email address. Subdomains of e164.arpa are delegated on a country-code basis by the ITU. Each delegation is normally made to a regulatory body designated by the national government for the country code concerned. What happens at a country level is a National Matter. In general the conventional DNS registry-registrar model is used. The national ENUM registry manages and operates the DNS infrastructure and related systems for country-code.e164.arpa. It takes registration requests from registrars who are agents of the end users, the registrants. Registrars are typically VoIP providers and telcos who bundle an ENUM registration as part of a VoIP service package. People using an ENUM-enabled VoIP service can dial the registrant's existing number and be connected to the registrant's VoIP telephone over the Internet instead of using the PSTN. When they call someone who does not use ENUM, calls complete over the Public Switched Telephone Network or PSTN in the usual manner. Support for .e164.arpa varies widely between countries; many do not support it at all. Alternative ENUM-like registries have also emerged. These services verify PSTN numbers and can be used in addition to or as an alternative to e164.arpa. However, if the registry in which a callee's number is not known by the caller, the choice between registries can create confusion and complexity. Multiple DNS lookups may be needed and it is far from simple to know which E.164 numbers are registered in which of these alternate ENUM-like trees. It is also possible that if an E.164 number is registered in several of these trees, there can be inconsistencies in the information that is returned. Furthermore, the subscriber "owning" a particular E.164 number may not be aware that their number has been entered into one or more of these alternate ENUM-like trees or what information these alternate trees are returning for their E.164 number. Called party facility ENUM can also be viewed as a called party facility. Basically, it is an indirect dialling service designed to work seamlessly on PSTN and VoIP that builds on the great value of the E.164 numbers: billions of people knowing how to dial using numbers. If the called person has opted to use ENUM she/he will have published the ENUM number and have entered (via ENUM NAPTR) his/her wishes for how the call should be terminated. 
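The number-to-domain rule and the NAPTR regexp field described above can both be sketched in a few lines of Python. The helper names are illustrative, and the DNS query that would actually fetch the NAPTR records is omitted; this is only a sketch of the two transformations.

import re

def e164_to_enum_domain(number, suffix="e164.arpa"):
    # Reverse the digits of an E.164 number, dot-separate them and append the ENUM suffix.
    # '+1 555 42 42' -> '2.4.2.4.5.5.5.1.e164.arpa'
    digits = re.sub(r"\D", "", number)          # keep digits only
    return ".".join(reversed(digits)) + "." + suffix

def apply_naptr_regexp(naptr_regexp, dialled_number):
    # Apply a NAPTR regexp field such as '!^.*$!sip:[email protected]!' to the dialled number.
    delimiter = naptr_regexp[0]
    pattern, replacement, _ = naptr_regexp[1:].split(delimiter)
    return re.sub(pattern, replacement, dialled_number, count=1)

print(e164_to_enum_domain("+1 555 42 42"))                            # 2.4.2.4.5.5.5.1.e164.arpa
print(e164_to_enum_domain("+34 98 765 4321"))                         # 1.2.3.4.5.6.7.8.9.4.3.e164.arpa
print(apply_naptr_regexp("!^.*$!sip:[email protected]!", "+15554242"))  # sip:[email protected]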
This might be a single VoIP identifier, but most likely it will be a list of how the call should be forwarded to various fixed-line, cellphones, secretarial or voice mail services, either at the IP or at the PSTN side of the network. It is the called party choice to opt-in ENUM and also to decide to let the calling party know her/his wishes. Today when a user places a regular phone call, he has to begin deciding how to establish the call with the other party: via VoIP, Fixed-line PSTN, cellphone, entering a URI or dialling a number. With ENUM indirect dialling it is the called party wishes that matter and solve that decision. Another benefit of indirect dialling is to free the user to change his phone telco, webpage, IMS, email or whatever telecom service he uses without having to tell all his contacts about that. A presence enhanced ENUM facility having various profiles could automatically change the called party wishes as a function of where he/she is available. This could be a mechanism to automatically switch between cellphone and VoIP to the most convenient (or the less costing) termination. ENUM varieties One potential source of confusion, when talking about ENUM, is the variety of ENUM implementations in place today. Quite often, people speaking of ENUM are really referring to only one of the following: Public ENUM: The original vision of ENUM as a global, public directory-like database, with subscriber opt-in capabilities and delegation at the country code level in the e164.arpa domain. This is also referred to as user ENUM. Open Enum: An effort of mobile carriers and other parties involved in mobile numbering plans to generate complete, public database of all international numbering plan, available via public dns. Private ENUM: A carrier, VoIP operator or ISP may use ENUM techniques within its own networks, in the same way DNS is used internally to networks. Carrier ENUM: Groups of carriers or communication service providers agree to share subscriber information via ENUM in private peering relationships. The carriers themselves control subscriber information, not the individuals. Carrier ENUM is also referred to as infrastructure ENUM, and is being the subject of new IETF recommendations to support VoIP peering..... Parties having a direct interest in ENUM Various parties are involved with ENUM. These include: The registrant or subscriber The registrant is the person or subscriber that makes his access information available to others through ENUM. The registrant or subscriber is thus the person whose information has been included in ENUM and must not be confused with the person who uses the Internet to find an address through ENUM. The registrar The registrar is the party who manages the registrantโ€™s access information and ensures that it is publicly available on the Internet. The registry The registry is the manager of a national ENUM zone. The registry forms, as it were, the top of the national ENUM hierarchy and ensures that reference is made to the registrarsโ€™ servers on which the access information is located. Because of the hierarchical structure of the DNS, there can only be one registry for every national ENUM zone. To prevent abuse of this position, requirements are strict with respect to the impartiality of the registry and the costs and quality of the service. In addition every registrant must receive equal and open access. 
The government or the regulator Usually a governmental entity or a regulatory authority has control over the National zone of ENUM and will play a role in the appointment of the registry. The number holder operator Telephony services or telecommunication services operators have been assigned blocks of numbers by the regulator. They subsequently enable their users to use individual telephone numbers from those number blocks. Examples are the numbers for fixed telephony and mobile telephony. The number holder operator will be interconnected to other operators and will receive from them calls to his assigned range of numbers, for the calls to be terminated. In ENUM the number holder operator will typically be the gateway operator or, alternatively, will have an arrangement with a gateway operator, to whom he will transit the calls. But ENUM is a personal number, meant to be valid for the registrant life. Consequently in ENUM once the operator number holder assigns a number to a registrant, the number belongs to that registrant during his/her entire life. Hence, if the registrant wishes to change his initial number holder operator (that might also coincide being his gateway operator) there have to be provisions for the ENUM number to be ported from the initial operator to other number holder operators. You can find more information and further parties involved in the ENUM ecosystem in RFC 4725. See also Carrier of Record DNS mapping of E.164 numbers References - The E.164 to Uniform Resource Identifiers (URI) Dynamic Delegation Discovery System (DDDS) Application (ENUM) - IANA Registration of Enumservices: Guide, Template, and IANA Considerations - ENUM Validation Architecture - Dynamic Delegation Discovery System (DDDS) Part Three: The Domain Name System (DNS) Database ENUM - The bridge between telephony and internet ENUM - It's All in the Numbers http://www.itu.int/osg/spu/enum/ External links GSMA PathFinder Carrier ENUM Technology CircleID: ENUM Convergence ENUM: Mapping the E.164 Number Space into the DNS International telecommunications Telephone numbers
6513
https://en.wikipedia.org/wiki/Client%E2%80%93server%20model
Clientโ€“server model
Client-server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients. Often clients and servers communicate over a computer network on separate hardware, but both client and server may reside in the same system. A server host runs one or more server programs, which share their resources with clients. A client usually does not share any of its resources, but it requests content or service from a server. Clients, therefore, initiate communication sessions with servers, which await incoming requests. Examples of computer applications that use the client-server model are email, network printing, and the World Wide Web. Client and server role The "client-server" characteristic describes the relationship of cooperating programs in an application. The server component provides a function or service to one or many clients, which initiate requests for such services. Servers are classified by the services they provide. For example, a web server serves web pages and a file server serves computer files. A shared resource may be any of the server computer's software and electronic components, from programs and data to processors and storage devices. The sharing of resources of a server constitutes a service. Whether a computer is a client, a server, or both, is determined by the nature of the application that requires the service functions. For example, a single computer can run a web server and file server software at the same time to serve different data to clients making different kinds of requests. The client software can also communicate with server software within the same computer. Communication between servers, such as to synchronize data, is sometimes called inter-server or server-to-server communication. Client and server communication Generally, a service is an abstraction of computer resources and a client does not have to be concerned with how the server performs while fulfilling the request and delivering the response. The client only has to understand the response based on the well-known application protocol, i.e. the content and the formatting of the data for the requested service. Clients and servers exchange messages in a requestโ€“response messaging pattern. The client sends a request, and the server returns a response. This exchange of messages is an example of inter-process communication. To communicate, the computers must have a common language, and they must follow rules so that both the client and the server know what to expect. The language and rules of communication are defined in a communications protocol. All protocols operate in the application layer. The application layer protocol defines the basic patterns of the dialogue. To formalize the data exchange even further, the server may implement an application programming interface (API). The API is an abstraction layer for accessing a service. By restricting communication to a specific content format, it facilitates parsing. By abstracting access, it facilitates cross-platform data exchange. A server may receive requests from many distinct clients in a short period. A computer can only perform a limited number of tasks at any moment, and relies on a scheduling system to prioritize incoming requests from clients to accommodate them. To prevent abuse and maximize availability, the server software may limit the availability to clients. 
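As a minimal, self-contained illustration of the request-response pattern described above (not a description of any particular production system), the sketch below runs a tiny TCP server in a background thread and has a client open a session, send one request, and read one response. The port number and the trivial "uppercase" protocol are arbitrary assumptions made for the example.

import socket
import threading

HOST, PORT = "127.0.0.1", 50007   # arbitrary local address and port for this sketch

# Server role: bind and listen first so the client cannot connect too early.
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server_socket.bind((HOST, PORT))
server_socket.listen(1)

def serve_once():
    # Await one incoming request and return a response (a trivial "uppercase" service).
    connection, _address = server_socket.accept()
    with connection:
        request = connection.recv(1024)
        connection.sendall(request.upper())
    server_socket.close()

server_thread = threading.Thread(target=serve_once)
server_thread.start()

# Client role: initiate the session, send one request, read one response.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client_socket:
    client_socket.connect((HOST, PORT))
    client_socket.sendall(b"hello server")
    print(client_socket.recv(1024))   # b'HELLO SERVER'

server_thread.join()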
Denial of service attacks are designed to exploit a server's obligation to process requests by overloading it with excessive request rates. Encryption should be applied if sensitive information is to be communicated between the client and the server. Example When a bank customer accesses online banking services with a web browser (the client), the client initiates a request to the bank's web server. The customer's login credentials may be stored in a database, and the webserver accesses the database server as a client. An application server interprets the returned data by applying the bank's business logic and provides the output to the webserver. Finally, the webserver returns the result to the client web browser for display. In each step of this sequence of client-server message exchanges, a computer processes a request and returns data. This is the request-response messaging pattern. When all the requests are met, the sequence is complete and the web browser presents the data to the customer. This example illustrates a design pattern applicable to the clientโ€“server model: separation of concerns. Early history An early form of client-server architecture is remote job entry, dating at least to OS/360 (announced 1964), where the request was to run a job, and the response was the output. While formulating the clientโ€“server model in the 1960s and 1970s, computer scientists building ARPANET (at the Stanford Research Institute) used the terms server-host (or serving host) and user-host (or using-host), and these appear in the early documents RFC 5 and RFC 4. This usage was continued at Xerox PARC in the mid-1970s. One context in which researchers used these terms was in the design of a computer network programming language called Decode-Encode Language (DEL). The purpose of this language was to accept commands from one computer (the user-host), which would return status reports to the user as it encoded the commands in network packets. Another DEL-capable computer, the server-host, received the packets, decoded them, and returned formatted data to the user-host. A DEL program on the user-host received the results to present to the user. This is a client-server transaction. Development of DEL was just beginning in 1969, the year that the United States Department of Defense established ARPANET (predecessor of Internet). Client-host and server-host Client-host and server-host have subtly different meanings than client and server. A host is any computer connected to a network. Whereas the words server and client may refer either to a computer or to a computer program, server-host and client-host always refer to computers. The host is a versatile, multifunction computer; clients and servers are just programs that run on a host. In the client-server model, a server is more likely to be devoted to the task of serving. An early use of the word client occurs in "Separating Data from Function in a Distributed File System", a 1978 paper by Xerox PARC computer scientists Howard Sturgis, James Mitchell, and Jay Israel. The authors are careful to define the term for readers, and explain that they use it to distinguish between the user and the user's network node (the client). By 1992, the word server had entered into general parlance. Centralized computing The client-server model does not dictate that server-hosts must have more resources than client-hosts. Rather, it enables any general-purpose computer to extend its capabilities by using the shared resources of other hosts. 
Centralized computing, however, specifically allocates a large number of resources to a small number of computers. The more computation is offloaded from client-hosts to the central computers, the simpler the client-hosts can be. It relies heavily on network resources (servers and infrastructure) for computation and storage. A diskless node loads even its operating system from the network, and a computer terminal has no operating system at all; it is only an input/output interface to the server. In contrast, a rich client, such as a personal computer, has many resources and does not rely on a server for essential functions. As microcomputers decreased in price and increased in power from the 1980s to the late 1990s, many organizations transitioned computation from centralized servers, such as mainframes and minicomputers, to rich clients. This afforded greater, more individualized dominion over computer resources, but complicated information technology management. During the 2000s, web applications matured enough to rival application software developed for a specific microarchitecture. This maturation, more affordable mass storage, and the advent of service-oriented architecture were among the factors that gave rise to the cloud computing trend of the 2010s. Comparison with peer-to-peer architecture In addition to the client-server model, distributed computing applications often use the peer-to-peer (P2P) application architecture. In the clientโ€“server model, the server is often designed to operate as a centralized system that serves many clients. The computing power, memory and storage requirements of a server must be scaled appropriately to the expected workload. Load-balancing and failover systems are often employed to scale the server beyond a single physical machine. Load balancing is defined as the methodical and efficient distribution of network or application traffic across multiple servers in a server farm. Each load balancer sits between client devices and backend servers, receiving and then distributing incoming requests to any available server capable of fulfilling them. In a peer-to-peer network, two or more computers (peers) pool their resources and communicate in a decentralized system. Peers are coequal, or equipotent nodes in a non-hierarchical network. Unlike clients in a client-server or clientโ€“queueโ€“client network, peers communicate with each other directly. In peer-to-peer networking, an algorithm in the peer-to-peer communications protocol balances load, and even peers with modest resources can help to share the load. If a node becomes unavailable, its shared resources remain available as long as other peers offer it. Ideally, a peer does not need to achieve high availability because other, redundant peers make up for any resource downtime; as the availability and load capacity of peers change, the protocol reroutes requests. Both clientโ€“server and masterโ€“slave are regarded as sub-categories of distributed peer-to-peer systems. See also Notes Inter-process communication Network architecture
60839553
https://en.wikipedia.org/wiki/Procreate%20%28software%29
Procreate (software)
Procreate is a raster graphics editor app for digital painting developed and published by Savage Interactive for iOS and iPadOS. Designed in response to the artistic possibilities of the iPad, it was launched on the App Store (iOS) in 2011. Versions Procreate Currently, in version 5.2, Procreate for iPad was first released in 2011 by Tasmanian software company Savage Interactive. In 2013 it received an Apple Design Award, and was subject to wide publicity when artist Kyle Lambert's hyper-realistic Procreate finger painting of Morgan Freeman went viral. In the hands of professional artists, Procreate has been used to create the posters for Stranger Things, Logan, and Blade Runner 2049, as well as several covers for The New Yorker. It has also been adopted by fine artists, tattoo artists, and creatives at Marvel Comics, DC Comics, Disney Animation, and Pixar. In 2016 Procreate became one of the top ten best-selling iPad apps on the App Store, rose into the top two in 2017, and became the overall best-selling iPad app in 2018. It is regularly featured in Apple Inc.'s product launches. It has become an important digital art software for beginning and advanced artists alike Procreate Pocket Featuring a stripped-back interface released to the App Store in December 2014. Initially, Procreate Pocket incorporated almost every tool found in Procreate; however, as Procreate amassed additional features via multiple updates over the following years, Pocket fell behind. In 2018, Savage launched Procreate Pocket 2.0, which brought the iPhone version of the app back to feature parity with the iPad version. In December of 2018, Procreate Pocket received Apple's 'App of the Year' award. Notable users Users of Procreate include comics artist and DC Comics co-publisher Jim Lee, who has used it to sketch Batman and the Joker. British fine artist David Hockney created a series of landscape paintings using Procreate. Kyle Lambert, a poster artist notable for creating the Stranger Things poster in Procreate, is also known for his viral Procreate finger-painting of Morgan Freeman. Artist James Jean uses Procreate for film poster work, as with his poster for Blade Runner 2049. Art Director for Ubisoft and Electronic Arts, Raphael Lacoste, uses Procreate for studies. Concept artist Doug Chiang creates a robot, vehicle, and creature designs for Star Wars in Procreate. John Dyer, the English landscape painter, used Procreate as part of the 'Last Chance to Paint' project, a partnership with the Eden Project that sent Dyer to stay with the Yaminawรก in the Amazon rainforest, where he painted the experience. Procreate is also used in-house at Disney and Pixar. After winning an Apple Design Award in June 2013, Savage launched Procreate 2 in conjunction with iOS7, adding 4K file support, continuous auto-save, and customizable brushes. In December 2018, Procreate became the best-selling app on the App Store, and Procreate Pocket received Apple's 'App of the Year award. See also Digital art Digital painting Tradigital art Graphic art software Raster graphics Comparison of raster graphics editors References External links 2011 software Digital art Graphics software IOS software IPadOS software Raster graphics editors
1088744
https://en.wikipedia.org/wiki/Gadjah%20Mada%20University
Gadjah Mada University
Gadjah Mada University (; , abbreviated as UGM) is a public research university located in Sleman, Special Region of Yogyakarta, Indonesia. Officially founded on 19 December 1949, Gadjah Mada University is one of the oldest and largest institutions of higher education in the country. It is widely known as the largest and the first state university in the nation. It has been credited as one of the best universities in Indonesia. In the 2021 QS World Universities Ranking, UGM is ranked 1st in Indonesia and 254th in the world. When the university was established in the 1940s under Dutch rule, it was the first medicine faculty freely open to native Indonesians, at a time when native education was often restricted. Comprising 18 faculties and 27 research centers, UGM offers 68 undergraduate, 23 diploma, 104 master and specialist, 43 doctorate and 4 clusters of post doctoral study programs. The university has enrolled approximately 55,000 students, 1,187 foreign students, and has 2,500 faculty members. UGM maintains a campus of , with facilities that include a stadium and a fitness center. The university is named after Gajah Mada, a 14th-century leader of the Majapahit Empire of Java, considered by some historians to be the nation's first unifier; the university's name still reflects the old Dutch-era spelling. The 7th President of Indonesia, Joko Widodo, earned his degree in forestry in 1985. History UGM was the first state university in Indonesia, established as Universiteit Negeri Gadjah Mada (UNGM) when Indonesia was still facing threats from the Netherlands, who wanted to regain control. At the time, the capital of Indonesia had moved from Jakarta to Yogyakarta. UGM was established through Government Regulation (PP) No. 23 of 1949, regarding the merger of colleges to form a university. Although the regulations were dated 16 December, UGM's inauguration took place on 19 December, intentionally chosen to coincide with the anniversary of the Dutch invasion of Yogyakarta, exactly one year prior on 19 December 1948. The intentional date was meant to show that one year after the Netherlands had invaded the city, the government would establish a nationwide university there. When it was founded, UGM had six faculties: Medicine, Dentistry, and Pharmacy; Law, Social and Political Sciences; Engineering; Letters, Pedagogy and Philosophy; Agriculture; and Veterinary Medicine. From 1952 until 1972, the Faculty of Law, Social and Political Sciences was split into two faculties: the Surabaya branch of the Faculty of Law, Social, and Political Sciences; and the Faculty of Education and Teacher Training, which was integrated into IKIP Yogyakarta (now Universitas Negeri Yogyakarta). During its initial years of Dutch resistance, the university taught literature and law in the buildings and other facilities belonging to the palace of Sultan Hamengkubuwono IX, who volunteered his palace for the university's use. UGM gradually established a campus of its own in Bulaksumur, on the northern side of Yogyakarta, and now occupies an area of three square kilometres. Main buildings The UGM main building is called the Balairung, a rectorate building, in Sleman. Nearby is the Graha Sabha Pramana, a large building utilized for graduation ceremonies, with an adjoining square used for sport and recreation. There is also a university library and a sports center, consisting of a stadium, tennis court, and basketball field. 
Most of the main campus is located in Sleman, with the small parts (such as part of the Vocational School and part of the Faculty of Social and Political Sciences) is located within Yogyakarta city. Faculties and schools The UGM administration is divided into 18 faculties, offering study programs from the undergraduate to post doctoral level. There is also a vocational school offering vocational study programmes. Faculties Faculty of Biology Faculty of Agricultural Technology Faculty of Agriculture Faculty of Animal Science Faculty of Cultural Sciences (Arts and Humanities) Faculty of Dentistry Faculty of Economics and Business Faculty of Engineering Faculty of Forestry Faculty of Geography Faculty of Law Faculty of Mathematics and Natural Sciences Faculty of Medicine Faculty of Pharmacy Faculty of Philosophy Faculty of Psychology Faculty of Social and Political Science Faculty of Veterinary Medicine Undergraduate programmes International undergraduate programmes Chemistry Tourism Accounting, Business, Economics Law International Relations Computer Science English Medicine Computer Science International Undergraduate Programme CSIUP began in the 2012 academic year. It offers undergraduate computer science classes in English. It teaches algorithm and software design, intelligent systems, programmable logic and embedded systems, and mobile computing. The Faculty of Mathematics and Natural Sciences has been teaching Computer Science courses since 1987 (BSc), 2000 (MSc), and 2003 (PhD), organized jointly by the Department of Mathematics and the Department of Physics, which has also offered courses in Electronics and Instrumentation since 1987 (BSc). In 2010, the Department of Computer Science and Electronics (DCSE) was formed by merging Computer Science resources within the Department of Mathematics with the Electronics and Instrumentation group within the Department of Physics. Students of DCSE have won gold medals in robotics competitions both nationally and internationally (in Korea in 2012 with a humanoid robot, and in the US in 2013 with a legged robot). Medicine International Undergraduate Programme In 2002, UGM began offering an English-language-based medicine programme for overseas and Indonesian students to study medicine with an international standard curriculum. The International Medicine Programme is over five years, with the first three and a half years being study and a further one and a half years of clinical rotations. The programme is designed around a problem based learning approach, making use of small study groups. Schools UGM Graduate School UGM Vocational School Business school In 1988, UGM opened a master's programme in management (MM-UGM), to train students in business practices. The program is a collaboration with the University of Kentucky and Temple University. The Faculty of Economics and Business UGM is ranked among 5% of world best business schools after it received an international Association to Advance Collegiate Schools of Business (AACSB) accreditation. Medical school The Faculty of Medicine UGM is one of the oldest medical schools in Indonesia, having been established on 5 March 1946. It is ranked number 72 by the Times Higher Education Supplement 2006 for biomedicine. 
Post Doctoral Programmes Cluster of Social & Humanity Sciences Cluster of Medical & Health Sciences Cluster of Science & Technology Cluster of Agro Complex / Life & Agro Sciences Research centers UGM has 24 university-level research and study centers: Center for Agroecology and Land Resources Studies Center for Asia - Pacific Studies Center for Disaster Studies Research Center for Biotechnology Center for Economic and Public Policy Studies Center for Economic Democracy Studies Center for Energy Studies Center for Clinical Pharmacology Studies and Drug Policy Center for Security and Peace Studies. Center for Cultural Studies Center for Population and Policy Studies Center For Environmental Studies Center for Pancasila Studies Center for Pharmaceutical Industry and Health Technology Studies Center for Food and Nutrition Studies Center for Tourism Studies Center for Rural and Regional Development Studies Research Center for Management of Biological Resources Center for World Trade Studies Center for Studies in Regional Development Planning Center for Southeast Asian Social Studies Center For Marine Resource Development and Technology Center For Transportation and Logistics Studies Center For Women Studies UGM maintains the Integrated Research and Testing Laboratory (LPPT), which is the university's central laboratory. Achievements In 2013, the chemistry undergraduate program received accreditation from the Royal Society of Chemistry (RSC) in the United Kingdom, the largest European-based international organization devoted to the advancement of chemical science. The first such international accreditation received by the university, it is effective from 5 March 2013 until March 2018. Rankings The university was ranked 254th in the world be QS World University Rankings for 2021. UGM is ranked in the top 50 in the world, according to Times Higher Education (THE) Impact Ranking on the seven criteria of Sustainable Development Goals (SDGs). THE Impact Ranking this year was attended by 766 prestigious institutions throughout the world. In the overall assessment this year, UGM ranked 72 in the world. UGM ranks 16th in the world for Zero Hunger, 24th in the Partnership for the Goals indicator, 25th in the world for No Poverty indicators, and 26th in the world for the indicator for Mainland Ecosystems (Life on Land). For the Clean Water and Sanitation indicator, UGM ranks 34th in the world, the Decent Work and Economic Growth indicator rank 41st in the world. For the Reduced Inequalities indicator, it ranks 49th in the world. UGM also ranks 51โ€“100 in the world for 5 SDGs, 101-200 for the 2 SDGs, and 201-300 for the other three SDGs. Student achievement 1st-place winner of Fire-Fighting category, 1st-place winner of Stand Balancing, and 2nd-place winner of Walker Challenge, Robogames competition, USA 2012 3rd Best Memorial Award Asia Cup 2012. 
International Law Moot Court Competition Asia Cup 2012, Japan The Best Technical Innovation Award for eSemar Xperimental, Shell Eco-Marathon (SEM) 2011, Malaysia Winner of Outstanding Achievement in 62nd Intel International Science and Engineering Fair – China Association for Science and Technology 2011 in Los Angeles 1st winner of Creative Robot, The 13th International Robot Olympiad 2011, Indonesia The Standard Commercial Movie Category Award, 7th GATSBY Student CM Award Student life Student orientation Every year UGM welcomes new students by holding a one-week student orientation session called PPSMB Palapa (Pelatihan Pembelajar Sukses bagi Mahasiswa Baru Palapa, "Training for New Students to be Successful Learners", named after the Palapa oath), which involves a short course introducing UGM's common knowledge, values, rules, and soft-skill education. On the last day of the program, there is a closing ceremony where students make a formation of a symbol or logo. In 2018, the students created a formation called Bersatu Nusantara Indonesia ("United Indonesian Archipelago") with the Indonesian national flag, to encourage a spirit of unity across differences in the country. Community service UGM organizes a community service called KKN-PPM (short for Kuliah Kerja Nyata-Pembelajaran Pemberdayaan Masyarakat, or "Student Community Service-Community Empowerment Learning" in English), which is obligatory for undergraduate students. KKN-PPM is a research-based community service offered three times each academic year, in the middle of both the odd and even semesters and between these two semesters. Not only local students join the KKN; international academics, including lecturers and students, are also involved in KKN-PPM UGM. In 2011, 150 international students participated in KKN-PPM, coming from many countries, such as South Korea, Australia, France, the US, and Norway. Other activities The Sports Activities Unit is coordinated by the Secretariat of Joint Sports, and the Arts Unit is coordinated by the Joint Secretariat of Arts. Sports activities include swimming, diving, inkai karate, kenpō, the Indonesian martial art pencak silat (including the variants of pencak silat merpati putih, self periasi pencak silat, pencak silat pro patria, and pencak silat setia hati terate), taekwondo, judo, hockey, soccer, softball, volleyball, basketball, athletics, equestrian, bridge, badminton, chess, and tennis. Arts activities include Arts Style Yogyakarta (Swagayugama), Art Style Surakarta, Balinese dance, creative dance, photography, fine arts, Gamma Band, marching band, keroncong, student choir, theatre, and other arts. Other activities include the Publisher Student Press Agency, Mapagama, Student Health Unit, Scout, Satmenwa, Cooperative Students "Kopma UGM", and AIESEC. Spirituality activities include the Unit of Islamic Spirituality (Jama'ah Shalahuddin), Unit of Catholic Spirituality, Christian Spirituality Unit, Hindu Spirituality Unit, and Buddhist Spirituality Unit. Reasoning activities include the Interdisciplinary Unit of Scientific Reasoning, Gama Scholar Reasoning Unit, and English Debating Society. Transportation There is a sepeda kampus (campus bike) service available inside UGM, with 8 stations and 5 substations across the campus. The UGM campus is also served by Trans Jogja bus stations in several locations, notably near the Faculty of Medicine, the Vocational School and the lecturers' eastern housing. 
Other facilities UGM Campus Mosque is a mosque owned by UGM and situated within its campus. It was designed entirely by students of UGM's architectural engineering department. It has a maximum capacity of 10,000 worshippers, making it one of the largest mosques in Southeast Asia. Madya Stadium, the softball/baseball field, and the tennis courts are located in the valley of UGM. The stadium can be used for football, athletics, hockey, and other activities. These facilities are available to UGM students, staff and the public. The Student Center Hall is used for sports activities such as basketball, volleyball, badminton, and martial arts, and for exhibitions and artistic performances. The open field in the valley of UGM can be used for musical performances or other student activities that require a wide open space. UGM also has several student dormitories across Yogyakarta. Controversies Yogyakarta Principles The Yogyakarta Principles, a set of principles set forth in Geneva, Switzerland, which were intended to apply international human rights law guidelines in support of the human rights of lesbian, gay, bisexual, and transgender (LGBT) people, were developed at Gadjah Mada University. However, the Yogyakarta communities, civil societies, and the Sultanate of Yogyakarta have not subscribed to these principles. The principles were deemed to be against the Constitution of Indonesia and Pancasila ideology by the Regional People's Representative Council (DPRD), Islamic and religious groups, and civil prosecutors, who attacked the LGBT community as being suspect in "promoting communism or westernization", although the Yogyakarta Principles merely address ending violence, abuse, and discrimination against LGBT people. 2016 student demonstration In 2016, more than 1,000 of UGM's students and staff flocked to the university's headquarters for a demonstration that was said to be the biggest since the 1998 national demonstrations. The demonstration proceeded peacefully, with no damage reported by the university, although tensions rose when the university's rector, Mrs. Dwikorita Karnawati, claimed that the demonstration was a simulation officially held by UGM. Three factors led to the demonstration: tuition fees (uang kuliah tunggal) that were deemed too expensive; the university's status as a "state university with corporation status" (PTNBH), which allowed the university to set tuition fee rates; and opposition to the relocation of the so-called "bonbin" canteen located between the Faculty of Cultural Sciences and the Faculty of Psychology. 2017 refusal to report alleged sexual assault On 5 November 2018, UGM's student publication body BPPM Balairung, through its online portal Balairungpress.com, published an article containing the account, from a female student ("Agni"), of an alleged rape she experienced at the hands of a male fellow student ("HS") while doing a student work experience (Kuliah Kerja Nyata – KKN) program on Seram Island, Maluku, in June 2017. However, the case is still under investigation. Upon learning of the rape allegation, UGM–KKN officials chose not to forward Agni's accusation to the police. Instead, they were skeptical of Agni's account. Regardless, HS was pulled from the KKN program about a week later because he was deemed to be "incompatible" with other KKN participants. After Agni returned to Yogyakarta in September 2017, she received a C grade for the program, apparently in retaliation for the shame her allegation had brought upon an official. 
Agni then filed a formal complaint about her alleged rape to higher-ranking officials at the university, who raised her grade to A/B but still did not report HS to law enforcement. Instead, the university agreed to pay for the counseling Agni had been seeking to deal with her trauma, and also required HS to attend counseling. HS was allowed to take part in another KKN program the semester after the alleged rape, and he is expected to graduate soon. UGM spokesperson Iva Ariani confirmed the account as told in Balairung Press and said that the university was taking further steps to investigate the rape allegation. "The case as told in Balairung Press did indeed happen. UGM has extraordinary empathy for the victim, we are also concerned about the incident", she told Kompas. Notable alumni University rectors Sukadji Ranuwihardjo – Rector of Gadjah Mada University (1973–1981) Pratikno – Rector of Gadjah Mada University (2012–2014), current Minister of State Secretariat Education Anies Baswedan – Minister of Education and Culture of the Republic of Indonesia (2014–2016), academician, current Governor of the Special Capital Region of Jakarta Economics J Soedrajad Djiwandono – former Governor of the Central Bank of Indonesia (1993–1998), Junior Minister of Trade (1988–1993) Perry Warjiyo – Governor of the Central Bank of Indonesia Health Siti Fadillah Supari – Minister of Health (2004–2009), cardiologist Politics Dewa Made Beratha – Governor of Bali (1998–2008) Boediono – Vice President of Indonesia (2009–2014), former Coordinating Minister for Economic Affairs, former Governor of the Central Bank of Indonesia Brigida Antónia Correia – East Timor MP (2007–18) & agricultural scientist Sri Sultan Hamengkubuwono X – 10th and current Sultan of Yogyakarta, Governor of the modern Yogyakarta Special Region Airlangga Hartarto – politician, Minister of Industry (2016–2019), Coordinating Minister for Economic Affairs (2019–present) Retno Marsudi – current Minister of Foreign Affairs, former Indonesian Ambassador to the Netherlands (2012–2015) Jahja Muhaimin – former Education Minister of Indonesia Fadel Muhammad – Vice President of the ASEAN Business Forum, Governor of Gorontalo (2001–2006) Ganjar Pranowo – politician, Governor of Central Java (2013–2018) & (2018–present) Amien Rais – former leader of Muhammadiyah Abdul Rahman Saleh – Attorney General of Indonesia Ben Mang Reng Say – politician, founder and rector of Atma Jaya Catholic University Budiman Sudjatmiko – politician Joko Widodo – President of Indonesia, former Governor of Jakarta, former Mayor of Surakarta Religion Ahmad Wahib – progressive Islamic intellectual Arts and culture Sapardi Djoko Damono – poet, professor at the University of Indonesia Artika Sari Devi – actress, model, Puteri Indonesia 2004 and Top 15 at Miss Universe 2005 in Bangkok, Thailand Helmi Johannes – Voice of America (VOA) Indonesia Executive Producer (2005–present) Umar Kayam – author and former President of the Jakarta Art Institute Kuntowijoyo – historian, author Eka Kurniawan – author, first Indonesian nominated for the Man Booker International Prize Emha Ainun Nadjib – poet, public speaker Jakob Oetama – founder of Kompas & CEO of Kompas Gramedia Susanto Pudjomartono – second chief editor of The Jakarta Post (1991–2003), Ambassador to Russia (2003–2008) Willibrordus S. 
Rendra – poet, lyricist, dramatist, and stage writer Putu Wijaya – novelist Science and technology Marlina Flassy – anthropologist and Dean of the Faculty of Social and Political Sciences at Cenderawasih University, where she was the first woman dean, and first indigenous Papuan to lead her faculty. Basuki Hadimuljono – Minister of Public Works & Housing (2014–2019) & (2019–present) Teuku Jacob – palaeoanthropologist, physician, anatomist Herman Johannes – rector, scientist, former Minister of Public Works (1950–1951) Djoko Kirmanto – Minister of Public Works & Housing (2004–2014) Sutopo Purwo Nugroho – leading spokesperson on issues about natural disasters in Indonesia Mohammad Sadli – Minister of Mineral Resources (1973–1978), Minister of Labor (1971–1973), Professor of Economics at the University of Indonesia Lolo Soetoro – geographer and stepfather of Barack Obama, the 44th President of the United States Budi Karya Sumadi – Minister of Transportation (2016–2019) & (2019–present) See also Education in Indonesia List of universities in Indonesia List of Gadjah Mada University people, including notable alumni Yogyakarta Principles References External links Official website (English version) Universities in Indonesia Educational institutions established in 1949 ASEAN University Network Universities using Problem-based learning Veterinary schools in Indonesia Forestry education 1949 establishments in Indonesia Education in Yogyakarta Universities in the Special Region of Yogyakarta Indonesian state universities
12856894
https://en.wikipedia.org/wiki/Qmodem
Qmodem
Qmodem was an MS-DOS shareware telecommunications program and terminal emulator. Qmodem was widely used to access bulletin boards in the 1980s and was well respected in the Bulletin Board System community. Qmodem was also known as Qmodem SST and QmodemPro. History Qmodem was developed by John Friel III in 1984 and sold as shareware through a company called The Forbin Project. Qmodem gained popularity very quickly because it was much faster and had many new features compared to PC-Talk, the dominant shareware IBM PC communications program of that time. Written in Borland Turbo Pascal, the application initially supported the Xmodem protocol and gradually added support for other protocols, such as the popular Zmodem protocol and CompuServe-specific protocols such as CIS-B and CIS-B+. Qmodem evolved to include features such as the ability to host a simple Bulletin Board System. The application was sold to Mustang Software in 1991, and in 1992 version 5 of the program was released. Qmodem Pro QmodemPro was the successor to Qmodem, developed by Mustang Software, Inc. Several versions were released for MS-DOS and for Microsoft Windows, with the final version being QmodemPro 2.1 for Windows 95 and Windows NT, released on July 7, 1997. QmodemPro continued to be sold by Mustang Software through 2000, when the rights to it were purchased by Quintus Corporation. Its status is now abandonware. Awards 1992 John Friel received the Dvorak Award for his development of Qmodem. 1994 Mustang Software, Inc., received the Dvorak Award for QmodemPro for Windows. Qodem An independent free software re-implementation of Qmodem for Unix-like systems called Qodem started development in 2003. Qodem is in active development and has features common to modern communications programs, such as Unicode display and support for the telnet and ssh network protocols. It has also been ported to Microsoft Windows. See also List of terminal emulators References External links Mustang Software, Inc. page: QmodemPro Qmodem release archive 1984 software Shareware DOS software Windows software Communication software Discontinued software Free communication software Free terminal emulators Software clones
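For readers unfamiliar with the family of file-transfer protocols named above, the following is a minimal sketch of classic XMODEM framing (128-byte blocks with a one-byte arithmetic checksum). It is an illustration of the general protocol family only, not of Qmodem's own code, and real implementations (and the later Zmodem) add acknowledgements, retries and CRC variants.

```python
# Minimal sketch of classic XMODEM framing: SOH, block number, its
# complement, a 128-byte data block padded with 0x1A, and a one-byte
# arithmetic checksum. Illustrative only; not Qmodem source code.
SOH, PAD = 0x01, 0x1A

def xmodem_packet(block_no: int, data: bytes) -> bytes:
    block = data.ljust(128, bytes([PAD]))[:128]      # pad/truncate to 128 bytes
    checksum = sum(block) % 256                      # simple arithmetic checksum
    header = bytes([SOH, block_no % 256, 255 - (block_no % 256)])
    return header + block + bytes([checksum])

pkt = xmodem_packet(1, b"Hello, BBS!")
print(len(pkt), pkt[:3].hex())   # 132 bytes per packet: 3 header + 128 data + 1 checksum
```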
31573777
https://en.wikipedia.org/wiki/TSS%20Manx%20Maid%20%281962%29
TSS Manx Maid (1962)
TSS (RMS) Manx Maid (II) was built by Cammell Laird at Birkenhead in 1962, and was the second ship in the Company's history to bear the name. Dimensions Tonnage 2724; length 325'; beam 50'; depth 18'; speed 21 knots; bhp 9,500. Construction costs were £1,087,000, making her the first vessel of the Isle of Man Steam Packet Company to cost over one million pounds. Manx Maid was launched by Mrs. A. Alexander at Birkenhead, on Tuesday 23 January 1962. Service life The "Maid", as she was always affectionately known, was certified for 1400 passengers and a crew of 60. In engineering terms she was very similar to her predecessor except for Babcock & Wilcox integral furnace boilers, installed instead of the sectional header type. Manx Maid was a great success and was of major importance in the history of the Isle of Man Steam Packet Company, as she was the first vessel to be designed as a car ferry; she had the capacity for up to 90 cars and light commercial vans. The design principle for vehicle loading was simple. A spiral set of ramps at the stern linked with the car deck, so that vehicles could be driven on or off from the appropriate level on departure or arrival. This patented system of ramps facilitated loading and unloading at any state of the tide, at any of the ports served by the company. Cars had been carried to the Isle of Man for many years prior to Manx Maid's arrival, but the considerable tidal range at Douglas necessitated taking vehicles on and off by crane, a slow and irksome process. Consequently, the carriage of cars had never reached large proportions. The decision to construct a new generation of car-ferrying vessels was taken by the company in 1959, and in 1960 a contract was placed with Cammell Laird. Manx Maid was launched on 23 January 1962. The design of the 'side-loader' with a spiral ramp at the stern was a unique feature of the Steam Packet Company's car ferries (Manx Maid, , and ). She was the first Company vessel to be fitted with anti-roll stabilisers. In November 1974 Manx Maid collided with the Fort Ann Jetty in Douglas Harbour during rough conditions. No one was hurt in the collision, but the vessel had to be dry-docked at Birkenhead. During her repairs she was the focus of an industrial dispute and only returned to service on 27 May 1975, just in time for the busy T.T. period. Manx Maid was the thirteenth vessel built for the Steam Packet by Cammell Laird since the first was delivered by the yard in 1910. In 1979 Manx Maid was fitted with a 500 horsepower bow thruster mechanism, similar to that fitted to her younger sister Ben-my-Chree the previous winter. Disposal With the introduction of Manx Line's ro-ro service (operated by ) between Douglas and Heysham, the inefficiency of the Steam Packet's side-loading car ferries became increasingly apparent, and the decision was made to retire both the Manx Maid and her younger sister Ben-my-Chree. Whilst their higher fuel consumption was initially cited as the reason for their disposal, the reality was that the steam plants were very expensive to maintain and simply not as efficient. The steamers averaged 9 tons of fuel on a Douglas–Liverpool trip, whilst the motor ships used less than 4. After over 20 years of reliable service, Manx Maid made her final sailing from Douglas on Sunday 9 September 1984, ten days before her younger sister. Gallery References Bibliography Chappell, Connery (1980). 
Island Lifeline T.Stephenson & Sons Ltd Ships of the Isle of Man Steam Packet Company Ships built on the River Mersey 1962 ships Ferries of the Isle of Man Steamships of the United Kingdom Merchant ships of the United Kingdom
18301841
https://en.wikipedia.org/wiki/Comparison%20of%20computer-aided%20design%20software
Comparison of computer-aided design software
The table below provides an overview of notable computer-aided design (CAD) software. It does not judge power, ease of use, or other user-experience aspects. The table does not include software that is still in development (beta software). For all-purpose 3D programs, see Comparison of 3D computer graphics software. CAD refers to a specific type of drawing and modelling software application that is used for creating designs and technical drawings. These can be 3D drawings or 2D drawings (like floor plans). See also 3D data acquisition and object reconstruction CAD/CAM in the footwear industry Comparison of CAD, CAM and CAE file viewers Comparison of EDA software List of CAx companies References Computer-aided design editors
2248859
https://en.wikipedia.org/wiki/Yoix
Yoix
In computer programming, Yoix is a high-level, general-purpose, interpreted, dynamic programming language. The Yoix interpreter is implemented using standard Java technology without any add-on packages and requires only a Sun-compliant JVM to operate. Initially developed by AT&T Labs researchers for internal use, it has been available as free and open source software since late 2000. History In 1998, Java technology was still emerging: the Swing toolkit was an add-on package; interruptible I/O, regular expressions, and a printf capability were not yet features; nor had Java Web Start been developed. Moreover, Java scripting languages were largely non-existent at that time: Groovy and JRuby had not yet been invented and Jython had just been created in late 1997. Browsers in 1998 had limited feature sets, were too unstable for production use in an 8-hour shift and were still fighting skirmishes in the Browser Wars. In this environment, Yoix technology was created in response to a pressing need for a reliable, easy to distribute and maintain, GUI front-end for a mission-critical application being developed within AT&T, namely its Global Fraud Management System, which to this day monitors and tracks fraud activity related to voice traffic on AT&T's expanding networks: wireline, wireless, and IP. Yoix technology was first released to the public in late 2000 under the Open Source Initiative Common Public License V1.0. The Yoix name came about partially from the fox hunting cry of encouragement to the hounds, partially to echo another familiar four-letter name that ends in ix, and partially to avoid too many false-positives in a Google search. Overview Yoix technology provides a pure Java programming language implementation of a general purpose dynamic programming language developed by researchers at AT&T Labs. Its syntax and grammar should be easy to learn for those familiar with the C programming language and Java. To an end-user, a Yoix application is indistinguishable from a Java application, but to the application developer Yoix should provide a simpler coding experience than working in Java directly, much like writing Perl code can be simpler than writing C code. Features The Yoix language is not an object oriented language, but makes use of over 165 object types that provide access to most of the standard Java classes. Because the Yoix interpreter is built entirely using Java technology, it means that Yoix applications are cross-platform, GUI-capable and both network and thread friendly, yet Yoix developers find themselves insulated from the more complex and error-prone parts of coding the same functionality directly in Java. It does not use reflection to access Java functionality and thus adds value by not only simplifying access to that functionality, but also improving application reliability by coding through both Java glitches and complicated Java features one-time, behind-the-scenes. The Yoix language includes safe pointers, addressing, declarations, and global and local variables. In addition to supporting native user functions, users can add their own builtin functions written in Java. Design The two central elements in the Yoix design are borrowed from the PostScript language: dictionaries as language components and permissions-protected dictionaries as exposed system components. Homage to the Tcl language and its exposure philosophy should also be given, though it did not have a direct influence. Another key Yoix design element involves pointers and addressing. 
Pointers and pointer arithmetic in the Yoix language is syntactically similar to what is found in the C language, but the Yoix implementation prevents using a pointer outside its bounds. In addition, the address operator always produces a valid, usable result. Overall, the Yoix design attempted to make the language easy to learn by programmers experienced with the C and Java languages. Applications The Yoix distribution includes the Yoix Web Application Instant Template (YWAIT), a software framework for building a Yoix web application. A Yoix web application resides on a web server and is downloaded piecemeal at run-time on an as-needed basis by Yoix interpreters running on client machines. This model, analogous to the familiar model of client web browsers downloading a website piecemeal as-needed at run-time, permits simple, efficient distribution and maintenance of applications and relies only on the ubiquitous web server and the Yoix interpreter. Building a web application using the YWAIT framework requires just a few standard Unix tools available in most modern operating systems, such as Linux or Mac OS X, or under Microsoft Windows with the help of add-on packages such as U/Win. The client side of a YWAIT-based application relies only on the Yoix interpreter and is thus platform independent, running wherever Java runs. Because the Yoix software development philosophy aims to keep things simple by eschewing the popular tendency for multiple embedded specialized languages and the YWAIT framework permits easy, incremental screen development in a simple, logical source tree hierarchy, development of a Yoix web application is reduced to the basics: a command prompt and a text editor. IDE enthusiasts may be nonplussed, but this Small Is Beautiful approach to software development has been highly effective in practice at AT&T. Data visualization In addition to its role as a tool for building GUI applications, Yoix technology supports several modes of data visualization. Data mining A data visualization module called YDAT (Yoix Data Analysis Tool) has been included in the public Yoix distribution since release 2.1.2. YDAT uses a data manager component to coordinate data display and filtering among its several visualization components that include an event plot, a graph drawing pane, histogram filters and tabular detail. YDAT is able to display graphs generated by the GraphViz graph drawing and layout tool, which is another open source tool freely available from AT&T Labs. YDAT is highly configurable at the Yoix language level. The image below is a screenshot of a Yoix YDAT instantiation, which in this example is being used to analyze vehicle auction transactions. Graph drawing Yoix technology provides good support for graph drawing. In addition to graph display mentioned above as part of the YDAT module, data types in the Yoix language support building, manipulating and traversing graph structures. Native Yoix functions support the DOT language output and a built-in DOT language parser to facilitate interaction with the GraphViz layout engines. Organizing cells of data The YChart data visualization toolkit was added to the Yoix distribution with release 2.2.0. YChart allows one to organize and display cells of data. Two interactive YChart applications contained in the Yoix distribution are a Periodic Table of the Elements and a Unicode Chart. A program to demonstrate using YChart with variable width cells, as might occur with a schedule, is also available in the Yoix distribution. 
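To make the graph drawing support described above more concrete, here is a minimal sketch of building the kind of DOT-language description that Yoix can emit for the GraphViz layout engines. It is written in Python purely for illustration (it is not Yoix code), and the node names are hypothetical.

```python
# Minimal sketch of emitting a DOT-language graph description,
# the format consumed by the GraphViz layout engines mentioned above.
# Illustrative only; not Yoix code, and the node names are made up.
edges = [("login", "query"), ("query", "report"), ("query", "alert")]

lines = ["digraph G {"]
for src, dst in edges:
    lines.append(f'    "{src}" -> "{dst}";')   # one directed edge per line
lines.append("}")

dot_text = "\n".join(lines)
print(dot_text)   # feed this text to a GraphViz tool such as `dot` to lay it out
```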
Interactive 2D graphics The Yoix distribution also includes a Yoix package, called Byzgraf, for rendering basic data plots such as line charts, histograms and statistical box plots. Limitations and focus As currently implemented, the Yoix language is interpreted, which means that, for example, it is probably not the right choice for computationally intensive applications unless one codes those computations in a Java module extension. Similarly, excessive looping will also display the limitations of this interpreted language. The focus of the language is interactive standalone or client/server GUI and data visualization applications. Licensing Yoix technology is free software licensed under the Open Source Initiative Common Public License. Yoix is a registered trademark of At&T Inc. Examples 1. Extract all HTML directives from the AT&T home page and write them to standard output. import yoix.*.*; URL att = open("https://www.att.com", "r"); String text; int cnt = 0; while (cnt >= 0) { if ((cnt = fscanf(att, " <%[^>]>", &text)) > 0) printf("<%s>\n", text); else cnt = fscanf(att, " %*[^<]"); // discard } 2. Build and display a GUI with two buttons in a titled frame (i.e., window) that also has a titled border. One button pops up a message when pressed, the other quits the example. The window is sized automatically to just fit its components, and some additional code calculates its location to put it in the center of the screen before making it visible. import yoix.*.*; JFrame jf = { Dimension size = NULL; // auto-size window FlowLayout layoutmanager = { int hgap = 18; // 0.25 inch gap }; String title = "Wikipedia Yoix Example"; // window title String border = "Simple Button Example"; // border title Array layout = { new JButton { String text = "Press for Message"; actionPerformed(ActionEvent ev) { showMessageDialog(root, "Hello, world.", "Message Example"); } }, new JButton { String text = "Press to Exit"; actionPerformed(ev) { // ActionEvent declaration can be omitted exit(0); } }, }; }; // set frame location to center of screen now that frame size is known jf.location = new Point { int x = (VM.screen.width - jf.size.width) / 2; int y = (VM.screen.height - jf.size.height) / 2; }; // make it visible jf.visible = TRUE; 3. The code shown here was used to generate the Yoix logo image in PNG format that can be seen in the language description box near the top of this page. Command-line arguments allow the size of the image to be specified as well as select between PNG image output or display in an on-screen window. 
import yoix.*.*; BuildYoixLogoImage(double height, Color color, int addshadow) { // create the basic image, without shadow GenImage(double height, Color color, Font imagefont, double scale) { Image yoixlogo = { int type = TYPE_RGB_ALPHA; Color imgcolor = color; double scale = scale; Font imagefont = imagefont; Font regfont = imagefont.scalefont(0.5, 0.5); Graphics graphics = { Font font = imagefont; int textantialiasing = TRUE; }; double ywd = stringWidth(graphics.font, "Y"); Dimension size = { double height = height; double width = ywd * 5.25; }; double owd = stringWidth(graphics.font, "o"); double iwd = stringWidth(graphics.font, "i"); double xwd = stringWidth(graphics.font, "x"); ywd += iwd; ywd /= 2.0; paint(Rectangle r) { double alpha = 1.0; double alpha2 = 0.3333; int limit = 12; graphics { gsave(); erasedrawable(0.0); // for transparent PNG rectclip(r); setrgbcolor(imgcolor.red, imgcolor.green, imgcolor.blue); translate(48 * this.scale, 44 * this.scale); for (n=0; n<limit; n++) { moveto(0.0, 0.0); setfont(this.imagefont); // "handmade" kerning show("Y", alpha); if (n == 0) { moveto(ywd, 0.0); show("o", alpha); moveto(ywd + owd - 0.3 * iwd, 0.0); show("i", alpha); moveto(ywd + owd + 0.8 * iwd, 0.0); show("x", alpha); moveto(ywd + owd + 0.8 * iwd + xwd, -this.imagefont.height * 0.33); setfont(this.regfont); show("\xAE", alpha); alpha = alpha2; } alpha *= 0.75; rotate(30); } grestore(); } } }; return(yoixlogo); } Font basefont = { String name = "ClearviewATT-plain-48"; }; double scale = height / 90.0; Font imagefont = basefont.scalefont(scale, scale); if (addshadow) { Image logo = GenImage(height, color, imagefont, scale); image = new Image { int type = TYPE_RGB_ALPHA; Image source = logo; Image img = logo; // convolve image to make a (lightened) shadow Image shadow = new Image { int type = TYPE_RGB_ALPHA; Image source = img; Array kernel = new Array[100]; Pointer ptr; for (ptr in kernel) *ptr = 0.0055; paint() { convolve(kernel); } }; // combine the image and shadow into one image paint(Rectangle r) { graphics { gsave(); moveto(0, 0); showimage(this.img); moveto(this.img.size.height * 0.005, this.img.size.height * 0.02); showimage(this.shadow); grestore(); } } }; } else { image = GenImage(height, color, imagefont, scale); } return(image); } // rudimentary argument processing (getopt is also available) // first argument is height of image double sz = (argc > 1) ? atof(argv[1]) : 270; int shdw = 1; int print = 0; // second argument: if 0/1 turn shadow off/on, otherwise // assume it is a filename for printing. if (argc > 2) { if (argv[2] =~ "^[01]$") { shdw = atoi(argv[2]); } else { print = 1; } } Image yoixlogo = BuildYoixLogoImage(sz, Color.black, (sz >= 72) && shdw); if (print) { Stream output; if ((output = open(argv[2], "w")) != NULL) { encodeImage(yoixlogo, "png", output); close(output); } } else { JFrame jf = { int visible = TRUE; Dimension size = NULL; Array layout = { new JPanel { Dimension preferredsize = { double width = yoixlogo.size.width; double height = yoixlogo.size.height; }; Color background = Color.white; Image backgroundimage = yoixlogo; int backgroundhints = SCALE_NONE; }, }; }; } References External links Archive of original AT&T Labs-Research: Yoix Home Page Web Engineering Workshop Paper Software - Practice & Experience Paper Free compilers and interpreters Procedural programming languages Scripting languages Dynamic programming languages Programming languages created in 2000 JVM programming languages Software using the CPL license
2785391
https://en.wikipedia.org/wiki/Microsoft%20Virtual%20Server
Microsoft Virtual Server
Microsoft Virtual Server was a virtualization solution that facilitated the creation of virtual machines on the Windows XP, Windows Vista and Windows Server 2003 operating systems. Originally developed by Connectix, it was acquired by Microsoft prior to release. Virtual PC is Microsoft's related desktop virtualization software package. Virtual machines are created and managed through a Web-based interface that relies on Internet Information Services (IIS) or through a Windows client application tool called VMRCplus. The last version using this name was Microsoft Virtual Server 2005 R2 SP1. New features in R2 SP1 include Linux guest operating system support, Virtual Disk Precompactor, SMP (but not for the guest OS), x64 host operating system support, the ability to mount virtual hard drives on the host machine and additional operating system support, including Windows Vista. It also provides a Volume Shadow Copy writer that enables live backups of the Guest OS on a Windows Server 2003 or Windows Server 2008 host. A utility to mount VHD images has also been included since SP1. Virtual Machine Additions for Linux are available as a free download. Officially supported Linux guest operating systems include Red Hat Enterprise Linux versions 2.1-5.0, Red Hat Linux 9.0, SUSE Linux and SUSE Linux Enterprise Server versions 9 and 10. Virtual Server has been discontinued and replaced by Hyper-V. Differences from Virtual PC VPC has multimedia support and Virtual Server does not (e.g. no sound driver support). VPC uses a single thread whereas Virtual Server is multi-threaded. VPC will install on Windows 7, but Virtual Server cannot be installed on NT 6.1 or higher operating systems, i.e. Server 2008 R2 and Windows 7. VPC is limited to 127 GB .vhd files (per the IDE CHS specification), whereas Virtual Server can be made to access .vhd files up to 2048 GB (the NTFS maximum file size). Version history Microsoft acquired an unreleased Virtual Server from Connectix in February 2003. The initial release of Microsoft's Virtual Server, general availability, was announced on September 13, 2004. Virtual Server 2005 was available in two editions: Standard and Enterprise. The Standard edition was limited to a maximum of 4 processors for the host operating system while the Enterprise edition was not. On April 3, 2006, Microsoft made Virtual Server 2005 R2 Enterprise Edition a free download, in order to better compete with the free virtualization offerings from VMware and Xen, and discontinued the Standard Edition. Microsoft Virtual Server R2 SP1 added support for both Intel VT (IVT) and AMD Virtualization (AMD-V). Limitations Known limitations of Virtual Server include the following: Will not install on Windows 7 and Server 2008 R2 or newer operating systems. Upgrades from Vista/Server 2008 can be patched. Although Virtual Server 2005 R2 can run on hosts with x86-64 processors, it cannot run x64 guests that require x86-64 processors (guests cannot be 64-bit). It also makes use of SMP, but does not virtualize it (it does not allow guests to use more than 1 CPU each). Performance may suffer due to the way the instruction set is virtualized in this platform, with very limited direct interaction with the host hardware. 
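As a rough back-of-the-envelope check of the 127 GB .vhd figure mentioned above, the arithmetic below assumes the commonly cited maximum CHS geometry of 65535 cylinders, 16 heads and 255 sectors of 512 bytes; the geometry values are an assumption used for illustration, not a quotation from the VHD specification.

```python
# Rough arithmetic behind the ~127 GB CHS ceiling discussed above.
# Geometry values are the commonly cited ATA/CHS maxima, used here as
# an illustrative assumption rather than quoted from the VHD spec.
cylinders, heads, sectors, sector_bytes = 65535, 16, 255, 512

total_bytes = cylinders * heads * sectors * sector_bytes
print(total_bytes)            # 136899993600 bytes
print(total_bytes / 2**30)    # about 127.5 GiB
```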
See also Virtual appliance Virtual disk image x86 virtualization References External links Benchmarking Microsoft Virtual Server 2005 Benchmarking VMware ESX Server 2.5 vs Microsoft Virtual Server 2005 Enterprise Edition HyperAdmin: Microsoft Virtual Server Management Microsoft Server Virtualization for government agencies Microsoft Virtual Server 2005 R2 Resource Kit Book Microsoft Virtual Server 2005 R2 SP1 System Center Virtual Machine Manager System requirements for Microsoft Virtual Server 2005 Virtualization software Virtual Server
1977241
https://en.wikipedia.org/wiki/Commodore%2064%20Games%20System
Commodore 64 Games System
The Commodore 64 Games System (often abbreviated C64GS) is the cartridge-based home video game console version of the popular Commodore 64 home computer. It was released in December 1990 by Commodore into a booming console market dominated by Nintendo and Sega. It was only released in Europe and was a considerable commercial failure. The C64GS came bundled with a cartridge that featured four games: Fiendish Freddy's Big Top O'Fun, International Soccer, Flimbo's Quest and Klax. The C64GS was not Commodore's first gaming system based on the C64 hardware. However, unlike the 1982 MAX Machine (a game-oriented computer based on a very cut-down version of the same hardware family), the C64GS is internally very similar to the complete C64, with which it is compatible. Available software Support from games companies was limited, as many were unconvinced that the C64GS would be a success in the console market. Ocean Software was the most supportive, offering a wide range of titles, some C64GS cartridge-based only, offering features in games that would have been impossible on cassette-based games, others straight ports of games for the original C64. Domark and System 3 also released a number of titles for the system, and conversions of some Codemasters and MicroProse games also appeared. Denton Designs also released some games, among them Bounces, which was released in 1985. The software bundled with the C64GS, a four-game cartridge containing Fiendish Freddy's Big Top O'Fun, International Soccer, Flimbo's Quest and Klax, were likely the most well known on the system. These games, with the exception of International Soccer, were previously ordinary tape-based games, but their structure and control systems (no keyboard needed) made them well-suited to the new console. International Soccer was previously released in 1983 on cartridge for the original C64 computer. Ocean produced a number of games for the C64GS, among them a remake of Double Dragon (which was only sold at trade shows), Navy SEALS, RoboCop 2, RoboCop 3, Chase HQ 2: Special Criminal Investigation, Pang, Battle Command, Toki, Shadow of the Beast and Lemmings. They also produced Batman The Movie for the console, but this was a direct conversion of the cassette game, evidenced by the screens prompting the player to "press PLAY" that briefly appeared between levels. Some of the earliest Ocean cartridges had a manufacturing flaw, where the connector was placed too far back in the cartridge case. The end result was that the cartridge could not be used with the standard C64 computer. Members of Ocean staff had to manually drill holes in the side of the cartridges to make them fit. System 3 released Last Ninja Remix and Myth: History in the Making, although both were also available on cassette. Domark also offered two titles, Badlands and Cyberball, which were available on cartridge only. Through publisher The Disc Company, a number of Codemasters and MicroProse titles were also reworked and released as compilations for the C64GS. Fun Play featured three Codemasters titles: Fast Food, Professional Skateboard Simulator and Professional Tennis Simulator. Power Play featured three MicroProse titles: Rick Dangerous, Stunt Car Racer and MicroProse Soccer, although Rick Dangerous was produced by Core Design, not MicroProse themselves. Stunt Car Racer and MicroProse Soccer needed to be heavily modified to enable them to run on the C64GS. Commodore never produced or published a single title for the C64GS beyond the bundled four-game cartridge. 
International Soccer was the only widely available game for the C64GS but had actually been written for the C64. Hardware-based problems The C64GS was plagued with problems from the outset. Firstly, despite the wealth of software already available on cartridge for C64, the lack of a keyboard means that most cannot be used with the console. This means that much of the cartridge-based C64 software, while fundamentally compatible with the C64GS, was unplayable. The standard C64 version of Terminator 2: Judgment Day was designed for the console, but was included on a cartridge that required the user to press a key in the initial menu to access the game, rendering it unplayable, despite the game itself being entirely playable with joystick only on a conventional C64. To partially compensate for the lack of a keyboard, the basic control system for the C64GS was a joystick supplied by Cheetah called the Annihilator. This joystick, while using the standard Atari 9-pin plug, offers two independent buttons, with the second button located on the base of the joystick. The joystick standard is fundamentally compatible with the ZX Spectrum's Kempston Interface and the Sega Master System, but no other joystick on the market offered compatibility with the proprietary second-button function. Standard C64 joysticks and Sega Master System controllers were fundamentally supported, but the lack of second-button support (the Sega Master System's second button did not function in the same way) meant that the Cheetah Annihilator was essential for playing certain titles such as Last Ninja Remix and Chase HQ 2. However, it was poorly built, had a short life, and was not widely available, making replacements difficult to come by. Primary reasons for failure Prior to the console's release, Commodore had generated a great deal of marketing hype to drum up interest in an already crowded market. Zzap!64 and Your Commodore, Commodore 64 magazines of the era, reported that Commodore had promised "up to 100 titles before December", even though December was two months from the time of its writing. In reality 28 games were produced for the console during its shelf life - most of which were compilations of older titles, and a majority of which were from Ocean. Of those 28 titles, only 9 were cartridge-exclusive titles, the remainder being ports of older cassette-based games. While most of the titles that Ocean announced did appear for the GS (with the notable exception of Operation Thunderbolt), a number of promises from other publishers failed to materialize. Although Thalamus, The Sales Curve, Mirrorsoft and Hewson had expressed an interest, nothing ever materialized from these firms. Similar problems plagued rival company Amstrad when they released their GX4000 console the same year. There were other reasons attributed to the failure of the C64GS, the major ones being the following: Poor software support: Most of the existing software on cartridge did not function well with the C64GS, and enthusiasm from publishers was low. Ocean Software, Codemasters, System 3, MicroProse and Domark developed titles for the system, but probably only because the games were compatible with the original C64, providing the titles with a commercial safety net in case the C64GS failed. And failure to reprogram the games for use with the cut-back system was another blame for the fault. 
The C64 computer: The C64GS was essentially a cut-back version of the original Commodore 64, and the games developed for it could also be run on the original computer. The C64 was already at an affordable price, and the C64GS was sold for the same. People preferred the original C64, particularly since the cassette versions of games could often be picked up for a fraction of the cost of the cartridge versions. Obsolete technology: The C64 was introduced in 1982. An already saturated console market: The 8-bit C64GS entered the market in 1990, parallel to 16-bit fourth generation consoles such as the Mega Drive and the Super Nintendo. The Nintendo Entertainment System and Sega Master System were already dominating the market with more popular titles, and did so until around 1992. TV hookup, joystick support and cartridge slots were already found on regular C64 machines. Hence normal C64s were already recognized as "game consoles" despite actually being home computers with integrated keyboards. Commodore eventually shipped the four-game cartridge and Cheetah Annihilator joysticks in a "Playful Intelligence" bundle with the standard Commodore 64C computer. Several years later, Commodore's next attempt at a games console, the Amiga CD32, encountered many of the same problems. Technical specifications The specifications of the C64GS are a subset of those of the regular C64; the main differences being the omission of the user port, serial interface, and cassette port. Since the system board is a regular C64C board these ports are actually present, but simply not exposed at the rear. See also Commodore 64 Commodore MAX Machine References External links "The C64 Console!" / "Inside the future: The C64GS" โ€“ By Ed Stu, Zzap 64 magazine, issue 66, October 1990 The Commodore C64 Games System โ€“ Photos and information from Bo Zimmermann's collection 8Bit-Homecomputermuseum โ€“ Nice pictures of the C64GS Third-generation video game consoles Commodore 64 Products introduced in 1990 1990s toys
14686186
https://en.wikipedia.org/wiki/Security%20Support%20Provider%20Interface
Security Support Provider Interface
Security Support Provider Interface (SSPI) is a component of the Windows API that performs security-related operations such as authentication. SSPI functions as a common interface to several Security Support Providers (SSPs): A Security Support Provider is a dynamic-link library (DLL) that makes one or more security packages available to apps. Providers The following SSPs are included in Windows: NTLMSSP (msv1_0.dll) – Introduced in Windows NT 3.51. Provides NTLM challenge/response authentication for Windows domains prior to Windows 2000 and for systems that are not part of a domain. Kerberos (kerberos.dll) – Introduced in Windows 2000 and updated in Windows Vista to support AES. Performs authentication for Windows domains in Windows 2000 and later. NegotiateSSP (secur32.dll) – Introduced in Windows 2000. Provides single sign-on capability, sometimes referred to as Integrated Windows Authentication (especially in the context of IIS). Prior to Windows 7, it tries Kerberos before falling back to NTLM. On Windows 7 and later, NEGOExts is introduced, which negotiates the use of installed custom SSPs which are supported on the client and server for authentication. Secure Channel (schannel.dll) – Introduced in Windows 2000 and updated in Windows Vista to support stronger AES encryption and ECC. This provider uses SSL/TLS records to encrypt data payloads. TLS/SSL – Public key cryptography SSP that provides encryption and secure communication for authenticating clients and servers over the internet. Updated in Windows 7 to support TLS 1.2. Digest SSP (wdigest.dll) – Introduced in Windows XP. Provides challenge/response-based HTTP and SASL authentication between Windows and non-Windows systems where Kerberos is not available. CredSSP (credssp.dll) – Introduced in Windows Vista and available on Windows XP SP3. Provides single sign-on and Network Level Authentication for Remote Desktop Services. Distributed Password Authentication (DPA, msapsspc.dll) – Introduced in Windows 2000. Provides internet authentication using digital certificates. Public Key Cryptography User-to-User (PKU2U, pku2u.dll) – Introduced in Windows 7. Provides peer-to-peer authentication using digital certificates between systems that are not part of a domain. Comparison SSPI is a proprietary variant of the Generic Security Services Application Program Interface (GSSAPI) with extensions and Windows-specific data types. It shipped with Windows NT 3.51 and Windows 95 with the NTLMSSP. For Windows 2000, an implementation of Kerberos 5 was added, using token formats conforming to the official protocol standard RFC 1964 (The Kerberos 5 GSSAPI mechanism) and providing wire-level interoperability with Kerberos 5 implementations from other vendors. The tokens generated and accepted by the SSPI are mostly compatible with the GSS-API, so an SSPI client on Windows may be able to authenticate with a GSS-API server on Unix depending on the specific circumstances. One significant shortcoming of SSPI is its lack of channel bindings, which makes some GSSAPI interoperability impossible. Another fundamental difference between the IETF-defined GSSAPI and Microsoft's SSPI is the concept of "impersonation". In this model, a server can operate with the full privileges of the authenticated client, so that the operating system performs all access control checks, e.g. when opening new files. Whether these privileges are fewer or greater than those of the original service account depends entirely on the client. 
In the traditional (GSSAPI) model, when a server runs under a service account, it cannot elevate its privileges, and has to perform access control in a client-specific and application-specific fashion. The obvious negative security implications of the impersonation concept are prevented in Windows Vista by restricting impersonation to selected service accounts. Impersonation can be implemented in a Unix/Linux model using the seteuid or related system calls. While this means an unprivileged process cannot elevate its privileges, it also means that to take advantage of impersonation the process must run in the context of the root user account. References External links SSPI Reference on MSDN SSPI Information and Win32 samples Example of use of SSPI for HTTP authentication Microsoft application programming interfaces Microsoft Windows security technology Transport Layer Security implementation
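As a rough illustration of the Unix impersonation idea described above, the following is a minimal sketch (unrelated to SSPI itself) of how a root-owned process might temporarily adopt a client's identity via seteuid; the user name and file path are hypothetical, and error handling is omitted.

```python
# Minimal sketch of Unix-style impersonation with seteuid, as noted above.
# Must be started as root (EUID 0); the user name below is hypothetical.
import os
import pwd

def run_as_user(username, action):
    """Temporarily impersonate `username` for the duration of `action`."""
    uid = pwd.getpwnam(username).pw_uid   # look up the client's numeric UID
    os.seteuid(uid)                       # adopt the client's effective identity
    try:
        return action()                   # the OS now enforces the client's permissions
    finally:
        os.seteuid(0)                     # restore the original (root) identity

# Hypothetical usage: read a file with alice's permissions rather than root's.
# run_as_user("alice", lambda: open("/home/alice/data.txt").read())
```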
36513581
https://en.wikipedia.org/wiki/Zerto
Zerto
Zerto Ltd., through its main product the Zerto IT Resilience Platform, provides disaster recovery, backup and workload mobility software for virtualized infrastructures and cloud environments. Zerto is co-headquartered in Boston and Israel and is a subsidiary of Hewlett Packard Enterprise. History Ziv Kedem, Zerto's founder and CEO, previously co-founded Kashya. Zerto has received investments from venture capital firms such as 83North (formerly Greylock IL), Battery Ventures, Harmony Partners, RTP Ventures, IVP, and USVP. In 2016, the company was ranked #45 on the Deloitte Fast 500 North America list. Zerto IT Resilience Platform 6.0 was a Silver Winner in the Backup and Disaster Recovery Software category in Storage Magazine and SearchStorage's 2018 Product of the Year. Zerto was bought by Hewlett Packard Enterprise in 2021 for $374 million. Products Zerto provides disaster recovery software for virtualized and cloud infrastructures. The company's original product, Zerto Virtual Replication, was released in August 2011. The technology leverages 'hypervisor-based replication', which moves data replication up the server stack from the storage layer into the hypervisor. Zerto is entirely hypervisor- and storage-agnostic, so data can be replicated to and from any VM, even across different platforms. ZVR initially did not support Microsoft Azure. References Disaster recovery Software companies established in 2009 Software companies of Israel Software companies based in Massachusetts VMware Hewlett-Packard Enterprise acquisitions 2021 mergers and acquisitions Israeli companies established in 2009 2009 establishments in Massachusetts
557097
https://en.wikipedia.org/wiki/International%20Alphabet%20of%20Sanskrit%20Transliteration
International Alphabet of Sanskrit Transliteration
The International Alphabet of Sanskrit Transliteration (IAST) is a transliteration scheme that allows the lossless romanisation of Indic scripts as employed by Sanskrit and related Indic languages. It is based on a scheme that emerged during the nineteenth century from suggestions by Charles Trevelyan, William Jones, Monier Monier-Williams and other scholars, and formalised by the Transliteration Committee of the Geneva Oriental Congress, in September 1894. IAST makes it possible for the reader to read the Indic text unambiguously, exactly as if it were in the original Indic script. It is this faithfulness to the original scripts that accounts for its continuing popularity amongst scholars. Usage Scholars commonly use IAST in publications that cite textual material in Sanskrit, Pāḷi and other classical Indian languages. IAST is also used for major e-text repositories such as SARIT, Muktabodha, GRETIL, and sanskritdocuments.org. The IAST scheme represents more than a century of scholarly usage in books and journals on classical Indian studies. By contrast, the ISO 15919 standard for transliterating Indic scripts emerged in 2001 from the standards and library worlds. For the most part, ISO 15919 follows the IAST scheme, departing from it only in minor ways (e.g., ṃ/ṁ and ṛ/r̥); see the comparison below. The Indian National Library at Kolkata romanization, intended for the romanisation of all Indic scripts, is an extension of IAST. Inventory and conventions The IAST letters are listed with their Devanagari equivalents and phonetic values in IPA, valid for Sanskrit, Hindi and other modern languages that use Devanagari script, but some phonological changes have occurred: Some letters are modified with diacritics: Long vowels are marked with an overline. Vocalic (syllabic) consonants, retroflexes and ṣ () have an underdot. One letter has an overdot: ṅ (). One has an acute accent: ś (). Unlike ASCII-only romanizations such as ITRANS or Harvard-Kyoto, the diacritics used for IAST allow capitalization of proper names. The capital variants of letters never occurring word-initially () are useful only when writing in all-caps and in Pāṇini contexts for which the convention is to typeset the IT sounds as capital letters. Comparison with ISO 15919 For the most part, IAST is a subset of ISO 15919 that merges: the retroflex (underdotted) liquids with the vocalic ones (ringed below); and the short close-mid vowels with the long ones. The following seven exceptions are from the ISO standard accommodating an extended repertoire of symbols to allow transliteration of Devanāgarī and other Indic scripts, as used for languages other than Sanskrit. Computer input by alternative keyboard layout The most convenient method of inputting romanized Sanskrit is by setting up an alternative keyboard layout. This allows one to hold a modifier key to type letters with diacritical marks. For example, = ā. How this is set up varies by operating system. Linux/Unix and BSD desktop environments allow one to set up custom keyboard layouts and switch them by clicking a flag icon in the menu bar. macOS One can use the pre-installed US International keyboard, or install Toshiya Unebe's Easy Unicode keyboard layout. Microsoft Windows Windows also allows one to change keyboard layouts and set up additional custom keyboard mappings for IAST. 
This Pali keyboard installer made by Microsoft Keyboard Layout Creator (MSKLC) supports IAST (works on Microsoft Windows up to at least version 10, can use Alt button on the right side of the keyboard instead of Ctrl+Alt combination). Computer input by selection from a screen Many systems provide a way to select Unicode characters visually. ISO/IEC 14755 refers to this as a screen-selection entry method. Microsoft Windows has provided a Unicode version of the Character Map program (find it by hitting then type charmap then hit ) since version NT 4.0 – appearing in the consumer edition since XP. This is limited to characters in the Basic Multilingual Plane (BMP). Characters are searchable by Unicode character name, and the table can be limited to a particular code block. More advanced third-party tools of the same type are also available (a notable freeware example is BabelMap). macOS provides a "character palette" with much the same functionality, along with searching by related characters, glyph tables in a font, etc. It can be enabled in the input menu in the menu bar under System Preferences → International → Input Menu (or System Preferences → Language and Text → Input Sources) or can be viewed under Edit → Emoji & Symbols in many programs. Equivalent tools – such as gucharmap (GNOME) or kcharselect (KDE) – exist on most Linux desktop environments. Users of SCIM on Linux-based platforms can also have the opportunity to install and use the sa-itrans-iast input handler which provides complete support for the ISO 15919 standard for the romanization of Indic languages as part of the m17n library. Font support Only certain fonts support all the Latin Unicode characters essential for the transliteration of Indic scripts according to the ISO 15919 standard. For example, the Arial, Tahoma and Times New Roman font packages that come with Microsoft Office 2007 and later versions also support precomposed Unicode characters like ā, ḍ, ḥ, ī, ḷ, ḹ, ṃ, ñ, ṅ, ṇ, ṛ, ṝ, ṣ, ś, ṭ and ū, glyphs for some of which are only to be found in the Latin Extended Additional Unicode block. The majority of other text fonts commonly used for book production are defective in their support for one or more characters from this block. Accordingly, many academics working in the area of Sanskrit studies now make use of free and open-source software like LibreOffice, instead of Microsoft Word, in conjunction with free OpenType fonts like FreeSerif or Gentium, both of which have complete support for the full repertoire of conjoined diacritics in the IAST character set. Released under the GNU FreeFont or SIL Open Font License, respectively, such fonts may be freely shared and do not require the person reading or editing a document to purchase proprietary software to make use of its associated fonts. See also Devanagari transliteration Āryabhaṭa numeration Hunterian transliteration Harvard-Kyoto ITRANS National Library at Kolkata romanization ISO 15919 Shiva Sutra Template:IAST References External links Sanskrit Pronunciation Tips for beginners & Simple Charts to help memorize where the diacritics fit in. - pages from Dina-Anukampana Das Hindustani orthography Romanization of Brahmic Sanskrit transliteration Encodings
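To illustrate the contrast drawn above between IAST and ASCII-only schemes such as Harvard-Kyoto, here is a minimal Python sketch that maps a few Harvard-Kyoto letters to their IAST equivalents. Only a small, illustrative subset of the mapping is shown; a complete converter would also have to handle multi-character sequences and context.

```python
# Partial, illustrative Harvard-Kyoto -> IAST mapping; not a full converter.
HK_TO_IAST = {
    "A": "\u0101",  # ā  long a
    "I": "\u012B",  # ī  long i
    "U": "\u016B",  # ū  long u
    "R": "\u1E5B",  # ṛ  vocalic r
    "M": "\u1E43",  # ṃ  anusvāra
    "H": "\u1E25",  # ḥ  visarga
    "z": "\u015B",  # ś  palatal sibilant
    "S": "\u1E63",  # ṣ  retroflex sibilant
    "T": "\u1E6D",  # ṭ  retroflex t
    "N": "\u1E47",  # ṇ  retroflex n
}

def hk_to_iast(text: str) -> str:
    """Replace single Harvard-Kyoto letters with their IAST equivalents."""
    return "".join(HK_TO_IAST.get(ch, ch) for ch in text)

print(hk_to_iast("saMskRta"))  # -> saṃskṛta
```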
1833304
https://en.wikipedia.org/wiki/Ben%20Shneiderman
Ben Shneiderman
Ben Shneiderman (born August 21, 1947) is an American computer scientist, a Distinguished University Professor in the University of Maryland Department of Computer Science, which is part of the University of Maryland College of Computer, Mathematical, and Natural Sciences at the University of Maryland, College Park, and the founding director (1983-2000) of the University of Maryland Human-Computer Interaction Lab. He conducted fundamental research in the field of humanโ€“computer interaction, developing new ideas, methods, and tools such as the direct manipulation interface, and his eight rules of design. Early life and education Born in New York, Shneiderman, attended the Bronx High School of Science, and received a BS in Mathematics and Physics from the City College of New York in 1968. He then went on to study at the State University of New York at Stony Brook, where he received an MS in Computer Science in 1972 and graduated with a PhD in 1973. Career Shneiderman started his academic career at the State University of New York at Farmingdale in 1968 as instructor at the Department of Data Processing. In the last year before his graduation he was instructor at the Department of Computer Science of Stony Brook University (then called State University of New York at Stony Brook). In 1973 he was appointed Assistant Professor at the Indiana University, Department of Computer Science. In 1976 he moved to the University of Maryland. He started out as Assistant Professor in its Department of Information Systems Management, and became Associate Professor in 1979. In 1983 he moved to its Department of Computer Science as Associate Professor, and was promoted to full professor in 1989. In 1983 he was the Founding Director of its Human-Computer Interaction Lab, which he directed until 2000. In 2002 his book Leonardo's Laptop: Human Needs and the New Computing Technologies was Winner of an IEEE-USA Award for Distinguished Contributions Furthering Public Understanding of the Profession. His 2016 book, The New ABCs of Research: Achieving Breakthrough Collaborations, encourages applied and basic research to be combined. In 2019, he published Encounters with HCI Pioneers: A Personal History and Photo Journal, and Human-Centered AI in 2022. Awards and honors Shneiderman was inducted as a Fellow of the Association for Computing Machinery in 1997, a Fellow of the American Association for the Advancement of Science in 2001, a Member of the National Academy of Engineering in 2010, an IEEE Fellow in 2012, and a Fellow of the National Academy of Inventors in 2015. He is an ACM CHI Academy Member and received their Lifetime Achievement Award in 2001. He received the IEEE Visualization Career Award in 2012 and was inducted into the IEEE VIS Academy in 2019. In 2021 he received the InfoVis Conference Test of Time Award with co-authors Ben Bederson and Martin M. Wattenberg. He received Honorary Doctorates from the University of Guelph (Canada) in 1995, the University of Castile-La Mancha (Spain) in 2010, Stony Brook University in 2015, the University of Melbourne in 2017, Swansea University (in Wales, UK) in 2018, and the University of Pretoria (in South Africa) in 2018. Personal life Shneiderman resides in Bethesda, Maryland. He is the nephew of photographer David Seymour. 
Work Nassiโ€“Shneiderman diagram In the 1973 article "Flowchart techniques for structured programming" presented at a 1973 SIGPLAN meeting Isaac Nassi and Shneiderman argued: The new model technique for structured programming they presented has become known as the Nassiโ€“Shneiderman diagram; a graphical representation of the design of structured software. Flowchart research In the 1970s Shneiderman continued to study programmers, and the use of flow charts. In the 1977 article "Experimental investigations of the utility of detailed flowcharts in programming" Shneiderman et al. summarized the origin and status quo of flowcharts in computer programming: Flowcharts have been a part of computer programming since the introduction of computers in the 1940s. In 1947 Goldstein and von Neumann [7] presented a system of describing processes using operation, assertion, and alternative boxes. They felt that "coding begins with the drawing of flow diagram." Prior to coding, the algorithm had been identified and understood. The flowchart represented a high level definition of the solution to be implemented on a machine. Although they were working only with numerical algorithms, they proposed a programming methodology which has since become standard practice in the computer programming field. Furthermore, Shneiderman had conducted experiments which suggested that flowcharts were not helpful for writing, understanding, or modifying computer programs. At the end of their 1977 paper, Shneiderman et al. concluded: Although our original intention was to ascertain under which conditions detailed flowcharts were most helpful, our repeated negative results have led us to a more skeptical opinion of the utility of detailed flowcharts under modern programming conditions. We repeatedly selected problems and tried to create test conditions which would favor the flowchart groups, but found no statistically significant differences between the flowchart and non-flowchart groups. In some cases the mean scores for the non-flowchart groups even surpassed the means for the flowchart groups. We conjecture that detailed flowcharts are merely a redundant presentation of the information contained in the programming language statements. The flowcharts may even be at a disadvantage because they are not as complete (omitting declarations, statement labels, and input/output formats) and require many more pages than do the concise programming language statements. Designing the User Interface In 1986, he published the first edition (now on its sixth edition) of his book "Designing the User Interface: Strategies for Effective Human-Computer Interaction." Included in this book is his most popular list of "Eight Golden Rules of Interface Design," which read: Strive for consistency. Consistent sequences of actions should be required in similar situations ... Enable frequent users to use shortcuts. As the frequency of use increases, so do the user's desires to reduce the number of interactions ... Offer informative feedback. For every operator action, there should be some system feedback ... Design dialog to yield closure. Sequences of actions should be organized into groups with a beginning, middle, and end ... Offer simple error handling. As much as possible, design the system so the user cannot make a serious error ... Permit easy reversal of actions. This feature relieves anxiety, since the user knows that errors can be undone ... Support internal locus of control. 
Experienced operators strongly desire the sense that they are in charge of the system and that the system responds to their actions. Design the system to make users the initiators of actions rather than the responders. Reduce short-term memory load. The limitation of human information processing in short-term memory requires that displays be kept simple, multiple page displays be consolidated, window-motion frequency be reduced, and sufficient training time be allotted for codes, mnemonics, and sequences of actions. These guidelines are frequently taught in courses on Human-Computer Interaction. The Craft of Information Visualization: Readings and Reflections, 2003 In 2003, Ben Bederson and Shneiderman coauthored the book "The Craft of Information Visualization: Readings and Reflections". Included in Chapter 8: Theories for Understanding Information Visualization in this book are five goals of theories for HCI practitioners and researchers, which read: The typical goals of theories are to enable practitioners and researchers to: Describe objects and actions in a consistent and clear manner to enable cooperation Explain processes to support education and training Predict performance in normal and novel situations so as to increase the chances of success Prescribe guidelines, recommend best practices, and caution about dangers Generate novel ideas to improve research and practice. These goals are frequently taught in courses on Human-Computer Interaction and cited in works by authors such as Yvonne Rogers, Victor Kaptelinin, and Bonnie Nardi. Direct manipulation interface Shneiderman's cognitive analysis of user needs led to principles of direct manipulation interface design in 1982: (1) continuous representation of the objects and actions, (2) rapid, incremental, and reversible actions, and (3) physical actions and gestures to replace typed commands, which enabled designers to craft more effective graphical user interfaces. He applied those principles to design innovative user interfaces such as the highlighted selectable phrases in text, that were used in the commercially successful Hyperties. Hyperties was used to author the world's first electronic scientific journal issue, which was the July 1988 issue of the Communications of the ACM with seven papers from the 1987 Hypertext conference. It was made available as a floppy disk accompanying the printed journal. Tim Berners-Lee cited this disk as the source for his "hot spots" in his Spring 1989 manifesto for the World Wide Web. Hyperties was also used to create the world's first commercial electronic book, Hypertext Hands-On! in 1988. Direct manipulation concepts led to touchscreen interfaces for home controls, finger-painting, and the now ubiquitous small touchscreen keyboards. The development of the "Lift-off strategy" by University of Maryland Humanโ€“Computer Interaction Lab (HCIL) researchers enabled users to touch the screen, getting feedback as to what will be selected, adjust their finger position, and complete the selection by lifting the finger off the screen. The HCIL team applied direct manipulation principles for touchscreen home automation systems, finger-painting programs, and the double-box range sliders that gained prominence by their inclusion in Spotfire. The visual presentation inherent in direct manipulation emphasized the opportunity for information visualization. In 1997, Pattie Maes and Shneiderman had a public debate on Direct Manipulation vs. 
Interface Agents at CHI'97 and IUI 1997 (with the IUI Proceedings showing two separate papers but no remaining internet trace of the panel.) Those events helped define the two current dominant themes in human-computer interaction: direct human control of computer operations via visual user interfaces vs delegation of control to interface agents that know the users desires and act on their behalf, thereby requiring less human attention. Their debate continues to be highly cited (with 479 citations in January 2022 for the original CHI debate), especially in user interface design communities where return debates took place at the ACM CHI 2017 and ACM CHI 2021 conferences. Information visualization His major work in recent years has been on information visualization, originating the treemap concept for hierarchical data. Treemaps are implemented in most information visualization tools including Spotfire, Tableau Software, QlikView, SAS, JMP, and Microsoft Excel. Treemaps are included in hard drive exploration tools, stock market data analysis, census systems, election data, gene expression, and data journalism. The artistic side of treemaps are on view in the Treemap Art Project. He also developed dynamic queries sliders with multiple coordinated displays that are a key component of Spotfire, which was acquired by TIBCO in 2007. His work continued on visual analysis tools for time series data, TimeSearcher, high dimensional data, Hierarchical Clustering Explorer, and social network data, SocialAction. Shneiderman contributed to the widely used social network analysis and visualization tool NodeXL. Current work deals with visualization of temporal event sequences, such as found in Electronic Health Records, in systems such as LifeLines2 and EventFlow. These tools visualize the categorical data that make up a single patient history and they present an aggregated view that enables analysts to find patterns in large patient history databases. Taxonomy of interactive dynamics for visual analysis, 2012 In 2012, Jeffrey Heer and Shneiderman coauthored the article โ€œInteractive Dynamics for Visual Analysisโ€ in Association for Computing Machinery Queue vol. 10, no. 2. Included in this article is a taxonomy of interactive dynamics to assist researchers, designers, analysts, educators, and students in evaluating and creating visual analysis tools. The taxonomy consists of 12 task types grouped into three high-level categories, as shown below. Universal usability He also defined the research area of universal usability to encourage greater attention to diverse users, languages, cultures, screen sizes, network speeds, and technology platforms. Human-Centered AI The current topic of Shneiderman's Scholarship is Human-Centered Artificial Intelligence Shneiderman proposes an alternative vision of AI which focuses on the need for reliable, safe and trustworthy systems that enable people to benefit from the power of AI while remaining in control. Shneiderman emphasizes the need for technologies that "augment, amplify, empower, and enhance humans rather than replace them". Publications List of articles: Shneiderman, Ben, Human-Centered AI, Oxford University Press, 2022 Shneiderman, Ben. The New ABCs of Research: Achieving Breakthrough Collaborations; Oxford University Press, 2016. Shneiderman, Ben. Software Psychology: Human Factors in Computer and Information Systems; Little, Brown and Co, 1980. Shneiderman, Ben. 
Designing the User Interface: Strategies for Effective Humanโ€“Computer Interaction, 1st edition. Addison-Wesley, 1986; 2nd ed. 1992; 3rd ed. 1998; 4th ed. 2005; 5th ed. 2010; 6th ed., 2016. Card, Stuart K., Jock D. Mackinlay, and Ben Shneiderman, eds. Readings in Information Visualization: Using Vision to Think. Morgan Kaufmann, 1999. Shneiderman, Ben. Leonardo's Laptop: Human Needs and the New Computing Technologies; MIT Press, 2002. Hansen, Derek, Ben Shneiderman, and Marc A. Smith. Analyzing social media networks with NodeXL: Insights from a connected world. Morgan Kaufmann, 2010. Johnson, Brian, and Ben Shneiderman. "Tree-maps: A space-filling approach to the visualization of hierarchical information structures." Visualization, 1991. Visualization'91, Proceedings., IEEE Conference on. IEEE, 1991. Shneiderman, Ben. "Tree visualization with tree-maps: 2-d space-filling approach." ACM Transactions on Graphics 11.1 (1992): 92โ€“99. Ahlberg, Christopher, and Ben Shneiderman. "Visual information seeking: tight coupling of dynamic query filters with starfield displays." Proceedings of the SIGCHI conference on Human factors in computing systems. ACM, 1994. Shneiderman, Ben. "The eyes have it: A task by data type taxonomy for information visualizations." Visual Languages, 1996. Proceedings., IEEE Symposium on. IEEE, 1996. Bederson, B., Shneiderman, B. 2003. The Craft of Information Visualization: Readings and Reflections. Morgan Kaufmann. Heer, J., Shneiderman, B. 2012. Interactive Dynamics for Visual Analysis. ACM Queue, 10(2), Issue 2. Shneiderman, B. (2020). Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy. International Journal of Humanโ€“Computer Interaction, 1-10. References External links Ben Shneiderman's home page Ben Shneiderman papers at the University of Maryland Libraries Treemap Art Project Interviewed by Alan Macfarlane 7 August 2009 (video) 1947 births Living people American computer scientists Humanโ€“computer interaction Humanโ€“computer interaction researchers Information visualization experts The Bronx High School of Science alumni City College of New York alumni Fellows of the Association for Computing Machinery Fellow Members of the IEEE University of Maryland, College Park faculty Fellows of the American Association for the Advancement of Science Members of the United States National Academy of Engineering Stony Brook University alumni Scientists from New York City
41947332
https://en.wikipedia.org/wiki/KGraft
KGraft
kGraft is a feature of the Linux kernel that implements live patching of a running kernel, which allows kernel patches to be applied while the kernel is still running. By avoiding the need for rebooting the system with a new kernel that contains the desired patches, kGraft aims to maximize the system uptime and availability. At the same time, kGraft allows kernel-related security updates to be applied without deferring them to scheduled downtimes. Internally, kGraft allows entire functions in a running kernel to be replaced with their patched versions, doing that safely by selectively using original versions of functions to ensure per-process consistency while the live patching is performed. kGraft is developed by SUSE, with its source code licensed under the terms of versions two and three of the GNU General Public License (GPL). In April 2014, kGraft was submitted for inclusion into the Linux kernel mainline, and the minimalistic foundations for live patching were merged into the Linux kernel mainline in kernel version 4.0, which was released on April 12, 2015. Internals Internally, kGraft consists of two parts the core kernel module executes the live patching mechanism by altering kernel's inner workings, while userspace utilities prepare individual hot patch kernel modules from source diffs. Live kernel patching is performed at the function level, meaning that kGraft can replace entire functions in the running kernel with their patched versions, while relying on the mechanisms and infrastructure established by ftrace to "route around" old versions of functions. No changes to the kernel's internal data structures are possible; however, security patches, which are one of the natural candidates to be used with kGraft, rarely contain changes to the kernel's data structures. While applying hot patches, kGraft does not require a running kernel to be stopped for patched versions of functions to be introduced into it. Instead of replacing functions atomically, kGraft provides consistent "world views" (or "universes") to userspace processes, kernel threads and interrupt handlers, which are monitored during their execution so the original versions of patched kernel functions can continue to be used. To accomplish that, kGraft maintains original versions of patched functions in a read-copy-update (RCU) fashion, and dynamically selects between the original and patched versions depending on which process, kernel thread or interrupt handler executes them. More specifically, original versions of functions continue to be usedat the time when a hot patch is appliedfor processes currently executing within the kernel space, for kernel threads until they reach their completion points, and for currently executing interrupt handlers. Due to its design, kGraft does not introduce additional latency while applying hot patches. As the downside, original versions of patched kernel functions may be required to be maintained for extended periods of time in case there are processes that remain for too long within the kernel space; for example, a process may wait for I/O on a network socket. Also, as both original and patched versions of functions are allowed to be executed in parallel, troubles may arise if they use kernel's internal data structures in different ways. History SUSE announced kGraft in January 2014 and released it publicly in March 2014 under the terms of the GNU General Public License versionย 2 (GPLv2) for the kernel part, and under the terms of versionย 3 (GPLv3) for the userspace part. 
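The function-replacement model described above can be made concrete with a sketch of a live patch written against the mainline kernel's livepatch API, the common core (discussed below) that supports both kGraft and kpatch. kGraft's own module format differed in detail, and the exact entry points have changed across kernel versions (older kernels also required klp_register_patch()), so treat this as an illustration following the upstream sample rather than a kGraft listing; it redirects the function behind /proc/cmdline to a replacement.

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/seq_file.h>
#include <linux/livepatch.h>

/* Replacement for cmdline_proc_show(): reports a fixed string instead of
 * the real kernel command line.  A real patch would carry a bug fix. */
static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
{
    seq_printf(m, "%s\n", "this has been live patched");
    return 0;
}

static struct klp_func funcs[] = {
    {
        .old_name = "cmdline_proc_show",
        .new_func = livepatch_cmdline_proc_show,
    }, { }
};

static struct klp_object objs[] = {
    {
        /* a NULL name means the function lives in vmlinux itself */
        .funcs = funcs,
    }, { }
};

static struct klp_patch patch = {
    .mod  = THIS_MODULE,
    .objs = objs,
};

static int livepatch_init(void)
{
    return klp_enable_patch(&patch);
}

static void livepatch_exit(void)
{
}

module_init(livepatch_init);
module_exit(livepatch_exit);
MODULE_LICENSE("GPL");
MODULE_INFO(livepatch, "Y");

Loading such a module causes ftrace to route new calls to the replacement function; the per-process consistency handling described above is what kGraft layers on top of this basic redirection.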
It was released shortly after Red Hat released its own live kernel patching implementation called kpatch. kGraft aims to become merged into the Linux kernel mainline, and it was submitted for the inclusion in April 2014. kGraft was made available for SUSE Linux Enterprise Serverย 12 on November 18, 2014, as an additional feature called SUSE Linux Enterprise Live Patching. Minimalistic foundations for live kernel patching were merged into the Linux kernel mainline in kernel version 4.0, which was released on April 12, 2015. Those foundations, based primarily on the kernel's ftrace functionality, form a common core capable of supporting hot patching by both kGraft and kpatch, by providing an application programming interface (API) for kernel modules that contain hot patches and an application binary interface (ABI) for the userspace management utilities. However, the common core included into Linux kernelย 4.0 supports only the x86 architecture and does not provide any mechanisms for ensuring function-level consistency while the hot patches are applied. Since April 2015, there is ongoing work on porting kGraft to the common live patching core provided by the Linux kernel mainline. However, implementation of the required function-level consistency mechanisms has been delayed because the call stacks provided by the Linux kernel may be unreliable in situations that involve assembly code without proper stack frames; as a result, the porting work remains in progress . In an attempt to improve the reliability of kernel's call stacks, a specialized sanity-check userspace utility has also been developed. See also Dynamic software updating, a field of research focusing on upgrading programs while they are running kexec, a method for loading a whole new Linux kernel from a running system Ksplice and KernelCare, other Linux kernel live patching technologies developed by Ksplice, Inc. (later acquired by Oracle) and CloudLinux, respectively References External links Free security software programmed in C Linux kernel live patching Linux-only free software SUSE Linux
605485
https://en.wikipedia.org/wiki/Not%20Another%20Completely%20Heuristic%20Operating%20System
Not Another Completely Heuristic Operating System
Not Another Completely Heuristic Operating System, or Nachos, is instructional software for teaching undergraduate, and potentially graduate-level, operating systems courses. It was developed at the University of California, Berkeley, designed by Thomas Anderson, and is used by numerous schools around the world. Originally written in C++ for MIPS, Nachos runs as a user process on a host operating system. A MIPS simulator executes the code for any user programs running on top of the Nachos operating system. Ports of the Nachos code exist for a variety of architectures. In addition to the Nachos code, a number of assignments are provided with the Nachos system. The goal of Nachos is to introduce students to concepts in operating system design and implementation by requiring them to implement significant pieces of functionality within the Nachos system.

In Nachos's case, "operating system simulator" simply means that one operating system (the guest OS) runs on top of another (the host OS), similar to Bochs or VMware. It features emulation for:
a CPU (a MIPS CPU)
a hard drive
an interrupt controller, timer, and miscellaneous other components needed to run the Nachos user-space applications.

This means that programs can be written for Nachos, compiled with a real compiler (an old gcc compiler that produces code for MIPS) and run on the simulator. The Nachos kernel, by contrast, is compiled for the platform of the host OS and thus runs natively on the host CPU.

Nachos version 3.4 has been the stable, commonly used version of Nachos for many years. Nachos version 4.0 has existed as a beta since approximately 1996.

Implementation

Nachos has various modules implementing the functionality of a basic operating system. The wrapper functions for various system calls of the OS kernel are generally implemented in a manner similar to that of the UNIX system calls. Various parts of the OS are instantiated as objects using the native code. For example, a class Machine is used as the master class of the simulated machine. It contains various objects, such as FileSystem, Processor and Timer, which are defined to simulate various hardware aspects.

Major components

NachOS Machine - Nachos simulates a machine that roughly approximates the MIPS architecture. The machine has registers, memory and a CPU. The Nachos/MIPS machine is implemented by the Machine object, an instance of which is created when Nachos starts up. It contains methods like Run, ReadRegister and WriteRegister. It also defines an interrupt object to handle interrupts. The timer and statistics are also implemented in this object.

NachOS Threads - In NachOS a thread class is defined. Each thread has an associated state, which may be ready, running, blocked or just created. The thread object has various methods like PutThreadToSleep, YieldCPU, ThreadFork and ThreadStackAllocate. Each thread runs in a virtual address space.

NachOS UserPrograms - Nachos runs user programs in their own private address space. Nachos can run any MIPS binary, assuming that it restricts itself to making only the system calls that Nachos understands. In Unix, "a.out" files are stored in "coff" format, whereas Nachos requires that executables be in the simpler "Noff" format. To convert binaries from one format to the other, use the coff2noff program.
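As a rough illustration of what the Machine object's Run loop does, the following C sketch fetches a word from simulated memory at the program counter, decodes the MIPS opcode field and dispatches on it. It is not Nachos code (Nachos is C++), the names are invented, and only two instructions and no bounds checking are included.

#include <stdint.h>
#include <stdio.h>

#define MEM_WORDS 1024

static uint32_t memory[MEM_WORDS];   /* simulated physical memory */
static uint32_t regs[32];            /* simulated MIPS registers  */
static uint32_t pc;                  /* simulated program counter */

/* Execute one instruction, in the spirit of the simulator's inner loop. */
static void one_instruction(void)
{
    uint32_t instr = memory[pc / 4];          /* fetch */
    uint32_t op    = instr >> 26;             /* major opcode, bits 31..26 */
    uint32_t rs    = (instr >> 21) & 0x1f;
    uint32_t rt    = (instr >> 16) & 0x1f;
    int32_t  imm   = (int16_t)(instr & 0xffff);

    switch (op) {                             /* decode and dispatch */
    case 0x09:                                /* addiu rt, rs, imm */
        regs[rt] = regs[rs] + (uint32_t)imm;
        break;
    case 0x23:                                /* lw rt, imm(rs); no bounds check here */
        regs[rt] = memory[(regs[rs] + (uint32_t)imm) / 4];
        break;
    default:                                  /* unhandled: a real simulator raises an exception */
        fprintf(stderr, "unimplemented opcode 0x%02x\n", (unsigned)op);
        break;
    }
    pc += 4;                                  /* advance to the next instruction */
}

int main(void)
{
    memory[0] = (0x09u << 26) | (0u << 21) | (1u << 16) | 42u;  /* addiu $1, $0, 42 */
    one_instruction();
    printf("$1 = %u\n", (unsigned)regs[1]);   /* prints: $1 = 42 */
    return 0;
}

Nachos itself wraps the equivalent loop in C++ classes and adds exception and system-call handling on top, which is where much of the wrapper-function machinery mentioned above comes in.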
Successors As Nachos has not been in active development for a number of years, and possesses a number of recognized flaws (particularly with regards to portability: Nachos relies on MIPS assembly code, and requires porting to run on x86 architecture), successor projects have been initiated. In 2004, Stanford University created Pintos, a Nachos-inspired system written in C and designed to run on actual x86 hardware. In 2000, Dan Hettena at UC Berkeley ported Nachos to Java as Nachos 5.0j, in an effort to make Nachos more portable, more accessible to undergraduates, and less susceptible to subtle bugs in student code that had in earlier versions often dominated student project development time. Another Java-based version was created by Professor Peter Druschel at Rice University. It was later adapted by Professor Eugene Stark at Stony Brook University in 2003 and applied in the Operating System course. At Graz University of Technology (Austria), a system called SWEB ("Schon wieder ein Betriebssystem") has been implemented and is used to teach operating system principles. References External links Nachos Home Page Original Usenix 1993 paper by Christopher, Procter, and Anderson. Extensive writeup on Nachos Thomas Narten's Nachos Roadmap Nachos for Java Walkthrough JNachos Home Page, another Java-based version; ported by Patrick J. McSweeney and WonKyung Park Discontinued operating systems Educational operating systems MIPS operating systems
69725
https://en.wikipedia.org/wiki/Chaosnet
Chaosnet
Chaosnet is a local area network technology. It was first developed by Thomas Knight and Jack Holloway at MIT's AI Lab in 1975 and thereafter. It refers to two separate, but closely related, technologies. The more widespread was a set of computer communication packet-based protocols intended to connect the then-recently developed and very popular (within MIT) Lisp machines; the second was one of the earliest local area network (LAN) hardware implementations. Origin The Chaosnet protocol originally used an implementation over CATV coaxial cable modeled on the early Xerox PARC Ethernet, the early ARPANET, and Transmission Control Protocol (TCP). It was a contention-based system intended to work over a range, that included a pseudo-slotted feature intended to reduce collisions, which worked by passing a virtual token of permission from host to host; successful packet transmissions updated each host's knowledge of which host had the token at that time. Collisions caused a host to fall silent for a duration depending on the distance from the host it collided with. Collisions were never a real problem, and the pseudo-slotting fell into disuse. Chaosnet's network topology was usually series of linear (not circular) cables, each up to a maximum of a kilometer and roughly 12 clients. The individual segments were interconnected by "bridges" (much in the ARPANET mold), generally older computers like PDP-11s with two network interfaces. The protocols were also later implemented as a payload that could be carried over Ethernet (usually the later variety). Chaosnet was specifically for LANs; features to support WANs were left out for the sake of simplicity. Chaosnet can be regarded as a contemporary of both the PUP protocols invented by PARC, and the Internet Protocol (IP), and was recognized as one of the other network classes (other than "IN" and "HS") in the Domain Name System. BIND uses a built-in pseudo-top-level-domain in the "CHAOS class" for retrieving information about a running DNS server. Chaosnet protocol The Chaosnet protocol identifies hosts by 16-bit addresses, 8 bits of which identify the subnet, 8 bits of which identify the host within the subnet. The basic protocol was a full-duplex reliable packet transmission between two user processes. The packet contents could be treated as bytes of 8 or 16 bits, with support for other word sizes provided by higher-level protocols. The connection was identified by a combination of the 16-bit addresses of each host and a 16-bit "connection index" assigned by each host to maintain uniqueness. "Controlled" packets within a connection were identified by a 16-bit packet number, which was used to deliver controlled packets reliably and in order, with re-transmission and flow control. "Uncontrolled" packets were not retransmitted, and were used at a lower level to support the flow-control and re-transmission. Chaosnet also supported "BRD" broadcast packets to multiple subnets. Initial establishment of the connection was made using "contact names." These names identified the network service and higher-level protocol. For example, "STATUS" was the contact name which requested basic network statistics from a host. "TELNET" was a contact name for the Arpanet TELNET protocol. "FILE" was a contact name for the Lisp Machine network file service. Other contact names included "SUPDUP", "MAIL", "NAME" for the Arpanet Finger protocol, "TIME", "SEND" for interactive messaging, "ARPA" for a gateway service to Arpanet. 
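A rough C sketch of the addressing and connection bookkeeping described above follows; the structure layout and field names are illustrative rather than a byte-accurate rendering of the wire format, and the contact-name length is an assumption made for the example.

#include <stdint.h>
#include <stdio.h>

/* A Chaosnet address packs an 8-bit subnet and an 8-bit host number
 * into 16 bits. */
typedef uint16_t chaos_addr;

static chaos_addr chaos_make_addr(uint8_t subnet, uint8_t host)
{
    return (chaos_addr)((subnet << 8) | host);
}

static uint8_t chaos_subnet(chaos_addr a) { return (uint8_t)(a >> 8); }
static uint8_t chaos_host(chaos_addr a)   { return (uint8_t)(a & 0xff); }

/* A connection is identified by the two hosts' addresses plus a 16-bit
 * connection index chosen by each host; controlled packets carry a 16-bit
 * packet number used for ordered, reliable delivery and retransmission. */
struct chaos_connection {
    chaos_addr local_addr,  remote_addr;
    uint16_t   local_index, remote_index;
    uint16_t   next_packet_number;   /* next controlled packet to send      */
    char       contact_name[16];     /* e.g. "STATUS", "TIME", "FILE"       */
};

int main(void)
{
    chaos_addr a = chaos_make_addr(7, 12);
    printf("subnet %u, host %u\n", chaos_subnet(a), chaos_host(a));
    return 0;
}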
"DOVER" was the contact name for sending print jobs to Chaosnet hosts with a Xerox Dover printer attached (an early laser printer). Developers could easily experiment with new protocols by inventing new contact names. In ITS, a new server for that protocol could be installed by creating a link to the program in the location DSK:DEVICE;CHAOS <cname> where <cname> was up to six letters of the contact name. Simple transactions could be completed by a single "RFC" packet containing a contact name, answered by a single "ANS" packet with the relevant information. For example, an RFC to contact name "TIME" would result in a single ANS packet containing a 32-bit number indicating the time. The original GNU Manifesto mentioned that it aimed to, among other things, support the Chaosnet protocol. Symbolics, a maker of the Lisp machines, licensed the MIT Chaosnet hardware and software implementation from the CADR computer design. Notes References Online documentation from the ITS SYSDOC; directory External links Cisco's Implementation of Chaosnet Chaosnet implementations Another reference to AI Memo 628 A better scan of AI Memo 628 than the one below (pdf) Another place to get AI Memos 500 to 999 (FTP) Chaosnet (Linux source driver) Local area networks
1040239
https://en.wikipedia.org/wiki/Cray-3
Cray-3
The Cray-3 was a vector supercomputer, Seymour Cray's designated successor to the Cray-2. The system was one of the first major applications of gallium arsenide (GaAs) semiconductors in computing, using hundreds of custom built ICs packed into a CPU. The design goal was performance around 16 GFLOPS, about 12 times that of the Cray-2. Work started on the Cray-3 in 1988 at Cray Research's (CRI) development labs in Chippewa Falls, Wisconsin. Other teams at the lab were working on designs with similar performance. To focus the teams, the Cray-3 effort was moved to a new lab in Colorado Springs, Colorado later that year. Shortly thereafter, the corporate headquarters in Minneapolis decided to end work on the Cray-3 in favor of another design, the Cray C90. In 1989 the Cray-3 effort was spun off to a newly formed company, Cray Computer Corporation (CCC). The launch customer, Lawrence Livermore National Laboratory, cancelled their order in 1991 and a number of company executives left shortly thereafter. The first machine was finally ready in 1993, but with no launch customer, it was instead loaned as a demonstration unit to the nearby National Center for Atmospheric Research in Boulder. The company went bankrupt in May 1995, and the machine was officially decommissioned. With the delivery of the first Cray-3, Seymour Cray immediately moved on to the similar-but-improved Cray-4 design, but the company went bankrupt before it was completely tested. The Cray-3 was Cray's last completed design; with CCC's bankruptcy, he formed SRC Computers to concentrate on parallel designs, but died in a car accident in 1996 before this work was delivered. History Background Seymour Cray began the design of the Cray-3 in 1985, as soon as the Cray-2 reached production. Cray generally set himself the goal of producing new machines with ten times the performance of the previous models. Although the machines did not always meet this goal, this was a useful technique in defining the project and clarifying what sort of process improvements would be needed to meet it. For the Cray-3, he decided to set an even higher performance improvement goal, an increase of 12x over the Cray-2. Cray had always attacked the problem of increased speed with three simultaneous advances; more execution units to give the system higher parallelism, tighter packaging to decrease signal delays, and faster components to allow for a higher clock speed. Of the three, Cray was normally least aggressive on the last; his designs tended to use components that were already in widespread use, as opposed to leading-edge designs. For the Cray-2, he introduced a novel 3D-packaging system for its integrated circuits to allow higher densities, and it appeared that there was some room for improvement in this process. For the new design, he stated that all wires would be limited to a maximum length of . This would demand the processor be able to fit into a block, about that of the Cray-2 CPU. This would not only increase performance but make the system 27 times smaller. For a 12x performance increase, the packaging alone would not be enough, the circuits on the chips themselves would also have to speed up. The Cray-2 appeared to be pushing the limits of the speed of silicon-based transistors at 4.1ย ns (244ย MHz), and it did not appear that anything more than another 2x would be possible. If the goal of 12x was to be met, more radical changes would be needed, and a "high tech" approach would have to be used. 
Cray had intended to use gallium arsenide circuitry in the Cray-2, which would not only offer much higher switching speeds but also used less energy and thus ran cooler as well. At the time the Cray-2 was being designed, the state of GaAs manufacturing simply was not up to the task of supplying a supercomputer. By the mid-1980s, things had changed and Cray decided it was the only way forward. Given a lack of investment on the part of large chip makers, Cray decided to invest in a GaAs chipmaking startup, GigaBit Logic, and use them as an internal supplier. Describing the system in November 1988, Cray stated that the 12 times performance increase would be made up of a three times increase due to GaAs circuits, and four times due to the use of more processors. One of the problems with the Cray-2 had been poor multiprocessing performance due to limited bandwidth between the processors, and to address this the Cray-3 would adopt the much faster architecture used in the Cray Y-MP. This would provide a design performance of 8000 MIPS, or 16 GFLOPS. Development The Cray-3 was originally slated for delivery in 1991. This was during a time when the supercomputer market was rapidly shrinking from 50% annual growth in 1980, to 10% in 1988. At the same time, Cray Research was also working on the Y-MP, a faster multi-processor version of the system architecture tracing its ancestry to the original Cray-1. In order to focus the Y-MP and Cray-3 groups, and with Cray's personal support, the Cray-3 project moved to a new research center in Colorado Springs. By 1989, the Y-MP was starting deliveries, and the main CRI lab in Chippewa Falls, Wisconsin, moved on to the C90, a further improvement in the Y-MP series. With only 25 Cray-2s sold, management decided that the Cray-3 should be put on "low priority" development. In November 1988, the Colorado Springs lab was spun off as Cray Computer Corporation (CCC), with CRI retaining 10% of the new company's stock and providing an $85 million promissory note to fund development. Cray himself was not a shareholder in the new company, and worked under contract. As CRI retained the lease on the original building, the new company had to move once again, introducing further delays. By 1991, development was behind schedule. Development slowed even more when Lawrence Livermore National Laboratory cancelled its order for the first machine, in favor of the C90. Several executives, including the CEO, left the company. The company then announced they would be looking for a customer that needed a smaller version of the machine, with four to eight processors. The first (and only) production model (serial number S5, named Graywolf) was loaned to NCAR as a demonstration system in May 1993. NCAR's version was configured with 4 processors and a 128ย MWord (64-bit words, 1ย GB) common memory. In service, the static RAM proved to be problematic. It was also discovered that the square root code contained a bug that resulted in 1 in 60 million calculations being wrong. Additionally, one of the four CPUs was not running reliably. CCC declared bankruptcy in March 1995, after spending about $300 million of financing. NCAR's machine was officially decommissioned the next day. Seven system cabinets, or "tanks", serial numbers S1 to S7, were built for Cray-3 machines. Most were for smaller two-CPU machines. Three of the smaller tanks were used on the Cray-4 project, essentially a Cray-3 with 64 faster CPUs running at 1ย ns (1ย GHz) and packed into an even smaller space. 
Another was used for the Cray-3/SSS project. The failure of the Cray-3 was in large part due to the changing political and technical climate. The machine was being designed during the collapse of the Warsaw Pact and ending of the cold war, which led to a massive downsizing in supercomputer purchases. At the same time, the market was increasingly investing in massively parallel (MP or MPP) designs. Cray was critical of this approach, and was quoted by The Wall Street Journal as saying that MPP systems had not yet proven their supremacy over vector computers, noting the difficulty many users have had programming for large parallel machines. "I don't think they'll ever be universally successful, at least not in my lifetime". Architecture Logical design The Cray-3 system architecture comprised a foreground processing system, up to 16 background processors and up to 2 gigawords (16 GB) of common memory. The foreground system was dedicated to input/output and system management. It included a 32-bit processor and four synchronous data channels for mass storage and network devices, primarily via HiPPI channels. Each background processor consisted of a computation section, a control section and local memory. The computation section performed 64-bit scalar, floating point and vector arithmetic. The control section provided instruction buffers, memory management functions, and a real-time clock. 16 kwords (128 kbytes) of high-speed local memory was incorporated into each background processor for use as temporary scratch memory. Common memory consisted of silicon CMOS SRAM, organized into octants of 64 banks each, with up to eight octants possible. The word size was 64-bits plus eight error-correction bits, and total memory bandwidth was rated at 128 gigabytes per second. CPU design As with previous designs, the core of the Cray-3 consisted of a number of modules, each containing several circuit boards packed with parts. In order to increase density, the individual GaAs chips were not packaged, and instead several were mounted directly with ultrasonic gold bonding to a board approximately square. The boards were then turned over and mated to a second board carrying the electrical wiring, with wires on this card running through holes to the "bottom" (opposite the chips) side of the chip carrier where they were bonded, hence sandwiching the chip between the two layers of board. These submodules were then stacked four-deep and, as in the Cray-2, wired to each other to make a 3D circuit. Unlike the Cray-2, the Cray-3 modules also included edge connectors. Sixteen such submodules were connected together in a 4ร—4 array to make a single module measuring . Even with this advanced packaging the circuit density was low even by 1990s standards, at about 96,000 gates per cubic inch. Modern CPUs offer gate counts of millions per square inch, and the move to 3D circuits was still just being considered . Thirty-two such modules were then stacked and wired together with a mass of twisted-pair wires into a single processor. The basic cycle time was 2.11ย ns, or 474ย MHz, allowing each processor to reach about 0.948 GFLOPS, and a 16 processor machine a theoretical 15.17 GFLOP. Key to the high performance was the high-speed access to main memory, which allowed each process to burst up to 8 GB/s. Mechanical design The modules were held together in an aluminum chassis known as a "brick". The bricks were immersed in liquid fluorinert for cooling, as in the Cray-2. 
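The performance figures quoted above for the CPU follow directly from the clock period if each processor is assumed to complete two floating-point results per clock; that per-clock figure is an inference from the published numbers rather than a stated specification. A short check:

#include <stdio.h>

int main(void)
{
    double cycle_ns  = 2.11;                      /* quoted cycle time              */
    double clock_hz  = 1.0e9 / cycle_ns;          /* about 474 MHz                  */
    double flops_cpu = 2.0 * clock_hz;            /* assumed two results per clock  */
    double flops_16  = 16.0 * flops_cpu;          /* full 16-processor machine      */

    printf("clock   : %.0f MHz\n",    clock_hz  / 1.0e6);   /* ~474   */
    printf("per CPU : %.3f GFLOPS\n", flops_cpu / 1.0e9);   /* ~0.948 */
    printf("16 CPUs : %.2f GFLOPS\n", flops_16  / 1.0e9);   /* ~15.17 */
    return 0;
}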
A four-processor system with 64 memory modules dissipated about 88ย kW of power. The entire four-processor system was about tall and front-to-back, and a little over wide. For systems with up to four processors, the processor assembly sat under a translucent bronzed acrylic cover at the top of a cabinet wide, deep and high, with the memory below it, and then the power supplies and cooling systems on the bottom. Eight and 16-processors system would have been housed in a larger octagonal cabinet. All in all, the Cray-3 was considerably smaller than the Cray-2, itself relatively small compared to other supercomputers. In addition to the system cabinet, a Cray-3 system also needed one or two (depending on number of processors) system control pods (or "C-Pods"), square and high, containing power and cooling control equipment. System configurations The following possible Cray-3 configurations were officially specified: Software The Cray-3 ran the Colorado Springs Operating System (CSOS) which was based upon Cray Research's UNICOS operating system version 5.0. A major difference between CSOS and UNICOS was that CSOS was ported to standard C with all PCC extensions that were used in UNICOS removed. Much of the software available under the Cray-3 was derived from Cray Research and included for instance the X Window System, vectorizing FORTRAN and C compilers, NFS and a TCP/IP stack. References Citations Bibliography External links Digibarn's Cray-3 Modules Cray Research and Cray computers FAQ Part 2 Cray-2 and โˆ’3 instruction sets (archived) 3 Vector supercomputers
16183488
https://en.wikipedia.org/wiki/Camtasia
Camtasia
Camtasia (; formerly Camtasia Studio and Camtasia for Mac) is a software suite, created and published by TechSmith, for creating and recording video tutorials and presentations via screencast, or via a direct recording plug-in to Microsoft PowerPoint. Other multimedia recordings (microphone, webcam and system audio) may be recorded at the same time or added separately (like background music and narration/voice tracks). Camtasia is available in English, French, German, Japanese, Portuguese, Spanish and Chinese versions. Features The features are structured around the 3 main steps of the program workflow: record, edit and export/share. First step is to record a video (from a specific region or fullscreen) with Camtasia Recorder. Multi-display configurations are supported. Second step is to edit into Camtasia the recorded video, adding transitions, annotations and all kind of advanced editing features and effects (cursor effects, visual effects...). Third step is to export the produced video, as a local file (MP4...), or to upload it to a media or file sharing platform (YouTube, Google Drive...). Camtasia Recorder In Camtasia Recorder, users can start and stop recording with shortcuts at any time, at which point the recording is halted and Camtasia Recorder can render the input that has been captured into the TREC format. The TREC file can be saved to disk or directly imported into the Camtasia component for editing. Camtasia Recorder allows audio (and webcam) recording while screen recording is in progress, so the presenter can capture live narration during a tutorial or presentation. Camtasia also supports dubbing in other audio tracks or voiceover during post-capture editing. Windows users may also install an add-in for Microsoft PowerPoint that will allow them to initiate recording of a presentation from within PowerPoint itself. Camtasia In Camtasia (also known as the Editor), the Media Bin is where media (screen recordings, voiceovers...) for the current project are stored. The Library stores reusable media across multiple projects. On the Timeline, overlays of various types like annotations may be added, including user-defined settings, such as when and how to display the cursor and pan-and-zoom effects such as the Ken Burns effect. In order to provide localized versions of the produced videos, subtitles can be added with the captioning feature. The Editor allows import of various types of video, audio and image files including MP4, AVI, MP3, WAV, PNG, JPEG and other formats into the Camtasia proprietary TREC format, which is readable and editable by Camtasia. The TREC file format is a single container for various multimedia objects including video clips, images, screen captures and audio/video effects. The produced video can be exported as a local file MP4, Animated GIF, AVI (Windows version only), MOV (Mac version only) or uploaded directly to a media or file sharing platform (YouTube, Google Drive...). Reviews Camtasia's shortcomings noted in the PC World review of January 17, 2013 and CNET review of June 19, 2012 are as follows: Rotation of objects is applied via a dialog rather than interactively, though many lower-priced video editors do provide interactive rotation and manipulation of objects such as text and video frames Recording live from a DV camera is not supported Still potentially overwhelming for the introductory user, tempered by the tutorial material available. 
NOTE the V8 release is a complete rewrite so much of the prior tutorial material written for the popular Camtasia v6 and v7 software for Microsoft Windows is not usable with this release. Audio handling has minimal capabilities and no integration with other packages compared to some competitors in this price range Lacks any video-clip manipulation or integration with other packages that have such capabilities In 2005, PC World mentioned that Camtasia is "powerful". In 2013, PC World published a 4ย 1/2 star review and noted Camtasia is a "full-featured education/information video tool". In 2012, CNET published a review and noted that Camtasia is a "feature-packed screencast app" and "does have a learning curve". See also Comparison of screencasting software Distance education Instructional design Podcast References External links Screencasting software Shareware Windows multimedia software MacOS multimedia software
201462
https://en.wikipedia.org/wiki/UNIVAC%201100/2200%20series
UNIVAC 1100/2200 series
The UNIVAC 1100/2200 series is a series of compatible 36-bit computer systems, beginning with the UNIVAC 1107 in 1962, initially made by Sperry Rand. The series continues to be supported today by Unisys Corporation as the ClearPath Dorado Series. The solid-state 1107 model number was in the same sequence as the earlier vacuum-tube computers, but the early computers were not compatible with the solid-state successors. Architecture Data formats Fixed-point, either integer or fraction Whole word โ€“ 36-bit (ones' complement) Half word โ€“ two 18-bit fields per word (unsigned or ones' complement) Third word โ€“ three 12-bit fields per word (ones' complement) Quarter word โ€“ four 9-bit fields per word (unsigned) Sixth word โ€“ six 6-bit fields per word (unsigned) Floating point Single precision โ€“ 36 bits: sign bit, 8-bit characteristic, 27-bit mantissa Double precision โ€“ 72 bits: sign bit, 11-bit characteristic, 60-bit mantissa Alphanumeric FIELDATA โ€“ UNIVAC 6-bit code variant (no lower case characters) six characters in each 36-bit word. (FIELDATA was originally a seven-bit code of which only 64 code positions (occupying six bits) were formally defined.) ASCII โ€“ 9 bits per character (right-most eight used for an ASCII character) four characters in each 36-bit word Instruction format Instructions are 36 bits long with the following fields: f (6 bits) - function designator (opcode), j (4 bits) - partial word designator, J-register designator, or minor function designator, a (4 bits) - register (A, X, or R) designator or I/O designator, x (4 bits) - index register (X) designator, h (1 bit ) - index register increment designator, i (1 bit) - indirect address designator, u (16 bits) - address or operand designator. Registers The 128 registers of the high-speed "general register stack" ("integrated circuit registers" on the UNIVAC 1108 and UNIVAC 1106 models), map to the current data space in main storage starting at memory address zero. These registers include both user and executive copies of the A, X, R, and J registers and many special function executive registers. The table on the right shows the addresses (in octal) of the user registers. There are 15 index registers (X1 ... X15), 16 accumulators (A0 ... A15), and 15 special function user registers (R1 .. R15). The 4 J registers and 3 "staging registers" are uses of some of the special function R registers. One interesting feature is that the last four index registers (X12 ... X15) and the first four accumulators (A0 ... A3) overlap, allowing data to be interpreted either way in these registers. This also results in four unassigned accumulators (A15+1 ... A15+4) that can only be accessed by their memory address (double word instructions on A15 do operate on A15+1). Vacuum tube machines not mutually compatible Prior to the UNIVAC 1107, UNIVAC produced several vacuum-tube-based machines with model numbers from 1101 to 1105. These machines had different architectures and word sizes and were not compatible with each other or with the 1107 and its successors. They all used vacuum tubes and many used drum memory as their main memory. Some were designed by Engineering Research Associates (ERA) which was later purchased and merged with the UNIVAC company. The UNIVAC 1101, or ERA 1101, was a computer system designed by ERA and built by the Remington Rand corporation in the 1950s. It was never sold commercially. It was developed under Navy Project 13, which is 1101 in binary. 
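The 36-bit instruction layout described under Architecture above (the f, j, a, x, h, i and u fields) adds up to 6+4+4+4+1+1+16 = 36 bits and can be unpacked with shifts and masks when a word is held in the low 36 bits of a 64-bit integer, taking the fields in the order listed from the most significant end. The following C sketch is illustrative only; the sample word is arbitrary and the helper names are invented.

#include <stdint.h>
#include <stdio.h>

/* A 36-bit UNIVAC 1100-series word kept in the low bits of a uint64_t. */
typedef uint64_t word36;

struct instr {
    unsigned f, j, a, x, h, i, u;
};

/* Unpack f(6) j(4) a(4) x(4) h(1) i(1) u(16). */
static struct instr decode(word36 w)
{
    struct instr d;
    d.f = (unsigned)((w >> 30) & 077);      /* function code (opcode)        */
    d.j = (unsigned)((w >> 26) & 017);      /* partial word / minor function */
    d.a = (unsigned)((w >> 22) & 017);      /* A, X or R register designator */
    d.x = (unsigned)((w >> 18) & 017);      /* index register designator     */
    d.h = (unsigned)((w >> 17) & 01);       /* index increment designator    */
    d.i = (unsigned)((w >> 16) & 01);       /* indirect address designator   */
    d.u = (unsigned)(w & 0177777);          /* address or operand            */
    return d;
}

/* Unpack six 6-bit FIELDATA characters from one word, high bits first. */
static void fieldata_chars(word36 w, unsigned out[6])
{
    for (int k = 0; k < 6; k++)
        out[k] = (unsigned)((w >> (30 - 6 * k)) & 077);
}

int main(void)
{
    word36 w = 0123456701234ULL;            /* arbitrary 12-octal-digit sample */
    struct instr d = decode(w);
    unsigned fd[6];

    printf("f=%02o j=%02o a=%02o x=%02o h=%o i=%o u=%06o\n",
           d.f, d.j, d.a, d.x, d.h, d.i, d.u);
    fieldata_chars(w, fd);
    printf("fieldata: %02o %02o %02o %02o %02o %02o\n",
           fd[0], fd[1], fd[2], fd[3], fd[4], fd[5]);
    return 0;
}

The same shift-and-mask approach applies to the data formats listed above: the sixth-word FIELDATA characters come out of fieldata_chars(), and a quarter-word ASCII variant would use four 9-bit fields instead.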
The UNIVAC 1102 or ERA 1102 was designed by Engineering Research Associates for the United States Air Force. The 36-bit UNIVAC 1103 was introduced in 1953 and an upgraded version (UNIVAC 1103A) was released in 1956. This was the first commercial computer to use core memory instead of the Williams tube. The UNIVAC 1105 was the successor to the 1103A, and was introduced in 1958. The UNIVAC 1104 system was a 30-bit version of the 1103 built for Westinghouse Electric, in 1957, for use on the BOMARC Missile Program. However, by the time the BOMARC was deployed in the 1960s, a more modern computer (a version of the AN/USQ-20, designated the G-40) had replaced the UNIVAC 1104. UNIVAC 1100 compatible series These machines had a common architecture and word size. They all used transistorized electronics and integrated circuits. Early machines used core memory (the 1110 used plated wire memory) until that was replaced with semiconductor memory in 1975. 1107 The UNIVAC 1107 was the first solid-state member of Sperry Univac's UNIVAC 1100 series of computers, introduced in October 1962. It was also known as the Thin-Film Computer because of its use of thin-film memory for its register storage. It represented a marked change of architecture: unlike previous models, it was not a strict two-address machine: it was a single-address machine with up to 65,536 words of 36-bit core memory. The machine's registers were stored in 128 words of thin-film memory, a faster form of magnetic storage. With six cycles of thin-film memory per 4 microsecond main memory cycle, address indexing was performed without a cycle time penalty. Only 36 systems were sold. The core memory was available in 16,384 36-bit words in a single bank; or in increments of 16,384 words to a maximum of 65,536 words in two separately accessed banks. With a cycle time of 4 microseconds, the effective cycle time was 2 microseconds when instruction and data accesses overlapped in two banks. The 128-word thin-film memory general register stack (16 each arithmetic, index, and repeat with a few in common) had a 300-nanosecond access time with a complete cycle time of 600 nanoseconds. Six cycles of thin-film memory per core memory cycle and fast adder circuitry permitted memory address indexing within the current instruction core memory cycle and also modification of the index value (the signed upper 18 bits were added to the lower 18 bits) in the specified index register (16 were available). The 16 input/output (I/O) channels also used thin-film memory locations for direct-to-memory I/O memory location registers. Programs could not be executed from unused thin-film memory locations. Both UNISERVO IIA and UNISERVO III tape drives were supported, both of which could use either metallic (UNIVAC I) or mylar tape. The FH880 drum memory unit was also supported as a spooling and file-storage media. Spinning at 1800 RPM, it stored approximately 300,000 36-bit words. The 1107, without any peripherals, weighed about . Univac provided a batch operating system, EXEC I. Computer Sciences Corporation was contracted to provide a powerful optimizing Fortran IV compiler, an assembler named SLEUTH with sophisticated macro capabilities, and a very flexible linking loader. 1108 The 1108 was introduced in 1964. Integrated circuits replaced the thin-film memory that the UNIVAC 1107 used for register storage. Smaller and faster cores, compared to the 1107, were used for main memory. 
In addition to faster components, two significant design improvements were incorporated: base registers and additional hardware instructions. The two 18-bit base registers (one for instruction storage and one for data storage) permitted dynamic relocation: as a program got swapped in and out of main memory, its instructions and data could be placed anywhere each time it got reloaded. To support multiprogramming, the 1108 had memory protection using two base and limit registers, with 512-word resolution. One was called the I-bank or instruction bank, and the other the D-bank or data bank. If the I-bank and D-bank of a program were put into different physical banks of memory, a 1/2 microsecond advantage accrued, called "alternate bank timing." The 1108 also introduced the Processor State Register, or PSR. In addition to controlling the Base Registers, it included various control "bits" that enabled the various Storage Protection features, allowed selection of either the User or Exec set of A, X & R registers, and enabled "Guard Mode" for user programs. Guard Mode prevented user programs from execution of Executive Only "privileged" instructions, and from accessing memory locations outside the program's allocated memory. Additional 1108 hardware instructions included double precision arithmetic, double-word load, store, and comparison instructions. The processor could have up to 16 input/output channels for peripherals. The 1108 CPU was, with the exception of the 128-word (200 octal) ICR (Integrated Control Register) stack, entirely implemented via discrete component logic cards, each with a 55-pin high density connector, which interfaced to a machine wire wrapped backplane. Additional hand applied twisted pair wiring was utilized to implement backplane connections with sensitive timing, connections between machine wire wrapped backplanes, and connections to the I/O channel connector panel in the lower section of the CPU Cabinet. The ICR (Integrated Control Register) stack was implemented with "new" integrated circuit technology, replacing the thin film registers on the 1107. The ICR consisted of 128 38-bits, with a half-word Parity Bit calculated and checked with each access. The ICR was logically the first 128 memory addresses (200 Octal), but was contained in the CPU. The core memory was contained in a one or more separate cabinet(s), and consisted of two separate 32K modules, for a total capacity of 64K 38-bit words (36-bits data and a Parity Bit for each 18-bit half-word). The basic cycle time of the core memory was 750 ns, and the supporting circuitry was implemented with the same circuit card/backplane technology as the 1108 CPU. Just as the first UNIVAC 1108 systems were being delivered in 1965, Sperry Rand announced the UNIVAC 1108 II (also known as the UNIVAC 1108A) which had support for multiprocessing: up to three CPUs, four memory banks totaling 262,144 words, and two independent programmable input/output controllers (IOCs). With everything busy, five activities could be going on at the same moment: three programs running in the CPUs and two input/output processes in the IOCs. One more instruction was incorporated: test-and-set, to provide for synchronization between the CPUs. Although a 1964 internal study indicated only about 43 might sell, in all, 296 processors were produced. The 1108 II, or 1108A, was the first multiprocessor machine in the series, capable of expansion to three CPUs and two IOCs (Input/Output Control Units). 
To support this, it had up to 262,144 words (four cabinets) of eight-ported main memory: separate instruction and data paths for each CPU, and one path for each IOC. The memory was organized in physical banks of 65,536 words, with separate odd and even ports in each bank. The instruction set was very similar to that of the 1107, but included some additional instructions, including the "Test and Set" instruction for multiprocessor synchronization. Some models of the 1108 implemented the ability to divide words into four nine-bit bytes, allowing use of ASCII characters. Most 1108A configurations included one or two CPUs, each with eight or (optionally) 16 36-bit parallel I/O channels, and two or three 64K core memory cabinets. Three-CPU systems, with four core memory cabinets, were the exception due to cost considerations. The IOC was a separate cabinet that contained 8 or (optionally) 16 additional I/O channels to support configurations with very large Mass Storage requirements. A very limited number of IOCs were produced, with United Air Lines (UAL) being the primary customer. The UNIVAC Array Processor, or UAP, was produced in even more limited numbers than the IOC. It was a custom-built, stand-alone math coprocessor to the 1108A system. The UAP, at its most basic level, consisted of four 1108A arithmetic units, and associated control circuitry, contained in a standalone cabinet almost identical to the 1108A CPU. The UAP was physically and logically situated between two 1108A multiprocessor systems. It was capable of directly addressing and interfacing to the four 65K core memory cabinets of two independent 1108A systems. It was capable of executing a number of array-processing instructions, the most important being fast Fourier transform (FFT). At a simplified level, one of the 1108A CPUs would move data arrays into core memory, and send the UAP an instruction packet, containing the function to be executed, and the memory address(es) of the data array(s), across a standard I/O channel. The UAP would then perform the operation, totally independent of the CPU(s), and, when the operation was complete, "interrupt" the originating CPU via the I/O channel. A very small number of UAPs were built, with Shell Oil Company being (likely) the only customer. The UAPs were installed in Shell's Houston Data Center, and were used to process seismic data. When Sperry Rand replaced the core memory with semiconductor memory, the same machine was released as the UNIVAC 1100/20. In this new naming convention, the final digit represented the number of CPUs (e.g., 1100/22 was a system with two CPUs) in the system. 1106 The 1107 and early 1108 machines were aimed at the engineering/scientific computing community, so much so that the 1100 Series User Group was named the UNIVAC Scientific Exchange, or USE. The operating systems were batch oriented, with FORTRAN and (to a much lesser extent) ALGOL being the most commonly used languages. As the market for commercial computing became more mature, these operating systems were no longer able to meet the growing demand for business computing, where applications were commonly written in COBOL. UNIVAC responded to this change in the market with the 1108A multiprocessor system and with the EXEC 8 operating system. Where engineering and scientific programs could often be "compute bound" (i.e. utilizing the entire CPU and core memory), business applications, typically written in COBOL, were almost always "I/O bound" (i.e. waiting for I/O operations to complete).
Instrumentation of the EXEC 8 operating system showed that, in a 1108A multiprocessor configuration, the CPU(s) were often in the "idle loop" as much as 50% of the time (see note below). Since CPU performance was not an issue in these applications, it made commercial sense to create a lower-priced, lower-performance system to address the rapidly growing commercial business market. The UNIVAC 1106 was introduced in December 1969 and was virtually identical to the UNIVAC 1108, both physically and in instruction set. Like the 1108, it was multiprocessor capable, though it appears that it was never supplied with more than two CPUs, and did not support IOCs. In fact, the only difference between an 1108A CPU and an 1106 CPU was a couple of timing cards. In order to keep costs low, an 1106 CPU could be ordered with as few as four word channels. This meant that only three I/O channels were available for peripheral subsystems, as channel 15 (the highest-numbered channel) was always, in both 1106 and 1108 systems, dedicated to the operator's console. Early versions of the UNIVAC 1106 were simply half-speed UNIVAC 1108 systems. Later Sperry Univac used a different memory system which was inherently slower and cheaper than that of the UNIVAC 1108. Sperry Univac sold a total of 338 processors in 1106 systems. When Sperry Rand replaced the core memory with semiconductor memory, the same machine was released as the UNIVAC 1100/10. Note: EXEC 8 Idle Loop - the "Idle Loop" was entered when a CPU had no available task to execute (typically when waiting for an I/O operation to complete). A simplified description is that the CPU executed a block transfer (op code 022) of the ICR stack (the first 0200 memory addresses) back to the same addresses. Since the ICR stack was contained in the CPU, this minimized use of core memory cycles, freeing them up for active CPUs. 1110 The UNIVAC 1110 was the fourth member of the series, introduced in 1972. The UNIVAC 1110 had enhanced multiprocessing support: sixteen-way memory access allowed up to six CAUs (Command Arithmetic Unit, the new name for CPU and so called because the CAU no longer had any I/O capability) and four IOAUs (Input Output Access Units, the name for separate units which performed the I/O channel programs). The 1110 CAU expanded the memory address range from 18 bits (1108 and 1106) to 24 bits, allowing for up to 16 million words of addressable memory. The core memory used on the 1108/1106 systems was replaced with faster plated wire memory. Each memory cabinet contained eight independent 8K plated wire memory modules, or 64K for the entire cabinet. As with the 1108/1106, there was a maximum of four 64K cabinets per system. The 1110 also had 'Extended Memory' cabinets accessible in a 'daisy chain' arrangement to augment main storage. It was possible to utilize the 1108 64K core memory cabinets as Extended Storage, but most systems utilized the larger, less expensive 131K memory cabinets from the 1106 system. Up to eight Extended Memory cabinets were allowed, for a maximum of one million words of Extended Storage. An ESC (Extended Storage Controller) was required for each pair of memory cabinets to provide the physical connection, and address translation, from the 1110 CAUs and IOAU(s). The minimum configuration for a 1110 system was two CAUs and one IOAU. The largest configuration, 6x4, was only used by NASA. The 1110 CAU was the first pipelined processor to be designed by UNIVAC.
The CAU could have as many as four instructions in various stages of execution at any given instant. The IOAU was completely separate, both physically and logically, from the CAU, and had its own access path to the various Main and Extended Memory Modules. This allowed I/O operations to be independent from the compute operations, no longer "stealing" memory cycles from CAU(s). The IOAU included 8 (optionally 16 or 24) 1108/1106 compatible 36-bit Word Channels, and also included the Hardware Maintenance Panel. Pictures/illustrations of a 1110 system typically showed the IOAU Maintenance Panel, as the CAU cabinet had no indicator lights. The IOAU Maintenance Panel could display the various CAU registers from one or two associated CAU(s). The 1110 CAU also introduced an extension to the instruction set: 'Byte Instructions'. The major components of the 1110 system, the CAU, IOAU and Main Memory cabinets, were designed using the same 55-pin high density card connectors, and machine wire wrapped backplane(s) as the 1108/1106. The discrete component logic used by the older systems was replaced by transistor–transistor logic (TTL) integrated circuits (see Note, below). The CAU was an extremely complex unit, utilizing over 1000 cards. When Sperry Rand replaced the plated wire memory with semiconductor memory, the same machine was released as the UNIVAC 1100/40. In this new naming convention, the final digit represented the number of CPUs in the system. The 1100/40 utilized a new Main Memory cabinet, replacing the 8K plated wire memory modules with 16K static RAM modules (based on 1024x1-bit static RAM chips), for a total of 131K per cabinet. This allowed expansion of the Main Memory to a maximum of 524K. As with the 1110, the 1100/40 CAU had four base and limit registers, so a program could access four 64k banks. New instructions were added to allow a program to change the contents of the banks, rather than the banks being fixed when the program was prepared. Sperry Rand sold a total of 290 processors in 1110 systems. Note: TTL Integrated circuits used in 1110 (1100/40) CAU, IOAU and Main Memory cabinets were ceramic 14-pin DIPs, where pins 4 and 10 were +5 volts and ground respectively: state-of-the-art in 1969. #3007500 - Integrated Circuit - IC32, Hex Inverter #3007501 - Integrated Circuit - IC33, Quad 2 Input NAND #3007502 - Integrated Circuit - IC34, Triple 3 Input NAND #3007503 - Integrated Circuit - IC35, Dual 4 Input NAND with Split Output #3007504 - Integrated Circuit - IC36, 8 Input NAND with Split Output #3007505 - Integrated Circuit - IC37, Quad 2 Input NOR #3007506 - Integrated Circuit - IC38, Dual And-Or Inverter-2 Wide OR, 2, 2 Input AND, with Split Output #3007507 - Integrated Circuit - IC39, Triple FLIP-FLOP with Set, Over-Ride, and Reset #3007508 - Integrated Circuit - IC40, Dual FLIP-FLOP, "D" Type #3007509 - Integrated Circuit - IC41, AND-OR Inverter-4 Wide OR, 2, 2, 3, 4 Input AND #3007603 - Integrated Circuit - IC50, Quad Two-Input Line Driver Part Numbers beginning with "3" originated in the Univac Blue Bell (Philadelphia), PA location. Part numbers beginning with "4" originated in the Roseville (St. Paul), MN location. Purchased Components group was in Blue Bell. Semiconductor memory series In 1975, Sperry Univac introduced a new series of machines with semiconductor memory replacing core, with a new naming convention: An upgraded 1106 was called the UNIVAC 1100/10.
In this new naming convention, the final digit represented the number of CPUs or CAUs in the system, so that, for example, a two-processor 1100/10 system was designated an 1100/12. An upgraded 1108 was called the UNIVAC 1100/20. An upgraded 1110 was released as the UNIVAC 1100/40. The biggest change was the replacement of the Type 7015 64K Plated Wire Memory cabinet with a new Type 7030 131K solid state (static RAM) Memory Cabinet. This allowed Main Storage to be expanded from a maximum of 262K to a maximum of 524K. The Type 7030 Main Memory cabinet still contained eight separate Memory Modules, but they were now 16K (38-bit words, 36 Data and 2 Parity), instead of 8K each. The Type 7013 131K Core Memory Cabinet (originally used on the later 1106 Systems as Main Storage) was also replaced with a Solid-State Memory Cabinet, based on Intel 1103A DRAM. The UNIVAC 1100/80 was introduced in 1979. It was intended to combine 1100 and 494 systems. As with the 1100/10, 1100/20 and 1100/40, the final digit represented the number of CAUs in the system. The 1100/80 introduced a high-speed cache memory - the SIU or Storage Interface Unit. The SIU contained either 8K, or (optionally) 16K 36-bit words of buffer memory, and was logically and physically positioned between the CAU(s)/IOU(s) and the (larger, slower) Main Memory units. The first version of the 1100/80 system could be expanded to a maximum of two CAUs, and two IOUs. A later version was expandable to four CAUs and four IOUs. The SIU control panel of the updated 1100/80 was able to logically and physically partition larger Multi-Processor configurations into completely independent systems, each with its separate Operating System. The CAU was capable of executing both 36-bit 1100 series instructions, and 30-bit 490 series instructions. The CAU contained the same basic register stack, in the first 128 words of addressable memory, as previous generations of 1100 Series machines, but since these registers were implemented with the same ECL chips as the rest of the system, the registers did not require parity to be generated/checked with each write/read. The IOU, or Input/Output Unit, was modular in design and could be configured with different Channel Modules to support varying I/O requirements. The Word Channel Module included four 1100 Series (parallel) Word Channels. Block Multiplexer and Byte Channel Modules allowed direct connection of high-speed disk/tape systems, and low speed printers, etc. respectively. The Control/Maintenance Panel was now on the SIU, and provided a minimum of indicators/buttons since the system incorporated a mini-computer, based on the BC/7 (business computer), as a maintenance processor. This was used to load microcode, and for diagnostic purposes. The CAU, IOU, and SIU units were implemented using emitter-coupled logic (ECL) on high density multi-layer PC boards. The ECL circuitry utilized DC voltages of +0 and -2 volts, with the CAU requiring four 50 amp -2 volt power supplies. Power was 400 Hz, to reduce large scale DC power supplies. The 400 Hz power was supplied by a motor/alternator, because even though solid state 400 Hz inverters were available, they were not considered reliable enough to meet the system uptime requirements.
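The SIU described above is, in essence, a small fast buffer consulted before the larger, slower main storage. The sketch below illustrates only that general buffer-memory idea; the direct-mapped organization, sizes, and read-only behavior are assumptions made for illustration, not the 1100/80 SIU's actual design.

```python
# Generic illustration of a small buffer (cache) in front of slower main
# storage; organization and sizes are assumptions, not the 1100/80 SIU design.

class BufferedStorage:
    def __init__(self, main_storage, buffer_words=8 * 1024):
        self.main = main_storage            # list simulating main-storage words
        self.size = buffer_words
        self.tags = [None] * buffer_words   # which address each slot currently holds
        self.data = [0] * buffer_words
        self.hits = self.misses = 0

    def read(self, address):
        slot = address % self.size          # direct-mapped placement
        if self.tags[slot] == address:      # fast path: word already buffered
            self.hits += 1
        else:                               # slow path: fetch from main storage
            self.misses += 1
            self.tags[slot] = address
            self.data[slot] = self.main[address]
        return self.data[slot]

main = list(range(64 * 1024))
storage = BufferedStorage(main)
for _ in range(3):                          # repeated access to a small working set
    for addr in range(1024):
        storage.read(addr)
print(storage.hits, storage.misses)         # 2048 hits, 1024 misses
```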
An 1100/84 Multiprocessor 4x2 system, in two clusters (could be "partitioned" into two separate systems), including four CPU cabinets, two IOU cabinets, two SIU buffer storage units (16K words each) and 2,096K words of Main Memory (backing storage) in four cabinets, two System Maintenance Units (SMU), two Motor Alternators, a transition unit, and two System Consoles, had a list price of $5,414,871 in October 1980. This configuration could be rented for $127,764 per month, or leased (5 year) for $95,844 per month. Monthly maintenance was $10,235 on this configuration. It was fairly common to discount list prices for large and/or Government customers. The UNIVAC 1100/60 was introduced in 1979. It replaced the 1108/1106-based 1100/10 and 1100/20 systems. The 1100/60 System was available in both Single Processor 1100/61 (Model C1) and Dual Processor 1100/62 (Model H1) configurations. It was implemented using custom Sperry Univac designed Micro-Processor Integrated Circuits. Main Storage (524K to 1048K words per CPU), optional Semiconductor Buffer Storage (up to 8K words per CPU), and the Input/Output Unit (IOU) were contained in the CPU cabinet. The IOU (optionally) supported both Block and Word Channels. The system also included a System Support Processor for diagnostic testing and system console support. An 1100/62 Model E1 (upgraded version) - Medium Performance Multiprocessor Complex - two CPUs with 2K Buffer Storage, two IOUs with one Block Mux, and one Word Channel module (four channels), 1048K words of Main Storage, two System Support Processors, two System Consoles, and a Maintenance Console listed for $889,340 in March 1980. This configuration could be rented for $21,175 per month, or leased (5 year) for $16,780 per month. Monthly maintenance was $3,000 on this configuration. As with the 1100/80 System, discounting was common for large and/or Government customers. The UNIVAC 1100/70 was introduced in 1981. The technology was an upgraded version of the 1100/60 design. It replaced the 1110-based 1100/40 systems. The UNIVAC 1100/90 was introduced in 1982. As with the 1100/80, it was available with up to four processors, and four I/O units. It was the largest, and final, member of the 1100 Series, and was the only system to be liquid-cooled. The Sperry Integrated Scientific Processor (ISP) is an attachment to the 1100/90. SPERRY 2200 series In 1983, Sperry Corporation discontinued the name UNIVAC for their products. SPERRY 2200/100 introduced in 1985 SPERRY Integrated Scientific Processor introduced in 1985 UNISYS 2200 series In 1986, Sperry Corporation merged with Burroughs Corporation to become Unisys, and this corporate name change was henceforth reflected in the system names. Each of the systems listed below represents a family with similar characteristics and architecture, with family members having different performance profiles.
UNISYS 2200/200 introduced in 1986 UNISYS 2200/400 introduced in 1988 UNISYS 2200/600 introduced in 1989 UNISYS 2200/100 introduced in 1990 UNISYS 2200/500 introduced in 1993 UNISYS 2200/900 introduced in 1993 UNISYS 2200/300 introduced in 1995 UNISYS ClearPath IX4400 introduced in 1996 UNISYS ClearPath IX4800 introduced in 1997 UNISYS 2200/3800 introduced in 1997 UNISYS ClearPath IX5600 introduced in 1998 UNISYS ClearPath IX5800 introduced in 1998 UNISYS ClearPath IX6600 introduced in 1999 UNISYS ClearPath IX6800 introduced in 1999 UNISYS ClearPath Plus CS7800 introduced in 2001 (renamed Dorado 180 in 2003) UNISYS ClearPath Plus CS7400 introduced in 2002 (renamed Dorado 140 in 2003) UNISYS ClearPath Dorado 100 introduced in 2003 UNISYS ClearPath Dorado 200 introduced in 2005 UNISYS ClearPath Dorado 300 introduced in 2005 UNISYS ClearPath Dorado 400 introduced in 2007 UNISYS ClearPath Dorado 4000 introduced in 2008 UNISYS ClearPath Dorado 700 introduced in 2009 UNISYS ClearPath Dorado 4100 introduced in 2010 UNISYS ClearPath Dorado 800 introduced in 2011 UNISYS ClearPath Dorado 4200 introduced in 2012 UNISYS ClearPath Dorado 4300 introduced in 2014 UNISYS ClearPath Dorado 6300 introduced in 2014 UNISYS ClearPath Dorado 8300 introduced in 2015 UNISYS ClearPath IX series In 1996, Unisys introduced the ClearPath IX series. The ClearPath machines are a common platform that implement either the 1100/2200 architecture (the ClearPath IX series) or the Burroughs large systems architecture (the ClearPath NX series). Everything is common except the actual CPUs, which are implemented as ASICs. In addition to the IX (1100/2200) CPUs and the NX (Burroughs large systems) CPU, the architecture had Xeon (and briefly Itanium) CPUs. Unisys' goal was to provide an orderly transition for their 1100/2200 customers to a more modern architecture. See also List of UNIVAC products Unisys OS 2200 operating system Unisys 2200 Series system architecture IBM 7090 IBM's top-of-the-line 36-bit computer series of the late 1950s. References External links UNIVAC Memories A history of Univac computers and Operating Systems (PDF file) UNIVAC timeline UNIVAC 1108-II The big system with the big reputation (PDF) The Case 1107 UNIVAC Thin-Film Memory Computer 1107 UNIVAC 1107 documentation on Bitsavers.org 1105 1100 2200 series 36-bit computers Unisys
62312434
https://en.wikipedia.org/wiki/Universal%20USB%20Installer
Universal USB Installer
Universal USB Installer (UUI) is open-source live Linux USB flash drive creation software. It allows users to create a bootable live USB flash drive using an ISO image from a supported Linux distribution, antivirus utility, system tool, or Microsoft Windows installer. UUI was originally created by Lance. Features Creates a bootable live USB flash drive of many Linux distributions Optionally creates a persistent file for saving changes made from the running environment back to the flash drive Provides additional information regarding each distribution, including category, website URL, and download link for quick reference Uses formatting methods that allow the USB flash drive to remain accessible for other storage purposes Allows unsupported (unlisted) ISO files to be tried via several generic unlisted ISO options Example supported Linux distributions Ubuntu, Kubuntu and Xubuntu Debian Live Linux Mint Kali Linux OpenSUSE Fedora Damn Small Linux Puppy Linux PCLinuxOS CentOS GParted Clonezilla Reception It's FOSS editor wrote that Universal USB Installer is his "favorite tool and is extremely easy to use." Lifehacker called it "useful". See also List of tools to create Live USB systems References External links Free system software Live USB
4092580
https://en.wikipedia.org/wiki/Software%20Communications%20Architecture
Software Communications Architecture
The Software Communications Architecture (SCA) is an open architecture framework that defines a standard way for radios to instantiate, configure, and manage waveform applications running on their platform. The SCA separates waveform software from the underlying hardware platform, facilitating waveform software portability and re-use to avoid costs of redeveloping waveforms. The latest version is SCA 4.1. Overview The SCA is published by the Joint Tactical Networking Center (JTNC). This architecture was developed to assist in the development of Software Defined Radio (SDR) communication systems, capturing the benefits of recent technology advances which are expected to greatly enhance interoperability of communication systems and reduce development and deployment costs. The architecture is also applicable to other embedded, distributed-computing applications such as Communications Terminals or Electronic Warfare (EW). The SCA has been structured to: Provide for portability of applications software between different SCA implementations, Leverage commercial standards to reduce development cost, Reduce software development time through the ability to reuse design modules, and Build on evolving commercial frameworks and architectures. The SCA is deliberately designed to meet commercial application requirements as well as those of military applications. Since the SCA is intended to become a self-sustaining standard, a wide cross-section of industry has been invited to participate in the development and validation of the SCA. The SCA is not a system specification but an implementation-independent set of rules that constrain the design of systems to achieve the objectives listed above. Core Framework The Core Framework (CF) defines the essential "core" set of open software interfaces and profiles that provide for the deployment, management, interconnection, and intercommunication of software application components in an embedded, distributed-computing communication system. In this sense, all interfaces defined in the SCA are part of the CF. Standard Waveform Application Programming Interfaces (APIs) The Standard Waveform APIs define the key software interfaces that allow the waveform application and radio platform to interact. The SCA uses these APIs to separate waveform software from the underlying hardware platform, facilitating waveform software portability and re-use and avoiding the cost of redeveloping waveforms. Development Tools Reservoir Labs' R-Check - SCA Compliance Testing NordiaSoft eCo Suite - SCA 4.1 Integrated Development Environment and Core Framework ADLINK Spectra CX4 - SCA 4.1 Model Driven Tools Top News Software Communications Architecture v4.1 was entered into the Department of Defense (DoD) Information Technology (IT) Standards Registry (DISR) as a mandated standard External links Software Communications Architecture Homepage Introduction to SCA Part I (Video) Introduction to SCA Part II (Video) SCA 4.1 Release Webinar SCA 2.2.2 Migration to SCA 4.1 (Video) Cobham Development Platform SCA and FACE Alignment SCA 4.1 Required in Major U.S. Navy Acquisition Navy Requires Open Architecture Wireless Innovation Forum - International Consortium Adoption by Germany Adoption by India Increasing Flexibility in Wireless SDR Systems New - R&S SDTR Link protocols Military radio systems Mobile telecommunications standards Radio technology
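As a rough illustration of the separation described above — a framework instantiating, configuring, and managing waveform components through a common interface so the waveform stays independent of the radio hardware beneath it — the following is a hedged Python analogue. The class and method names are assumptions for illustration only, not the normative SCA (CORBA/IDL) interfaces.

```python
# Illustrative sketch of the waveform/platform separation idea; names and
# lifecycle methods are assumptions, not the actual SCA Core Framework API.

from abc import ABC, abstractmethod

class WaveformComponent(ABC):
    """Lifecycle that every deployable component exposes to the framework."""
    @abstractmethod
    def configure(self, properties: dict) -> None: ...
    @abstractmethod
    def start(self) -> None: ...
    @abstractmethod
    def stop(self) -> None: ...

class NarrowbandWaveform(WaveformComponent):
    """Hypothetical waveform; knows nothing about the hardware beneath it."""
    def configure(self, properties):
        self.frequency_hz = properties.get("frequency_hz", 30_000_000)
    def start(self):
        print(f"waveform running at {self.frequency_hz} Hz")
    def stop(self):
        print("waveform stopped")

class CoreFrameworkSketch:
    """Deploys components from a profile without knowing their internals."""
    def launch(self, component: WaveformComponent, properties: dict):
        component.configure(properties)
        component.start()
        return component

cf = CoreFrameworkSketch()
wf = cf.launch(NarrowbandWaveform(), {"frequency_hz": 52_000_000})
wf.stop()
```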
11738155
https://en.wikipedia.org/wiki/Spatiotemporal%20Epidemiological%20Modeler
Spatiotemporal Epidemiological Modeler
The Spatiotemporal Epidemiological Modeler (STEM) is free software available through the Eclipse Foundation. Originally developed by IBM Research, STEM is a framework and development tool designed to help scientists create and use spatial and temporal models of infectious disease. STEM uses a component software architecture based on the OSGi standard. The Eclipse Equinox platform is a reference implementation of that standard. By using a component software architecture, all of the components or elements required for a disease model, including the code and the data, are available as software building blocks that can be independently exchanged, extended, reused, or replaced. These building blocks are called Eclipse "plug-ins" or "extensions". STEM plug-ins contain denominator data for administrative regions of interest. The regions are indexed by standard (ISO3166) codes. STEM currently includes a large number of plug-ins for the 244 countries and dependent areas defined by the Geographic Coding Standard maintained by the International Organization for Standardization. These plug-ins contain global data including geographic data, population data, demographics, and basic models of disease. The disease models distributed with STEM include epidemiological compartment models. Other plug-ins describe relationships between regions including nearest-neighbor or adjacency relationships as well as information about transportation, such as connections by roads and a model of air transportation. Relationships between regions can then be included in models of how a disease spreads from place to place. To accomplish this, STEM represents the world as a "graph". The nodes in the graph correspond to places or regions, and the edges in the graph describe relationships or connections between regions. Both the nodes and the edges can be labeled or "decorated" with a variety of denominator data and models. This graphical representation is implemented using the Eclipse Modeling Framework (EMF). Since a model can be built up using separate subgraphs, STEM enables model composition. Predefined subgraphs defining different countries can be assembled with a drag and drop interface. New disease vectors can simply be added to existing models by augmenting the model with a new set of edges. The architecture also supports collaboration, as users can not only create new models and compose new scenarios but also exchange these models and scenarios as reusable components and thereby build on each other's work. As an open source project, users are encouraged to create their own plug-ins (both data and models) and, if appropriate, to contribute their work back to the project. External links STEM Free science software Free health care software Epidemiology Public health and biosurveillance software IBM software
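The representation described above — regions as graph nodes, adjacency or transportation links as edges, and an epidemiological compartment model running at each node — can be sketched in a few lines. The SIR equations, parameter values, seeding, and coupling rule below are generic illustrations of that approach, not STEM's actual model code or data.

```python
# Generic sketch of an SIR compartment model on a graph of regions; parameters,
# populations, and the coupling rule are illustrative, not STEM's code.

regions = {"A": 1_000_000, "B": 500_000}            # node -> population
edges = [("A", "B", 0.01)]                          # (region, region, mixing weight)

beta, gamma, dt = 0.3, 0.1, 1.0                     # transmission, recovery, step (days)
state = {r: {"S": pop - (1000 if r == "A" else 0),  # seed an outbreak in region A
             "I": 1000 if r == "A" else 0,
             "R": 0.0} for r, pop in regions.items()}

def step(state):
    new = {}
    for r, pop in regions.items():
        s, i = state[r]["S"], state[r]["I"]
        # force of infection: local prevalence plus weighted neighbour prevalence
        foi = beta * i / pop
        for a, b, w in edges:
            if b == r:
                foi += beta * w * state[a]["I"] / regions[a]
            if a == r:
                foi += beta * w * state[b]["I"] / regions[b]
        infections = foi * s * dt
        recoveries = gamma * i * dt
        new[r] = {"S": s - infections,
                  "I": i + infections - recoveries,
                  "R": state[r]["R"] + recoveries}
    return new

for day in range(60):
    state = step(state)
print({r: round(c["I"]) for r, c in state.items()})  # infectious count per region
```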
40010333
https://en.wikipedia.org/wiki/Noel%20Chiappa
Noel Chiappa
Joseph Noel Chiappa (b. 1956 Bermuda) is an Internet pioneer. He is a US resident and a retired researcher working in the area of information systems architecture and software, principally computer networks. Education Chiappa attended Saltus Grammar School in Bermuda, and Phillips Academy and MIT in the US. Career As a staff researcher and Internet technology pioneer at the MIT Laboratory for Computer Science, Chiappa co-invented the multi-protocol router. In addition to wide use at MIT, that router was later used at Stanford in 1982; other multi-protocol routers at Stanford were invented independently by William Yeager. The MIT multi-protocol router became the basis of the multi-protocol router from Proteon, Inc., the first commercially available multi-protocol router (January 1986). Chiappa was the first to propose and design the original version of the Trivial File Transfer Protocol (TFTP), which was later revised by others, including Bob Baldwin, Dave Clark, and Steve Szymanski. He is acknowledged in several other RFCs, such as RFC-826, RFC-919 and RFC-950. He has worked extensively on the Locator/Identifier Separation Protocol (LISP). In 1992, Chiappa was also credited with fixing the "Sorcerer's Apprentice" protocol bug as well as other document problems. Chiappa is listed on the "Birth of the Internet" plaque at the entrance to the Gates Computer Science Building, Stanford. He served as the first Internet Area Director on the Internet Engineering Steering Group, from 1989 to 1992. From 2012, Chiappa was working on long-term issues in both the Internet Research Task Force and the Internet Engineering Task Force and its predecessors; he served as the Area Director for Internet Services of the Internet Engineering Steering Group from 1987 to 1992. He was also involved in the development of IP next generation (IPng). A report, for instance, documented his objection to the IPng selection process and cited his alternative IPng project called Nimrod. Other interests Among many non-technical interests, he is particularly interested in Japanese woodblock prints, and helps maintain online catalogue raisonnés for two major woodblock artists, Tsukioka Yoshitoshi and Utagawa Hiroshige II. Personal life Chiappa lives in Yorktown, Virginia with his family. Notes External links Official homepage RFC-1251 "Who's Who in the Internet: Biographies of IAB, IESG and IRSG Members Catalogue Raisonné of the Work of Tsukioka Yoshitoshi (1839-1892) Catalogue Raisonné of the Work of Utagawa Hiroshige II (1826-1869) 1956 births Living people History of the Internet Bermudian emigrants to the United States People from Yorktown, Virginia
18334481
https://en.wikipedia.org/wiki/ROHR2
ROHR2
ROHR2 is a pipe stress analysis CAE system from SIGMA Ingenieurgesellschaft mbH, based in Unna, Germany. The software performs both static and dynamic analysis of complex piping and skeletal structures, and runs on the Microsoft Windows platform. ROHR2 software comes with built-in industry-standard stress codes such as ASME B31.1, B31.3, B31.4, B31.5, B31.8, EN 13480 and CODETI, along with several GRP pipe codes, as well as nuclear stress codes such as ASME Cl. 1-3, KTA 3201.2 and KTA 3211.2. Name The brand name comes from the German word "Rohr" (pronounced as "ROAR"), which means "pipe". History Early years as an MBP product: 1960s to 1989 ROHR2 was created in the late 1960s by one of the first software companies in Germany, Mathematischer Beratungs- und Programmierungsdienst (MBP), based in Dortmund. ROHR2 first ran on mainframes such as UNIVAC 1, CRAY, and later Prime computer. At the time, the program was command line driven with a proprietary programming language to describe the piping systems and define the various load conditions. Version 26, launched in 1987, was released for the IBM PC as well as IBM PC compatible systems. As an EDS / SIGMA product: 1989 to 2000 MBP was later taken over by EDS (then a part of General Motors Corp., now part of HP Enterprise Services). In 1989, SIGMA Ingenieurgesellschaft mbH was founded in Dortmund, and the ROHR2 development and support team moved to the new office premises of SIGMA. A graphical user interface was added to the product in 1994, allowing the editing of piping systems without the need to master the previously required programming language. SIGMA Ingenieurgesellschaft mbH product: 2000 to present From the year 2000 onwards, the complete licensing and sales activities came under the management of SIGMA Ingenieurgesellschaft mbH, which had by then evolved into an engineering company specializing in pipe engineering, as well as a software development firm. The recent developments include new bi-directional interfaces based on open standards for transfer of data with other CAD/CAE products such as AVEVA PDMS, CADISON, Intergraph's PDS, Intergraph's SmartPlant, HICAD, MPDS4, Bentley System's AutoPLANT, Autodesk's PLANT3D and other PCF supported software. The integration of ROHR2 into the user's workflow is supported by third-party interface products to ensure interoperability - a norm in the present engineering software industry. Software environment The ROHR2 program system consists of ROHR2win (the graphical user interface), the ROHR2 calculation core, and various additional programs (see: related products). Calculation basics The static analysis includes the calculation of static loads of any value or combination in accordance with first- and second-order theory for linear and non-linear boundary conditions (friction, support lift). Additional load conditions can also be applied, such as dynamic loads or harmonic excitation. Furthermore, the dynamic analysis includes the calculation of eigenvalues and mode shapes as well as their processing in various modal response methods, for the analysis of, for example, earthquakes and fluid hammer. A non-linear time history module (ROHR2stoss) allows the analysis of dynamic events in the time domain, while taking into account non-linear components such as snubbers or visco dampers based on the Maxwell model. An efficient superposition module enables a manifold selection and combination of static and dynamic results.
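The eigenvalue and mode-shape calculation mentioned above reduces, in its simplest form, to the generalized eigenvalue problem K phi = omega^2 M phi for a stiffness matrix K and a mass matrix M. A minimal sketch for a lumped two-mass model follows; the matrices and values are illustrative assumptions only and have no connection to ROHR2's input or output formats.

```python
# Minimal illustration of a modal (eigenvalue) analysis for a lumped two-mass
# model: solve K @ phi = w^2 * M @ phi. Matrix values are illustrative only.
import numpy as np

M = np.diag([2.0, 1.0])                  # masses in kg
K = np.array([[ 600.0, -200.0],          # stiffnesses in N/m
              [-200.0,  200.0]])

# Generalized eigenvalue problem solved via M^-1 K
eigvals, eigvecs = np.linalg.eig(np.linalg.solve(M, K))
eigvals, eigvecs = eigvals.real, eigvecs.real   # eigenvalues are real here

order = np.argsort(eigvals)
omegas = np.sqrt(eigvals[order])         # natural circular frequencies, rad/s
freqs_hz = omegas / (2 * np.pi)

for f, shape in zip(freqs_hz, eigvecs[:, order].T):
    normalized = np.round(shape / np.abs(shape).max(), 2)
    print(f"mode at {f:.2f} Hz, shape {normalized}")
```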
Related products ROHR2fesu - Finite Element Analysis of Substructures in ROHR2 ROHR2iso - Creation of isometric drawings in ROHR2 ROHR2stoss - Structural Analysis with Dynamic Loads using Direct Integration ROHR2nozzle - Analysis of nozzles in piping systems according to API 610, 617, 661, NEMA SM23, DIN EN ISO 5199, 9905, 10437 and others ROHR2press - Internal pressure analysis of piping components SINETZ - Steady State Calculation of Flow Distribution, Pressure Drop and Heat Loss in Branched and Intermeshed Piping Networks for compressible and incompressible media SINETZfluid - Calculation of Flow Distribution and Pressure Drop of incompressible Media in Branched and Intermeshed Piping Networks PROBAD - Code-based strength calculations of pressure parts. See also Pipe Stress analysis References External links ROHR2 Homepage English Structural analysis Computer-aided design Computer-aided design software for Windows
23155924
https://en.wikipedia.org/wiki/Ember%20%28company%29
Ember (company)
Ember was an American company based in Boston, Massachusetts, USA, which is now owned by Silicon Labs. Ember had a radio development centre in Cambridge, England, and distributors worldwide. It developed ZigBee wireless networking technology that enabled companies involved in energy technologies to help make buildings and homes smarter, consume less energy, and operate more efficiently. The low-power wireless technology can be embedded into a wide variety of devices to be part of a self-organizing mesh network. All Ember products conform to IEEE 802.15.4-2003 standards. In May 2012, Ember was acquired by Silicon Labs. History Ember was founded in 2001 by Andrew Wheeler and Robert Poor. Both were students at MIT when they founded Ember with $3 million in seed funding led by Polaris Venture Partners with DFJ New England, Stata Venture Partners, and Bob Metcalfe. The company began by making mesh networking software for other companies' microchips and has since evolved to manufacturing ZigBee compliant chips itself. In 2003 Ember released its first chip, the EM2420, which was fully compliant with IEEE 802.15.4-2003 standards. Since then, Ember has released the EM260 ZigBee network co-processor and the EM250, ZigBee system-on-chip (SoC), and EmberZNet ZigBee Software in 2005. In 2007, the EmberZNet PRO was launched to provide software that supports the ZigBee PRO Feature Set. The Smart Energy Suite and new versions of ZigBee Development Tools came out in 2008. In 2009 Ember released its third generation chips, the EM300 series. In May 2012, Ember was acquired by Silicon Labs. Products Ember produces ZigBee chips, software and development tools. Chips EM351 integrates a programmable ARM Cortex-M3 processor, IEEE 802.15.4 RF transceiver, 128kB of Flash, 12 KB RAM, and the EmberZNet PRO network protocol stack which supports the ZigBee PRO Feature Set. EM357 incorporates the features of the EM351 but has 192 KB of Flash for applications that require more memory. EM250 SoC combines a radio transceiver with a 16-bit XAP2 microprocessor. It has embedded mesh networking software, on-chip debugging, 128kB of Flash and 5kB of RAM. It was designed for applications that require long battery life, low external component count, and a reliable networking solution. EM260 Co-Processor combines a radio transceiver with a flash-based microprocessor. The interface allows application development with any microcontroller and tool-chain. Like the EM250 it was designed for applications that require long battery life, low external component count, and a reliable networking solution. EM2420 was the first chip Ember created. It has since become obsolete and has been replaced by second and third generation Ember chips. Software EmberZNet PRO is a ZigBee protocol software package that runs the mesh networking applications. It provides networking for applications such as Advanced Metering Infrastructure (AMI), home automation Networks (HANs), and building automation systems. It is compliant with all the Ember chips. Development Tools InSight Development Kits provide the hardware and software tools needed for application development. InSight Desktop allows for the programming and debugging of applications. It combines a packet sniffer, network analysis features, API tracing, and a virtual UART. InSight Adapter is used for network and microprocessor debugging and for programming chips. InSight USB Link is a FLASH programming device that connects to any PC via USB and to Ember's Radio Control Module (RCM).
It contains the hardware and software tools that read and write applications and program FLASH memory on the chips. AppBuilder makes network customization possible. It generates a template application that allows developers to tailor the EmberZNet PRO software to their specifications and complete the application, readying it for hardware integration and testing. It also allows configuration of the Hardware Abstraction Layer (HAL) and generates source code application with places for the developer to insert their own OEM-specific code. xIDE is a tool-chain that supports applications being written for the EM250. It has a C-language compiler, assembler, source-level debugger, and graphical editing environment. Applications Advanced Metering Infrastructure (AMI)/Advanced Meter Reading (AMR): provides two way meter communications, allowing commands to be sent toward the home for multiple purposes, including โ€œtime-of-useโ€ pricing information, demand-response actions, or remote service disconnects. During periods of peak demand, utilities use these networks to throttle high-load devices in participating homes. Utilities may also institute time-of-use pricing schemes, where the home area network (HAN) is used to communicate the current price of energy to the consumer. Home Automation: allows household devices such as light switches and fixtures; thermostats and sensors; music, video and speaker systems; security controllers and appliances to network with one another to wirelessly automate the home. Hospitality: allows for wireless networking of the devices in the room without the need for a retrofit. The doors and the devices in the rooms can also be remotely monitored from a central location. Building Automation: provides building owners and property managers with HVAC, lighting, access and refrigeration control to monitor energy usage in real time to create more energy efficient environments. Asset Management: allows for remote monitoring and tracking of assets and cold chain. It also provides container security in shipping. Industrial Automation: wirelessly networks and automates industrial processes and allows for temperature, pressure and level sensing as well as providing temperature and flow control. Defense: provides both battlefield and shipboard monitoring. See also ZigBee ZigBee specification IEEE 802.15.4-2003 mesh networking home automation building automation References External links Zigbee Alliance homepage IEEE 802.15.4 web site Xconomy Home automation companies Building automation Personal area networks
51262279
https://en.wikipedia.org/wiki/Don%20Betourne
Don Betourne
Donald Joseph Betourne (February 27, 1915 โ€“ March 18, 2002) was an American professional basketball player and head coach. He played in the National Basketball League for the Kankakee Gallagher Trojans during the 1937โ€“38 season. Betourne served as a player-coach for the Trojans (the only year the team existed). He played at St. Viator College prior to his time in the NBL. References 1915 births 2002 deaths American men's basketball players Basketball coaches from Illinois Basketball players from Illinois Forwards (basketball) Kankakee Gallagher Trojans coaches Kankakee Gallagher Trojans players People from Bourbonnais, Illinois Player-coaches Sportspeople from the Chicago metropolitan area St. Viator Irish basketball players
44442903
https://en.wikipedia.org/wiki/Identity%20interrogation
Identity interrogation
Identity interrogation is a method of authentication or identity proofing that involves posing one or more knowledge-based authentication questions to an individual. Typical identity interrogation questions include "What is your mother's maiden name?" and "What are the last four digits of your social security number?" It is a method businesses use to prevent identity theft or impersonation of customers. Identity interrogation is primarily employed during remote interactions, as opposed to in-person interactions such as speaking with a teller at a bank. Many interactions that require user authentication over the Internet or the telephone employ identity interrogation as a substitute for stronger authentication methods such as physical ownership authentication (e.g., presenting a driver's license or a bankcard) or biometrics (e.g., fingerprint or facial recognition), which are available mainly during in-person interactions. Identity interrogation is used to assist with risk management, account security, and legal and regulatory compliance during remote interactions. In addition, the technique was developed to assist in the prevention of identity fraud, or the illegal use of another person's identity to commit fraud or other criminal activities. Identity interrogation methods are most commonly used by governments, organizations and companies such as banks or financial intermediaries, credit card companies, internet providers, telecommunications companies, insurance providers and others. See also TRUSTID Notes Computer network security Identity management
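A minimal sketch of the kind of knowledge-based check described above — comparing a caller's answers against values captured when the account was opened — follows. The question set, digest scheme, and pass threshold are illustrative assumptions, not any vendor's implementation; real deployments layer this with additional risk controls.

```python
# Illustrative sketch of a knowledge-based authentication check; the questions,
# hashing, and pass threshold are assumptions, not a specific product's design.
import hashlib, hmac

def fingerprint(answer: str) -> str:
    """Store only a digest of the normalized answer, never the answer itself."""
    return hashlib.sha256(answer.strip().lower().encode()).hexdigest()

enrolled = {  # captured at account opening (values here are made up)
    "What is your mother's maiden name?": fingerprint("Rivera"),
    "What are the last four digits of your SSN?": fingerprint("1234"),
    "What city were you born in?": fingerprint("Dayton"),
}

def interrogate(responses: dict, required: int = 2) -> bool:
    """Pass if enough answers match; compare_digest avoids timing leaks."""
    correct = sum(
        hmac.compare_digest(fingerprint(answer), enrolled[question])
        for question, answer in responses.items()
        if question in enrolled
    )
    return correct >= required

print(interrogate({
    "What is your mother's maiden name?": "rivera",
    "What are the last four digits of your SSN?": "9999",
    "What city were you born in?": "Dayton",
}))  # True: two of the three answers match
```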
241409
https://en.wikipedia.org/wiki/Astroturfing
Astroturfing
Astroturfing is the practice of masking the sponsors of a message or organization (e.g., political, advertising, religious or public relations) to make it appear as though it originates from and is supported by grassroots participants. It is a practice intended to give the statements or organizations credibility by withholding information about the source's financial connection. The term astroturfing is derived from AstroTurf, a brand of synthetic carpeting designed to resemble natural grass, as a play on the word "grassroots". The implication behind the use of the term is that instead of a "true" or "natural" grassroots effort behind the activity in question, there is a "fake" or "artificial" appearance of support. Definition In political science, it is defined as the process of seeking electoral victory or legislative relief for grievances by helping political actors find and mobilize a sympathetic public, and is designed to create the image of public consensus where there is none. Astroturfing is the use of fake grassroots efforts that primarily focus on influencing public opinion and typically are funded by corporations and governmental entities to form opinions. On the internet, astroturfers use software to mask their identity. Sometimes one individual operates through many personas to give the impression of widespread support for their client's agenda. Some studies suggest astroturfing can alter public viewpoints and create enough doubt to inhibit action. In the first systematic study of astroturfing in the United States, Oxford Professor Philip N. Howard argued that the internet was making it much easier for powerful lobbyists and political movements to activate small groups of aggrieved citizens to have an exaggerated importance in public policy debates. Astroturfed accounts on social media do not always require humans to write their posts; one January 2021 study detailed a "set of human-looking bot accounts" used to post political content, which was able to operate automatically for fourteen days (and make 1,586 posts) before being detected and suspended by Twitter. Policies and enforcement Many countries have laws that prohibit more overt astroturfing practices. In the United States, the Federal Trade Commission (FTC) may send cease-and-desist orders or require a fine of $16,000 per day for those that violate its "Guides Concerning the Use of Endorsements and Testimonials in Advertising". The FTC's guides were updated in 2009 to address social media and word-of-mouth marketing. According to an article in the Journal of Consumer Policy, the FTC's guides holds advertisers responsible for ensuring bloggers or product endorsers comply with the guides, and any product endorsers with a material connection are required to provide honest reviews. In the European Union, the Unfair Commercial Practices Directive requires that paid-for editorial content in the media provide a clear disclosure that the content is a sponsored advertisement. Additionally, it prohibits those with a material connection from misleading readers into thinking they are a regular consumer. The United Kingdom has the Consumer Protection from Unfair Trading Regulations, which prohibits "Falsely representing oneself as a consumer." They allow for up to two years in prison and unlimited fines for breaches. Additionally, the advertising industry in the UK has adopted many voluntary policies, such as the Code of Non-Broadcast Advertising, Sale, Promotion and Direct Marketing. 
A trade association, the Advertising Standards Authority, investigates complaints of breaches. The policy requires that marketing professionals not mislead their audience, including by omitting a disclosure of their material connection. In Australia, astroturfing is regulated by Section 18 of the Australian Consumer Law, which broadly prohibits "misleading and deceptive conduct". According to the Journal of Consumer Policy, Australia's laws, which were introduced in 1975, are more vague. In most cases, they are enforced through lawsuits from competitors, rather than the regulatory body, the Australian Competition & Consumer Commission. There is also an International Consumer Protection and Enforcement Network (ICPEN). Legal regulations are primarily targeted towards testimonials, endorsements and statements as to the performance or quality of a product. Employees of an organization may be considered acting as customers if their actions are not guided by authority within the company. In October 2018, after denying that they had paid for people to show up in support of a controversial power plant development project in New Orleans, Entergy was fined five million dollars for using astroturf firm The Hawthorn Group to provide actors to prevent real community members' voices from being counted at city council meetings and show false grassroots support. Debate Effectiveness In the book Grassroots for Hire: Public Affairs Consultants in American Democracy, Edward Walker defines "astroturfing" as public participation that is perceived as heavily incented, as fraudulent (claims are attributed to those who did not make such statements), or as an elite campaign masquerading as a mass movement. Although not all campaigns by professional grassroots lobbying consultants meet this definition, the book finds that the elite-sponsored grassroots campaigns often fail when they are not transparent about their sources of sponsorship and/or fail to develop partnerships with constituencies that have an independent interest in the issue. Walker highlights the case of Working Families for Wal-Mart, in which the campaign's lack of transparency led to its demise. A study published in the Journal of Business Ethics examined the effects of websites operated by front groups on students. It found that astroturfing was effective at creating uncertainty and lowering trust about claims, thereby changing perceptions that tend to favor the business interests behind the astroturfing effort. The New York Times reported that "consumer" reviews are more effective, because "they purport to be testimonials of real people, even though some are bought and sold just like everything else on the commercial Internet." Some organizations feel that their business is threatened by negative comments, so they may engage in astroturfing to drown them out. Online comments from astroturfing employees can also sway the discussion through the influence of groupthink. Justification Some astroturfing operatives defend their practice. Regarding "movements that have organized aggressively to exaggerate their sway," author Ryan Sager said that this "isn't cheating. Doing everything in your power to get your people to show up is basic politics." According to a Porter/Novelli executive, "There will be times when the position you advocate, no matter how well framed and supported, will not be accepted by the public simply because you are who you are." 
Impact on society Data mining expert Bing Liu (University of Illinois) estimated that one-third of all consumer reviews on the Internet are fake. According to The New York Times, this has made it hard to tell the difference between "popular sentiment" and "manufactured public opinion". According to an article in the Journal of Business Ethics, astroturfing threatens the legitimacy of genuine grassroots movements. The authors argued that astroturfing that is "purposefully designed to fulfill corporate agendas, manipulate public opinion and harm scientific research represents a serious lapse in ethical conduct." A 2011 report found that often paid posters from competing companies are attacking each other in forums and overwhelming regular participants in the process. George Monbiot said that persona-management software supporting astroturfing "could destroy the Internet as a forum for constructive debate". An article in the Journal of Consumer Policy said that regulators and policy makers needed to be more aggressive about astroturfing. The author said that it undermines the public's ability to inform potential customers of sub-standard products or inappropriate business practices, but also noted that fake reviews were difficult to detect. Techniques Use of one or more front groups is one astroturfing technique. These groups typically present themselves as serving the public interest, while actually working on behalf of a corporate or political sponsor. Front groups may resist legislation and scientific consensus that is damaging to the sponsor's business by emphasizing minority viewpoints, instilling doubt and publishing counterclaims by corporate-sponsored experts. Fake blogs can also be created that appear to be written by consumers, while actually being operated by a commercial or political interest. Some political movements have provided incentives for members of the public to send a letter to the editor at their local paper, often using a copy and paste form letter that is published in dozens of newspapers verbatim. Another technique is the use of sockpuppets, where a single person creates multiple identities online to give the appearance of grassroots support. Sockpuppets may post positive reviews about a product, attack participants that criticize the organization, or post negative reviews and comments about competitors, under fake identities. Astroturfing businesses may pay staff based on the number of posts they make that are not flagged by moderators. Persona management software may be used so that each paid poster can manage five to seventy convincing online personas without getting them confused. Online astroturfing using sockpuppets is a form of Sybil attack against distributed systems. Pharmaceutical companies may sponsor patient support groups and simultaneously push them to help market their products. Bloggers who receive free products, paid travel or other accommodations may also be considered astroturfing if those gifts are not disclosed to the reader. Analysts could be considered astroturfing, since they often cover their own clients without disclosing their financial connection. To avoid astroturfing, many organizations and press have policies about gifts, accommodations and disclosures. Detection Persona management software can age accounts and simulate the activity of attending a conference automatically to make it more convincing that they are genuine. 
At HBGary, employees are given separate thumb drives that contain online accounts for individual identities and visual cues to remind the employee which identity they are using at the time. Mass letters may be printed on personalized stationery using different typefaces, colors and words to make them appear personal. According to an article in The New York Times, the Federal Trade Commission rarely enforces its astroturfing laws. However, astroturfing operations are frequently detected if their profile images are recognized or if they are identified through the usage patterns of their accounts. Filippo Menczer's group at Indiana University developed software in 2010 that detects astroturfing on Twitter by recognizing behavioral patterns. Business and adoption According to an article in the Journal of Consumer Policy, academics disagree on how prolific astroturfing is. According to Nancy Clark from Precision Communications, grass-roots specialists charge $25 to $75 for each constituent they convince to send a letter to a politician. Paid online commentators in China are purportedly paid 50 cents for each online post that is not removed by moderators, leading to the nickname of the "50-cent party". The New York Times reported that a business selling fake online book reviews charged $999 for 50 reviews and made $28,000 a month shortly after opening. According to the Financial Times, astroturfing is "commonplace" in American politics, but was "revolutionary" in Europe when it was exposed that the European Privacy Association, an anti-privacy "think-tank", was actually sponsored by technology companies. History of incidents Origins Although the term "astroturfing" was not yet developed, an early example of the practice was in Act 1, Scene 2 of Shakespeare's play Julius Caesar. In the play, Gaius Cassius Longinus writes fake letters from "the public" to convince Brutus to assassinate Julius Caesar. The term "astroturfing" was first coined in 1985 by Texas Democratic Party senator Lloyd Bentsen when he said, "a fellow from Texas can tell the difference between grass roots and AstroTurf... this is generated mail." Bentsen was describing a "mountain of cards and letters" sent to his office to promote insurance industry interests. AstroTurf itself had recently been invented, and installed in the Houston Astrodome, where natural turf could not grow. According to the manufacturer, "a certain belief that man could conquer the constraints of nature with ingenuity and forward-thinking progress pervaded. The Astrodome was built in the midst of this feverish pursuit of the impossible." Tobacco In response to the passage of tobacco control legislation in the US, Philip Morris, Burson-Marsteller and other tobacco interests created the National Smokers Alliance (NSA) in 1993. The NSA and other tobacco interests initiated an aggressive public relations campaign from 1994 to 1999 in an effort to exaggerate the appearance of grassroots support for smoker's rights. According to an article in the Journal of Health Communication, the NSA had mixed success at defeating bills that were damaging revenues of tobacco interests. Internet Email, automated phone calls, form letters, and the Internet made astroturfing more economical and prolific in the late 1990s. In 2001, as Microsoft was defending itself against an antitrust lawsuit, Americans for Technology Leadership (ATL), a group heavily funded by Microsoft, initiated a letter-writing campaign. 
ATL contacted constituents under the guise of conducting a poll and sent pro-Microsoft consumers form and sample letters to send to involved lawmakers. The effort was designed to make it appear as though there was public support for a sympathetic ruling in the antitrust lawsuit. In January 2018, YouTube user Isaac Protiva uploaded a video alleging that internet service provider Fidelity Communications was behind an initiative called "Stop City-Funded Internet", based on how some images on the Stop City-Funded Internet website had "Fidelity" in their file names. The campaign appeared to be in response to the city of West Plains expanding their broadband network, and advocated for the end of municipal broadband on the basis that it was too risky. Days later, Fidelity released a letter admitting to sponsoring the campaign. Politics In 2009โ€“2010, an Indiana University research study developed a software system to detect astroturfing on Twitter due to the sensitivity of the topic in the run up to the 2010 U.S. midterm elections and account suspensions on the social media platform. The study cited a limited number of examples, all promoting conservative policies and candidates. In 2003, GOPTeamLeader.com offered the site's users "points" that could be redeemed for products if they signed a form letter promoting George Bush and got a local paper to publish it as a letter to the editor. More than 100 newspapers published an identical letter to the editor from the site with different signatures on it. Similar campaigns were used by GeorgeWBush.com, and by MoveOn.org to promote Michael Moore's film Fahrenheit 9/11. The Committee for a Responsible Federal Budget's "Fix the Debt" campaign advocated to reduce government debt without disclosing that its members were lobbyists or high-ranking employees at corporations that aim to reduce federal spending. It also sent op-eds to various students that were published as-is. Some organizations in the Tea Party movement have been accused of being astroturfed. In October and November 2018, conservative marketing firm Rally Forge created what The New Yorker described as "a phony left-wing front group, America Progress Now, which promoted Green Party candidates online in 2018, apparently to hurt Democrats in several races." Its ads on Facebook used socialist memes and slogans to attack Democrats and urge third-party protest voting in several tight races, including the Wisconsin governor contest. In 2018, Jeff Ballabon, a Republican operative in his mid-50s, set up a website called "Jexodus" claiming to be by "proud Jewish Millennials tired of living in bondage to leftist politics", but has been denounced as "likely a clumsy astroturf effort rather than an actual grassroots movement". The website was registered November 5, 2018, before the congressional election, and before those representatives accused of antisemitism had even been voted in. This website was later cited by Donald Trump as though it were an authentic movement. Environment The Koch brothers started a public advocacy group to prevent the development of wind turbines offshore in Massachusetts. The Kennedy family was also involved. Corporate efforts to mobilize the public against environmental regulation accelerated in the US following the election of president Barack Obama. 
In 2014, the conservative media outlet the Toronto Sun published an article accusing Russia of using astroturf tactics to drum up anti-fracking sentiment across Europe and the West, supposedly in order to maintain dominance in oil exports through Ukraine. In Canada, a coalition of oil and gas company executives grouped under the Canadian Association of Petroleum Producers initiated a series of campaigns to advocate for the oil and gas industry in Canada through mainstream and social media, using online campaigning to generate public support for fossil fuel energy projects.

Commercial

In 2006, two Edelman employees created a blog called "Wal-Marting Across America" about two people traveling to Wal-Marts across the country. The blog gave the appearance of being operated by spontaneous consumers, but was actually operated on behalf of Working Families for Walmart, a group funded by Wal-Mart. In 2007, Ask.com deployed an anti-Google advertising campaign portraying Google as an "information monopoly" that was damaging the Internet. The ad was designed to give the appearance of a popular movement and did not disclose that it was funded by a competitor. In 2010, the Federal Trade Commission settled a complaint with Reverb Communications, which had been using interns to post favorable product reviews in Apple's iTunes store for clients. In September 2012, one of the first major identified cases of astroturfing in Finland involved criticism of the cost of a €1.8 billion patient information system, which was defended by fake online identities operated by involved vendors. In September 2013, New York Attorney General Eric T. Schneiderman announced a settlement with 19 companies to prevent astroturfing. "'Astroturfing' is the 21st century's version of false advertising, and prosecutors have many tools at their disposal to put an end to it," said Schneiderman. The companies paid $350,000 to settle the matter, but the settlement opened the way for private suits as well. "Every state has some version of the statutes New York used," according to lawyer Kelly H. Kolb. "What the New York attorney general has done is, perhaps, to have given private lawyers a road map to file suit."

State-sponsored

The Al Jazeera TV series The Lobby documented Israel's attempt to promote more friendly, pro-Israel rhetoric to influence the attitudes of British youth, partly through influencing already established political bodies, such as the National Union of Students and the Labour Party, but also by creating new pro-Israel groups whose affiliation with the Israeli administration was kept secret. In 2008, Rebecca MacKinnon, an expert on Chinese affairs, estimated that the Chinese government employed 280,000 people in a government-sponsored astroturfing operation to post pro-China propaganda on social media and drown out voices of dissent. In June 2010, the United States Air Force solicited bids for "persona management" software that would "enable an operator to exercise a number of different online persons from the same workstation and without fear of being discovered by sophisticated adversaries. Personas must be able to appear to originate in nearly any part of the world and can interact through conventional online services and social media platforms..." The $2.6 million contract was awarded to Ntrepid for astroturfing software the military would use to spread pro-American propaganda in the Middle East and disrupt extremist propaganda and recruitment.
The contract is thought to have been awarded as part of a program called Operation Earnest Voice, which was first developed as a psychological warfare weapon against the online presence of groups ranged against coalition forces.

See also

Crowds on Demand
Front organization
Greenwashing
Government-organized non-governmental organization
Internet activism
Internet Water Army
Pinkwashing
Purplewashing
Redwashing
Shill
State-sponsored internet sockpuppetry
Whitewashing

References

Further reading

King, Gary; Pan, Jennifer; Roberts, Margaret E. (2017). "How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, Not Engaged Argument". American Political Science Review. 111 (3): 484–501.

Ethically disputed business practices Internet manipulation and propaganda Political campaign techniques Political corruption Political metaphors Political science terminology Political terminology of the United States Public relations techniques
481262
https://en.wikipedia.org/wiki/SRI%20International
SRI International
SRI International (SRI) is an American nonprofit scientific research institute headquartered in Menlo Park, California. The trustees of Stanford University established SRI in 1946 as a center of innovation to support economic development in the region. The organization was founded as the Stanford Research Institute. SRI formally separated from Stanford University in 1970 and became known as SRI International in 1977. SRI performs client-sponsored research and development for government agencies, commercial businesses, and private foundations. It also licenses its technologies, forms strategic partnerships, sells products, and creates spin-off companies. SRI's headquarters are located near the Stanford University campus. SRI's annual revenue in 2014 was approximately $540 million, roughly triple its 1998 level. In 1998, the organization was on the verge of bankruptcy when Curtis Carlson took over as CEO. Over the next sixteen years with Carlson as CEO, the organizational culture of SRI was transformed: SRI tripled in size, became very profitable, and created many world-changing innovations using its NABC framework. Its best-known success, Siri, the personal assistant on the iPhone, came from a company SRI created and then sold to Apple. William A. Jeffrey has served as SRI's president and CEO since September 2014. SRI employs about 2,100 people. Sarnoff Corporation, a wholly owned subsidiary of SRI since 1988, was fully integrated into SRI in January 2011. SRI's focus areas include biomedical sciences, chemistry and materials, computing, Earth and space systems, economic development, education and learning, energy and environmental technology, security and national defense, as well as sensing and devices. SRI has received more than 4,000 patents and patent applications worldwide.

History

Foundation

In the 1920s, Stanford University professor Robert E. Swain proposed creating a research institute in the Western United States. Herbert Hoover, then a trustee of Stanford University, was also an early proponent of an institute but became less involved with the project after he was elected president of the United States. The development of the institute was delayed by the Great Depression in the 1930s and World War II in the 1940s, with three separate attempts leading to its formation in 1946. In August 1945, Maurice Nelles, Morlan A. Visel, and Ernest L. Black of Lockheed made the first attempt to create the institute with the formation of the "Pacific Research Foundation" in Los Angeles. A second attempt was made by Henry T. Heald, then president of the Illinois Institute of Technology. In 1945, Heald wrote a report recommending a research institute on the West Coast with a close association with Stanford University and an initial grant of $500,000. A third attempt was made by Fred Terman, Stanford University's dean of engineering. Terman's proposal followed Heald's, but focused on faculty and student research more than contract research. The trustees of Stanford University voted to create the organization in 1946. It was structured so that its goals were aligned with the charter of the university: to advance scientific knowledge and to benefit the public at large, not just the students of Stanford University. The trustees were named as the corporation's general members and elected SRI's directors (later known as presidents); if the organization were dissolved, its assets would return to Stanford University.
Research chemist William F. Talbot became the first director of the institute. Stanford University president Donald Tresidder instructed Talbot to avoid work that would conflict with the interests of the university, particularly federal contracts that might attract political pressure. The drive to find work and the lack of support from Stanford faculty caused the new research institute to violate this directive six months later through the pursuit of a contract with the Office of Naval Research. This and other issues, including frustration with Tresidder's micromanagement of the new organization, caused Talbot to repeatedly offer his resignation, which Tresidder eventually accepted. Talbot was replaced by Jesse Hobson, who had previously led the Armour Research Foundation, but the pressure to pursue contract work remained.

Early history

SRI's first research project investigated whether the guayule plant could be used as a source of natural rubber. During World War II, rubber was imported into the U.S. and was subject to shortages and strict rationing. From 1942 to 1946, the United States Department of Agriculture (USDA) supported a project to create a domestic source of natural rubber. Once the war ended, the United States Congress cut funding for the program; in response, the Office of Naval Research created a grant for the project to continue at SRI, and the USDA staff on the project worked through SRI until Congress reauthorized funding in 1947. SRI's first economic study was for the United States Air Force. In 1947, the Air Force wanted to determine the expansion potential of the U.S. aircraft industry; SRI found that it would take too long to escalate production in an emergency. In 1948, SRI began research and consultation with Chevron Corporation to develop an artificial substitute for tallow and coconut oil in soap production; SRI's investigation confirmed the potential of dodecylbenzene as a suitable replacement. Later, Procter & Gamble used the substance as the basis for Tide laundry detergent. The institute performed much of the early research on air pollution and the formation of ozone in the lower atmosphere. SRI sponsored the First National Air Pollution Symposium in Pasadena, California, in November 1949. Experts gave presentations on pollution research, exchanged ideas and techniques, and stimulated interest in the field. The event was attended by 400 scientists, business executives, and civic leaders from the U.S., and SRI co-sponsored subsequent events on the subject. In April 1953, Walt and Roy Disney hired SRI (and in particular, Harrison Price) to consult on their proposal for establishing an amusement park in Burbank, California. SRI provided information on location, attendance patterns, and economic feasibility. SRI selected a larger site in Anaheim, prepared reports on the park's operation, provided on-site administrative support for Disneyland, and acted in an advisory role as the park expanded. In 1955, SRI was commissioned to select a site and provide design suggestions for the John F. Kennedy Center for the Performing Arts. In 1952, the Technicolor Corporation contracted with SRI to develop a near-instantaneous, electro-optical alternative to the manual process of timing during film copying. In 1959, the Academy of Motion Picture Arts and Sciences presented the Scientific and Engineering Award jointly to SRI and Technicolor for their work on the design and development of the Technicolor electronic printing timer, which greatly benefited the motion picture industry.
In 1954, Southern Pacific asked SRI to investigate ways of reducing damage during rail freight shipments by mitigating shock to railroad box cars. This investigation led to William K. MacCurdy's development of the Hydra-Cushion technology, which remains standard today. In the 1950s, SRI worked under the direction of the Bank of America to develop ERMA (Electronic Recording Machine, Accounting) and magnetic ink character recognition (MICR). The ERMA project was led by computer scientist Jerre Noe, who was at the time SRI's assistant director of engineering. As of 2011, MICR remains the industry standard in automated check processing.

Rapid expansion

Douglas Engelbart, the founder of SRI's Augmentation Research Center (ARC), was the primary force behind the design and development of the multi-user oN-Line System (or NLS), featuring original versions of modern computer-human interface elements including bit-mapped displays, collaboration software, hypertext, and precursors to the graphical user interface such as the computer mouse. As a pioneer of human-computer interaction, Engelbart is arguably SRI's most notable alumnus. He was awarded the National Medal of Technology and Innovation in 2000. Bill English, then chief engineer at ARC, built the first prototype of a computer mouse from Engelbart's design in 1964. SRI also developed inkjet printing (1961) and optical disc recording (1963). Liquid-crystal display (LCD) technology was developed in the 1960s at RCA Laboratories, which became Sarnoff Corporation, a wholly owned subsidiary of SRI, in 1988; Sarnoff was fully integrated into SRI in 2011. In the early 1960s, Hewitt Crane and his colleagues developed the world's first all-magnetic digital computer, based upon extensions to magnetic core memories. The technology was licensed to AMP Inc., which then used it to build specialized computers for controlling tracks in the New York City Subway and on railroad switching yards. In 1966, SRI's Artificial Intelligence Center began working on "Shakey the robot", the first mobile robot to reason about its actions. Equipped with a television camera, a triangulating rangefinder, and bump sensors, Shakey used software for perception, world-modeling, and acting. The project ended in 1972. SRI's Artificial Intelligence Center marked its 45th anniversary in 2011. On October 29, 1969, the first connection on ARPANET, a wide area network using packet switching, was established between nodes at Leonard Kleinrock's laboratory at the University of California, Los Angeles (UCLA) and Douglas Engelbart's laboratory at SRI, using Interface Message Processors at both sites. The following year, Engelbart's laboratory installed the first TENEX system outside of BBN, where it was developed. In addition to SRI and UCLA, the University of California, Santa Barbara and the University of Utah were part of the original four network nodes. By December 5, 1969, the entire four-node network was connected. In the 1970s, SRI developed packet-switched radio (a precursor to wireless networking), over-the-horizon radar, Deafnet, vacuum microelectronics, and software-implemented fault tolerance. The first true Internet transmission occurred on November 22, 1977, when SRI originated the first connection between three disparate networks.
Data flowed seamlessly through the mobile Packet Radio Van between SRI in Menlo Park, California, and the University of Southern California in Los Angeles via University College London, England, across three types of networks: packet radio, satellite, and the ARPANET. In 2007, the Computer History Museum presented a 30th anniversary celebration of this demonstration, which included several participants from the 1977 event. SRI would go on to run the Network Information Center under the leadership of Elizabeth "Jake" Feinler.

Split and diversification

The Vietnam War (1955–1975) was an important issue on college campuses across the United States in the 1960s and 1970s. As a belated response to Vietnam War protesters who believed that funding from the Defense Advanced Research Projects Agency (DARPA) made the university part of the military–industrial complex, the Stanford Research Institute split from Stanford University in 1970. The organization subsequently changed its name from the Stanford Research Institute to SRI International in 1977. In 1972, physicists Harold E. Puthoff and Russell Targ undertook a series of investigations of psychic phenomena sponsored by the CIA, for which they coined the term remote viewing. Among other activities, the project encompassed the work of consulting "consciousness researchers" including artist and writer Ingo Swann, military intelligence officer Joseph McMoneagle, and psychic and illusionist Uri Geller. This ESP work continued with funding from the US intelligence community until Puthoff and Targ left SRI in the mid-1980s. For more information, see Parapsychology research at SRI. Social scientist and consumer futurist Arnold Mitchell created the Values, Attitudes and Lifestyles (VALS) psychographic methodology in the late 1970s to explain changing U.S. values and lifestyles. VALS was formally inaugurated as an SRI product in 1978 and was called "one of the ten top market research breakthroughs of the 1980s" by Advertising Age magazine. Throughout the 1980s, SRI developed Zylon, stealth technologies, improvements to ultrasound imaging, two-dimensional laser fluorescence imaging, and many-sorted logic. In computing and software, SRI developed a multimedia electronic mail system, a theory of non-interference in computer security, a multilevel secure (MLS) relational database system called Seaview, LaTeX, Open Agent Architecture (OAA), a network intrusion detection system, the Maude system (a declarative software language), and PacketHop, a peer-to-peer wireless technology for creating scalable ad hoc networks. SRI's research in network intrusion detection led to the patent infringement case SRI International, Inc. v. Internet Security Systems, Inc. The AI center's robotics research led to Shakey's successor, Flakey the robot, which focused on fuzzy logic. In 1986, SRI.com became the 8th registered ".com" domain. The Artificial Intelligence Center developed the Procedural Reasoning System (PRS) in the late 1980s and into the early 1990s; PRS launched the field of BDI-based intelligent agents. In the 1990s, SRI developed a letter sorting system for the United States Postal Service and conducted several education and economic studies. Military-related technologies developed by SRI in the 1990s and 2000s include ground- and foliage-penetrating radar, the INCON and REDDE command and control system for the U.S. military, and IGRS (integrated GPS radio system), an advanced military personnel and vehicle tracking system.
To train armored combat units during battle exercises, SRI developed the Deployable Force-on-Force Instrumented Range System (DFIRST), which uses GPS satellites, high-speed wireless communications, and digital terrain map displays. In 2003, SRI created the Centibots, one of the first and largest teams of coordinated, autonomous mobile robots, which explore, map, and survey unknown environments. It also created BotHunter, a free utility for Unix that detects botnet activity within a network. With DARPA-funded research, SRI contributed to the development of speech recognition and translation products and was an active participant in DARPA's Global Autonomous Language Exploitation (GALE) program. SRI developed the DynaSpeak speech recognition technology, which was used in the handheld VoxTec Phraselator, allowing U.S. soldiers overseas to communicate with local citizens in near real time. SRI also created translation software for use in the IraqComm, a device which allows two-way, speech-to-speech machine translation between English and colloquial Iraqi Arabic. In medicine and chemistry, SRI developed dry-powder drugs, laser photocoagulation (a treatment for some eye maladies), remote surgery (also known as telerobotic surgery), bio-agent detection using upconverting phosphor technology, the experimental anticancer drugs Tirapazamine and TAS-108, ammonium dinitramide (an environmentally benign oxidizer for safe and cost-effective disposal of hazardous materials), electroactive polymer ("artificial muscle") technology, new uses for diamagnetic levitation, and the antimalarial drug Halofantrine. SRI performed a study in the 1990s for Whirlpool Corporation that led to modern self-cleaning ovens. In the 2000s, SRI worked on Pathway Tools, software for use in bioinformatics and systems biology to accelerate drug discovery using artificial intelligence and symbolic computing techniques. The software system generates the BioCyc database collection, SRI's growing collection of genomic databases used by biologists to visualize genes within a chromosome, complete biochemical pathways, and full metabolic maps of organisms.

Early 21st century

SRI researchers made the first observation of visible light emitted by oxygen atoms in the night-side airglow of Venus, offering new insight into the planet's atmosphere. SRI education researchers conducted the first national evaluation of the growing U.S. charter schools movement. For the World Golf Foundation, SRI compiled the first-ever estimate of the overall scope of the U.S. golf industry's goods and services ($62 billion in 2000), providing a framework for monitoring the long-term growth of the industry. In April 2000, SRI formed Atomic Tangerine, an independent consulting firm designed to bring new technologies and services to market. In 2006, SRI was awarded a $56.9 million contract with the National Institute of Allergy and Infectious Diseases to provide preclinical services for the development of drugs and antibodies for anti-infective treatments for avian influenza, SARS, West Nile virus and hepatitis. Also in 2006, SRI selected St. Petersburg, Florida, as the site for a new marine technology research facility targeted at ocean science, the maritime industry and port security; the facility is a collaboration with the University of South Florida College of Marine Science and its Center for Ocean Technology.
That facility created a new method for underwater mass spectrometry, which has been used to conduct "advanced underwater chemical surveys in oil and gas exploration and production, ocean resource monitoring and protection, and water treatment and management" and was licensed to Spyglass Technologies in March 2014. In December 2007, SRI launched a spin-off company, Siri Inc., which Apple acquired in April 2010. In October 2011, Apple announced the Siri personal assistant as an integrated feature of the Apple iPhone 4S. Siri's technology was born from SRI's work on the DARPA-funded CALO project, described by SRI as the largest artificial intelligence project ever launched. Siri was co-founded in December 2007 by Dag Kittlaus (CEO), Adam Cheyer (vice president, engineering), and Tom Gruber (CTO/vice president, design), together with Norman Winarsky (vice president of SRI Ventures). Investors included Menlo Ventures and Morgenthaler Ventures. For the National Science Foundation (NSF), SRI operates the advanced modular incoherent scatter radar (AMISR), a novel relocatable atmospheric research facility. Other SRI-operated research facilities for the NSF include the Arecibo Observatory in Puerto Rico and the Sondrestrom Upper Atmospheric Research Facility in Greenland. In May 2011, SRI was awarded a $42 million contract to operate the Arecibo Observatory from October 1, 2011, to September 30, 2016. The institute also manages the Hat Creek Radio Observatory in Northern California, home of the Allen Telescope Array. In February 2014, SRI announced a "photonics-based testing technology called FASTcell" for the detection and characterization of rare circulating tumor cells from blood samples. The test targets cancer-specific biomarkers for breast, lung, prostate, colorectal and leukemia cancers that circulate in the bloodstream in minute quantities, potentially allowing those conditions to be diagnosed earlier. In September 2018, the NSF announced that SRI International would be awarded $4.4 million to establish the backbone organization of a national network.

Description

Employees and financials

As of February 2015, SRI employs approximately 2,100 people. In 2014, SRI had about $540 million in revenue. In 2013, the United States Department of Defense accounted for 63% of awards by value; the remainder came from the National Institutes of Health (11%), businesses and industry (8%), other United States agencies (6%), the National Science Foundation (6%), the United States Department of Education (4%), and foundations (2%). As of February 2015, approximately 4,000 patents had been granted to SRI International and its employees.

Facilities

SRI is primarily based at a campus in Menlo Park, California, which is considered part of Silicon Valley. The campus encompasses office and lab space. In addition, SRI has a campus in Princeton, New Jersey, with additional research space, as well as offices in Washington, D.C., and Tokyo, Japan.

Organization

SRI International is organized into seven units (generally referred to as divisions) that focus on specific subject areas.

Staff members and alumni

SRI has had a chief executive of some form since its establishment. Prior to the split with Stanford University, the position was known as the director; after the split, it has been known as the company's president and CEO. SRI has had nine leaders so far, including William F. Talbot (1946–1947), Jesse E. Hobson (1947–1955), E.
Finley Carter (1956–1963), Charles Anderson (1968–1979), William F. Miller (1979–1990), James J. Tietjen (1990–1993), William P. Sommers (1993–1998), and Curtis Carlson (1998–2014). More recently, the role was split into two: the current CEO is William A. Jeffrey (2014–present) and the president is Manish Kothari (formerly president of SRI Ventures). SRI has also had a board of directors since its inception, which has served both to guide the organization and to provide it with opportunities. The current board of directors includes Samuel Armacost (Chairman of the Board Emeritus), Mariann Byerwalter (chairman), William A. Jeffrey, Charles A. Holloway (vice chairman), Vern Clark, Robert L. Joss, Leslie F. Kenne, Henry Kressel, David Liddle, Philip J. Quigley, Wendell Wierenga, and John J. Young Jr. Its notable researchers include Elmer Robinson (meteorologist), co-author of the 1968 SRI report to the American Petroleum Institute (API) on the risks of fossil fuel burning to the global climate. Many notable researchers were involved with the Augmentation Research Center. These include Douglas Engelbart, the developer of the modern GUI; William English, the inventor of the mouse; Jeff Rulifson, the primary developer of the NLS; Elizabeth J. Feinler, who ran the Network Information Center; and David Maynard, who would help found Electronic Arts. The Artificial Intelligence Center has also produced a large number of notable alumni, many of whom contributed to Shakey the robot; these include project manager Charles Rosen as well as Nils Nilsson, Bertram Raphael, Richard O. Duda, Peter E. Hart, Richard Fikes and Richard Waldinger. AI researcher Gary Hendrix went on to found Symantec. Former Yahoo! President and CEO Marissa Mayer performed a research internship in the Center in the 1990s. The CALO project (and its spin-off, Siri) also produced notable names including C. Raymond Perrault and Adam Cheyer. Several SRI projects produced notable researchers and engineers long before computing was mainstream. Early employee Paul M. Cook founded Raychem. William K. MacCurdy developed the Hydra-Cushion freight car for Southern Pacific in 1954; Hewitt Crane and Jerre Noe were instrumental in the development of the Electronic Recording Machine, Accounting (ERMA); Harrison Price helped The Walt Disney Company design Disneyland; James C. Bliss developed the Optacon; and Robert Weitbrecht invented the first telecommunications device for the deaf.

Spin-off companies

Working with investment and venture capital firms, SRI and its former employees have launched more than 60 spin-off ventures in a wide range of fields, including Siri (acquired by Apple), Tempo AI (acquired by Salesforce.com), Redwood Robotics (acquired by Google), Desti (acquired by HERE), Grabit, Kasisto, Artificial Muscle, Inc. (acquired by Bayer MaterialScience), Nuance Communications, Intuitive Surgical, Ravenswood Solutions, and Orchid Cellmark. Former SRI staff members have also established new companies. In engineering and analysis, for example, notable companies formed by SRI alumni include Weitbrecht Communications, Exponent, and Raychem. Companies in the area of legal, policy, and business analysis include Fair Isaac Corporation, Global Business Network, and the Institute for the Future. Research in computing and computer science-related areas led to the development of many companies, including Symantec, the Australian Artificial Intelligence Institute, E-Trade, and Verbatim Corporation.
Wireless technologies spawned Firetide and the venture capital firm enVia Partners. Health systems research inspired Telesensory Systems.

See also

References

Notes

Works cited

Further reading

SRI history

Specific topics

External links

SRI International website

Engineering companies of the United States Companies based in Menlo Park, California Research institutes established in 1946 1946 establishments in California Multidisciplinary research institutes Research institutes in the San Francisco Bay Area Computer science research organizations Science and technology in the San Francisco Bay Area Corporate spin-offs Contract research organizations Non-profit organizations based in the San Francisco Bay Area