id | url | title | text
---|---|---|---|
710813 | https://en.wikipedia.org/wiki/Tim%20Keefe | Tim Keefe | Timothy John Keefe (January 1, 1857 – April 23, 1933), nicknamed "Smiling Tim" and "Sir Timothy", was an American Major League Baseball pitcher. He was one of the most dominant pitchers of the 19th century and posted impressive statistics in one category or another for almost every season he pitched. He was the second MLB pitcher to record 300 wins. He was elected to the Baseball Hall of Fame in 1964.
Keefe's career spanned much of baseball's formative stages. His first season was the last in which pitchers threw from 45 feet, so for most of his career he pitched from 50 feet. His final season was the first season in which pitchers hurled from the modern distance of 60 feet, 6 inches.
Early life
Keefe was born on January 1, 1857, in Cambridge, Massachusetts. His father, Patrick, was an Irish immigrant. When Tim Keefe was a child, Patrick served in the Union Army during the American Civil War. Patrick was a prisoner of war for several years. All four of Patrick's brothers were killed in the war; Tim had been named after two of them. Tim's brother became a major and fought in the Spanish–American War.
After the war, Patrick had high expectations for his son, and the two frequently fought over Tim's pursuit of baseball. With the help of local former pitcher Tommy Bond, Keefe persisted and became known as a standout local pitcher by 1876. Keefe's early professional career included minor league stints in Lewiston, Clinton, New Bedford, Utica, and Albany.
Major league career
Keefe entered the major leagues in 1880 with the Troy Trojans. He immediately established himself as a talented pitcher, posting an astounding 0.86 ERA in 105 innings pitched, a record that still stands. (He also posted the best Adjusted ERA+ in baseball history in 1880.) Despite the sterling ERA, he managed but a 6–6 record, pitching in 12 games, all complete games.
In 1883, after the Trojans folded, Keefe rose to stardom with the New York Metropolitans of the American Association under manager "Gentleman" Jim Mutrie and had one of the most dominating seasons in baseball's early history. On July 4 of that year, Keefe pitched both ends of a doubleheader against Columbus, winning the first game with a one-hitter; the second a two-hit gem. He went 41–27 over 619 innings pitched with a 2.41 ERA and 361 strikeouts. His 1884 campaign was almost as dominant, winning 37 games, losing 17, and striking out 334.
In 1885, John B. Day, who owned the Metropolitans and the New York Giants of the National League, moved Keefe and Mutrie to the Giants. Here, Keefe joined future Hall of Famers Buck Ewing, Monte Ward, Roger Connor, Mickey Welch, and "Orator" Jim O'Rourke to form an outstanding team that finished with a fine 85–27 record. Keefe went 32–13 with a 1.58 ERA and 227 strikeouts. In 1887, Keefe sat out several weeks of the season after he struck a batter in the head with a pitch; he was said to have suffered a nervous breakdown.
He had arguably his greatest season in 1888, when he led the league with a 35–12 record, 1.74 ERA and 335 strikeouts (see Triple Crown). He won 19 consecutive games that season, a record that stood for 24 years. The Giants played the St. Louis Browns of the American Association in a postseason series for the Dauvray Cup, and Keefe added four more wins to his tally. Keefe even designed the famous all-black "funeral" uniforms the Giants wore that season.
Keefe was very well paid for his career, yet he was a leading member of the Brotherhood of Professional Base Ball Players, an early players' union that fought for the welfare of players. He assisted his brother-in-law Monte Ward in forming the Players' League for the 1890 season. As a co-organizer of the Players' League, he recognized that he might be financially vulnerable if the league failed to make money, so he transferred ownership of his real estate assets to his mother to keep them safe from any legal rulings.
Shortly before the Players' League was founded, Keefe had started a sporting goods business in New York with W. H. Becannon, a former employee of baseball owner and sporting goods entrepreneur Albert Spalding. Keefe and Becannon manufactured the Keefe ball, the official baseball of the league. Spalding and the other NL owners fought against the new league, employing legal and financial maneuvers (such as slashing NL ticket prices) that made competition difficult. The Players' League folded after one season.
In the 1891 preseason, Keefe refused a salary offer of $3,000 from New York; he had earned $4,500 in the previous season. Keefe said, "I want to play in New York, but I never will for a $3,000 salary... To tell you the truth, however, I do not think I am wanted in the New York team, and this cutting method is being pursued to keep me out." Keefe ultimately signed with the team for a $3,500 salary.
During the 1891 season, Keefe was released by New York. He was drawing a high salary and was not meeting the expectations of the team's leadership. After his release, Keefe said, "I hate to leave New York, am very fond of it, and would do all in my power for New York, but what am I to do? I have been systematically done by the New York Baseball Club... They would not let me play, and when I did get a chance, I worked under a disadvantage. I feel that I am just as good a player as I ever was."
Keefe moved to the Philadelphia Phillies after his release from the Giants. He retired after the 1893 season with 342 wins (10th all time), a 2.62 ERA, and 2,562 strikeouts. His strikeout total was a major league record at the time of his retirement, and he was the first pitcher to record three 300-plus strikeout seasons, all during his prime in the 1880s, a decade in which his 291 wins were the most of any pitcher. He still holds the record for having wins in the most ballparks, with 47.
Keefe was nicknamed "Sir Timothy" because of his gentlemanly behavior on and off the field. He never drank or smoked.
Later life and legacy
Late in his playing career, Keefe began to coach college baseball and he continued in this capacity after his retirement as a player. Beginning in the spring of 1893, Keefe began to work as a pitching coach for Harvard. Keefe also worked as an umpire for a total of 243 major league games; his most active year was 1895, when he umpired 129 games. He was also involved in real estate. He died in his hometown of Cambridge, Massachusetts, at the age of 76.
Keefe was inducted into the Baseball Hall of Fame in 1964 after being elected by the Veterans Committee. Six players were inducted that year, and Keefe was one of five who had been voted in by the Veterans Committee.
Career statistics
Official career statistics as recognized by Baseball-Reference.com.
' * ' denotes stats that were not officially recognized during part or all of his career, and are incomplete.
See also
300 win club
Top 100 Major League Baseball strikeout pitchers
Major League Baseball Triple Crown
List of Major League Baseball career wins leaders
List of Major League Baseball annual ERA leaders
List of Major League Baseball annual strikeout leaders
List of Major League Baseball annual wins leaders
List of most hit batsman by MLB pitcher
References
External links
Retrosheet
The Deadball Era
1857 births
1933 deaths
19th-century baseball players
American people of Irish descent
National Baseball Hall of Fame inductees
Major League Baseball pitchers
Baseball players from Massachusetts
Sportspeople from Cambridge, Massachusetts
Troy Trojans players
New York Metropolitans players
New York Giants (NL) players
New York Giants (PL) players
Philadelphia Phillies players
National League Pitching Triple Crown winners
National League ERA champions
National League strikeout champions
National League wins champions
Utica Pent Ups players
New Bedford (minor league baseball) players
Albany (minor league baseball) players |
23521191 | https://en.wikipedia.org/wiki/W3C%20Geolocation%20API | W3C Geolocation API | The W3C Geolocation API is an effort by the World Wide Web Consortium (W3C) to standardize an interface for retrieving the geographical location of a client-side device. It defines a set of ECMAScript-standard-compliant objects that, executing in the client application, provide the device's location by consulting Location Information Servers, which are transparent to the application programming interface (API). The most common sources of location information are IP address, Wi-Fi and Bluetooth MAC addresses, radio-frequency identification (RFID), Wi-Fi connection location, and device Global Positioning System (GPS) and GSM/CDMA cell IDs. The location is returned with a given accuracy depending on the best location information source available.
A successful W3C Geolocation API query usually returns four location properties: latitude and longitude (coordinates), altitude (height), and the accuracy of the position, all of which depend on the location source. In some queries, altitude may return no value.
Deployment in web browsers
Web pages can use the Geolocation API directly if the web browser implements it. Historically, some browsers could gain support via the Google Gears plugin, but this was discontinued in 2010 and the server-side API it depended on stopped responding in 2012.
The Geolocation API is ideally suited to web applications for mobile devices such as personal digital assistants (PDA) and smartphones. On desktop computers, the W3C Geolocation API works in Firefox since version 3.5, Google Chrome, Opera 10.6, Internet Explorer 9.0, and Safari 5. On mobile devices, it works on Android (firmware 2.0+), iOS, Windows Phone and Maemo. The W3C Geolocation API is also supported by Opera Mobile 10.1 – available for Android and Symbian devices (S60 generations 3 & 5) since 24 November 2010.
Google Gears provided geolocation support for older and non-compliant browsers, including Internet Explorer 7.0+ as a Gears plugin, and Google Chrome which implemented Gears natively. It also supported geolocation on mobile devices as a plugin for the Android browser (pre version 2.0) and Opera Mobile for Windows Mobile. However, the Google Gears Geolocation API is incompatible with the W3C Geolocation API and is no longer supported.
Location sources
The Geolocation API does not itself provide the location information. The location is obtained by a device (such as a smartphone, PC or modem) and then served by the API to the browser. Geolocation usually tries to determine a device's position using one of the following methods.
GPS (Global Positioning System) This applies to any device with GPS capability. A smartphone with GPS enabled and set to high-accuracy mode will likely obtain its location data this way. GPS calculates location from satellite signals. It has the highest accuracy; in most Android smartphones, the accuracy can be up to 10 metres.
Mobile Network Location Mobile phone tracking is used if a cellphone or wireless modem is used without a GPS chip built in.
Wi-Fi Positioning System If Wi-Fi is used indoors, a Wi-Fi positioning system is the likeliest source. Some Wi-Fi spots have location services capabilities.
IP Address Location Location is detected from the nearest public IP address associated with a device (which can be the computer itself, the router it is connected to, or the Internet service provider (ISP) the router uses). The location depends on the available IP information, but in many cases where the IP is hidden behind an ISP's network address translation, the accuracy is only to the level of a city, region or even country.
Implementation
Though the implementation is not specified, the W3C Geolocation API is built on extant technologies and is heavily influenced by the Google Gears Geolocation API; for example, Firefox's Geolocation implementation uses Google's network location provider. The Google Gears Geolocation API worked by sending a set of parameters that could hint at the user's physical location to a network location provider server, by default the one provided by Google (code.l.google.com). The parameters include lists of sensed mobile cell towers and Wi-Fi networks, each with a sensed signal strength. They are encapsulated in a JavaScript Object Notation (JSON) message and sent to the network location provider via HTTP POST. Based on these parameters, the network location provider can calculate the location. Common uses for this location information include enforcing access controls, localizing and customizing content, analyzing traffic, contextual advertising and preventing identity theft.
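The shape of such a request can be sketched as follows. This is an illustrative sketch only: the field names (version, cell_towers, wifi_towers) follow the general structure of the Gears-era protocol described above, but the exact schema, values, and endpoint are assumptions, and the original Google service is long defunct.

```javascript
// Sketch of the kind of JSON body a Gears-style network location provider
// consumed. Field names and values are illustrative assumptions, not an
// official schema.
const buildLocationRequest = (cellTowers, wifiTowers) => ({
  version: "1.1.0",
  host: "example.org",      // hypothetical requesting host
  cell_towers: cellTowers,  // sensed cell towers, with signal strengths
  wifi_towers: wifiTowers   // sensed Wi-Fi access points, with signal strengths
});

const body = JSON.stringify(buildLocationRequest(
  [{ cell_id: 42, location_area_code: 415, signal_strength: -60 }],
  [{ mac_address: "01-23-45-67-89-ab", signal_strength: -65 }]
));

// In a real client this body would be sent via HTTP POST, e.g.:
// fetch(providerUrl, { method: "POST", body });
console.log(body);
```

The provider would answer with a JSON message containing an estimated latitude, longitude and accuracy, which is what the browser ultimately exposes through the API's position objects.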
Example code
Simple JavaScript code that checks whether the browser implements the Geolocation API and then uses it to get the current position of the device. This code creates a function that can be called from HTML using <body onload="geoFindMe()">:
const geoFindMe = () => {
  // Feature-detect the Geolocation API before using it.
  if (navigator.geolocation) {
    navigator.geolocation.getCurrentPosition(success, error, geoOptions);
  } else {
    console.log("Geolocation services are not supported by your web browser.");
  }
}

// Success callback: receives a Position object when a fix is obtained.
const success = (position) => {
  const latitude = position.coords.latitude;
  const longitude = position.coords.longitude;
  const altitude = position.coords.altitude;  // may be null if unavailable
  const accuracy = position.coords.accuracy;  // in metres
  console.log(`lat: ${latitude} long: ${longitude}`);
}

// Error callback: receives a PositionError if the lookup fails or is denied.
const error = (err) => {
  console.log(`Unable to retrieve your location due to ${err.code}: ${err.message}`);
}

const geoOptions = {
  enableHighAccuracy: true, // prefer GPS-quality fixes where available
  maximumAge: 30000,        // accept a cached position up to 30 seconds old
  timeout: 27000            // give up after 27 seconds
};
See also
Local search (Internet)
Location-based service
References
External links
W3C Geolocation API Specification
Application programming interfaces
HTML5
Internet geolocation
Location-based software
Web standards |
32114360 | https://en.wikipedia.org/wiki/List%20of%20moths%20of%20Mauritania | List of moths of Mauritania | Moths of Mauritania represent about 165 known moth species. The moths (mostly nocturnal) and butterflies (mostly diurnal) together make up the taxonomic order Lepidoptera.
This is a list of moth species which have been recorded in Mauritania.
Arctiidae
Utetheisa pulchella (Linnaeus, 1758)
Cossidae
Nomima prophanes Durrant, 1916
Crambidae
Filodes costivitralis Guenée, 1862
Lasiocampidae
Braura othello Zolotuhin & Gurkovich, 2009
Odontocheilopteryx ferlina Gurkovich & Zolotuhin, 2009
Noctuidae
Abrostola confusa Dufay, 1958
Acantholipes circumdata (Walker, 1858)
Achaea catella Guenée, 1852
Achaea lienardi (Boisduval, 1833)
Acontia biskrensis Oberthür, 1887
Acontia imitatrix Wallengren, 1856
Acontia insocia (Walker, 1857)
Acontia nigrimacula Hacker, Legrain & Fibiger, 2008
Acontia opalinoides Guenée, 1852
Acontia wahlbergi Wallengren, 1856
Adisura callima Bethune-Baker, 1911
Aegocera rectilinea Boisduval, 1836
Agrotis biconica Kollar, 1844
Agrotis herzogi Rebel, 1911
Agrotis ipsilon (Hufnagel, 1766)
Agrotis sardzeana Brandt, 1941
Agrotis segetum ([Denis & Schiffermüller], 1775)
Agrotis trux (Hübner, 1824)
Amyna axis Guenée, 1852
Amyna delicata Wiltshire, 1994
Anarta trifolii (Hufnagel, 1766)
Androlymnia clavata Hampson, 1910
Antarchaea conicephala (Staudinger, 1870)
Anumeta spilota (Erschoff, 1874)
Argyrogramma signata (Fabricius, 1775)
Aspidifrontia berioi Hacker & Hausmann, 2010
Aspidifrontia hemileuca (Hampson, 1909)
Aspidifrontia pallidula Hacker & Hausmann, 2010
Aspidifrontia villiersi (Laporte, 1972)
Asplenia melanodonta (Hampson, 1896)
Attatha metaleuca Hampson, 1913
Audea kathrina Kühne, 2005
Audea melaleuca Walker, 1865
Audea paulumnodosa Kühne, 2005
Autoba teilhardi (de Joannis, 1909)
Brevipecten confluens Hampson, 1926
Brithys crini (Fabricius, 1775)
Calliodes pretiosissima Holland, 1892
Calophasia platyptera (Esper, [1788])
Cardepia affinis Rothschild, 1913
Cardepia sociabilis de Graslin, 1850
Cerocala albicornis Berio, 1966
Cerocala caelata Karsch, 1896
Chalciope pusilla (Holland, 1894)
Chasmina vestae (Guenée, 1852)
Chrysodeixis acuta (Walker, [1858])
Chrysodeixis chalcites (Esper, 1789)
Clytie infrequens (Swinhoe, 1884)
Clytie sancta (Staudinger, 1900)
Clytie tropicalis Rungs, 1975
Condica capensis (Guenée, 1852)
Condica conducta (Walker, 1857)
Condica viscosa (Freyer, 1831)
Crypsotidia maculifera (Staudinger, 1898)
Crypsotidia remanei Wiltshire, 1977
Cyligramma fluctuosa (Drury, 1773)
Cyligramma magus (Guérin-Méneville, [1844])
Diparopsis watersi (Rothschild, 1901)
Drasteria kabylaria (Bang-Haas, 1906)
Dysgonia torrida (Guenée, 1852)
Eublemma baccalix (Swinhoe, 1886)
Eublemma ecthaemata Hampson, 1896
Eublemma gayneri (Rothschild, 1901)
Eublemma parva (Hübner, [1808])
Eublemma ragusana (Freyer, 1844)
Eublemma robertsi Berio, 1969
Eublemma scitula (Rambur, 1833)
Eublemma tytrocoides Hacker & Hausmann, 2010
Eublemmoides apicimacula (Mabille, 1880)
Eutelia polychorda Hampson, 1902
Gesonia obeditalis Walker, 1859
Gnamptonyx innexa (Walker, 1858)
Grammodes congenita Walker, 1858
Grammodes stolida (Fabricius, 1775)
Haplocestra similis Aurivillius, 1910
Helicoverpa armigera (Hübner, [1808])
Helicoverpa assulta (Guenée, 1852)
Heliocheilus confertissima (Walker, 1865)
Heliothis nubigera Herrich-Schäffer, 1851
Heliothis peltigera ([Denis & Schiffermüller], 1775)
Heteropalpia acrosticta (Püngeler, 1904)
Heteropalpia exarata (Mabille, 1890)
Hypena laceratalis Walker, 1859
Hypena lividalis (Hübner, 1790)
Hypena obacerralis Walker, [1859]
Hypocala rostrata (Fabricius, 1794)
Hypotacha ochribasalis (Hampson, 1896)
Iambia jansei Berio, 1966
Leoniloma convergens Hampson, 1926
Leucania loreyi (Duponchel, 1827)
Marathyssa cuneata (Saalmüller, 1891)
Masalia albiseriata (Druce, 1903)
Masalia bimaculata (Moore, 1888)
Masalia nubila (Hampson, 1903)
Masalia rubristria (Hampson, 1903)
Maxera nigriceps (Walker, 1858)
Melanephia nigrescens (Wallengren, 1856)
Metachrostis quinaria (Moore, 1881)
Metopoceras kneuckeri (Rebel, 1903)
Mitrophrys magna (Walker, 1854)
Mythimna languida (Walker, 1858)
Mythimna umbrigera (Saalmüller, 1891)
Ophiusa mejanesi (Guenée, 1852)
Ophiusa tirhaca (Cramer, 1777)
Oraesia intrusa (Krüger, 1939)
Ozarba rubrivena Hampson, 1910
Ozarba subtilimba Berio, 1963
Ozarba variabilis Berio, 1940
Pandesma muricolor Berio, 1966
Pandesma robusta (Walker, 1858)
Parachalciope benitensis (Holland, 1894)
Pericyma mendax (Walker, 1858)
Pericyma metaleuca Hampson, 1913
Plecopterodes moderata (Wallengren, 1860)
Polydesma umbricola Boisduval, 1833
Polytela cliens (Felder & Rogenhofer, 1874)
Polytelodes florifera (Walker, 1858)
Prionofrontia ochrosia Hampson, 1926
Pseudozarba bipartita (Herrich-Schäffer, 1850)
Rhabdophera arefacta (Swinhoe, 1884)
Rhabdophera clathrum (Guenée, 1852)
Rhabdophera hansali (Felder & Rogenhofer, 1874)
Rhynchina leucodonta Hampson, 1910
Sesamia nonagrioides (Lefèbvre, 1827)
Sphingomorpha chlorea (Cramer, 1777)
Spodoptera cilium Guenée, 1852
Spodoptera exempta (Walker, 1857)
Spodoptera exigua (Hübner, 1808)
Spodoptera littoralis (Boisduval, 1833)
Tachosa fumata (Wallengren, 1860)
Tathorhynchus exsiccata (Lederer, 1855)
Thiacidas meii Hacker & Zilli, 2007
Trichoplusia ni (Hübner, [1803])
Trichoplusia orichalcea (Fabricius, 1775)
Tytroca leucoptera (Hampson, 1896)
Ulotrichopus primulina (Hampson, 1902)
Ulotrichopus tinctipennis (Hampson, 1902)
Nolidae
Arcyophora patricula (Hampson, 1902)
Bryophilopsis tarachoides Mabille, 1900
Earias biplaga Walker, 1866
Earias insulana (Boisduval, 1833)
Leocyma appollinis Guenée, 1852
Meganola reubeni Agassiz, 2009
Neaxestis mesogonia (Hampson, 1905)
Negeta luminosa (Walker, 1858)
Negeta purpurascens Hampson, 1912
Odontestis striata Hampson, 1912
Pardoxia graellsii (Feisthamel, 1837)
Xanthodes albago (Fabricius, 1794)
Xanthodes brunnescens (Pinhey, 1968)
Pterophoridae
Agdistis tamaricis (Zeller, 1847)
Pyralidae
Hypotia numidalis Hampson, 1900
Tineidae
Anomalotinea cubiculella (Staudinger, 1859)
Ceratophaga infuscatella (de Joannis, 1897)
Infurcitinea marcunella (Rebel, 1901)
Myrmecozela lambessella Rebel, 1901
Perissomastix agenjoi (Petersen, 1957)
Perissomastix biskraella (Rebel, 1901)
Rhodobates algiricella (Rebel, 1901)
Trichophaga bipartitella (Ragonot, 1892)
Tortricidae
Epinotia hesperidana Kennel, 1921
References
External links
AfroMoths
Maur
Moths
Mauritania |
1184796 | https://en.wikipedia.org/wiki/Intuit | Intuit | Intuit Inc. is an American business that specializes in financial software. The company is headquartered in Mountain View, California, and its CEO is Sasan Goodarzi. More than 95% of its revenues and earnings come from its activities within the United States. Intuit's products include the tax preparation application TurboTax, the personal finance app Mint and the small business accounting program QuickBooks. Intuit has lobbied extensively against the IRS providing taxpayers with free pre-filled forms, as is the norm in other developed countries.
Intuit offers a free online service called TurboTax Free File as well as a similarly named service called TurboTax Free Edition which is not free for most users. TurboTax Free File was developed as part of an agreement whereby members of the Free File Alliance would offer tax preparation for individuals below an income threshold for free in exchange for the IRS not providing taxpayers with free pre-filled forms. In 2019, investigations by ProPublica found that Intuit deliberately steered taxpayers from the free TurboTax Free File to the paid TurboTax Free Edition using tactics including search engine delisting and a deceptive discount targeted to members of the military. Subsequent investigations by the Senate Committee on Homeland Security and Governmental Affairs and the New York State Department of Financial Services reached similar conclusions, the latter concluding that Intuit engaged in "unfair and abusive practices".
As of 2021, Intuit is the subject of multiple lawsuits, state-level investigations, and is under investigation by the FTC.
History
The company was founded in 1983 by Scott Cook and Tom Proulx in Palo Alto, California.
Intuit was conceived by Scott Cook, whose prior work at Procter & Gamble helped him realize that personal computers could replace paper-and-pencil personal accounting. While searching for a programmer, he met Tom Proulx at Stanford. The two started Intuit, which initially operated out of a modest room on University Avenue in Palo Alto. The first version of Quicken was coded by Tom Proulx in Microsoft's BASIC programming language for the IBM PC and in UCSD Pascal for the Apple II, and had to contend with a dozen serious competitors.
In 1991, Microsoft decided to produce a competitor to Quicken called Microsoft Money. To win retailers' loyalty, Intuit included a US$15 rebate coupon, redeemable on software customers purchased in their stores. This was the first time a software company offered a rebate.
Around the same time, the company engaged John Doerr of Kleiner Perkins and diversified its product lineup. In 1993 Intuit went public and used the proceeds to make a key acquisition: the San Diego-based tax-preparation software company Chipsoft. The period after the IPO was marked by rapid growth and culminated in a buyout offer from Microsoft in 1994; at this time Intuit's market capitalization reached US$2 billion.
When the buyout fell through because of the United States Department of Justice's disapproval, the company came under intense pressure in the late 1990s when Microsoft started to compete vigorously with its core Quicken business. In response, Intuit launched new web-based products and put more emphasis on QuickBooks and on TurboTax. The company made a number of investments around this time. Among others, it purchased a large stake in Excite and acquired Lacerte Software, a Dallas-based developer of tax preparation software used by tax professionals. It also divested itself of its online bill payment service unit and extended and strengthened its partnership with CheckFree.
In June 2013, Intuit announced it would sell its financial services unit to private equity firm Thoma Bravo for $1.03 billion.
In June 2015, the firm laid off approximately 5% of its workforce as part of a company reorganization.
As of May 2018, Intuit had more than US$5 billion in annual revenue and a market capitalization of about US$50 billion. In August 2018, the company announced that Sasan Goodarzi would become Intuit's CEO at the beginning of 2019; outgoing CEO Brad Smith remained chairman of Intuit's board of directors. In August 2020, Intuit QuickBooks Canada announced a partnership with Digital Main Street, aiming to help digitally transform Canadian small businesses.
Current products
CEO Sasan Goodarzi oversees all product lines in all countries.
TurboTax
Offered in Basic, Standard, Premier, and Home & Business versions, as well as TurboTax 20 for preparing multiple returns.
QuickBooks
Small business accounting and financial management software, offered in EasyStart, Pro, and Premier versions.
QuickBooks Online
Web-based accounting software designed for companies to review business financials through live data and insights to help make clear business decisions.
Mint.com
Web-based personal finance service.
ProConnect
Professional tax products, including ProConnect Tax Online, Lacerte, ProSeries Professional, ProSeries Basic, and EasyAcct.
Credit Karma
Access to credit scores, reports, and monitoring.
QuickBooks Commerce
An open platform that consolidates sales channels into a central hub for product-based small businesses.
Mailchimp
E-mail marketing platform.
International operations
Canada
Intuit Canada ULC, an indirect wholly owned subsidiary of Intuit, is a developer of financial management and tax preparation software for personal finance and small business accounting. Services are delivered on a variety of platforms including application software, software connected to services, software as a service, platform as a service and mobile applications. Intuit Canada has employees located all across Canada, with offices in Edmonton, Alberta, and Mississauga, Ontario.
Intuit Canada traces its origins to the 1993 acquisition by Intuit of a Canadian tax preparation software developer. In 1992, Edmontonians and University of Alberta graduates Bruce Johnson and Chad Frederick had built a tax preparation product called WINTAX – Canada's first Microsoft Windows-based personal tax preparation software. In 1993, they agreed to be acquired by Chipsoft, manufacturer of the U.S. personal income tax software TurboTax. Shortly after the WINTAX acquisition, Chipsoft agreed to merge with Intuit, the developer of the Quicken financial software. Intuit Canada continued to update and support the WINTAX software, which was renamed QuickTax in 1995 and then renamed TurboTax in 2010. Intuit Canada quickly became the hub for international development at Intuit, producing localized versions of Quicken and QuickBooks for Canada (in French and English) and the United Kingdom. The U.K. version of Quicken was discontinued in 2005.
Current products of Intuit Canada
TurboTax (formerly QuickTax) – offered in Basic, Standard, Premier, and Home & Business versions, as well as TurboTax 20 for preparing multiple returns.
– French-language version of TurboTax – offered in de base, de luxe, premier and particuliers et entreprises versions.
TurboTax online – Online versions of Free, Student, Standard, Premier and Home & Business.
– Online versions of gratuit, étudiant, de luxe, premier and particuliers et entreprises.
SnapTax – an iPhone app that allows users to complete their income tax return on their iPhone
TurboTax Refund Calculator – an iPad app that estimates tax returns and illustrates how changes, such as having a baby, can impact your income tax return
QuickBooks – Small business accounting and financial management software, offered in EasyStart, Pro and Premier versions.
QuickBooks Payroll Solutions – extends QuickBooks Pro and Premier into an in-house payroll solution.
Intuit Merchant Service for QuickBooks – lets you process credit and debit transactions directly in any version of QuickBooks.
QuickBooks Enterprise Solutions – for midsized companies that require more capacity, functionality and support than is offered by traditional small business accounting software; includes QuickBooks Payroll.
QuickBooks Online – an online small business accounting and financial management solution, offered in EasyStart, Essentials, and Plus versions.
Intuit GoPayment – process and receive payments on the go through your mobile device.
– French-language version of QuickBooks, offered in Succès PME, Succès PME Pro, and Succès PME Premier versions
QuickBooks Succès PME Service de paie – French-language version of the Payroll Solutions
ProFile Basic and Premier Editions – Professional Tax Preparation Packages
Discontinued products of Intuit Canada
TaxWiz – tax preparation software that the company purchased in 2002 and discontinued in 2007
WillExpert – A software package for preparing personal wills (for use within all of Canada with the exception of Quebec, due to specific provincial legislation)
In 2008, Intuit Canada discontinued the TaxWiz software and added QuickTax Basic to their lineup. Changes made by the Canada Revenue Agency forced Intuit and other tax-preparation software companies to limit the number of returns available from their software to 20. This caused Intuit Canada to stop offering QuickTax Pro50 and Pro100 products, and they now offer QuickTax 20 as an alternative. Intuit Canada has since announced that for the 2010 tax year, they will discontinue use of the name QuickTax and replace it with the name TurboTax – thus bringing the product in line with Intuit's American tax-filing software.
Online communities
Intuit has several online communities, some of which offer integration or cross-sells into its other products. These include the QuickBooks online community for QuickBooks users and small business owners, the Quicken Online Community for Quicken users and those who need help with their personal finances, the Accountant Online Community, and JumpUp. Each consists of blogs, an expert locator map and event calendar, forums and discussion groups, podcasts, videocasts and webinars, and other user-created content.
JumpUp (formerly JackRabbit Beta) is a free social networking and resources site for small business owners and/or start-ups. Free tools and services include an interactive business planner, online training for developing a successful business plan, starting costs calculator, cash flow calculator, break-even calculator, templates for business planning and sample business plans.
TaxAlmanac is a free online tax research resource. The site includes information including the Internal Revenue Code, Treasury Regulations, Tax Court Cases, and a variety of articles.
Modeled after English Wikipedia, TaxAlmanac was launched in May 2005. The June 6, 2005 edition of Time magazine featured an article entitled "It's a Wiki, Wiki World" about English Wikipedia in which TaxAlmanac was highlighted as "A Community of Customers". The November 21, 2005 edition of Business Week featured an article titled "50 Smart Ways to Use the Web" in which TaxAlmanac was selected as one of the 50. The product made the short list as one of the 7 in the collaboration category. Intuit shut down TaxAlmanac effective June 1, 2014. Many of the users have migrated to a new site called TaxProTalk.com.
Zipingo was a free website where users could rate services such as contractors, restaurants, and other businesses. Ratings and comments were either entered from the website or through Quicken and QuickBooks. The site was closed by Intuit on August 23, 2007.
Finances
For the fiscal year 2021, Intuit reported earnings of US$2.062 billion, with an annual revenue of US$9.633 billion, an increase of 25.4% over the previous fiscal cycle. Intuit's shares traded at over $498.18 per share and total international net revenue was less than 5% of total net revenue.
Acquisitions and carve-outs
1990s
In 1993, Intuit acquired Chipsoft, a tax preparation software company based in San Diego.
In 1994, the firm acquired the tax preparation software division of Best Programs of Reston, VA. In the same year, Intuit acquired Parsons Technology from Bob Parsons for $64 million.
In 1996, it acquired GALT Technologies, Inc of Pittsburgh, PA.
In 1998, it acquired Lacerte Software Corp., which now operates as an Intuit subsidiary. The Lacerte subsidiary focuses on tax software used by professional accountants who prepare taxes for a living. It is generally used by larger firms with more complex workflows and clients.
On March 2, 1999, Intuit acquired Computing Resources Inc. of Reno, Nevada for approximately $200 million. This acquisition allowed Intuit to offer a payroll processing platform through its QuickBooks software program. In December 1999, Intuit purchased Rock Financial for a sum of $532M. The company was renamed Quicken Loans. In June 2002, Rock Financial founder Dan Gilbert led a small group of private investors in purchasing the Quicken Loans subsidiary back from Intuit.
2000s
In 2001, Intuit invested in the UK market, hiring a local management team led by Stephen Lee, managing director, and Neil Atkins, marketing director, aiming to become Europe's leading B2B and B2C packaged accounts solution.
In 2002, the firm acquired Management Reports International, a Cleveland-based real estate management software firm. The firm was renamed Intuit Real Estate Solutions (IRES) and offers real estate management products for Windows and the web. Also in 2002, it acquired Eclipse ERP, a real-time transaction-processing accounting software package used for order fulfillment, inventory control, accounting, purchasing, and sales, for $88 million.
In 2003, it acquired Innovative Merchant Solutions (IMS), a firm that provided merchant services to all types of businesses nationwide. The acquisition gave Intuit the ability to process credit cards through its core product, QuickBooks, without the need for hardware leasing. IMS could also provide traditional terminal-based credit card processing and download transactions directly into the QuickBooks software.
In November 2005, Intuit acquired MyCorporation.com, an online business document filing service, for $20 million from original founders Philip and Nellie Akalp.
In September 2006, it acquired StepUp Commerce, an online localized product listing syndicator, for $60 million in cash. In December 2006, it acquired Digital Insight, a provider of online banking services.
On August 17, 2007, Intuit sold Eclipse ERP to Activant for $100.5 million in cash.
In December 2007, it acquired Electronic Clearing House to add check processing power. In the same month, it acquired Homestead Technologies, which offers web site creation and e-commerce tools targeted at the small business market, for $170 million.
In April 2009, it acquired Boorah, a restaurant review site. On June 2, 2009, it announced the signing of a definitive agreement to purchase PayCycle Inc., an online payroll service, in an all-cash transaction for approximately $170 million. On September 14, 2009, Intuit Inc. agreed to acquire Mint.com, a free online personal finance service, for $170 million.
2010s
On January 15, 2010, Intuit Inc. spun off Intuit Real Estate Solutions (which Intuit acquired in 2002) as a stand-alone company. The new company took on its previous moniker, and is now known as MRI Software.
On May 21, 2010, Intuit acquired MedFusion, a Cary, NC-based leader in patient-to-provider communications, for approximately $91 million. On August 10, 2010, it acquired the personal finance management app Cha-Ching. On June 28, 2011, it acquired the Web banking technology assets of Mobile Money Ventures, a mobile finance provider, for an undisclosed amount. This acquisition was expected to position Intuit as the largest online and mobile technology provider to financial institutions.
On May 18, 2012, it acquired Demandforce, an automated small business marketing and customer communications SaaS provider, for approximately $423.5 million.
On August 15, 2012, it announced an agreement to sell their 'Grow Your Business' business unit to Endurance International. The sale included the Intuit Websites and Weblistings products which had been formed from the Homestead Technologies and StepUp Commerce acquisitions.
On July 1, 2013, it announced an agreement to sell their Intuit Financial Services (IFS) business unit (formerly known as Digital Insight) to Thoma Bravo for more than $1.03 billion. On August 19, 2013, it announced that they had sold their Intuit Health business unit (formerly known as MedFusion) back to MedFusion's founder, Steve Malik.
In August 2013, Intuit Inc. acquired tax planning software Good April for an undisclosed amount. On October 23, 2013, it acquired Level Up Analytics, a data consulting firm. On October 30, 2013, it acquired Full Slate, a developer of appointment scheduling software for small businesses.
In May 2014, Intuit Inc. bought Invitco to help bookkeepers put bill processing in the cloud. That same month, it acquired Check for approximately $360 million to offer bill pay across small business and personal finance products. In December 2014, it acquired Acrede, a UK-based provider of global, cross-border and cloud-based payroll services.
In March 2015, Intuit Inc. acquired Playbook HR.
In January 2016, Intuit Inc. announced an agreement to sell Demandforce to Internet Brands. On March 3, 2016, Intuit announced plans to sell Quicken to H.I.G. Capital. On March 8, 2016, it announced plans to sell Quickbase to private equity firm Welsh, Carson, Anderson & Stowe.
On May 1, 2017, Intuit announced it was selling TruPay.
Intuit acquired Bankstream in 2017. On December 5, 2017, Intuit announced its acquisition of TSheets for $340 million.
2020s
On February 24, 2020, Intuit CEO Sasan Goodarzi announced that the company planned to acquire Credit Karma for $7.1 billion. On August 3, 2020, Intuit announced its acquisition of TradeGecko for $100 million.
On September 13, 2021, Intuit announced its acquisition of Mailchimp for $12 billion.
Lobbying
In 2007, Intuit lobbied to ensure that taxpayers could not electronically file their tax returns directly with the IRS, negotiating a deal that prevented the IRS from setting up its own web portal for e-filing.
In 2009, the Los Angeles Times reported that Intuit spent nearly $2 million in political contributions to eliminate free online state tax filing for low-income residents in California. According to The New York Times, from 2009 to 2014, Intuit spent nearly $13 million on lobbying, as reported by OpenSecrets, as much as Apple. Intuit spent $1 million on the race for California state controller to support Tony Strickland, a Republican who opposed ReadyReturn, against John Chiang, a Democrat who supported ReadyReturn (and won). Joseph Bankman, a professor of tax law at Stanford Law School and an advocate of simplified filing, believes that the campaign warned politicians that if they supported free filing, Intuit would help their opponents.
On March 26, 2013, ProPublica reported that the company lobbied against return-free filing as recently as 2011. One year later, ProPublica reported that the company appeared to be linked to a number of op-eds and letters to Congress in a campaign advocating against direct tax filing backed by the Computer & Communications Industry Association, an advocacy organization of which Intuit is a member.
Awards
2020
Forbes Best Employers for Diversity 2020 - Ranked #48
Fortune Business Person of the Year 2020 - CEO Sasan Goodarzi ranked #16
Fortune World’s Best Workplaces 2020 - Ranked #11
HRC Corporate Equality List 2020
2021
Forbes World's Best Employers - Ranked #40
Forbes America's Best Employers for Women - Ranked #8
Fortune 100 Best Companies to Work For - Ranked #11
Lawsuits
An antitrust lawsuit and a class-action suit relating to the cold calling of other companies' employees were settled out of court, alongside Apple and Google.
In March 2015, The Washington Post and computer reporter Brian Krebs reported that two former employees alleged that Intuit knowingly allowed fraudulent returns to be processed on a massive scale as part of a revenue-boosting scheme. Both employees, former security team members for the company, stated that the company had ignored repeated warnings and suggestions on how to prevent fraud. One of the employees was reported to have filed a whistleblower complaint with the US Securities and Exchange Commission.
See also
Comparison of accounting software
APS Payroll
Automatic Data Processing (ADP)
H&R Block
Paychex
Paylocity Corporation
Reckon
Square (payment service)
SurePayroll
Sage Software
TaxACT
Xero
References
Citations
General references
- recounts the early years of Intuit, including the aborted acquisition by Microsoft.
Intuit to Make Health 'Quicken' – Health Data Management, April 13, 2006
CIGNA to offer members Intuit's Quicken Health – San Jose Business Journal, April 25, 2007
Business tax software: Take control of your tax claims – review of QuickTax Business Incorporated 2007 and Business Unincorporated 2007
External links
1983 establishments in California
1993 initial public offerings
American companies established in 1983
Companies based in Mountain View, California
Financial services companies established in 1983
Financial software companies
Software companies based in the San Francisco Bay Area
Software companies established in 1983
Software companies of the United States
Tax preparation companies of the United States
Pidgin (software)
Pidgin (formerly named Gaim) is a free and open-source multi-platform instant messaging client, based on a library named libpurple that has support for many instant messaging protocols, allowing the user to simultaneously log in to various services from a single application, with a single interface for both popular and obsolete protocols (from AOL to Discord), avoiding the need for separate software for each service and protocol.
The number of Pidgin users was estimated to be over three million in 2007.
Pidgin is widely used for its Off-the-Record Messaging (OTR) plugin, which offers end-to-end encryption. For this reason it is included in the privacy- and anonymity-focused operating system Tails.
History
The program was originally written by Mark Spencer, an Auburn University sophomore, as an emulation of AOL's IM program AOL Instant Messenger on Linux using the GTK+ toolkit. The earliest archived release was on December 31, 1998. It was named GAIM (GTK+ AOL Instant Messenger) accordingly. The emulation was not based on reverse engineering, but instead relied on information about the protocol that AOL had published on the web. Development was assisted by some of AOL's technical staff. Support for other IM protocols was added soon thereafter.
On 6 July 2015, Pidgin scored seven out of seven points on the Electronic Frontier Foundation's secure messaging scorecard. It received points for having communications encrypted in transit, having communications encrypted with keys the providers don't have access to (end-to-end encryption), making it possible for users to independently verify their correspondents' identities, keeping past communications secure if the keys are stolen (forward secrecy), having its code open to independent review (open source), having its security designs well documented, and having recent independent security audits.
Naming dispute
In response to pressure from AOL, the program was renamed to the acronymous-but-lowercase gaim. As AOL Instant Messenger gained popularity, AOL trademarked its acronym, "AIM", leading to a lengthy legal struggle with the creators of GAIM, who kept the matter largely secret.
On April 6, 2007, the project development team announced the results of their settlement with AOL, which included a series of name changes: Gaim became Pidgin, libgaim became libpurple, and gaim-text (the command-line interface version) became finch. The name Pidgin was chosen in reference to the term "pidgin", which describes communication between people who do not share a common language. The name "purple" refers to "prpl", the internal libgaim name for an IM protocol plugin.
Due to the legal issues, version 2.0 of the software was frozen in beta stages. Following the settlement, it was announced that the first official release of Pidgin 2.0.0 was hoped to occur during the two weeks from April 8, 2007. However, Pidgin 2.0 was not released as scheduled; Pidgin developers announced on April 22, 2007 that the delay was due to the preferences directory ".gaim".
Pidgin 2.0.0 was released on May 3, 2007. Other visual changes were made to the interface in this version, including updated icons.
Features
Pidgin provides a graphical front-end for libpurple using GTK+. Libpurple supports many instant-messaging protocols.
Pidgin supports multiple operating systems, including Windows and many Unix-like systems such as Linux, the BSDs, and AmigaOS. It is included by default in the operating systems Tails and Xubuntu.
Pluggability
The program is designed to be extended with plugins. Plugins are often written by third-party developers. They can be used to add support for protocols, which is useful for those such as Skype or Discord which have licensing issues (however, the users' data and interactions are still subject to their policies and eavesdropping). They can also add other significant features. For example, the "Off-the-Record Messaging" (OTR) plugin provides end-to-end encryption.
The TLS encryption system is pluggable, allowing different TLS libraries to be easily substituted. GnuTLS is the default, and NSS is also supported. Some operating systems' ports, such as OpenBSD's, choose to use OpenSSL or LibreSSL by default instead.
Contacts
Contacts with multiple protocols can be grouped into one single contact instead of managing multiple protocols, and contacts can be given aliases or placed into groups.
Pidgin supports automated actions called Buddy Pounces, which trigger when a contact logs on or changes status (such as moving from "Away" to "Available") and react in customizable ways.
File transfer
Pidgin supports file transfers for many protocols. It lacks some protocol-specific features like the folder sharing available from Yahoo. Direct, peer-to-peer file transfers are supported over protocols such as XMPP and MSN.
Voice and video chat
As of version 2.6 (released on August 18, 2009), Pidgin supports voice/video calls using Farstream. Calls can only be initiated through the XMPP protocol.
Miscellaneous
Further features include support for themes, emoticons, spell checking, and notification area integration.
Supported protocols
The following protocols are officially supported by libpurple 2.12.0, without any extensions or plugins:
Bonjour (Apple's implementation of Zeroconf)
Gadu-Gadu
IRC
Lotus Sametime
Novell GroupWise
OSCAR (AIM, ICQ, MobileMe, ...)
SIMPLE
SILC
XMPP/Jingle (Google Talk, LJ Talk, Gizmo5, ...)
Zephyr
Some XMPP servers provide transports, which allow users to access networks using non-XMPP protocols without having to install plugins or additional software. Pidgin's support for XMPP means that these transports can be used to communicate via otherwise unsupported protocols, including not only instant messaging protocols, but also protocols such as SMS or E-mail.
Additional protocols, supported by third-party plugins, include Discord, Telegram, Microsoft OCS/LCS (extended SIP/SIMPLE), Facebook Messenger, QQ, Skype via skype4pidgin plugin, WhatsApp, Signal and the Xfire gaming network (requires the Gfire plugin).
Plugins
Various other features are supported using third-party plugins. Such features include:
End-to-end encryption, through Off-the-Record Messaging (OTR)
Notifications (such as showing "toaster" popups or Snarl notifications, or lighting LEDs on laptops)
Showing contacts what the user is listening to in various media players
Adding mathematical formulas written in LaTeX to conversations
Skype text chat via skype4pidgin and newer SkypeWeb plugin
Discord text chat via the purple-discord plugin
Watching videos directly within a conversation when receiving a video sharing website link (YouTube, Vimeo)
Mascot
The mascot of Pidgin is a purple pigeon named The Purple Pidgin.
Criticisms
As observed by Wired in 2015, the libpurple codebase is "known for its bountiful security bugs". In 2011, security vulnerabilities were already discovered in popular OTR plugins using libpurple.
As of version 2.4 and later, the ability to manually resize the text input box of conversations was removed. This led to a fork, Carrier (originally named Funpidgin).
Passwords are stored in a plaintext file, readable by any person or program that can access the user's files. Version 3.0 of Pidgin (no announced release date) will support password storage in system keyrings such as KWallet and the GNOME Keyring.
Pidgin does not currently support pausing or reattempting file transfers.
Pidgin does not allow disabling the group sorting on the contact list.
Other notable software based on libpurple
Adium and Proteus (both for macOS)
Meebo (web-based, no longer available)
Telepathy Haze (a Tube for some of the protocols supported by the Telepathy framework)
QuteCom (cross-platform, focused on VoIP and video)
Instantbird (cross-platform, based on Mozilla's Gecko engine)
BitlBee and Minbif are IRCd-like gateways to multiple IM networks, and can be compiled with libpurple to increase functionality.
See also
Multiprotocol instant messaging application
Comparison of instant messaging protocols
Comparison of instant messaging clients
Comparison of Internet Relay Chat clients
Comparison of XMPP clients
Online chat
List of computing mascots
:Category:Computing mascots
References
External links
1998 software
Free instant messaging clients
Free software programmed in C
Instant messaging clients that use GTK
Windows instant messaging clients
AIM (software) clients
Free XMPP clients
Internet Relay Chat clients
Free Internet Relay Chat clients
Windows Internet Relay Chat clients
Portable software
Cross-platform free software
Applications using D-Bus
Yahoo! instant messaging clients
Software that uses Meson
Database
In computing, a database is an organized collection of data stored and accessed electronically. Small databases can be stored on a file system, while large databases are hosted on computer clusters or cloud storage. The design of databases spans formal techniques and practical considerations, including data modeling, efficient data representation and storage, query languages, security and privacy of sensitive data, and distributed computing issues such as supporting concurrent access and fault tolerance.
A database management system (DBMS) is the software that interacts with end users, applications, and the database itself to capture and analyze the data. The DBMS software additionally encompasses the core facilities provided to administer the database. The sum total of the database, the DBMS and the associated applications can be referred to as a database system. Often the term "database" is also used loosely to refer to any of the DBMS, the database system or an application associated with the database.
Computer scientists may classify database management systems according to the database models that they support. Relational databases became dominant in the 1980s. These model data as rows and columns in a series of tables, and the vast majority use SQL for writing and querying data. In the 2000s, non-relational databases became popular, collectively referred to as NoSQL because they use different query languages.
Terminology and overview
Formally, a "database" refers to a set of related data and the way it is organized. Access to this data is usually provided by a "database management system" (DBMS) consisting of an integrated set of computer software that allows users to interact with one or more databases and provides access to all of the data contained in the database (although restrictions may exist that limit access to particular data). The DBMS provides various functions that allow entry, storage and retrieval of large quantities of information and provides ways to manage how that information is organized.
Because of the close relationship between them, the term "database" is often used casually to refer to both a database and the DBMS used to manipulate it.
Outside the world of professional information technology, the term database is often used to refer to any collection of related data (such as a spreadsheet or a card index) as size and usage requirements typically necessitate use of a database management system.
Existing DBMSs provide various functions that allow management of a database and its data which can be classified into four main functional groups:
Data definition – Creation, modification and removal of definitions that define the organization of the data.
Update – Insertion, modification, and deletion of the actual data.
Retrieval – Providing information in a form directly usable or for further processing by other applications. The retrieved data may be made available in a form basically the same as it is stored in the database or in a new form obtained by altering or combining existing data from the database.
Administration – Registering and monitoring users, enforcing data security, monitoring performance, maintaining data integrity, dealing with concurrency control, and recovering information that has been corrupted by some event such as an unexpected system failure.
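The first three function groups can be illustrated with a minimal sketch using Python's built-in sqlite3 module; the table and rows here are hypothetical, and administration functions (user management, access control) are omitted since an embedded engine like SQLite largely lacks them:

```python
import sqlite3

# In-memory database; sqlite3 ships with the Python standard library.
conn = sqlite3.connect(":memory:")

# Data definition: create the structure that organizes the data.
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")

# Update: insert, modify, and delete actual rows.
conn.execute("INSERT INTO employee (name, dept) VALUES ('Ada', 'Engineering')")
conn.execute("INSERT INTO employee (name, dept) VALUES ('Grace', 'Research')")
conn.execute("UPDATE employee SET dept = 'Engineering' WHERE name = 'Grace'")

# Retrieval: ask for the data in a directly usable form.
rows = conn.execute(
    "SELECT name FROM employee WHERE dept = 'Engineering' ORDER BY name"
).fetchall()
print([r[0] for r in rows])  # ['Ada', 'Grace']
```

The same four-way split (DDL, DML, queries, administration commands) appears in most SQL dialects.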
Both a database and its DBMS conform to the principles of a particular database model. "Database system" refers collectively to the database model, database management system, and database.
Physically, database servers are dedicated computers that hold the actual databases and run only the DBMS and related software. Database servers are usually multiprocessor computers, with generous memory and RAID disk arrays used for stable storage. Hardware database accelerators, connected to one or more servers via a high-speed channel, are also used in large volume transaction processing environments. DBMSs are found at the heart of most database applications. DBMSs may be built around a custom multitasking kernel with built-in networking support, but modern DBMSs typically rely on a standard operating system to provide these functions.
Since DBMSs comprise a significant market, computer and storage vendors often take into account DBMS requirements in their own development plans.
Databases and DBMSs can be categorized according to the database model(s) that they support (such as relational or XML), the type(s) of computer they run on (from a server cluster to a mobile phone), the query language(s) used to access the database (such as SQL or XQuery), and their internal engineering, which affects performance, scalability, resilience, and security.
History
The sizes, capabilities, and performance of databases and their respective DBMSs have grown in orders of magnitude. These performance increases were enabled by the technology progress in the areas of processors, computer memory, computer storage, and computer networks. The concept of a database was made possible by the emergence of direct access storage media such as magnetic disks, which became widely available in the mid 1960s; earlier systems relied on sequential storage of data on magnetic tape. The subsequent development of database technology can be divided into three eras based on data model or structure: navigational, SQL/relational, and post-relational.
The two main early navigational data models were the hierarchical model and the CODASYL model (network model). These were characterized by the use of pointers (often physical disk addresses) to follow relationships from one record to another.
The relational model, first proposed in 1970 by Edgar F. Codd, departed from this tradition by insisting that applications should search for data by content, rather than by following links. The relational model employs sets of ledger-style tables, each used for a different type of entity. Only in the mid-1980s did computing hardware become powerful enough to allow the wide deployment of relational systems (DBMSs plus applications). By the early 1990s, however, relational systems dominated in all large-scale data processing applications, and they remain dominant: IBM DB2, Oracle, MySQL, and Microsoft SQL Server are the most searched DBMS. The dominant database language, standardised SQL for the relational model, has influenced database languages for other data models.
Object databases were developed in the 1980s to overcome the inconvenience of object–relational impedance mismatch, which led to the coining of the term "post-relational" and also the development of hybrid object–relational databases.
The next generation of post-relational databases in the late 2000s became known as NoSQL databases, introducing fast key–value stores and document-oriented databases. A competing "next generation" known as NewSQL databases attempted new implementations that retained the relational/SQL model while aiming to match the high performance of NoSQL compared to commercially available relational DBMSs.
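The key–value idea behind many NoSQL stores can be reduced to a very small interface, sketched below in plain Python (a toy illustration, not any particular product): values are opaque to the store, fetched and replaced whole by key, with no schema or joins.

```python
# Toy key-value store: the entire interface is put/get/delete by key.
class KeyValueStore:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value  # value is opaque to the store

    def get(self, key, default=None):
        return self._data.get(key, default)

    def delete(self, key):
        self._data.pop(key, None)

store = KeyValueStore()
store.put("user:42", {"name": "Ada", "email": "ada@example.com"})
print(store.get("user:42")["name"])  # Ada
```

Production systems add persistence, replication, and partitioning behind this same narrow interface.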
1960s, navigational DBMS
The introduction of the term database coincided with the availability of direct-access storage (disks and drums) from the mid-1960s onwards. The term represented a contrast with the tape-based systems of the past, allowing shared interactive use rather than daily batch processing. The Oxford English Dictionary cites a 1962 report by the System Development Corporation of California as the first to use the term "data-base" in a specific technical sense.
As computers grew in speed and capability, a number of general-purpose database systems emerged; by the mid-1960s a number of such systems had come into commercial use. Interest in a standard began to grow, and Charles Bachman, author of one such product, the Integrated Data Store (IDS), founded the Database Task Group within CODASYL, the group responsible for the creation and standardization of COBOL. In 1971, the Database Task Group delivered their standard, which generally became known as the CODASYL approach, and soon a number of commercial products based on this approach entered the market.
The CODASYL approach offered applications the ability to navigate around a linked data set which was formed into a large network. Applications could find records by one of three methods:
Use of a primary key (known as a CALC key, typically implemented by hashing)
Navigating relationships (called sets) from one record to another
Scanning all the records in a sequential order
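The three access methods can be sketched with a toy in-memory model in Python (the records and links are hypothetical): a hash-based lookup stands in for the CALC key, explicit links between records stand in for CODASYL sets, and an ordinary loop stands in for the sequential scan.

```python
# Toy model of CODASYL-style navigational access.
# Each record carries an explicit link ("set") to the next related record.
records = {
    1: {"name": "Smith", "dept": "Sales", "next_in_dept": 2},
    2: {"name": "Jones", "dept": "Sales", "next_in_dept": None},
    3: {"name": "Brown", "dept": "Ops",   "next_in_dept": None},
}

# 1. CALC key: direct lookup by primary key, implemented by hashing (a dict here).
rec = records[2]

# 2. Navigating set relationships: follow links from one record to another.
chain, cur = [], 1
while cur is not None:
    chain.append(records[cur]["name"])
    cur = records[cur]["next_in_dept"]

# 3. Sequential scan: visit every record in storage order.
all_names = [r["name"] for r in records.values()]

print(rec["name"], chain, all_names)
```

The application, not the DBMS, decides which path to follow, which is exactly the burden the relational model later removed.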
Later systems added B-trees to provide alternate access paths. Many CODASYL databases also added a declarative query language for end users (as distinct from the navigational API). However CODASYL databases were complex and required significant training and effort to produce useful applications.
IBM also had its own DBMS in 1966, known as Information Management System (IMS). IMS was a development of software written for the Apollo program on the System/360. IMS was generally similar in concept to CODASYL, but used a strict hierarchy for its model of data navigation instead of CODASYL's network model. Both concepts later became known as navigational databases due to the way data was accessed; the term was popularized by Bachman's 1973 Turing Award presentation The Programmer as Navigator. IMS is classified by IBM as a hierarchical database. IDMS and Cincom Systems' TOTAL database are classified as network databases. IMS remains in use.
1970s, relational DBMS
Edgar F. Codd worked at IBM in San Jose, California, in one of their offshoot offices that was primarily involved in the development of hard disk systems. He was unhappy with the navigational model of the CODASYL approach, notably the lack of a "search" facility. In 1970, he wrote a number of papers that outlined a new approach to database construction that eventually culminated in the groundbreaking A Relational Model of Data for Large Shared Data Banks.
In this paper, he described a new system for storing and working with large databases. Instead of records being stored in some sort of linked list of free-form records as in CODASYL, Codd's idea was to organize the data as a number of "tables", each table being used for a different type of entity. Each table would contain a fixed number of columns containing the attributes of the entity. One or more columns of each table were designated as a primary key by which the rows of the table could be uniquely identified; cross-references between tables always used these primary keys, rather than disk addresses, and queries would join tables based on these key relationships, using a set of operations based on the mathematical system of relational calculus (from which the model takes its name). Splitting the data into a set of normalized tables (or relations) aimed to ensure that each "fact" was only stored once, thus simplifying update operations. Virtual tables called views could present the data in different ways for different users, but views could not be directly updated.
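A view in this sense can be demonstrated with sqlite3 (hypothetical table and column names): the view exposes a different, restricted presentation of the same underlying rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, salary INTEGER)")
conn.executemany("INSERT INTO employee (name, salary) VALUES (?, ?)",
                 [("Ada", 120), ("Grace", 140), ("Edgar", 130)])

# A view presents the same data in a different shape for a different audience,
# here hiding the salary column.
conn.execute("CREATE VIEW directory AS SELECT name FROM employee")
names = conn.execute("SELECT name FROM directory ORDER BY name").fetchall()
print(names)  # [('Ada',), ('Edgar',), ('Grace',)]
```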
Codd used mathematical terms to define the model: relations, tuples, and domains rather than tables, rows, and columns. The terminology that is now familiar came from early implementations. Codd would later criticize the tendency for practical implementations to depart from the mathematical foundations on which the model was based.
The use of primary keys (user-oriented identifiers) to represent cross-table relationships, rather than disk addresses, had two primary motivations. From an engineering perspective, it enabled tables to be relocated and resized without expensive database reorganization. But Codd was more interested in the difference in semantics: the use of explicit identifiers made it easier to define update operations with clean mathematical definitions, and it also enabled query operations to be defined in terms of the established discipline of first-order predicate calculus; because these operations have clean mathematical properties, it becomes possible to rewrite queries in provably correct ways, which is the basis of query optimization. There is no loss of expressiveness compared with the hierarchic or network models, though the connections between tables are no longer so explicit.
In the hierarchic and network models, records were allowed to have a complex internal structure. For example, the salary history of an employee might be represented as a "repeating group" within the employee record. In the relational model, the process of normalization led to such internal structures being replaced by data held in multiple tables, connected only by logical keys.
For instance, a common use of a database system is to track information about users, their name, login information, various addresses and phone numbers. In the navigational approach, all of this data would be placed in a single variable-length record. In the relational approach, the data would be normalized into a user table, an address table and a phone number table (for instance). Records would be created in these optional tables only if the address or phone numbers were actually provided.
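This normalization can be sketched in sqlite3 (the schema and sample data are illustrative): each optional fact lives in its own table, linked back to the user by a logical key rather than embedded in a variable-length record.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE user    (id INTEGER PRIMARY KEY, name TEXT, login TEXT);
    CREATE TABLE address (user_id INTEGER REFERENCES user(id), street TEXT);
    CREATE TABLE phone   (user_id INTEGER REFERENCES user(id), number TEXT);
""")
conn.execute("INSERT INTO user (id, name, login) VALUES (1, 'Ada', 'ada')")
conn.execute("INSERT INTO user (id, name, login) VALUES (2, 'Edgar', 'efc')")
# Rows in the optional tables exist only when the data was actually provided.
conn.execute("INSERT INTO phone (user_id, number) VALUES (1, '555-0100')")

rows = conn.execute("""
    SELECT user.name, phone.number
    FROM user LEFT JOIN phone ON phone.user_id = user.id
    ORDER BY user.id
""").fetchall()
print(rows)  # [('Ada', '555-0100'), ('Edgar', None)]
```

The LEFT JOIN reassembles the record at query time; a user with no phone simply has no matching row, rather than an empty slot in a fixed record layout.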
As well as identifying rows/records using logical identifiers rather than disk addresses, Codd changed the way in which applications assembled data from multiple records. Rather than requiring applications to gather data one record at a time by navigating the links, they would use a declarative query language that expressed what data was required, rather than the access path by which it should be found. Finding an efficient access path to the data became the responsibility of the database management system, rather than the application programmer. This process, called query optimization, depended on the fact that queries were expressed in terms of mathematical logic.
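The division of labor described above is visible in SQLite, whose EXPLAIN QUERY PLAN command reports the access path its optimizer chose; the table, index, and data below are hypothetical, and the exact plan text varies between SQLite versions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")
conn.execute("CREATE INDEX idx_dept ON employee(dept)")
conn.executemany("INSERT INTO employee (name, dept) VALUES (?, ?)",
                 [("Ada", "Eng"), ("Grace", "Research")])

# The query states *what* is wanted; the DBMS's optimizer picks the access path.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM employee WHERE dept = ?", ("Eng",)
).fetchall()
print(plan)  # the plan reports a search using idx_dept rather than a full scan
```

The application never mentions idx_dept; the optimizer discovers and uses it on its own, which is the point of declarative querying.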
Codd's paper was picked up by two people at Berkeley, Eugene Wong and Michael Stonebraker. They started a project known as INGRES using funding that had already been allocated for a geographical database project and student programmers to produce code. Beginning in 1973, INGRES delivered its first test products which were generally ready for widespread use in 1979. INGRES was similar to System R in a number of ways, including the use of a "language" for data access, known as QUEL. Over time, INGRES moved to the emerging SQL standard.
IBM itself did one test implementation of the relational model, PRTV, and a production one, Business System 12, both now discontinued. Honeywell wrote MRDS for Multics, and now there are two new implementations: Alphora Dataphor and Rel. Most other DBMS implementations usually called relational are actually SQL DBMSs.
In 1970, the University of Michigan began development of the MICRO Information Management System based on D.L. Childs' Set-Theoretic Data model. MICRO was used to manage very large data sets by the US Department of Labor, the U.S. Environmental Protection Agency, and researchers from the University of Alberta, the University of Michigan, and Wayne State University. It ran on IBM mainframe computers using the Michigan Terminal System. The system remained in production until 1998.
Integrated approach
In the 1970s and 1980s, attempts were made to build database systems with integrated hardware and software. The underlying philosophy was that such integration would provide higher performance at a lower cost. Examples were IBM System/38, the early offering of Teradata, and the Britton Lee, Inc. database machine.
Another approach to hardware support for database management was ICL's CAFS accelerator, a hardware disk controller with programmable search capabilities. In the long term, these efforts were generally unsuccessful because specialized database machines could not keep pace with the rapid development and progress of general-purpose computers. Thus most database systems nowadays are software systems running on general-purpose hardware, using general-purpose computer data storage. However, this idea is still pursued for certain applications by some companies like Netezza and Oracle (Exadata).
Late 1970s, SQL DBMS
IBM started working on a prototype system loosely based on Codd's concepts as System R in the early 1970s. The first version was ready in 1974/5, and work then started on multi-table systems in which the data could be split so that all of the data for a record (some of which is optional) did not have to be stored in a single large "chunk". Subsequent multi-user versions were tested by customers in 1978 and 1979, by which time a standardized query language – SQL – had been added. Codd's ideas were establishing themselves as both workable and superior to CODASYL, pushing IBM to develop a true production version of System R, known as SQL/DS, and, later, Database 2 (DB2).
Larry Ellison's Oracle Database (or more simply, Oracle) started from a different chain, based on IBM's papers on System R. Though Oracle V1 implementations were completed in 1978, it was with Oracle Version 2 in 1979 that Ellison beat IBM to market.
Stonebraker went on to apply the lessons from INGRES to develop a new database, Postgres, which is now known as PostgreSQL. PostgreSQL is often used for global mission-critical applications (the .org and .info domain name registries use it as their primary data store, as do many large companies and financial institutions).
In Sweden, Codd's paper was also read and Mimer SQL was developed from the mid-1970s at Uppsala University. In 1984, this project was consolidated into an independent enterprise.
Another data model, the entity–relationship model, emerged in 1976 and gained popularity for database design as it emphasized a more familiar description than the earlier relational model. Later on, entity–relationship constructs were retrofitted as a data modeling construct for the relational model, and the difference between the two has become irrelevant.
1980s, on the desktop
The 1980s ushered in the age of desktop computing. The new computers empowered their users with spreadsheets like Lotus 1-2-3 and database software like dBASE. The dBASE product was lightweight and easy for any computer user to understand out of the box. C. Wayne Ratliff, the creator of dBASE, stated: "dBASE was different from programs like BASIC, C, FORTRAN, and COBOL in that a lot of the dirty work had already been done. The data manipulation is done by dBASE instead of by the user, so the user can concentrate on what he is doing, rather than having to mess with the dirty details of opening, reading, and closing files, and managing space allocation." dBASE was one of the top selling software titles in the 1980s and early 1990s.
1990s, object-oriented
The 1990s, along with a rise in object-oriented programming, saw a growth in how data in various databases were handled. Programmers and designers began to treat the data in their databases as objects. That is to say that if a person's data were in a database, that person's attributes, such as their address, phone number, and age, were now considered to belong to that person instead of being extraneous data. This allows for relations between data to be relations to objects and their attributes and not to individual fields. The term "object–relational impedance mismatch" described the inconvenience of translating between programmed objects and database tables. Object databases and object–relational databases attempt to solve this problem by providing an object-oriented language (sometimes as extensions to SQL) that programmers can use as an alternative to purely relational SQL. On the programming side, libraries known as object–relational mappings (ORMs) attempt to solve the same problem.
2000s, NoSQL and NewSQL
XML databases are a type of structured document-oriented database that allows querying based on XML document attributes. XML databases are mostly used in applications where the data is conveniently viewed as a collection of documents, with a structure that can vary from the very flexible to the highly rigid: examples include scientific articles, patents, tax filings, and personnel records.
NoSQL databases are often very fast, do not require fixed table schemas, avoid join operations by storing denormalized data, and are designed to scale horizontally.
In recent years, there has been a strong demand for massively distributed databases with high partition tolerance, but according to the CAP theorem it is impossible for a distributed system to simultaneously provide consistency, availability, and partition tolerance guarantees. A distributed system can satisfy any two of these guarantees at the same time, but not all three. For that reason, many NoSQL databases are using what is called eventual consistency to provide both availability and partition tolerance guarantees with a reduced level of data consistency.
NewSQL is a class of modern relational databases that aims to provide the same scalable performance of NoSQL systems for online transaction processing (read-write) workloads while still using SQL and maintaining the ACID guarantees of a traditional database system.
Use cases
Databases are used to support internal operations of organizations and to underpin online interactions with customers and suppliers (see Enterprise software).
Databases are used to hold administrative information and more specialized data, such as engineering data or economic models. Examples include computerized library systems, flight reservation systems, computerized parts inventory systems, and many content management systems that store websites as collections of webpages in a database.
Classification
One way to classify databases involves the type of their contents, for example: bibliographic, document-text, statistical, or multimedia objects. Another way is by their application area, for example: accounting, music compositions, movies, banking, manufacturing, or insurance. A third way is by some technical aspect, such as the database structure or interface type. This section lists a few of the adjectives used to characterize different kinds of databases.
An in-memory database is a database that primarily resides in main memory, but is typically backed-up by non-volatile computer data storage. Main memory databases are faster than disk databases, and so are often used where response time is critical, such as in telecommunications network equipment.
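SQLite's `:memory:` mode is a convenient way to illustrate the trade-off: the database lives entirely in main memory and vanishes when the connection closes (the schema below is invented for the example):

```python
import sqlite3

mem = sqlite3.connect(":memory:")  # the database lives entirely in main memory
mem.execute("CREATE TABLE calls (id INTEGER, dest TEXT)")
mem.execute("INSERT INTO calls VALUES (1, '+1-555-0100')")
n = mem.execute("SELECT COUNT(*) FROM calls").fetchone()[0]
print(n)  # 1

# Closing the connection discards the data, which is why in-memory
# databases are typically backed up to non-volatile storage.
mem.close()
```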
An active database includes an event-driven architecture which can respond to conditions both inside and outside the database. Possible uses include security monitoring, alerting, statistics gathering and authorization. Many databases provide active database features in the form of database triggers.
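Database triggers are the most widely available active-database feature. The following sketch (with a hypothetical audit schema) uses a SQLite trigger to log every balance change without any application code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER);
CREATE TABLE audit_log (account_id INTEGER, old_balance INTEGER, new_balance INTEGER);
-- The trigger fires inside the database on every update.
CREATE TRIGGER log_balance AFTER UPDATE OF balance ON account
BEGIN
    INSERT INTO audit_log VALUES (OLD.id, OLD.balance, NEW.balance);
END;
""")
conn.execute("INSERT INTO account VALUES (1, 100)")
conn.execute("UPDATE account SET balance = 50 WHERE id = 1")

entry = conn.execute("SELECT * FROM audit_log").fetchone()
print(entry)  # (1, 100, 50)
```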
A cloud database relies on cloud technology. Both the database and most of its DBMS reside remotely, "in the cloud", while its applications are developed by programmers and later maintained and used by end-users through a web browser and Open APIs.
Data warehouses archive data from operational databases and often from external sources such as market research firms. The warehouse becomes the central source of data for use by managers and other end-users who may not have access to operational data. For example, sales data might be aggregated to weekly totals and converted from internal product codes to use UPCs so that they can be compared with ACNielsen data. Some basic and essential components of data warehousing include extracting, analyzing, and mining data, transforming, loading, and managing data so as to make them available for further use.
A deductive database combines logic programming with a relational database.
A distributed database is one in which both the data and the DBMS span multiple computers.
A document-oriented database is designed for storing, retrieving, and managing document-oriented, or semi structured, information. Document-oriented databases are one of the main categories of NoSQL databases.
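The essence of the document model can be sketched in a few lines of plain Python: each document is a self-contained JSON-serializable object, and documents in the same collection need not share a schema (the helper names `insert` and `find` are invented for this sketch):

```python
import json

# Minimal document-store sketch: a collection of schemaless documents.
collection = []

def insert(doc):
    # Store a serialized copy, as a document database would.
    collection.append(json.loads(json.dumps(doc)))

def find(field, value):
    return [d for d in collection if d.get(field) == value]

insert({"name": "Ada", "phones": ["+1-555-0100"]})
insert({"name": "Grace", "title": "RADM"})  # different fields: no fixed schema

result = find("name", "Grace")
print(result)  # [{'name': 'Grace', 'title': 'RADM'}]
```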
An embedded database system is a DBMS which is tightly integrated with an application software that requires access to stored data in such a way that the DBMS is hidden from the application's end-users and requires little or no ongoing maintenance.
End-user databases consist of data developed by individual end-users. Examples of these are collections of documents, spreadsheets, presentations, multimedia, and other files. Several products exist to support such databases. Some of them are much simpler than full-fledged DBMSs, with more elementary DBMS functionality.
A federated database system comprises several distinct databases, each with its own DBMS. It is handled as a single database by a federated database management system (FDBMS), which transparently integrates multiple autonomous DBMSs, possibly of different types (in which case it would also be a heterogeneous database system), and provides them with an integrated conceptual view.
Sometimes the term multi-database is used as a synonym to federated database, though it may refer to a less integrated (e.g., without an FDBMS and a managed integrated schema) group of databases that cooperate in a single application. In this case, typically middleware is used for distribution, which typically includes an atomic commit protocol (ACP), e.g., the two-phase commit protocol, to allow distributed (global) transactions across the participating databases.
A graph database is a kind of NoSQL database that uses graph structures with nodes, edges, and properties to represent and store information. General graph databases that can store any graph are distinct from specialized graph databases such as triplestores and network databases.
An array DBMS is a kind of NoSQL DBMS that allows modeling, storage, and retrieval of (usually large) multi-dimensional arrays such as satellite images and climate simulation output.
In a hypertext or hypermedia database, any word or a piece of text representing an object, e.g., another piece of text, an article, a picture, or a film, can be hyperlinked to that object. Hypertext databases are particularly useful for organizing large amounts of disparate information. For example, they are useful for organizing online encyclopedias, where users can conveniently jump around the text. The World Wide Web is thus a large distributed hypertext database.
A knowledge base (abbreviated KB, kb or Δ) is a special kind of database for knowledge management, providing the means for the computerized collection, organization, and retrieval of knowledge. It may also be a collection of data representing problems with their solutions and related experiences.
A mobile database can be carried on or synchronized from a mobile computing device.
Operational databases store detailed data about the operations of an organization. They typically process relatively high volumes of updates using transactions. Examples include customer databases that record contact, credit, and demographic information about a business's customers; personnel databases that hold information such as salary, benefits, and skills data about employees; enterprise resource planning systems that record details about product components and parts inventory; and financial databases that keep track of the organization's money, accounting, and financial dealings.
A parallel database seeks to improve performance through parallelization for tasks such as loading data, building indexes and evaluating queries.
The major parallel DBMS architectures which are induced by the underlying hardware architecture are:
Shared memory architecture, where multiple processors share the main memory space, as well as other data storage.
Shared disk architecture, where each processing unit (typically consisting of multiple processors) has its own main memory, but all units share the other storage.
Shared-nothing architecture, where each processing unit has its own main memory and other storage.
Probabilistic databases employ fuzzy logic to draw inferences from imprecise data.
Real-time databases process transactions fast enough for the result to come back and be acted on right away.
A spatial database can store the data with multidimensional features. The queries on such data include location-based queries, like "Where is the closest hotel in my area?".
A temporal database has built-in time aspects, for example a temporal data model and a temporal version of SQL. More specifically the temporal aspects usually include valid-time and transaction-time.
A terminology-oriented database builds upon an object-oriented database, often customized for a specific field.
An unstructured data database is intended to store in a manageable and protected way diverse objects that do not fit naturally and conveniently in common databases. It may include email messages, documents, journals, multimedia objects, etc. The name may be misleading since some objects can be highly structured. However, the entire possible object collection does not fit into a predefined structured framework. Most established DBMSs now support unstructured data in various ways, and new dedicated DBMSs are emerging.
Database management system
Connolly and Begg define database management system (DBMS) as a "software system that enables users to define, create, maintain and control access to the database". Examples of DBMSs include MySQL, PostgreSQL, Microsoft SQL Server, Oracle Database, and Microsoft Access.
The DBMS acronym is sometimes extended to indicate the underlying database model, with RDBMS for the relational, OODBMS for the object (oriented) and ORDBMS for the object–relational model. Other extensions can indicate some other characteristic, such as DDBMS for distributed database management systems.
The functionality provided by a DBMS can vary enormously. The core functionality is the storage, retrieval and update of data. Codd proposed the following functions and services a fully-fledged general purpose DBMS should provide:
Data storage, retrieval and update
User accessible catalog or data dictionary describing the metadata
Support for transactions and concurrency
Facilities for recovering the database should it become damaged
Support for authorization of access and update of data
Access support from remote locations
Enforcing constraints to ensure data in the database abides by certain rules
It is also generally expected that the DBMS will provide a set of utilities for such purposes as may be necessary to administer the database effectively, including import, export, monitoring, defragmentation and analysis utilities. The core part of the DBMS interacting between the database and the application interface is sometimes referred to as the database engine.
Often DBMSs will have configuration parameters that can be statically and dynamically tuned, for example the maximum amount of main memory on a server the database can use. The trend is to minimize the amount of manual configuration, and for cases such as embedded databases the need to target zero-administration is paramount.
Large enterprise DBMSs have tended to increase in size and functionality and can have involved thousands of person-years of development effort through their lifetime.
Early multi-user DBMS typically only allowed for the application to reside on the same computer with access via terminals or terminal emulation software. The client–server architecture was a development where the application resided on a client desktop and the database on a server allowing the processing to be distributed. This evolved into a multitier architecture incorporating application servers and web servers with the end user interface via a web browser with the database only directly connected to the adjacent tier.
A general-purpose DBMS will provide public application programming interfaces (API) and optionally a processor for database languages such as SQL to allow applications to be written to interact with the database. A special purpose DBMS may use a private API and be specifically customized and linked to a single application. For example, an email system may perform many of the functions of a general-purpose DBMS, such as message insertion, message deletion, attachment handling, blocklist lookup, and associating messages with an email address; however, these functions are limited to what is required to handle email.
Application
External interaction with the database will be via an application program that interfaces with the DBMS. This can range from a database tool that allows users to execute SQL queries textually or graphically, to a web site that happens to use a database to store and search information.
Application program interface
A programmer will code interactions to the database (sometimes referred to as a datasource) via an application program interface (API) or via a database language. The particular API or language chosen will need to be supported by the DBMS, possibly indirectly via a preprocessor or a bridging API. Some APIs aim to be database-independent, ODBC being a commonly known example. Other common APIs include JDBC and ADO.NET.
Database languages
Database languages are special-purpose languages, which allow one or more of the following tasks, sometimes distinguished as sublanguages:
Data control language (DCL) – controls access to data;
Data definition language (DDL) – defines data types such as creating, altering, or dropping tables and the relationships among them;
Data manipulation language (DML) – performs tasks such as inserting, updating, or deleting data occurrences;
Data query language (DQL) – allows searching for information and computing derived information.
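Three of these sublanguages can be demonstrated in a few lines through Python's sqlite3 module (the schema is invented for the example; SQLite has no DCL, so GRANT/REVOKE is shown only as a comment):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: define the data structures.
cur.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT, price REAL)")

# DML: insert and update data occurrences.
cur.execute("INSERT INTO product VALUES (1, 'widget', 9.99)")
cur.execute("UPDATE product SET price = 8.99 WHERE id = 1")

# DQL: search for information.
rows = cur.execute("SELECT name, price FROM product").fetchall()
print(rows)  # [('widget', 8.99)]

# DCL (not supported by SQLite) would look like, in most server DBMSs:
#   GRANT SELECT ON product TO some_user;
```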
Database languages are specific to a particular data model. Notable examples include:
SQL combines the roles of data definition, data manipulation, and query in a single language. It was one of the first commercial languages for the relational model, although it departs in some respects from the relational model as described by Codd (for example, the rows and columns of a table can be ordered). SQL became a standard of the American National Standards Institute (ANSI) in 1986, and of the International Organization for Standardization (ISO) in 1987. The standards have been regularly enhanced since and are supported (with varying degrees of conformance) by all mainstream commercial relational DBMSs.
OQL is an object model language standard (from the Object Data Management Group). It has influenced the design of some of the newer query languages like JDOQL and EJB QL.
XQuery is a standard XML query language implemented by XML database systems such as MarkLogic and eXist, by relational databases with XML capability such as Oracle and DB2, and also by in-memory XML processors such as Saxon.
SQL/XML combines XQuery with SQL.
A database language may also incorporate features like:
DBMS-specific configuration and storage engine management
Computations to modify query results, like counting, summing, averaging, sorting, grouping, and cross-referencing
Constraint enforcement (e.g. in an automotive database, only allowing one engine type per car)
Application programming interface version of the query language, for programmer convenience
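Constraint enforcement in particular can be sketched with a CHECK constraint, echoing the automotive example above (the schema is hypothetical); the database itself rejects the invalid row, with no application-level validation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE car (
    id INTEGER PRIMARY KEY,
    engine TEXT CHECK (engine IN ('petrol', 'diesel', 'electric'))
)""")

conn.execute("INSERT INTO car VALUES (1, 'electric')")  # allowed
try:
    conn.execute("INSERT INTO car VALUES (2, 'steam')")  # rejected by the DBMS
except sqlite3.IntegrityError as e:
    print("rejected:", e)

n = conn.execute("SELECT COUNT(*) FROM car").fetchone()[0]
print(n)  # 1
```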
Storage
Database storage is the container of the physical materialization of a database. It comprises the internal (physical) level in the database architecture. It also contains all the information needed (e.g., metadata, "data about the data", and internal data structures) to reconstruct the conceptual level and external level from the internal level when needed. Databases as digital objects contain three layers of information which must be stored: the data, the structure, and the semantics. Proper storage of all three layers is needed for future preservation and longevity of the database.

Putting data into permanent storage is generally the responsibility of the database engine, a.k.a. "storage engine". Though typically accessed by a DBMS through the underlying operating system (and often using the operating systems' file systems as intermediates for storage layout), storage properties and configuration settings are extremely important for the efficient operation of the DBMS, and thus are closely maintained by database administrators. A DBMS, while in operation, always has its database residing in several types of storage (e.g., memory and external storage). The database data and the additional needed information, possibly in very large amounts, are coded into bits. Data typically reside in the storage in structures that look completely different from the way the data look in the conceptual and external levels, but in ways that attempt to optimize these levels' reconstruction when needed by users and programs, as well as for computing additional types of needed information from the data (e.g., when querying the database).
Some DBMSs support specifying which character encoding was used to store data, so multiple encodings can be used in the same database.
Various low-level database storage structures are used by the storage engine to serialize the data model so it can be written to the medium of choice. Techniques such as indexing may be used to improve performance. Conventional storage is row-oriented, but there are also column-oriented and correlation databases.
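The row-versus-column distinction can be illustrated with a toy in-memory layout of the same logical table (the data and names are invented for the example):

```python
# Row-oriented: each record is stored contiguously.
rows = [
    {"id": 1, "city": "Troy", "pop": 49000},
    {"id": 2, "city": "Utica", "pop": 65000},
]

# Column-oriented: each attribute is stored contiguously.
columns = {
    "id": [1, 2],
    "city": ["Troy", "Utica"],
    "pop": [49000, 65000],
}

# Fetching a whole record favors the row layout...
record = rows[0]
# ...while an aggregate over one attribute favors the column layout,
# which reads only the values it needs.
total_pop = sum(columns["pop"])
print(record["city"], total_pop)  # Troy 114000
```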
Materialized views
Often storage redundancy is employed to increase performance. A common example is storing materialized views, which consist of frequently needed external views or query results. Storing such views saves the expensive computing of them each time they are needed. The downsides of materialized views are the overhead incurred when updating them to keep them synchronized with their original updated database data, and the cost of storage redundancy.
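SQLite has no MATERIALIZED VIEW statement, so the sketch below simulates one by storing a query result in an ordinary table (an approach that, like a real materialized view, must be refreshed when the base data change; the schema is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (product TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("widget", 10), ("widget", 5), ("gadget", 7)])

# "Materialize" an aggregate query result as a table so later reads
# avoid recomputing the expensive GROUP BY.
conn.execute("""CREATE TABLE sales_by_product AS
                SELECT product, SUM(amount) AS total
                FROM sales GROUP BY product""")

total = conn.execute(
    "SELECT total FROM sales_by_product WHERE product = 'widget'").fetchone()
print(total)  # (15,)
```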
Replication
Occasionally a database employs storage redundancy through replication of database objects (with one or more copies) to increase data availability (both to improve performance of simultaneous access by multiple end-users to the same database object, and to provide resiliency in case of partial failure of a distributed database). Updates of a replicated object need to be synchronized across the object copies. In many cases, the entire database is replicated.
Security
Database security deals with all various aspects of protecting the database content, its owners, and its users. It ranges from protection from intentional unauthorized database uses to unintentional database accesses by unauthorized entities (e.g., a person or a computer program).
Database access control deals with controlling who (a person or a certain computer program) is allowed to access what information in the database. The information may comprise specific database objects (e.g., record types, specific records, data structures), certain computations over certain objects (e.g., query types, or specific queries), or using specific access paths to the former (e.g., using specific indexes or other data structures to access information). Database access controls are set by special personnel, authorized by the database owner, who use dedicated protected security DBMS interfaces.
This may be managed directly on an individual basis, or by the assignment of individuals and privileges to groups, or (in the most elaborate models) through the assignment of individuals and groups to roles which are then granted entitlements. Data security prevents unauthorized users from viewing or updating the database. Using passwords, users are allowed access to the entire database or subsets of it called "subschemas". For example, an employee database can contain all the data about an individual employee, but one group of users may be authorized to view only payroll data, while others are allowed access to only work history and medical data. If the DBMS provides a way to interactively enter and update the database, as well as interrogate it, this capability allows for managing personal databases.
Data security in general deals with protecting specific chunks of data, both physically (i.e., from corruption, destruction, or removal; see, e.g., physical security) and in terms of interpreting them, or parts of them, as meaningful information (e.g., by looking at the strings of bits that they comprise and concluding specific valid credit-card numbers; see, e.g., data encryption).
Change and access logging records who accessed which attributes, what was changed, and when it was changed. Logging services allow for a forensic database audit later by keeping a record of access occurrences and changes. Sometimes application-level code is used to record changes rather than leaving this to the database. Monitoring can be set up to attempt to detect security breaches.
Transactions and concurrency
Database transactions can be used to introduce some level of fault tolerance and data integrity after recovery from a crash. A database transaction is a unit of work, typically encapsulating a number of operations over a database (e.g., reading a database object, writing, acquiring or releasing a lock, etc.), an abstraction supported in database and also other systems. Each transaction has well defined boundaries in terms of which program/code executions are included in that transaction (determined by the transaction's programmer via special transaction commands).
The acronym ACID describes some ideal properties of a database transaction: atomicity, consistency, isolation, and durability.
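Atomicity in particular can be demonstrated with Python's sqlite3 module, whose connection context manager commits on success and rolls back on an exception (the account schema and the "no negative balance" rule are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO account VALUES (?, ?)", [(1, 100), (2, 0)])
conn.commit()

try:
    with conn:  # one transaction: both updates succeed, or neither does
        conn.execute("UPDATE account SET balance = balance - 150 WHERE id = 1")
        conn.execute("UPDATE account SET balance = balance + 150 WHERE id = 2")
        if conn.execute("SELECT balance FROM account WHERE id = 1").fetchone()[0] < 0:
            raise ValueError("insufficient funds")
except ValueError:
    pass  # the context manager rolled both updates back (atomicity)

balances = conn.execute("SELECT balance FROM account ORDER BY id").fetchall()
print(balances)  # [(100,), (0,)]
```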
Migration
A database built with one DBMS is not portable to another DBMS (i.e., the other DBMS cannot run it). However, in some situations, it is desirable to migrate a database from one DBMS to another. The reasons are primarily economical (different DBMSs may have different total costs of ownership or TCOs), functional, and operational (different DBMSs may have different capabilities). The migration involves the database's transformation from one DBMS type to another. The transformation should maintain (if possible) the database-related applications (i.e., all related application programs) intact. Thus, the database's conceptual and external architectural levels should be maintained in the transformation. It may be desired that some aspects of the internal architectural level are maintained as well. A complex or large database migration may be a complicated and costly (one-time) project by itself, which should be factored into the decision to migrate, in spite of the fact that tools may exist to help migration between specific DBMSs. Typically, a DBMS vendor provides tools to help import databases from other popular DBMSs.
Building, maintaining, and tuning
After designing a database for an application, the next stage is building the database. Typically, an appropriate general-purpose DBMS can be selected to be used for this purpose. A DBMS provides the needed user interfaces to be used by database administrators to define the needed application's data structures within the DBMS's respective data model. Other user interfaces are used to select needed DBMS parameters (like security related, storage allocation parameters, etc.).
When the database is ready (all its data structures and other needed components are defined), it is typically populated with the application's initial data (database initialization, which is typically a distinct project; in many cases using specialized DBMS interfaces that support bulk insertion) before making it operational. In some cases, the database becomes operational while empty of application data, and data are accumulated during its operation.
After the database is created, initialized and populated, it needs to be maintained. Various database parameters may need changing and the database may need to be tuned for better performance; the application's data structures may be changed or added, new related application programs may be written to add to the application's functionality, etc.
Backup and restore
Sometimes it is desired to bring a database back to a previous state (for many reasons, e.g., cases when the database is found corrupted due to a software error, or if it has been updated with erroneous data). To achieve this, a backup operation is done occasionally or continuously, where each desired database state (i.e., the values of its data and their embedding in database's data structures) is kept within dedicated backup files (many techniques exist to do this effectively). When it is decided by a database administrator to bring the database back to this state (e.g., by specifying this state by a desired point in time when the database was in this state), these files are used to restore that state.
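The backup-then-restore cycle can be sketched with sqlite3's `Connection.backup` method (available since Python 3.7), which copies a consistent snapshot of the whole database even while the source connection remains in use (the file path and schema are invented for the example):

```python
import os
import sqlite3
import tempfile

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (x INTEGER)")
src.execute("INSERT INTO t VALUES (42)")

# Backup: copy the entire database state to a dedicated backup file.
path = os.path.join(tempfile.mkdtemp(), "backup.db")
dst = sqlite3.connect(path)
src.backup(dst)
dst.close()

# Restore: later, opening the backup file brings back that state.
restored = sqlite3.connect(path)
val = restored.execute("SELECT x FROM t").fetchone()
print(val)  # (42,)
```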
Static analysis
Static analysis techniques for software verification can be applied also in the scenario of query languages. In particular, the abstract interpretation framework has been extended to the field of query languages for relational databases as a way to support sound approximation techniques. The semantics of query languages can be tuned according to suitable abstractions of the concrete domain of data. The abstraction of relational database systems has many interesting applications, in particular for security purposes, such as fine-grained access control, watermarking, etc.
Miscellaneous features
Other DBMS features might include:
Database logs – This helps in keeping a history of the executed functions.
Graphics component for producing graphs and charts, especially in a data warehouse system.
Query optimizer – Performs query optimization on every query to choose an efficient query plan (a partial order (tree) of operations) to be executed to compute the query result. May be specific to a particular storage engine.
Tools or hooks for database design, application programming, application program maintenance, database performance analysis and monitoring, database configuration monitoring, DBMS hardware configuration (a DBMS and related database may span computers, networks, and storage units) and related database mapping (especially for a distributed DBMS), storage allocation and database layout monitoring, storage migration, etc.
Increasingly, there are calls for a single system that incorporates all of these core functionalities into the same build, test, and deployment framework for database management and source control. Borrowing from other developments in the software industry, some market such offerings as "DevOps for database".
Design and modeling
The first task of a database designer is to produce a conceptual data model that reflects the structure of the information to be held in the database. A common approach to this is to develop an entity–relationship model, often with the aid of drawing tools. Another popular approach is the Unified Modeling Language. A successful data model will accurately reflect the possible state of the external world being modeled: for example, if people can have more than one phone number, it will allow this information to be captured. Designing a good conceptual data model requires a good understanding of the application domain; it typically involves asking deep questions about the things of interest to an organization, like "can a customer also be a supplier?", or "if a product is sold with two different forms of packaging, are those the same product or different products?", or "if a plane flies from New York to Dubai via Frankfurt, is that one flight or two (or maybe even three)?". The answers to these questions establish definitions of the terminology used for entities (customers, products, flights, flight segments) and their relationships and attributes.
Producing the conceptual data model sometimes involves input from business processes, or the analysis of workflow in the organization. This can help to establish what information is needed in the database, and what can be left out. For example, it can help when deciding whether the database needs to hold historic data as well as current data.
Having produced a conceptual data model that users are happy with, the next stage is to translate this into a schema that implements the relevant data structures within the database. This process is often called logical database design, and the output is a logical data model expressed in the form of a schema. Whereas the conceptual data model is (in theory at least) independent of the choice of database technology, the logical data model will be expressed in terms of a particular database model supported by the chosen DBMS. (The terms data model and database model are often used interchangeably, but in this article we use data model for the design of a specific database, and database model for the modeling notation used to express that design).
The most popular database model for general-purpose databases is the relational model, or more precisely, the relational model as represented by the SQL language. The process of creating a logical database design using this model uses a methodical approach known as normalization. The goal of normalization is to ensure that each elementary "fact" is only recorded in one place, so that insertions, updates, and deletions automatically maintain consistency.
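The effect of normalization can be sketched with a toy example (SQLite through Python's sqlite3 module; the table and column names are illustrative). Recording the customer's city once, instead of with every order, is what lets a single update keep everything consistent:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Unnormalized: the customer's city is repeated with every order, so an
# address change must touch many rows and can leave them inconsistent.
con.execute("CREATE TABLE orders_flat (order_id INTEGER, customer TEXT, city TEXT)")
con.executemany("INSERT INTO orders_flat VALUES (?, ?, ?)",
                [(1, "Acme", "Oslo"), (2, "Acme", "Oslo")])

# Normalized: the city is a fact about the customer, recorded exactly once.
con.execute("CREATE TABLE customers (name TEXT PRIMARY KEY, city TEXT)")
con.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, "
            "customer TEXT REFERENCES customers(name))")
con.execute("INSERT INTO customers VALUES ('Acme', 'Oslo')")
con.executemany("INSERT INTO orders VALUES (?, ?)", [(1, "Acme"), (2, "Acme")])

# A single update now keeps every order consistent automatically.
con.execute("UPDATE customers SET city = 'Bergen' WHERE name = 'Acme'")
cities = con.execute(
    "SELECT DISTINCT city FROM orders JOIN customers ON customer = name"
).fetchall()
print(cities)  # [('Bergen',)]
```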
The final stage of database design is to make the decisions that affect performance, scalability, recovery, security, and the like, which depend on the particular DBMS. This is often called physical database design, and the output is the physical data model. A key goal during this stage is data independence, meaning that the decisions made for performance optimization purposes should be invisible to end-users and applications. There are two types of data independence: Physical data independence and logical data independence. Physical design is driven mainly by performance requirements, and requires a good knowledge of the expected workload and access patterns, and a deep understanding of the features offered by the chosen DBMS.
Another aspect of physical database design is security. It involves both defining access control to database objects as well as defining security levels and methods for the data itself.
Models
A database model is a type of data model that determines the logical structure of a database and fundamentally determines in which manner data can be stored, organized, and manipulated. The most popular example of a database model is the relational model (or the SQL approximation of relational), which uses a table-based format.
Common logical data models for databases include:
Navigational databases
Hierarchical database model
Network model
Graph database
Relational model
Entity–relationship model
Enhanced entity–relationship model
Object model
Document model
Entity–attribute–value model
Star schema
An object–relational database combines the two related structures.
Physical data models include:
Inverted index
Flat file
Other models include:
Multidimensional model
Array model
Multivalue model
Specialized models are optimized for particular types of data:
XML database
Semantic model
Content store
Event store
Time series model
External, conceptual, and internal views
A database management system provides three views of the database data:
The external level defines how each group of end-users sees the organization of data in the database. A single database can have any number of views at the external level.
The conceptual level unifies the various external views into a compatible global view. It provides the synthesis of all the external views. It is out of the scope of the various database end-users, and is rather of interest to database application developers and database administrators.
The internal level (or physical level) is the internal organization of data inside a DBMS. It is concerned with cost, performance, scalability and other operational matters. It deals with storage layout of the data, using storage structures such as indexes to enhance performance. Occasionally it stores data of individual views (materialized views), computed from generic data, if performance justification exists for such redundancy. It balances all the external views' performance requirements, possibly conflicting, in an attempt to optimize overall performance across all activities.
While there is typically only one conceptual (or logical) and physical (or internal) view of the data, there can be any number of different external views. This allows users to see database information in a more business-related way rather than from a technical, processing viewpoint. For example, a financial department of a company needs the payment details of all employees as part of the company's expenses, but does not need details about employees that are the interest of the human resources department. Thus different departments need different views of the company's database.
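External views of this kind are commonly implemented with the SQL CREATE VIEW statement. The following sketch (SQLite via Python's sqlite3 module, with hypothetical table and column names) shows a finance-oriented view that exposes payment details while omitting personal information:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employees "
            "(id INTEGER PRIMARY KEY, name TEXT, salary REAL, home_address TEXT)")
con.execute("INSERT INTO employees VALUES (1, 'Ada', 5000.0, '1 Main St')")

# External view for the finance department: payment details only,
# without the personal data held for human resources.
con.execute("CREATE VIEW finance_view AS SELECT id, name, salary FROM employees")

row = con.execute("SELECT * FROM finance_view").fetchone()
print(row)  # (1, 'Ada', 5000.0)
```

Any number of such views can be defined over the same conceptual schema, one per group of end-users.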
The three-level database architecture relates to the concept of data independence which was one of the major initial driving forces of the relational model. The idea is that changes made at a certain level do not affect the view at a higher level. For example, changes in the internal level do not affect application programs written using conceptual level interfaces, which reduces the impact of making physical changes to improve performance.
The conceptual view provides a level of indirection between internal and external. On one hand it provides a common view of the database, independent of different external view structures, and on the other hand it abstracts away details of how the data are stored or managed (internal level). In principle every level, and even every external view, can be presented by a different data model. In practice usually a given DBMS uses the same data model for both the external and the conceptual levels (e.g., relational model). The internal level, which is hidden inside the DBMS and depends on its implementation, requires a different level of detail and uses its own types of data structure types.
Separating the external, conceptual and internal levels was a major feature of the relational database model implementations that dominate 21st century databases.
Research
Database technology has been an active research topic since the 1960s, both in academia and in the research and development groups of companies (for example IBM Research). Research activity includes theory and development of prototypes. Notable research topics have included models, the atomic transaction concept, and related concurrency control techniques, query languages and query optimization methods, RAID, and more.
The database research area has several dedicated academic journals (for example, ACM Transactions on Database Systems-TODS, Data and Knowledge Engineering-DKE) and annual conferences (e.g., ACM SIGMOD, ACM PODS, VLDB, IEEE ICDE).
See also
Comparison of database tools
Comparison of object database management systems
Comparison of object–relational database management systems
Comparison of relational database management systems
Data hierarchy
Data bank
Data store
Database theory
Database testing
Database-centric architecture
Flat-file database
INP (database)
Journal of Database Management
Question-focused dataset
Notes
References
Sources
Further reading
Ling Liu and M. Tamer Özsu (Eds.) (2009). Encyclopedia of Database Systems, 4100 p., 60 illus.
Gray, J. and Reuter, A. Transaction Processing: Concepts and Techniques, 1st edition, Morgan Kaufmann Publishers, 1992.
Kroenke, David M. and David J. Auer. Database Concepts. 3rd ed. New York: Prentice, 2007.
Raghu Ramakrishnan and Johannes Gehrke, Database Management Systems
Abraham Silberschatz, Henry F. Korth, S. Sudarshan, Database System Concepts
Teorey, T.; Lightstone, S. and Nadeau, T. Database Modeling & Design: Logical Design, 4th edition, Morgan Kaufmann Press, 2005.
External links
DB File extension – information about files with the DB extension |
43889881 | https://en.wikipedia.org/wiki/Public%20Schools%20%26%20Colleges%20Jutial%20Gilgit | Public Schools & Colleges Jutial Gilgit | Public Schools & Colleges Jutial Gilgit is an English-medium institution located at Jutial Gilgit covering more than 18 acres (75,000 sq. meters) area. It is one of the largest institutions in Gilgit Baltistan having a strength of more than 7000 students and around 200 faculty members assisted by more than 120 other supporting staff. It takes inspiration from the English public school model. It was established in 1980.
History
The school was established in 1980 as Federal Government Primary School Jutial Gilgit under a resolution of the KANA Division, Government of Pakistan. The principal of the school is an Army officer of the rank of Colonel or Lieutenant Colonel, who serves for a period of two to four years. Initially, it had a strength of 150 students up to Grade VII. By 1983, it was upgraded to F.Sc (Pre-Engineering/Pre-Medical) for boys. In 1989, F.A/F.Sc classes were started for girls. By 1995, it had expanded to offer an undergraduate program.
Administrative authorities
One of the features of Public Schools & Colleges Jutial Gilgit is that it works under the close supervision of the Pakistan Army and the local government of Gilgit-Baltistan. The institution is governed and administered by a Board of Governance (BoG). The Force Commander FCNA is the chairman and the Chief Secretary of Gilgit-Baltistan is the co-chairman of the BoG. The principal, who is a member of the BoG, has overall internal control and administration of the institution, assisted by vice principals and wing heads.
Facilities
Public Schools & Colleges is segregated into five wings.
Academics
Following are the wings with their range:
The Boys Senior Wing and Girls Senior Wing are administered by the Vice Principal (Boys Wing) and Vice Principal (Girls Wing) respectively. Each of the other wings is administered and controlled by a wing head. The wing head is responsible for managing class and teacher timetables, making teachers and students aware of the yearly college calendar, and addressing any issues within the wing.
Sports
The institute aims to provide each student an opportunity to participate in various indoor and outdoor games, so a games period for each class is included in the regular timetable. To promote harmony and sportsmanship and to build a sense of competition, the institution organizes an Inter House Sports Competition during the second week of October each year. Sports Week usually includes 7 to 10 events such as cricket, football, basketball, badminton, table tennis, running, and gymnastics. The institution has the following sports facilities for students:
Football/cricket: one stadium for football and cricket;
Basketball: three basketball grounds;
Badminton: two badminton courts;
Table tennis: six table tennis courts;
Martial arts: martial art training is provided under the guidance of a Black Belt (4th Dan, Yondan) instructor.
Gymnasium: The college has one gymnasium.
Competitions include Tug of War, Gymnastics, and Long Jump.
Sports rivalry
Public School's biggest rival in academics and sports is Army Public School and College, which is also located in Jutial. The slogan PUBLIC VS APS is very popular among the students. Any match between the two is considered a game of prestige, and students make every effort to win.
Achievements
Achievements at national level are:
Diana Baig, an old Jutialian, is a former player of the Pakistan Women's Football Team (selected in 2012) and now represents the Pakistan women's cricket team (international cricket debut in 2015 against Bangladesh);
Three Gold Medals (to Abdul Rahim Chopa, Saad Zulfiqar and Zahid Jamal), two Silver Medals (to Jibran Faisal and Usama Faisal), four Bronze Medals (to Sumair Ahmad, Ayan Yamin, Naeem Ahmad and Aftab Ahmad) in inter divisional Taekwondo championship at Gilgit Baltistan level in April 2015;
Bronze Medal in inter provincial basketball tournament held at Gilgit in October/November 2014;
One Gold Medal (to Mehvish Karim), one Silver Medal (to Zahid Jamal), two Bronze Medals (to Zainul Abideen and Asadullah) in inter provincial Martial Art Judo Championship at Quetta in 2010;
Three Gold, three Silver and two Bronze medals in inter provincial Judo Championship at Lahore in 2009.
Ahsan Aman Secured 3rd Position in the national Table Tennis event held by PSB(Pakistan Sports Board)
Aziz ullah Baig – (an old Jutialian) bagged third position in all Pakistan easy writing competition conducted by Pakistan Academy of Sciences (PAS) in 1989
Laboratories
Physics: Two laboratories
Chemistry: Two laboratories
Zoology: One laboratory
Botany: One laboratory
Psychology: One laboratory
Computer: Three laboratories
Medical Facilities
For quick response and primary care of medical issues, the institution has three dispensaries staffed by three qualified nurses appointed by the Government of Pakistan. Basic medication and treatment are provided in these dispensaries for staff members and students.
Banking Facility
Students can deposit their dues and make financial transactions at Bank Alfalah Jutial, Askari Bank Gilgit and Karakoram Cooperative Bank Kashrote. To make the system even smoother, the institution hosts a sub-branch of Karakoram Cooperative Bank on campus that operates during the institution's office hours.
Other Facilities
Seven transport buses for students' pick and drop
Two libraries
One Masjid
One Daycare Center
Five tuck shops (canteens)
Housing and hostel facility for staff members
WiFi connections throughout campus;
Staff members and students use the gymnasium of FCNA.
Courses
Courses at degree level
At degree level the college offers the Associate Degree of Arts (ADA) and the Associate Degree of Science (ADS), for women only. The college is affiliated with the University of the Punjab for this program. The courses offered at this level are as follows:
Associate Degree of Science (ADS)
Compulsory Courses:
English Language (100 Marks); Islamiat/Ethics (100 Marks), Pakistan Studies (100 Marks)
Elective Subjects:
Students have to select any of the following groups; each subject in a group carries 200 Marks:
Physics, Mathematics A, Mathematics B
Zoology, Chemistry, Botany
Applied Psychology, Botany and Zoology
Applied Psychology, Physics and Chemistry
Associate Degree of Arts (ADA)
Compulsory Courses:
English Literature (200 Marks); Islamiat/Ethics (100 Marks), Pakistan Studies (100 Marks)
Elective Subjects:
Students select any two subjects (200 Marks each) from the list:
Arabic, Applied Psychology, Economics, Education, Islamic Studies and Sociology
Optional Subjects:
Students select any one subject (100 Marks) from the list (other than the two selected from elective subjects):
Arabic, Applied Psychology, Economics, Education, Islamic Studies and Sociology
Courses at HSSC and SSC Level
Higher Secondary School Certificate (HSSC)
Compulsory courses:
English, Urdu, Islamiat, Pakistan Studies
Optional Courses:
Sciences: 1- Pre-Engineering (Mathematics, Physics, Chemistry); 2- Pre-Medical (Biology, Physics, Chemistry); 3- Intermediate in Computer Sciences (ICA: Mathematics, Chemistry, Computer Science)
Humanities: this program is offered only for girls. Students opt for any three subjects from: Applied Psychology, Economics, Education, Islamic Studies and Sociology.
Secondary School Certificate (SSC)
Compulsory Subjects:
Mathematics, English, Urdu, Islamiat, Pakistan Studies
Optional Subjects:
Basic science subjects such as Physics, Chemistry, Biology and Computer Science are taught at this level. Students' only choice is whether to study Biology or Computer Science alongside Physics and Chemistry.
Houses
Students have been allotted the following houses:
Jinnah House, Iqbal House, Karnal Sher House, Lalik Jan House, Razia House, Zubaida House, Fatima House, Rabia House
References
PS&CS Overview Jutial Gilgit
External links
Video lectures on YouTube
Facebook
Website PSCJ.EDU.PK
Military schools in Pakistan
Universities and colleges in Gilgit-Baltistan
Schools in Gilgit-Baltistan
Gilgit |
67712700 | https://en.wikipedia.org/wiki/Jason%20Saine | Jason Saine | Jason Saine is a Republican member of the North Carolina House of Representatives, having represented the 97th district (based in Lincoln County) since his appointment in 2011. A public relations and social media manager from Lincolnton, he was re-elected to the seat in 2012, 2014, 2016, 2018, and 2020.
Electoral history
2020
2018
2016
2014
2012
Committee Assignments
2021-2022
Appropriations (Senior Chair)
Appropriations - Information Technology
Energy and Public Utilities
Ethics
Judiciary I
Redistricting (Vice Chair)
Rules, Calendar, and Operations
Alcoholic Beverage Control
2019-2020
Appropriations (Senior Chair)
Appropriations - Information Technology
Energy and Public Utilities
Ethics
Redistricting
Rules, Calendar, and Operations
Alcoholic Beverage Control
2017-2018
Appropriations (Vice Chair)
Appropriations - Information Technology
Rules, Calendar, and Operations
Finance (Chair)
Education (K-12)
Alcoholic Beverage Control
2015-2016
Appropriations (Vice Chair)
Appropriations - Information Technology (Chair)
Rules, Calendar, and Operations
Finance (Senior Chair)
Elections
Health
Judiciary II
Commerce and Job Development
Alcoholic Beverage Control
2013-2014
Appropriations (Vice Chair)
Rules, Calendar, and Operations
Education
Elections
Judiciary
Transportation
Commerce and Job Development (Vice Chair)
Alcoholic Beverage Control
References
External links
Campaign Website
Year of birth missing (living people)
Living people
University of North Carolina at Charlotte alumni
Columbia Southern University alumni
Members of the North Carolina House of Representatives
North Carolina Republicans
21st-century American politicians
People from Lincolnton, North Carolina |
710908 | https://en.wikipedia.org/wiki/Ps%20%28Unix%29 | Ps (Unix) | In most Unix and Unix-like operating systems, the ps program (short for "process status") displays the currently-running processes. A related Unix utility named top provides a real-time view of the running processes.
Implementations
KolibriOS includes an implementation of the command. The command has also been ported to the IBM i operating system. In Windows PowerShell, ps is a predefined command alias for the Get-Process cmdlet, which essentially serves the same purpose.
Examples
# ps
PID TTY TIME CMD
7431 pts/0 00:00:00 su
7434 pts/0 00:00:00 bash
18585 pts/0 00:00:00 ps
Users can pipeline ps with other commands, such as less to view the process status output one page at a time:
$ ps -A | less
Users can also use the ps command in conjunction with the grep command (see the pgrep and pkill commands) to find information about a single process, such as its ID:
$ # Trying to find the PID of `firefox-bin` which is 2701
$ ps -A | grep firefox-bin
2701 ? 22:16:04 firefox-bin
The use of pgrep simplifies the syntax and avoids potential race conditions:
$ pgrep -l firefox-bin
2701 firefox-bin
To see every process running as root in user format:
# ps -U root -u
USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND
root 1 0.0 0.0 9436 128 - ILs Sun00AM 0:00.12 /sbin/init --
Options
ps has many options. On operating systems that support the SUS and POSIX standards, ps commonly runs with the options -ef, where "-e" selects every process and "-f" chooses the "full" output format. Another common option on these systems is -l, which specifies the "long" output format.
Most systems derived from BSD fail to accept the SUS and POSIX standard options because of historical conflicts. (For example, the "e" or "-e" option will display environment variables.) On such systems, ps commonly runs with the non-standard options aux, where "a" lists all processes on a terminal, including those of other users, "x" lists all processes without controlling terminals and "u" adds a column for the controlling user for each process. For maximum compatibility, there is no "-" in front of the "aux". "ps auxww" provides complete information about the process, including all parameters.
See also
Task manager
kill (command)
List of Unix commands
nmon — a system monitor tool for the AIX and Linux operating systems.
pgrep
pstree (Unix)
top (Unix)
lsof
References
Further reading
External links
Show all running processes in Linux using ps command
In Unix, what do the output fields of the ps command mean?
Unix SUS2008 utilities
Unix process- and task-management-related software
Plan 9 commands
Inferno (operating system) commands |
17868275 | https://en.wikipedia.org/wiki/Second%20Life%20Grid | Second Life Grid | The Second Life Grid is the platform and technology behind 3D online virtual world Second Life. In April 2008, IBM announced that it would explore future deployment of a portion of the Second Life Grid behind a corporate firewall.
Technical information
The flat, Earth-like world of Second Life is simulated on a large array of Debian servers, referred to as the Grid. The world is divided into 256x256 m areas of land, called Regions. Each Region is simulated by a single named server instance, and is given a unique name and content rating (PG, Mature or Adult). Multiple server instances can be run on a single physical server, but generally each instance is given a dedicated CPU core of its own. Modern servers with two dual-core processors usually support four separate server instances.
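The mapping from a global position to a 256 m region can be sketched with simple integer arithmetic. This is only an illustration of the grid layout described above, not Linden Lab's actual server code, and the function name is invented:

```python
REGION_SIZE = 256  # metres per region side

def region_for(x, y):
    """Map a global (x, y) position in metres to region grid coordinates
    and the local offset inside that region."""
    return (x // REGION_SIZE, y // REGION_SIZE), (x % REGION_SIZE, y % REGION_SIZE)

# A point 1000 m east and 300 m north of the origin lands in region (3, 1),
# 232 m and 44 m into that region.
print(region_for(1000, 300))  # ((3, 1), (232, 44))
```

Because each region maps to one named server instance, this kind of arithmetic is all that is needed to route an avatar's position to the simulator responsible for it.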
The Second Life world runs on Linden Time, which is identical to the Pacific Time Zone. The virtual world follows the North American Daylight Saving Time convention. Hence it runs 7 hours behind UTC most of the year, and 8 hours behind when Standard Time is in effect during the winter. The servers' log files actually record events in UTC, however.
Physics simulation
Each server instance runs a physics simulation to manage the collisions and interactions of all objects in that region. Objects can be nonphysical and nonmoving, or actively physical and movable. Complex shapes may be linked together in groups of up to 255 separate primitives. Additionally, each player's avatar is treated as a physical object so that it may interact with physical objects in the world.
As of April 1, 2008, Second Life simulators use the Havok 4 physics engine for all in-game dynamics. This new engine is capable of simulating thousands of physical objects at once. However, more than 500 constantly interacting collisions have noticeable impact on simulator performance. The previous Havok 1 installment of the physics engine caused what is known as the Deep Think condition; processing overlapping object collisions endlessly. It has been alleviated through the introduction of an overlap ejection capability. This allows overlapped objects to separate and propel apart as if compressing two springs against each other.
Asset storage
Every item in the Second Life universe is referred to as an asset. This includes the shapes of the 3D objects known as primitives, the digital images referred to as textures that decorate primitives, digitized audio clips, avatar shape and appearance, avatar skin textures, LSL scripts, information written on notecards, and so on. Each asset is referenced with a universally unique identifier or UUID.
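The role of UUIDs as asset keys can be illustrated with Python's standard uuid module. The `store_asset` helper and the in-memory dictionary below are hypothetical stand-ins for the real asset server farm:

```python
import uuid

# Each asset gets a universally unique identifier; collisions are
# astronomically unlikely, so identifiers can be generated independently
# by many servers without coordination.
asset_store = {}

def store_asset(data):
    asset_id = uuid.uuid4()      # random 128-bit identifier
    asset_store[asset_id] = data
    return asset_id

texture_id = store_asset(b"\x89PNG...texture bytes...")
print(texture_id)                    # e.g. 9f1c2d3e-4b5a-...
print(asset_store[texture_id][:4])   # b'\x89PNG'
```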
Assets are stored in their own dedicated MySQL server farm, comprising all data that has ever been created by anyone who has been in the SL world. As of December 2007, the total storage was estimated to consume 100 terabytes of server capacity. The asset servers function independently of the region simulators, though the region simulators request object data from the asset servers when a new object loads into the simulator.
As the popularity of Second Life has increased, the strain on the database engine to quickly and efficiently store and retrieve data has also continued to increase, frequently outpacing the ability of the Linden staff to keep their asset farm equipped to handle the number of users logged into the world at the same time.
Under severe load conditions it is common for the database engine to simply not reply to requests in a timely fashion, causing objects to not rez or delete as expected, or for the client inventory to not load, or the currency balance to not appear in the client program. Searching for locations, people, or classifieds may also fail under heavy load conditions. The database load is typically the most severe on weekends, particularly Sunday afternoons (Second Life Time), while the system can function just fine when accessed during low-load times such as at night or in the middle of the week during the day.
Software
The Second Life software comprises the viewer (also known as the client) executing on the Resident's computer, and several thousand servers operated by Linden Lab. There is an active beta-grid that has its own special client, which is updated very regularly, and is used for constant software testing by volunteers. This testing software was introduced to eliminate the short amounts of time between real updates, and increase its overall quality. The beta-grid reflects the standard main-grid, except that the actions taken within it are not stored by the servers; it is for testing purposes only. Every few months, the standard software is replaced by the beta-grid software, intended as a big upgrade. The Second Life user-base is growing rapidly, and this has stimulated both social and technological changes to the world; the addition of new features also provides periodic boosts to the growth of the economy.
Linden Lab pursues the use of open standards technologies, and uses free and open source software such as Apache, MySQL and Squid. The plan is to move everything to open standards by standardizing the Second Life protocol. Cory Ondrejka, former CTO of Second Life, has stated that some time after everything has been standardized, both the client and the server will be released as free and open source software.
The current in-house virtual machine will soon be replaced with Mono, which will reportedly produce a dramatic speed improvement.
uBrowser, an OpenGL port of the Gecko rendering engine, which has been used in the client since version 1.10.1 to display the Help documentation, will also be used to display webpages on any of the surfaces of any 3D object the Resident creates.
Linden Lab provides viewers for Microsoft Windows 2000/XP, Mac OS X, and most distributions of Linux. As of mid-2007, Microsoft Windows Vista is not yet officially supported although the viewer will generally run on Vista systems. In the past, viewer upgrades were usually mandatory; the old viewer would not work with the new version of the server software. However, Linden Lab is working on a more flexible protocol that will allow clients and servers to send and take whatever data they may require, hence differing versions would nonetheless be able to work together. The project is known as Het-Grid or heterogeneous grid and the first iteration of the server software was deployed to the Main Grid over a few weeks in August 2007.
As of January 8, 2007, the Viewer is distributed under version 2 of the GNU General Public License, with an additional clause allowing combination with certain other free software packages which have otherwise-incompatible licenses. Currently not all of the required dependencies have been released.
Modified viewer software is available from third parties. The most popular is the Nicholaz Edition; this viewer, produced by Nicholaz Beresford, includes bug fixes developed outside Linden Lab that are not yet included in the Linden Lab code. The Electric Sheep Company has introduced the OnRez Viewer, which makes substantial changes to the design of the user interface. ShoopedLife is a commonly used Second Life client that generates randomized hardware details and sends them to the Second Life server as part of the login, rendering the user anonymous, save for their IP address.
An independent project, libopenmetaverse, offers a function library for interacting with Second Life servers. libopenmetaverse has been used to create non-graphic third party viewers, including SLEEK, a text browser using .NET, and Ajaxlife, a text viewer that runs in a web browser.
The OS X viewer is a universal binary and is about twice the size of the Windows and Linux binaries.
Animation editors using the Biovision Hierarchy file format such as Poser, and Avimator are compatible with SL.
Further development
In 2007, Linden Lab began work on improving the user experience of Second Life. On December 6, 2007, a new download client (commonly known as a viewer) was announced. Codenamed 'Windlight', it came with many improvements to system stability as well as a completely new rendering engine, including atmospheric shaders, a new sky, new water, and hundreds of other improvements to the quality of Second Life. Until 2010, Windlight was the default client available as the main download from the Second Life website. However, on February 19, 2008, Linden Lab announced the release of another client, codenamed 'Dazzle'. This client came with changes to the stability of the client itself as well as an overhauled user interface, which received mixed feedback from users who chose to download it. Along with many fixes, usability was also improved. While the 'First Look' Dazzle client no longer exists, its further developed version currently exists as a 'release candidate' on the Second Life test software page of their website.
Dazzle was finally released as v2 of the official client with many new user interface features in Spring 2010.
Protocol
In May 2006 it was announced that the Second Life protocol had been reverse-engineered. A wiki was set up to further the effort.
Since this project produced some useful software, Linden Lab modified the TOS to allow third-party programs to access Second Life, enabling the project to be formalized under the name libsecondlife. Among functions developed are a map API, the ability to create objects larger than normally allowed (recently disabled), and other unforeseen capabilities such as CopyBot.
OpenSimulator
In January 2007 OpenSimulator was founded as an open-source simulator project. The aim of this project is to develop full open-source server software for third parties who wish to establish separate grids.
OpenSimulator is BSD-licensed, written in C#, and runs in .NET Framework or Mono environments. The community is growing quickly, and several alternative Second Life grids already run on OpenSimulator.
References
Second Life |
68183484 | https://en.wikipedia.org/wiki/David%20M.%20Berry | David M. Berry | David M. Berry is a Professor of Digital Humanities at the University of Sussex, writer and musician. He is widely published on academic work related to the fields of critical theory, digital humanities, media theory and algorithms.
Biography
Berry's early work focused on the philosophy of technology and particularly understanding open source and free software. More recently his work has explored the area of critical digital humanities, the notion of explainability, and the historical idea of a university.
In 1994, Berry co-founded, with Gibby Zobel, the radical newspaper SchNEWS whilst living in Brighton, and he was involved in the protests against the Criminal Justice and Public Order Act 1994. Berry later went to work for Reuters Ltd in London, where he also founded a record label, LOCA Records, in Old Street with Marcus McCallion in 1999. Loca Records was notable for releasing music experimentally under open licences such as the GNU GPL and Creative Commons licenses. Whilst running the record label he released electronica under the names Meme (Loca Records) and Ward (Static Caravan Recordings). On 19 April 2000, John Peel played Meme's track Mandibles on BBC Radio 1, and on 5 February 2002 he played Ward's track Sesquipedalian Origins on the John Peel Show. On 7 March 2002, Peel played Sesquipedalian Origins again; confused over the rpm, he played it twice, the second time incorrectly at 45 rpm.
In 2000, Berry returned to Brighton to study for a Masters in Social and Political Thought, followed in 2002 by a PhD at the University of Sussex (funded by the ESRC). In 2007 he began work at Swansea University as a lecturer, moving in 2013 to the University of Sussex as a Reader (and later Professor). In 2015 he co-founded the Sussex Humanities Lab at the University of Sussex, exploring the relation between digital cultures, literatures, materialities and philosophy.
Berry's first book published in 2008, Copy, Rip, Burn: The Politics of Copyleft and Open Source, undertook an examination of the way in which the proponents of the free software and open source communities understood their respective projects and how they articulated them in terms of an often implicit political ideology. The aim was to situate their ideas and practices within a broader movement of economic change brought on by the digitalisation of the economy and the shift to a so-called information society. Part of this change involves a movement in the way in which society conceptualises many of the background assumptions in terms of new notions, such as computational metaphors, stories and claims of an "open" or "free" norm that governs particular spheres of activity, such as "open science".
The Philosophy of Software: Code and Mediation in the Digital Age, Berry's second book, is widely seen both as an important contribution to thinking about software, code and algorithms from a philosophical standpoint and as the outline of a useful research programme. From questions about the "whatness" of software and code, to issues raised by reading and writing code, to a general programme for a phenomenology of software, the book concludes with a discussion of the becoming-stream of contemporary life due to software and the "real-time" of streams. Considering that the book was written in 2011, it is remarkably prescient about the direction technology has taken, with the wide adoption of "streams" as a major form of interface in social media and other software products.
In Critical Theory and the Digital, published in 2014, the author looks to the Frankfurt School to develop a critical framework for thinking about software and algorithms. In this book he raises the particular issue of a form of software or computational metaphysics becoming prevalent as a new ideology which serves to mystify and obscure computation and its origins. As a result he proposes a new critical reading of the digital informed by an understanding of alienation and exploitation that is generated by computational technologies.
In Digital Humanities: Knowledge and Critique in a Digital Age, written with the Norwegian academic Anders Fagerjord, Berry examines the history and theory of the area known as digital humanities. The book is an important contribution that brings together the debates happening within digital humanities, while also widening and deepening them by proposing a theoretical, critical digital humanities that complements the often highly technical nature of digital humanities work.
In 2019 he released new music under the name ØxØ on Truant Recordings with fellow musician Barnaby Thorn in the genre of Conceptronica.
More recently Berry has been a member of the Internation Collective, led by the late Bernard Stiegler, which addressed the challenges of 21st-century climate change and sustainability in relation to imagining a new political economy in a post-computational world. The collective published its first book, Bifurquer: Il n'y a pas d'alternative, in 2020. The English translation, Bifurcation: There is no Alternative, was published by Open Humanities Press in 2021. His recent discussions of explainability have built on this previous work, particularly in relation to artificial intelligence, machine learning and meaning, together with theories of explanation.
He has held visiting fellowships at Kings College, London, Forschungskolleg Humanwissenschaften (Institute for Advanced Studies) at Goethe University Frankfurt am Main, The School of Advanced Study, London, Lincoln College and Mansfield College at the University of Oxford, Wolfson College and CRASSH at the University of Cambridge, the Parliamentary Office of Science and Technology at the Houses of Parliament, and the University of Oslo.
Notable works
As author
Berry, D. M. and Fagerjord, A. (2017) Digital Humanities: Knowledge and Critique in a Digital Age. London: Polity. ISBN 978-0745697666, pp. 248. [Translated into Japanese and Chinese]
Berry, D. M. (2014) Critical Theory and the Digital. New York: Bloomsbury Academic. ISBN 978-1441166395, pp. 279.
Berry, D. M. (2011) The Philosophy of Software: Code and Mediation in the Digital Age. London: Palgrave Macmillan. ISBN 978-0230244184, pp. 216.
Berry, D. M. (2008) Copy, Rip, Burn: The Politics of Copyleft and Open Source. London: Pluto Press. ISBN 978-0745324142, pp. 270.
As editor
Berry, D. M. and Dieter, M. (eds.) (2015) Postdigital Aesthetics: Art, Computation and Design. Palgrave Macmillan. ISBN 978-1137437198, pp. 300.
Berry, D. M. (ed.) (2012) Life in Code and Software, Open Humanities Press. ISBN 978-1607852834.
Berry, D. M. (ed.) (2012) Understanding Digital Humanities. London: Palgrave Macmillan. ISBN 978-0230292659, pp. 318.
Discography
Albums
EPs
Singles
References
Academics of the University of Sussex
Living people
Philosophers of technology
Philosophy academics
Philosophy writers
21st-century British philosophers
Continental philosophers
Critical theorists
Social critics
Social philosophers
Alumni of the University of Sussex
1974 births |
12414832 | https://en.wikipedia.org/wiki/Completely%20Fair%20Scheduler | Completely Fair Scheduler | The Completely Fair Scheduler (CFS) is a process scheduler that was merged into the 2.6.23 (October 2007) release of the Linux kernel and is the default scheduler of the tasks of the SCHED_NORMAL class (i.e., tasks that have no real-time execution constraints). It handles CPU resource allocation for executing processes, and aims to maximize overall CPU utilization while also maximizing interactive performance.
In contrast to the previous O(1) scheduler used in older Linux 2.6 kernels, which maintained and switched run queues of active and expired tasks, the CFS scheduler implementation is based on per-CPU run queues, whose nodes are time-ordered schedulable entities kept sorted by red–black trees. The CFS does away with the old notion of per-priority fixed time slices and instead aims at giving a fair share of CPU time to tasks (or, better, schedulable entities).
Algorithm
A task (i.e., a synonym for thread) is the minimal entity that Linux can schedule. However, it can also manage groups of threads, whole multi-threaded processes, and even all the processes of a given user. This design leads to the concept of schedulable entities, where tasks are grouped and managed by the scheduler as a whole. For this design to work, each task_struct task descriptor embeds a field of type sched_entity that represents the set of entities the task belongs to.
Each per-CPU run-queue of type cfs_rq sorts sched_entity structures in a time-ordered fashion into a red-black tree (or 'rbtree' in Linux lingo), where the leftmost node is occupied by the entity that has received the least slice of execution time (which is saved in the vruntime field of the entity). The nodes are indexed by processor "execution time" in nanoseconds.
A "maximum execution time" is also calculated for each process to represent the time the process would have expected to run on an "ideal processor". This is the time the process has been waiting to run, divided by the total number of processes.
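As a rough illustration of that rule (the function name and figures here are ours, not the kernel's), the time expected on an "ideal processor" is simply the waiting time divided evenly among the runnable processes:

```python
def fair_target(wait_time_ns, nr_processes):
    """Time a process 'would have expected' to run on an ideal
    processor: its waiting time split across all processes."""
    return wait_time_ns // nr_processes

# A process that has waited 80 ms while 4 processes are runnable
# would expect 20 ms of execution time.
print(fair_target(80_000_000, 4))  # → 20000000
```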
When the scheduler is invoked to run a new process:
The leftmost node of the scheduling tree is chosen (as it will have the lowest spent execution time), and sent for execution.
If the process simply completes execution, it is removed from the system and scheduling tree.
If the process reaches its maximum execution time or is otherwise stopped (voluntarily or via interrupt) it is reinserted into the scheduling tree based on its newly spent execution time.
The new leftmost node will then be selected from the tree, repeating the iteration.
If the process spends a lot of its time sleeping, then its spent time value is low and it automatically gets the priority boost when it finally needs it. Hence such tasks do not get less processor time than the tasks that are constantly running.
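The pick-leftmost/run/reinsert cycle above can be sketched in a few lines. This toy model uses a Python min-heap keyed by vruntime in place of the kernel's red-black tree (both yield the entity with the least spent execution time in O(log N) per operation); the task names and fixed time slice are illustrative, not kernel values:

```python
import heapq

def schedule(tasks, slice_ns, rounds):
    """tasks: list of (vruntime_ns, name) pairs. Returns the order
    in which tasks are picked over the given number of rounds."""
    runqueue = list(tasks)
    heapq.heapify(runqueue)          # stands in for the rbtree
    order = []
    for _ in range(rounds):
        vruntime, name = heapq.heappop(runqueue)   # "leftmost node"
        order.append(name)
        # Task used its slice; reinsert with its new spent time.
        heapq.heappush(runqueue, (vruntime + slice_ns, name))
    return order

# A task that has mostly slept (low vruntime) is picked repeatedly
# until it catches up with the CPU-bound task.
print(schedule([(3_000_000, "cpu_hog"), (0, "sleeper")], 1_000_000, 4))
# → ['sleeper', 'sleeper', 'sleeper', 'cpu_hog']
```

This also shows the sleeper boost described above: the entity with the lowest accumulated vruntime is always chosen first.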
The complexity of the algorithm that inserts nodes into the cfs_rq runqueue of the CFS scheduler is O(log N), where N is the total number of entities. Choosing the next entity to run is made in constant time because the leftmost node is always cached.
History
Con Kolivas's work with scheduling, most significantly his implementation of "fair scheduling" named Rotating Staircase Deadline, inspired Ingo Molnár to develop his CFS, as a replacement for the earlier O(1) scheduler, crediting Kolivas in his announcement.
CFS is an implementation of a well-studied, classic scheduling algorithm called weighted fair queuing. Originally invented for packet networks, fair queuing had been previously applied to CPU scheduling under the name stride scheduling. CFS is the first implementation of a fair queuing process scheduler widely used in a general-purpose operating system.
The Linux kernel received a patch for CFS in November 2010 for the 2.6.38 kernel that has made the scheduler "fairer" for use on desktops and workstations. Developed by Mike Galbraith using ideas suggested by Linus Torvalds, the patch implements a feature called autogrouping that significantly boosts interactive desktop performance. The algorithm puts parent processes in the same task group as child processes.
(Task groups are tied to sessions created via the setsid() system call.)
This solved the problem of slow interactive response times on multi-core and multi-CPU (SMP) systems when they were performing other tasks that use many CPU-intensive threads. A simple explanation is that, with this patch applied, one is able to still watch a video, read email and perform other typical desktop activities without glitches or choppiness while, say, compiling the Linux kernel or encoding video.
In 2016, the Linux scheduler was patched for better multicore performance, based on the suggestions outlined in the paper, "The Linux Scheduler: A Decade of Wasted Cores".
See also
Brain Fuck Scheduler
SCHED_DEADLINE
References
External links
Linux kernel process schedulers
Free software |
238725 | https://en.wikipedia.org/wiki/Loebner%20Prize | Loebner Prize | The Loebner Prize was an annual competition in artificial intelligence that awarded prizes to the computer programs considered by the judges to be the most human-like. The prize has been reported as defunct since 2020. The format of the competition was that of a standard Turing test. In each round, a human judge simultaneously held textual conversations with a computer program and a human being via computer. Based upon the responses, the judge had to decide which was which.
The contest was launched in 1990 by Hugh Loebner in conjunction with the Cambridge Center for Behavioral Studies, Massachusetts, United States. From 2014 it was organised by the AISB at Bletchley Park.
It has also been associated with Flinders University, Dartmouth College, the Science Museum in London, University of Reading and Ulster University, Magee Campus, Derry, UK City of Culture.
In 2004 and 2005, it was held in Loebner's apartment in New York City. Within the field of artificial intelligence, the Loebner Prize is somewhat controversial; the most prominent critic, Marvin Minsky, called it a publicity stunt that does not help the field along.
In 2019 the format of the competition changed. There was no panel of judges. Instead, the chatbots were judged by the public and there were to be no human competitors.
Prizes
Originally, $2,000 was awarded for the most human-seeming program in the competition. The prize was $3,000 in 2005 and $2,250 in 2006. In 2008, $3,000 was awarded.
In addition, there are two one-time-only prizes that have never been awarded. $25,000 is offered for the first program that judges cannot distinguish from a real human and which can convince judges that the human is the computer program. $100,000 is the reward for the first program that judges cannot distinguish from a real human in a Turing test that includes deciphering and understanding text, visual, and auditory input. Once this is achieved, the annual competition will end.
Competition rules and restrictions
The rules have varied over the years and early competitions featured restricted conversation Turing tests but since 1995 the discussion has been unrestricted.
For the three entries in 2007, Robert Medeksza, Noah Duncan and Rollo Carpenter, some basic "screening questions" were used by the sponsor to evaluate the state of the technology. These included simple questions about the time, what round of the contest it is, etc.; general knowledge ("What is a hammer for?"); comparisons ("Which is faster, a train or a plane?"); and questions demonstrating memory for preceding parts of the same conversation. "All nouns, adjectives and verbs will come from a dictionary suitable for children or adolescents under the age of 12." Entries did not need to respond "intelligently" to the questions to be accepted.
For the first time in 2008 the sponsor allowed introduction of a preliminary phase to the contest opening up the competition to previously disallowed web-based entries judged by a variety of invited interrogators. The available rules do not state how interrogators are selected or instructed. Interrogators (who judge the systems) have limited time: 5 minutes per entity in the 2003 competition, 20+ per pair in 2004–2007 competitions, 5 minutes to conduct simultaneous conversations with a human and the program in 2008–2009, increased to 25 minutes of simultaneous conversation since 2010.
Criticisms
The prize has long been scorned by experts in the field, for a variety of reasons.
It is regarded by many as a publicity stunt. Marvin Minsky scathingly offered a "prize" to anyone who could stop the competition. Loebner responded by jokingly observing that Minsky's offering a prize to stop the competition effectively made him a co-sponsor.
The rules of the competition have encouraged poorly qualified judges to make rapid judgements. Interactions between judges and competitors were originally very brief, for example effectively 2.5 minutes of questioning, which permitted only a few questions. Questioning was initially restricted to "whimsical conversation", a domain suiting standard chatbot tricks.
Competition entrants do not aim at understanding or intelligence but resort to basic ELIZA-style tricks, and successful entrants find that deception and pretense are rewarded.
Reporting of the annual competition often confuses the imitation test with intelligence, a typical example being Brian Christian's introduction to his article "Mind vs. Machine" in The Atlantic, March 2011, stating that "in the race to build computers that can think like humans, the proving ground is the Turing Test".
Contests
2006
In 2006, the contest was organised by Tim Child (CEO of Televirtual) and Huma Shah. On August 30, the four finalists were announced:
Rollo Carpenter
Richard Churchill and Marie-Claire Jenkins
Noah Duncan
Robert Medeksza
The contest was held on 17 September in the VR theatre, Torrington Place campus of University College London. The judges included the University of Reading's cybernetics professor, Kevin Warwick, a professor of artificial intelligence, John Barnden (specialist in metaphor research at the University of Birmingham), a barrister, Victoria Butler-Cole and a journalist, Graham Duncan-Rowe. The latter's experience of the event can be found in an article in Technology Review. The winner was 'Joan', based on Jabberwacky, both created by Rollo Carpenter.
2007
The 2007 competition was held on October 21 in New York City. The judges were: computer science professor Russ Abbott, philosophy professor Hartry Field, psychology assistant professor Clayton Curtis and English lecturer Scott Hutchins.
No bot passed the Turing test, but the judges ranked the three contestants as follows:
1st: Robert Medeksza, creator of Ultra Hal
2nd: Noah Duncan, a private entry, creator of Cletus
3rd: Rollo Carpenter from Icogno, creator of Jabberwacky
The winner received $2,250 and the annual medal. The runners-up received $250 each.
2008
The 2008 competition was organised by professor Kevin Warwick, coordinated by Huma Shah, and held on October 12 at the University of Reading, UK. After testing by over one hundred judges during the preliminary phase, in June and July 2008, six finalists were selected from thirteen original entrants – artificial conversational entities (ACEs). Five of those invited competed in the finals:
Brother Jerome, Peter Cole and Benji Adams
Elbot, Fred Roberts / Artificial Solutions
Eugene Goostman, Vladimir Veselov, Eugene Demchenko and Sergey Ulasen
Jabberwacky, Rollo Carpenter
Ultra Hal, Robert Medeksza
In the finals, each of the judges was given five minutes to conduct simultaneous, split-screen conversations with two hidden entities. Elbot of Artificial Solutions won the 2008 Loebner Prize bronze award, for most human-like artificial conversational entity, by fooling three of the twelve judges who interrogated it (in the human-parallel comparisons) into believing it was human. This came very close to the 30% traditionally required to consider that a program has actually passed the Turing test. Eugene Goostman and Ultra Hal each deceived one judge into believing it was the human.
Will Pavia, a journalist for The Times, has written about his experience; a Loebner finals' judge, he was deceived by Elbot and Eugene. Kevin Warwick and Huma Shah have reported on the parallel-paired Turing tests.
2009
The 2009 Loebner Prize Competition was held September 6, 2009, at the Brighton Centre, Brighton UK in conjunction with the Interspeech 2009 conference. The prize amount for 2009 was $3,000.
Entrants were David Levy, Rollo Carpenter, and Mohan Embar, who finished in that order.
The writer Brian Christian participated in the 2009 Loebner Prize Competition as a human confederate, and described his experiences at the competition in his book The Most Human Human.
2010
The 2010 Loebner Prize Competition was held on October 23 at California State University, Los Angeles. The 2010 competition was the 20th running of the contest. The winner was Bruce Wilcox with Suzette.
2011
The 2011 Loebner Prize Competition was held on October 19 at the University of Exeter, Devon, United Kingdom. The prize amount for 2011 was $4,000.
The four finalists and their chatterbots were Bruce Wilcox (Rosette), Adeena Mignogna (Zoe), Mohan Embar (Chip Vivant) and Ron Lee (Tutor), who finished in that order.
That year there was an addition of a panel of junior judges, namely Georgia-Mae Lindfield, William Dunne, Sam Keat and Kirill Jerdev. The results of the junior contest were markedly different from the main contest, with chatterbots Tutor and Zoe tying for first place and Chip Vivant and Rosette coming in third and fourth place, respectively.
2012
The 2012 Loebner Prize Competition was held on the 15th of May in Bletchley Park in Bletchley, Buckinghamshire, England, in honor of the Alan Turing centenary celebrations. The prize amount for 2012 was $5,000. The local arrangements organizer was David Levy, who won the Loebner Prize in 1997 and 2009.
The four finalists and their chatterbots were Mohan Embar (Chip Vivant), Bruce Wilcox (Angela), Daniel Burke (Adam), M. Allan (Linguo), who finished in that order.
That year, a team from the University of Exeter's computer science department (Ed Keedwell, Max Dupenois and Kent McClymont) conducted the first-ever live webcast of the conversations.
2013
The 2013 Loebner Prize Competition was held, for the only time on the Island of Ireland, on September 14 at the Ulster University, Magee College, Derry, Northern Ireland, UK.
The four finalists and their chatbots were Steve Worswick (Mitsuku), Dr. Ron C. Lee (Tutor), Bruce Wilcox (Rose) and Brian Rigsby (Izar), who finished in that order.
The judges were Professor Roger Schank (Socratic Arts), Professor Noel Sharkey (Sheffield University), Professor Minhua (Eunice) Ma (Huddersfield University, then University of Glasgow) and Professor Mike McTear (Ulster University).
For the 2013 Junior Loebner Prize Competition the chatbots Mitsuku and Tutor tied for first place with Rose and Izar in 3rd and 4th place respectively.
2014
The 2014 Loebner Prize Competition was held at Bletchley Park, England, on Saturday 15 November 2014. The event was filmed live by Sky News. The guest judge was television presenter and broadcaster James May.
After 2 hours of judging, 'Rose' by Bruce Wilcox was declared the winner. Wilcox received a cheque for $4,000 and a bronze medal. The ranks were as follows:
Rose - Rank 1 ($4000 & Bronze Medal);
Izar - Rank 2.25 ($1500);
Uberbot - Rank 3.25 ($1000); and
Mitsuku - Rank 3.5 ($500).
The Judges were Dr Ian Hocking, Writer & Senior Lecturer in Psychology, Christ Church College, Canterbury;
Dr Ghita Kouadri-Mostefaoui, Lecturer in Computer Science and Technology, University of Bedfordshire;
Mr James May, Television Presenter and Broadcaster; and
Dr Paul Sant, Dean of UCMK, University of Bedfordshire.
2015
The 2015 Loebner Prize Competition was again won by 'Rose' by Bruce Wilcox.
The judges were Jacob Aaron, Physical sciences reporter for New Scientist; Rory Cellan-Jones, Technology correspondent for the BBC; Brett Marty, Film Director and Photographer; Ariadne Tampion, Writer.
2016
The 2016 Loebner Prize was held at Bletchley Park on 17 September 2016. After 2 hours of judging the final results were announced.
The ranks were as follows:
1st place: Mitsuku
2nd place: Tutor
3rd place: Rose
Winners
Official list of winners.
See also
List of computer science awards
Artificial intelligence
Glossary of artificial intelligence
Robot
Artificial general intelligence
Confederate effect
Computer game bot Turing Test
References
External links
Computer science competitions
Computer science awards
Artificial intelligence
Chatbots |
20326 | https://en.wikipedia.org/wiki/Motorola%206809 | Motorola 6809 | The Motorola 6809 ("sixty-eight-oh-nine") is an 8-bit microprocessor with some 16-bit features. It was designed by Motorola's Terry Ritter and Joel Boney and introduced in 1978. Although source compatible with the earlier Motorola 6800, the 6809 offered significant improvements over it and 8-bit contemporaries like the MOS Technology 6502, including a hardware multiplication instruction, 16-bit arithmetic, system and user stack registers allowing re-entrant code, improved interrupts, position-independent code and an orthogonal instruction set architecture with a comprehensive set of addressing modes.
Among the most powerful 8-bit processors of its era, it was also much more expensive. In 1980 a 6809 in single-unit quantities was $37, compared to $9 for a Zilog Z80 and $6 for a 6502. It was launched as a new generation of 16-bit processors, like the Intel 8086, was coming to market, and 32-bit designs were on the horizon, including Motorola's own 68000. It was not feature-competitive with newer designs and not price-competitive with older ones.
The 6809 was used in the TRS-80 Color Computer, Dragon 32/64, SuperPET, and Thomson MO/TO home computers, the Vectrex game console, and early 1980s arcade machines including Star Wars, Defender, Robotron: 2084, Joust, and Gyruss. Series II of the Fairlight CMI digital audio workstation and Konami's Time Pilot '84 arcade game each use dual 6809 processors. Hitachi was a major user of the 6809 and later produced an updated version as the Hitachi 6309.
History
6800 and 6502
The Motorola 6800 was designed beginning in 1971 and released in 1974. In overall design terms, it has a strong resemblance to other CPUs that were designed from the start as 8-bit designs, like the Intel 8080. It was initially fabricated using early NMOS logic, which normally required several different power supply voltages. A key feature was an on-chip voltage doubler that allowed it to run on a single +5 V supply, a major advantage over competitors like the Intel 8080, which required −5 V, +5 V, −12 V and ground.
The 6800 was initially fabricated using the then-current contact lithography process. In this process, the photomask is placed in direct contact with the wafer, exposed, and then lifted off. There was a small chance that some of the etching material would be left on the wafer when it was lifted, causing future chips patterned with the mask to fail. For complex multi-patterned designs like a CPU, this led to about 90% of the chips failing when tested. To make a profit on the small number of chips that did work, the prices for the working models had to be fairly high, on the order of hundreds of dollars in small quantities. As a result, the 6800 had relatively low market acceptance after its release.
A number of the 6800's designers were convinced that a lower-cost system would be key to widespread acceptance. Notable among them was Chuck Peddle, who was sent on sales trips and saw prospective customers repeatedly reject the design as being too expensive for their intended uses. He began a project to produce a much less costly design, but Motorola's management proved uninterested and eventually told him to stop working on it. Peddle and a number of other members of the 6800 team left Motorola for MOS Technology and introduced this design in 1975 as the MOS Technology 6502. The 6800 was initially sold at $360 in single-unit quantities, but had been lowered to $295 by this point. The 6502 sold for $25.
There were three reasons for the 6502's low cost. One was that the designers stripped out any feature that wasn't absolutely required. This led to the removal of one of the two accumulators and the use of smaller 8-bit index registers, both resulting in less internal wiring. Another change was the move to depletion-load NMOS logic, a new technique that required only +5 V. The 6800 had only a single +5 V pin externally but had multiple voltages internally that required separate power rails to be routed around the chip. These two changes allowed the 6502 to be 16.6 mm2, as opposed to the 6800's 29.0 mm2, meaning twice as many chips could be produced from a single wafer. Finally, MOS was using the new Micralign lithography system that improved average yield from around 10% to 70%.
With the introduction of the 6502, Motorola immediately lowered the price of the 6800 to $125, but it remained uncompetitive and sales prospects dimmed. The introduction of the Micralign to Motorola's lines allowed further reductions and by 1981 the price of the then-current 6800P was slightly less than the equivalent 6502, at least in single-unit quantities. By that point, however, the 6502 had sold tens of millions of units and the 6800 had been largely forgotten.
6809
While the 6502 began to take over the 6800's market, Intel was experiencing the same problem as the upstart Zilog Z80 began to steal sales from the Intel 8080. Both Motorola and Intel began new design cycles to leapfrog those designs. This process led Intel to begin the design of a series of 16-bit processors, which emerged as the Intel 8086 in 1978. Motorola likewise began a high-end design of its own, the MACSS project. When they polled their existing 6800 customers, they found that many remained interested in 8-bit designs and were not willing to pay for a 16-bit design for their simple needs. This led to the decision to produce a greatly improved but compatible 8-bit design, which became the 6809.
Analysis of 6800 code demonstrated that loads and stores consumed the vast majority of CPU time, accounting for 39% of all the operations in the code examined. In contrast, mathematical operations were relatively rare, only 2.8% of the code. However, a careful examination of the loads and stores showed that many were combined with adds and subtracts, revealing that a significant number of those math operations were being performed on 16-bit values. This led to the decision to include basic 16-bit mathematics in the new design: load, store, add and subtract. Similarly, increments and decrements accounted for only 6.1% of the code, but almost always occurred within loops where each was performed many times. This led to the addition of post-incrementing and pre-decrementing modes using the index registers.
The main goal for the new design was to support position-independent code. Motorola's market was mostly embedded systems and similar single-purpose systems, which often ran programs that were very similar to those on other platforms. Development for these systems often took the form of collecting a series of pre-rolled subroutines and combining them together. However, as assembly language is generally written starting at a "base address", combining pre-written modules normally required a lengthy process of changing constants (or "equates") that pointed to key locations in the code.
Motorola's idea was to eliminate this task and make the building-block concept much more practical. System integrators would simply combine off-the-shelf code in ROMs to handle common tasks. Libraries of common routines like floating point arithmetic, graphics primitives, Lempel-Ziv compression, and so forth would be available to license, combine together along with custom code, and burn to ROM.
In previous processor designs, including the 6800, there was a mix of ways to refer to memory locations. Some of these were relative to the current location in memory or to a value in an index register, while others were absolute, a 16-bit value that referred to a physical location in memory. The former style allows code to be moved because the address it references will move along with the code. The absolute locations do not; code that uses this style of addressing will have to be recompiled if it moves. To address this, the 6809 filled out its instruction opcodes so that there were more instances of relative addressing where possible.
As an example, the 6800 included a special "direct" addressing mode that was used to make code smaller and faster; instead of a memory address being 16 bits long and thus requiring two bytes to store, direct addresses were only 8 bits long. The downside was that it could only refer to memory within a 256-byte window, the "direct page", which was normally at the bottom of memory - the 6502 referred to this as "zero page addressing". The 6809 added a new 8-bit DP register, for "direct page". Code that formerly had to be in the zero page could now be moved anywhere in memory as long as the DP was changed to point to its new location.
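How DP forms an effective address can be sketched in a few lines. In this hedged Python model (the function is illustrative, not hardware-exact), the 8-bit DP register supplies the high byte of the address and the instruction's 8-bit operand supplies the low byte, so repointing DP relocates the whole 256-byte window:

```python
def direct_page_address(dp, operand):
    """Effective address for the 6809's 'direct' mode: the 8-bit DP
    register gives the high byte, the 8-bit operand the low byte."""
    return ((dp & 0xFF) << 8) | (operand & 0xFF)

# With DP = 0x00 this behaves like the 6800's fixed direct page (the
# 6502's zero page); changing DP moves the window elsewhere in memory.
print(hex(direct_page_address(0x00, 0x42)))  # 0x42
print(hex(direct_page_address(0x20, 0x42)))  # 0x2042
```

The instruction encoding stays one operand byte shorter than extended (16-bit) addressing, which is where the size and speed savings come from.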
Using DP solved the problem of referring to addresses within the code, but data is generally located some distance from the code, outside ROM. To solve the problem of easily referring to data while remaining position independent, the 6809 added a variety of new addressing modes. Among these was program-counter-relative addressing which allowed any memory location to be referred to by its location relative to the instruction. Additionally, the stack was more widely used, so that a program in ROM could set aside a block of memory in RAM, set the SP to be the base of the block, and then refer to data within it using relative values.
To aid this type of access, the 6809 renamed the SP to U, for "user", and added a second stack pointer, S, for "system". The idea was that user programs would use U while the CPU itself would use S to store data during subroutine calls. This allowed system code to be easily called by changing S without affecting any other running program. For instance, a program calling a floating-point routine in ROM would place its data on the U stack and then call the routine, which could then perform the calculations using data on its own private stack pointed to by S, and then return, leaving the U stack untouched.
Another reason for the expanded stack access was to support reentrant code: code that can be called from several different programs concurrently without concern for coordination between them, or that can recursively call itself. This makes the construction of operating systems much easier; the operating system has its own stack, and the processor can quickly switch between a user application and the operating system simply by changing which stack pointer it is using. This also makes servicing interrupts easier for the same reason. Interrupts on the 6809 save only the program counter and condition code register before calling the interrupt code, whereas the 6800 saves all of the registers, taking additional cycles, then more to unwind the stack on exit.
The 6809 includes one of the earliest dedicated hardware multipliers. It takes 8-bit numbers in the A and B accumulators and produces a result in A:B, known collectively as D.
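The multiplier's behavior can be modeled in a short sketch. This Python function illustrates the documented A × B → D result (it is not a cycle-accurate model): the two 8-bit accumulators are multiplied and the unsigned 16-bit product is split back into A (high byte) and B (low byte).

```python
def mul(a, b):
    """Sketch of the 6809 MUL instruction: the 8-bit accumulators A
    and B are multiplied; the unsigned 16-bit product lands in D,
    whose high byte is A and low byte is B."""
    d = (a & 0xFF) * (b & 0xFF)
    return (d >> 8) & 0xFF, d & 0xFF  # new A, new B

a, b = mul(250, 100)       # 25000 = 0x61A8
print(a, b)                # 97 168
assert (a << 8) | b == 25000
```

A single instruction replacing a shift-and-add software loop was a significant speedup for fixed-point arithmetic of the era.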
Market acceptance
Much of the design had been based around the market concept of building-block code. But the market for pre-rolled ROM modules never materialized: Motorola's only released example was the MC6839 floating-point ROM. The industry as a whole solved the problem of integrating code modules from separate sources by using automatic relocating linkers and loaders, which is the solution used today. However, the decisions made by the design team enabled multi-user, multitasking operating systems like OS-9 and UniFlex.
The added features of the 6809 were costly; the CPU had approximately 9,000 transistors compared to the 6800's 4,100 or the 6502's 3,500. While process improvements meant it could be fabricated for less cost than the original 6800, those same improvements were being applied to the other designs and so the relative cost remained the same. Such was the case in practice; in 1981 the 6809 sold in single-unit quantities for roughly six times the price of a 6502. For those systems that needed some of its special features, like the hardware multiplier, the system could justify its price, but in most roles, it was overlooked.
Another factor in its low use was the presence of newer designs with significantly higher performance. Among these was the Intel 8086, released the same year, and its lower-cost version, the Intel 8088 of 1979. A feeling for the problem can be seen in the Byte Sieve assembly language results, published in 1981 and 1983, comparing the 6809 against other common designs of the era.
Although the 6809 did offer a performance improvement over the likes of the 6502 and Z80, the improvement was not in line with the increase in price. For buyers for whom price was not the primary concern but outright performance was, the new designs outperformed it by as much as an order of magnitude.
Even before the 6809 was released, in 1976 Motorola had launched its own advanced CPU project, then known as the Motorola Advanced Computer System on Silicon project, or MACSS. Although too late to be chosen for the IBM PC project, when MACSS appeared as the Motorola 68000 in 1979 it took away any remaining interest in the 6809. Motorola soon announced that its future 8-bit systems would be powered by cut-down versions of the 68000 rather than further improved versions of the 6809.
Major uses
Its first major use was in the TRS-80 Color Computer, which happened largely by accident. Motorola had been asked to design a color-capable computer terminal for an online farm-aid project, a system known as "AgVision". Tandy (Radio Shack) was brought in as a retail partner and sold them under the name "VideoTex", but the project was ultimately canceled shortly after its introduction in 1980. Tandy then re-worked the design to produce a home computer, which became one of the 6809's most notable design wins.
Looking for a low-cost programming platform for computer science students, the University of Waterloo developed a system that combined a 6809-based computer-on-a-card with an existing Commodore PET, including a number of programming languages and program editors in ROM. The result was later picked up by Commodore, who sold it as the SuperPET, or MicroMainframe in Europe. These were relatively popular in the mid-1980s before the introduction of the PC clone market took over the programming role for most users.
Other popular home computer uses include the Fujitsu FM-7, Canon CX-1, Dragon 32/64, and the Thomson TO7 series. It was also available as an option on the Acorn System 2, 3 and 4 computers. Most SS-50 bus designs that had been built around the 6800 also had options for the 6809 or switched to it exclusively. Examples include machines from SWTPC, Gimix, Smoke Signal Broadcasting, etc. Motorola also built a series of EXORmacs and EXORset development systems.
Hitachi produced its own 6809-based machines, the MB6890 and later the S1. These were primarily for the Japanese market, but some were exported to and sold in Australia, where the MB6890 was dubbed the "Peach", probably in reference to the Apple II. The S1 was notable in that it contained paging hardware extending the 6809's native 64 kilobyte (64×2¹⁰ byte) addressing range to a full 1 megabyte (1×2²⁰ byte) in 4 KB pages. It was similar in this to machines produced by SWTPC, Gimix, and several other suppliers. TSC produced a Unix-like operating system, UniFlex, which ran only on such machines. OS-9 Level II also took advantage of such memory management facilities. Most other computers of the time with more than 64 KB of memory addressing were limited to bank switching, where much if not all of the 64 KB was simply swapped for another section of memory, although in the case of the 6809, Motorola offered its own MC6829 MMU design mapping 2 megabytes (2×2²⁰ byte) in 2 KB pages.
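The 4 KB paging idea can be sketched as a simple translation table. This Python model is purely illustrative (the 16-entry table and the function name are assumptions for the sketch, not the S1's or MC6829's actual register layout), but it shows how a 16-bit logical address can reach a 1 MB physical space: the top four bits select a page-table entry, which supplies the physical page number.

```python
def translate(page_table, logical):
    """Map a 16-bit logical address through a 4 KB page table into a
    larger physical space (illustrative model of 6809-era paging)."""
    page = (logical >> 12) & 0xF      # 16 logical pages of 4 KB each
    frame = page_table[page]          # physical page number
    return (frame << 12) | (logical & 0xFFF)

table = [0] * 16
table[1] = 0x42                       # map logical page 1 high in memory
print(hex(translate(table, 0x1ABC)))  # 0x42abc
```

Unlike whole-bank switching, only the touched 4 KB window changes, so code, stack, and data in other pages remain visible while the CPU reaches memory beyond its native 64 KB.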
The 6809 also saw some use in various videogame systems. Notable among these, in its 68A09 incarnation, was the unique vector-graphics-based Vectrex home videogame machine. It was also used in the Milton Bradley Expansion (MBX) system (an arcade console for use with the Texas Instruments TI-99/4A home computer) and in a series of arcade games released during the early to mid-1980s. Williams Electronics was a prolific user of the processor, which was deployed in Defender, Stargate, Joust, Robotron: 2084, Sinistar, and other games. The 6809 CPU forms the core of the successful Williams Pinball Controller. The KONAMI-1 is a modified 6809 used by Konami in Roc'n Rope, Gyruss, and The Simpsons.
Series II of the Fairlight CMI (computer musical instrument) used dual 6809 CPUs running OS-9, and also used one 6809 CPU per voice card. The 6809 was often employed in music synthesizers from other manufacturers such as Oberheim (Xpander, Matrix 6/12/1000), PPG (Wave 2/2.2/2.3, Waveterm A), and Ensoniq (Mirage sampler, SDP-1, ESQ1, SQ80), the last of which used the 6809E as its main CPU. The (E) version was used in order to synchronize the microprocessor's clock to the sound chip (Ensoniq 5503 DOC) in those machines; in the ESQ1 and SQ80 the 68B09E was used, requiring dedicated arbiter logic to ensure 1 MHz bus timing when accessing the DOC chip.
In contrast to earlier Motorola products, the 6809 did not see widespread use in the microcontroller field. It was used in traffic signal controllers made in the 1980s by several different manufacturers, as well as Motorola's SMARTNET and SMARTZONE Trunked Central Controllers (so dubbed the "6809 Controller"). These controllers were used as the central processors in many of Motorola's trunked two-way radio communications systems.
The 6809 was used by Mitel as the main processor in its SX20 Office Telephone System.
Versions
The Motorola 6809 was originally produced in 1 MHz, 1.5 MHz (68A09) and 2 MHz (68B09) speed ratings. Faster versions were produced later by Hitachi. With little to improve, the 6809 marks the end of the evolution of Motorola's 8-bit processors; Motorola intended that future 8-bit products would be based on an 8-bit data bus version of the 68000 (the 68008). A micro-controller version with a slightly modified instruction set, the 6811, was discontinued as late as the second decade of the 21st century.
The Hitachi 6309 is an enhanced version of the 6809 with extra registers and additional instructions, including block move, additional multiply instructions, and division.
Legacy
Motorola spun off its microprocessor division in 2004. The division changed its name to Freescale and has subsequently been acquired by NXP.
Neither Motorola nor Hitachi produce 6809 processors or derivatives anymore. 6809 cores are available in VHDL and can be programmed into an FPGA and used as an embedded processor with speed ratings up to 40 MHz. Some 6809 opcodes also live on in the Freescale embedded processors. In 2015, Freescale authorized Rochester Electronics to start manufacturing the MC6809 once again as a drop-in replacement and copy of the original NMOS device. Freescale supplied Rochester with the original GDSII physical design database. At the end of 2016, Rochester's MC6809 (including the MC68A09 and MC68B09) was fully qualified and available in production.
Australian developer John Kent has synthesized the Motorola 6809 CPU in hardware description language (HDL). This has made possible the use of the 6809 core at much higher clock speeds than were available with the original 6809. Gary Becker's CoCo3FPGA runs the Kent 6809 core at 25 MHz. Roger Taylor's Matchbox CoCo runs at 7.16 MHz. Dave Philipsen's CoCoDEV runs at 25 MHz.
Description
General design
The 6809's internal design is closer to simpler, non-microcoded CPU designs. Like most 8-bit microprocessors, the 6809 implementation is a register-transfer level machine, using a central PLA to implement much of the instruction decoding as well as parts of the sequencing.
Like the 6800 and 6502, the 6809 uses a two-phase clock to gate the latches. This two-phase clock cycle is used as a full machine cycle in these processors. Simple instructions can execute in as little as two or three such cycles. The 6809 has an internal two-phase clock generator (needing only an external crystal) whereas the 6809E needs an external clock generator. There are variants such as the 68A09(E) and 68B09(E); the internal letter indicates the processor's rated clock speed.
The clock systems of the 6800, 6502, and 6809 differ from those of other processors of the era. For instance, the Z80 uses a single external clock, and the internal steps of the instruction process advance on each transition. This means that the external clock generally runs much faster; 680x designs generally ran at 1 or 2 MHz while the Z80 generally ran at 2 or 4 MHz. Internally, the 680x designs converted the slower external clock into a higher-frequency internal schedule, so on an instruction-for-instruction basis they ran roughly twice as fast when compared on external clock speed.
The advantage to the 680x style access was that dynamic RAM chips of the era generally ran at 2 MHz. Due to the cycle timing, there were periods of the internal clock where the memory bus was guaranteed to be free. This allowed the computer designer to interleave access to memory between the CPU and an external device, say a direct memory access controller, or more commonly, a graphics chip. By running both chips at 1 MHz and stepping them one after the other, they could share access to the memory without any additional complexity or circuitry. Depending on version and speed grade, approximately 40–60% of a single clock cycle is typically available for memory access in a 6800, 6502, or 6809.
Registers and instructions
The original 6800 included two 8-bit accumulators, A and B, a single 16-bit index register, X, a 16-bit program counter, PC, a 16-bit stack pointer, SP, and an 8-bit status register. The 6809 added a second index register, Y, a second stack pointer, U (while renaming the original S), and allowed the A and B registers to be treated as a single 16-bit accumulator, D. It also added another 8-bit register, DP, to set the base address of the direct page. These additions were invisible to 6800 code, and the 6809 was 100% source-compatible with earlier code.
Another significant addition was program-counter-relative addressing for all data manipulation instructions. This was a key addition for position-independent code, as it allows data to be referred to relative to the instruction; as long as the resulting memory location exists, the instructions can be moved in memory freely. The system retained its previous addressing modes as well, although in the new assembler language, what were previously separate instructions were now considered to be different addressing modes of other instructions. This reduced the instruction count from the 6800's 78 to the 6809's 59. These new modes had the same opcodes as the previously separate instructions, so the changes were only visible to the programmer working on new code.
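The position-independence argument can be made concrete with a small sketch. In this illustrative Python model (the function name is invented), a PC-relative operand is added to the address of the next instruction, so relocating the code and its data together leaves every reference valid:

```python
def pcr_effective_address(pc_next, offset):
    """PC-relative effective address: the signed offset is added to
    the address of the *next* instruction, so the reference moves
    with the code when it is relocated."""
    return (pc_next + offset) & 0xFFFF

# The same instruction bytes reach the same data whether the block
# sits at 0x1000 or 0x8000, provided code and data move together.
print(hex(pcr_effective_address(0x1003, 0x20)))  # 0x1023
print(hex(pcr_effective_address(0x8003, 0x20)))  # 0x8023
```

An absolute (extended) address, by contrast, would keep pointing at the old location after a move, which is why absolutely addressed code had to be reassembled for each base address.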
The instruction set and register complement are highly orthogonal, making the 6809 easier to program than contemporaries. Like the 6800, the 6809 includes an undocumented address bus test instruction which came to be nicknamed Halt and Catch Fire (HCF).
Notes
References
Citations
Bibliography
Further reading
Datasheets and manuals
MC6809 Datasheet; Motorola; 36 pages; 1983.
MC6809E Datasheet; Motorola; 34 pages.
Motorola 8-bit Microprocessors Data Book; Motorola; 1182 pages; 1981.
Books
6809 Assembly Language Programming; 1st Ed; Lance Leventhal; 579 pages; 1981; . (archive)
The MC6809 Cookbook; 1st Ed; Carl Warren; 180 pages; 1980; . (archive)
Advanced 8-bit Microprocessor: MC6809: Its Software, Hardware, Architecture and Interfacing Techniques; 1st Ed; Robert Simpson; 274 pages; 1998;
Magazines
A Microprocessor for the Revolution: The 6809; Terry Ritter & Joel Boney (co-designers of 6809); BYTE magazine; Jan-Feb 1979. (archive)
MC6809 microprocessor; Ian Powers; Microprocessors, Volume 2, Issue 3; July 1978; page 162; , .
Reference cards
MC6809 Reference Card; Motorola; 16 pages; 1981. (archive)
6809/6309 Reference Card; Chris Lomont; 10 pages; 2007. (archive)
External links
Simulators / Emulators
6809 Emulation Page – collection of 6809 instructions, emulators, tools, debuggers, disassemblers, assemblers
6809 Emulator based on the SWTPC 6809 system
Boards
Grant's 6-chip 6809 computer
6809 microprocessor training board
FPGA
System09 6809 CPU core - VHDL source code - OpenCores - project website
Motorola microprocessors
8-bit microprocessors |
1114943 | https://en.wikipedia.org/wiki/LinuxTV | LinuxTV | The LinuxTV project is an informal group of volunteers who develop software regarding digital television for Linux kernel-based operating systems. The community develops and maintains the Digital Video Broadcasting (DVB) driver subsystem, which has been part of the Linux kernel since version 2.6. The Linux kernel and the LinuxTV CVS include a fair number of drivers for commonly available PCI cards and USB devices, but the DVB subsystem core is also targeted towards set-top boxes which run some (embedded) form of Linux.
The LinuxTV project was originally initiated by the Berlin, Germany based company Convergence Integrated Media GmbH with the goal to distribute free and open source software for the production, distribution and reception of digital television. In 1998, the Convergence founders claimed that "Only the access to the source code of our future television sets will guarantee the independence of content and technology".
After some financial troubles, in 2002 Convergence was taken over by the German set-top box manufacturer Galaxis AG and renamed Convergence GmbH. Although both Convergence GmbH and Galaxis AG went bankrupt in 2005, the LinuxTV project lives on independently, supported by the large developer community that had gathered around the project over the years.
Another significant Convergence development is DirectFB, a thin library that provides hardware graphics acceleration and windowing features for GTK+-based and other graphical Linux applications without the use of X.Org Server, and which its developers claim "adds graphical power to embedded systems".
See also
Tvheadend
Video4Linux
List of free television software
Digital television
References
External links
DVB-Wiki
Sunray Linux DVB receiver
Free television software
Set-top box
Software that uses GTK
Television organizations |
7440381 | https://en.wikipedia.org/wiki/Digital%20Society%20Day | Digital Society Day | India celebrates October 17 as the Digital Society Day.
Importance of October 17 for India
October 17 is significant for the Digital Society in India since it was on October 17, 2000, that the Information Technology Act 2000, the first law of the digital society in India, was notified. This notification gave, for the first time in the country, legal recognition to electronic documents. It also provided a legally recognized method of authentication of electronic documents by means of digital signatures. Additionally, the Information Technology Act 2000 recognized Cyber Crimes and prescribed a fast track grievance redressal mechanism for Cyber Crimes.
These provisions were critical for the development of digital society since it enabled digital contracts to be formed in support of e-commerce and e-governance. It is therefore appropriate that the day be remembered as an important event in the e-history of India.
Cyber Law College was the first organization in India to recognize October 17 as a "Digital Society Day" and undertake specific programmes related to the fulfillment of the objectives of the Information Technology Act.
Digital Society Foundation of India, a Charitable Trust promoted by Naavi the founder of Cyber Law College has also started several activities to encourage the participation of the community in creating better awareness of Cyber Laws in India.
October 17 also being the Poverty Alleviation Day, and Poverty Alleviation through ICT being the theme of the WSIS, Digital Society Foundation is undertaking projects that would help Digital Society contribute to the alleviation of poverty in rural India.
The formal celebration of the day started in 2006 in Bangalore where, in an event organized by Digital Society Foundation, a trust promoted by Cyber Law College, October 17 was formally declared the Digital Society Day by the honourable Judge of the Karnataka High Court, Sri N Kumar. Since then, this historic day has been commemorated each year with activities of interest to the digital society in India.
External links
Digital Society Foundation
Naavi.org
Information society
Internet governance
Information technology in India
October observances |
29279 | https://en.wikipedia.org/wiki/SIGGRAPH | SIGGRAPH | SIGGRAPH (Special Interest Group on Computer Graphics and Interactive Techniques) is an annual conference on computer graphics (CG) organized by the ACM SIGGRAPH, starting in 1974. The main conference is held in North America; SIGGRAPH Asia, a second conference held annually, has been held since 2008 in countries throughout Asia.
Overview
The conference incorporates both academic presentations as well as an industry trade show. Other events at the conference include educational courses and panel discussions on recent topics in computer graphics and interactive techniques.
SIGGRAPH Proceedings
The SIGGRAPH conference proceedings, which are published in the ACM Transactions on Graphics, have one of the highest impact factors among academic publications in the field of computer graphics. The paper acceptance rate for SIGGRAPH has historically been between 17% and 29%, with an average acceptance rate of 27% between 2015 and 2019. The submitted papers are peer-reviewed under a process that was historically single-blind but was changed to double-blind in 2018. The papers accepted for presentation at SIGGRAPH have been printed since 2003 in a special issue of the ACM Transactions on Graphics journal.
Prior to 1992, SIGGRAPH papers were printed as part of the Computer Graphics publication; between 1993 and 2001, there was a dedicated SIGGRAPH Conference Proceedings series of publications.
Awards programs
SIGGRAPH has several awards programs to recognize contributions to computer graphics. The most prestigious is the Steven Anson Coons Award for Outstanding Creative Contributions to Computer Graphics. It has been awarded every two years since 1983 to recognize an individual's lifetime achievement in computer graphics.
Conference
The SIGGRAPH conference experienced significant growth starting in the 1970s, peaking around the turn of the century. A second conference, SIGGRAPH Asia, started in 2008.
See also
Association for Computing Machinery
ACM SIGGRAPH
ACM Transactions on Graphics
Computer Graphics, a publication of ACM SIGGRAPH
The list of computer science conferences contains other academic conferences in computer science.
References
External links
ACM SIGGRAPH website
ACM SIGGRAPH conference publications (ACM Digital Library)
ACM SIGGRAPH YouTube
SIGGRAPH 2017 Conference, Los Angeles, CA
SIGGRAPH Asia 2017 Conference, Bangkok, Thailand
Association for Computing Machinery conferences
Computer graphics conferences
Computer science conferences
Recurring events established in 1974
Articles containing video clips |
207972 | https://en.wikipedia.org/wiki/Amstrad%20Action | Amstrad Action | Amstrad Action was a monthly magazine, published in the United Kingdom, which catered to owners of home computers from the Amstrad CPC range and later the GX4000 console.
It was the first magazine published by Chris Anderson's Future Publishing, which with a varied line-up of computing and non-computing related titles has since become one of the foremost magazine publishers in the UK.
The publication, often abbreviated to AA by staff and readers, had the longest lifetime of any Amstrad magazine, running for 117 issues from October 1985 until June 1995, long after the CPC had ceased production and games were no longer available.
History
Amstrad Action was published by Future plc, a company set up by Chris Anderson (ex-Personal Computer Games and Zzap!64 editor). Launch editor Peter Connor, also an ex-PCG staff member, shared the writing duties with the only other staff writer, Bob Wade. Wade, another ex-PCG/Zzap!64 staff member, was given the title 'Software Editor' and would review the vast majority of the games featured, with Connor giving a second opinion. Trevor Gilham, art editor, completed the four-man team.
Issue 1, dated October 1985, was released in September 1985 with a cover price of £1: 1p for each of its 100 pages. It took the new publication a few issues to find its readers, but with the help of a bumper 116-page Christmas 1985 issue with a cover-mounted tape, the circulation figures grew rapidly. In October 1986 Amstrad Action split into three separate publications: AA still catered for the CPC range, while 8000 Plus and PC Plus focused on the Amstrad PCW and PC ranges respectively.
AA eventually gave in to readers' pleas to have a permanent cover tape. An announcement was made in AA66 that the following issue would not only include a cover tape, but contain more colour and be printed on different paper. Review pages were also slightly redesigned.
In April 1992 the Audit Bureau of Circulation figures showed an increase to 37,120, the highest circulation since July–December 1988's 38,457.
AA100 looked at the top 100 products for the CPC and took a trip down memory lane, looking back at past editors and staff. As circulation figures wound down further still, there was a drastic drop in page numbers from 60 to 36 in July 1994's AA106. More compact issues meant no superfluous columns or features. AA107 became the first issue with only one member of official staff.
In AA111 there was no credits list, but the new editor, Karen Levell, answered the Reaction letters and confirmed her appointment. Although everything appeared normal in June 1995's AA117, with AA118 advertised in the next-month box, it was the last AA ever. The final headline (on issue AA117) was Publish and be Damned. The same month as AA's final issue (June 1995), publishing company HHL released the first issue of "CPC Attack!", a magazine which covered all Amstrad computer variations along with additional non-Amstrad articles relating to the burgeoning console market. This more inclusive magazine, however, failed to become popular and was cancelled after just six issues, the last being published in November 1995. However, 'CPC Attack!' is usually cited as the successor to ACU (Amstrad Computer User), despite the fact that ACU was cancelled after its final issue (May 1992), three years before the demise of AA.
Features and editorial style
AA covered both the 'games' and 'serious' sides of the CPC, maintaining a 50/50 coverage throughout its run. The editorial coverage always fell into one of three main areas: games/leisure; serious (programming, business software etc.); and the regulars, such as 'Amscene', 'Forum', 'Action Test', and 'Cheat Mode'.
Amscene
The latest CPC news regarding all things in the Amstrad world. Later included the games charts and games preview pages.
Reaction
The readers' letters were answered in the Reaction section, where numerous arguments and usually good-natured humour were found. Later in AA's run the standout letter of the month was highlighted and given the star prize award of £25. The technical problems page 'Problem Attic' started out in the Reaction pages in the early years before getting its own space. "If your CPC's in danger, if you need help, then you can contact the AA team."
Action Test
The review approach included a main write-up, a second opinion box, a good news / bad news comparison list and the percentages. Percentages were given for Graphics, Sonics, Grab Factor, Staying Power and an overall AA Rating. Highly rated games of 80% and above were given an 'AA Rave' accolade, while the highest-rated game of the month received the 'Mastergame' award. This review style continued well into the early 1990s, when the award accolades were scrapped. As budget games became more prominent during the CPC's life, AA covered this growing market by including budget reviews in the 'Budget Bonanza' and later 'Action Replay' sections.
The Pilgrim
Interactive fiction was covered by "The Pilgrim", then "Balrog" and "The Examiner". The Pilgrim format included the latest adventure game reviews. 'Clue Sniffing With The Pilgrim' included adventure clues and tips. 'Pilgrim Post' was the letters column for adventure game topics. 'Adventure News' detailed the latest happenings in the world of adventure games.
Forum
The Forum carried on from the Problem Attic column, where the resident Technical Editor answered readers' hardware or software problems and queries. As space in the magazine became restrictive, other features like 'Helpline' and 'Ask Alex' were merged into the new 'Techy Forum'.
Type-In
One long-running feature of AA was the Type-In section. This included utility, game and demo type-ins sent in by the readers: the reader had to type the program code into the computer and then run it. The feature split the readership over whether the programs should be put on the covertape instead; over a six-month period this is what happened, until the practice (and ultimately the Type-Ins section) was abandoned due to space restrictions.
Helpline
The Helpline page was where eager Amstrad readers would offer contact details to help fellow readers having problems. It was later merged with Technical Forum.
Cheat Mode
The tips pages included game pokes, tips, cheats and maps all contributed by the readers.
Aafterthought
Initially called Rear View, the back page was where all the loose ends were closed off, like competition winner results and last minute happenings.
Features
As activity in the Amstrad world declined, the editorial staff, and subsequently the editorial content, was constantly being reduced and the magazine adopted an increasingly eccentric style, with one edition in particular featuring an eight-page script for a Christmas pantomime. Later on, a double-spread review of the second Teenage Mutant Hero Turtles game was split between the review itself and a bizarre transcribed interview between Rod Lawton and Adam Peters (pretending to be one of the turtles). Peters would usually try to promote his band in some way (he featured on the cover of a 'music orientated' issue and had one of his techno-MIDI band's songs on the covertape). The magazine is also notable for pioneering the kind of responses – sometimes dry, sometimes surreal, usually humorous and mildly rude – to readers' letters of a form now seen throughout UK gaming magazine culture. These characteristics, for many readers, added to AA's charm.
Cover Tapes
Chris Anderson, building on his previous success with covermounted cassette tapes at Personal Computer Games, included one with the Christmas special issue of 1985. This included two unreleased games from Ocean Software: Kung Fu and Number 1. The covermount cassette tape appeared only on the Christmas and AA birthday issues, not becoming a regular feature until AA67 in 1991, mainly due to requests from many readers. Cover-cassettes featured game demos, applications, software utilities and, in some instances, complete games. Due to the low quality of the cassettes used, many Amstrad owners found them to be unreliable, something which was commonly reflected in the letters pages. One solution to fixing the unreliable tapes, as posted to the letters section, was to unwind the tape and put a warm iron on it. Later, a utility was released on the covertape to convert the contents to the proprietary 3" disk.
Dizzy, AA Special Edition
Codemasters produced a Dizzy game specially for the AA birthday covertape in October 1988. This 'Special Edition' included different rooms and objects to explore.
Action Pack #1
AA67, dated April 1990, came with the first of the permanent cover tapes called Action Pack #1, along with a new cover price of £2.20. A playable demo of Ocean Software's Total Recall and complete games Hydrofool and Codemasters' Dizzy were included on the tape.
Action Pack #2
This tape caused some controversy among the readers as one of the featured games How To Be A Complete Bastard featured mild swearing, plus the game's quest was to be violent and obnoxious throughout a house party.
Stormlord Censored
December 1993's AA99 Serious Action cover tape included the complete Stormlord game, albeit in a censored version. By self-censoring the Hewson game, AA seemingly hoped to avoid the kind of controversy that followed AA68's Action Pack #2.
Best Game Ever On Covertape
Voted the best game on the CPC, Firebird's Elite was the complete game given away with the 100th issue's Serious Action cover tape.
AA Games Accolades
Initially only the best rated game of the month earned an AA Mastergame accolade, but from issue 57 this was changed to all games that received a rating of 90% or higher. Games rated 80–90% were awarded an AA Rave. Publishers of CPC games such as Activision, Ocean and Infogrames proudly mounted these awards on their packaging to promote their games to potential customers. The first game to receive a Mastergame award was Melbourne House's The Way of the Exploding Fist, gaining an impressive 94% AA Rating. Issue 38 was the first issue not to award any game the Mastergame accolade; apparently no game that month was deemed worthy.
The lowest rated Mastergame was Imagine Software's Target Renegade, which received an 86% overall rating; quite why it was awarded a Mastergame was never explained and remains a mystery. Laser Squad, by Blade Software, often mentioned as an AA staff favourite, was awarded the Mastergame accolade in AA49 with a 91% rating. March 1990 brought the mysterious "lost" Mastergame, Chase HQ: the Ocean arcade conversion received a score of 90% and was the highest rated game that issue, which would normally justify the Mastergame accolade, yet it received only an AA Rave, and no explanation or correction was ever made. June 1990 was the first issue to award the Mastergame accolade to more than one game: E-Motion by US Gold and Turrican by Rainbow Arts received ratings of 92% and 90% respectively. In November 1990, Rick Dangerous 2 received the highest rating so far; the MicroStyle game gained a Mastergame award and an AA Rating of 97%.
Psygnosis' Lemmings and Ocean's The Addams Family were the last games to receive a Mastergame accolade, in July 1992's AA82, receiving 97% and 90% respectively. Subsequent issues dispensed with the AA Rave and Mastergame accolades. Lemmings joined Rick Dangerous 2 in holding the highest AA Rating awarded during the magazine's run. March 1993's issue 90 featured the first top-rated game not to receive an AA accolade: Nigel Mansell's World Championship received an overall rating of 93%, but no Rave or Mastergame. The long-standing AA signature accolade had been discarded.
Editorial staff
Memorable staff included Publisher Chris Anderson, Bob Wade, Richard Monteiro, Steve Carey, Rod "The Beard" Lawton, Trenton Webb, James Leach, Frank O'Connor and Adam Waring. Later editorial staff included Linda Barker, Dave Golder, Tim Norris and Simon Forrester, whose magazine nickname/handle was "The Hairy One", "The Hairy Happening" or often just "Hairy". Simon had written various programs himself for the platform and was known to jump down the throats of people who didn't agree with his fondness for the video game Chuckie Egg.
Editors
Bob Wade
Software Editor (AA1–AA12)
Deputy Editor (AA13–AA16)
Editor (AA17–AA34)
Like Chris, Bob started out at PCG and Zzap!64 before becoming the Software Editor on AA. He climbed the ranks to Deputy Editor before becoming Editor. Bob left after issue 34 to edit sister publication Advanced Computer Entertainment and later Amiga Format. While at Amiga Format he helped launch Amiga Power. He left journalism in the mid-1990s to form his own games development company, Binary Asylum, producing Amiga games such as Zee Wolf and Zee Wolf 2. After Binary Asylum failed to establish itself in the PC market, Bob moved to the internet product monitoring service Game Campaign. He is now back at Future.
Steve Carey
Editor (AA35–AA50)
Having spent some time at PC Plus as Production Editor, Steve replaced the departing Bob Wade as Editor on issue 35. He left after issue 50 in November 1989 to edit ST Format. He later became a Publisher, overseeing such titles as MEGA, Amiga Power, PC Gamer, .net and the games industry's well-respected EDGE, among others. In January 1995 he was made Publishing Director for the Consumer Division. He now lives in Australia.
Rod Lawton
Editor (AA51–AA89)
With previous experience on New Computer Express and ACE, Rod arrived for AA51 and holds the record as the longest-serving editor, spanning 39 issues and over three years. He left to work as Editor at Future's newly launched Leisure publishing section. He has written, or co-written, many computing and games books, and has written for many publications since, including PC Plus, PC Answers and PC Format. Most recently he has written for the weekly "computing for beginners" style magazine Computeractive. He also runs a digital imaging web site where photographers at all levels of expertise can find out more about the terms, concepts and techniques behind photography.
Dave Golder
Editor (AA96–AA109)
He previously worked on Your Sinclair and Commodore Format before arriving as Editor on AA96. He left after issue 111 to edit fellow Future title Ultimate Future Games. In 1995 he helped launch the new Future Publishing sci-fi magazine SFX, taking over as editor in 1996 and remaining there until 2005. He currently writes a sci-fi column on the Sci-Fi UK website.
Staff Writers
Richard Monteiro
Technical Editor (AA15–AA32)
Richard arrived as the new Technical Editor on issue 15. After 18 issues he left to launch new Future publication ST/Amiga Format. In 1990 Richard formed the company Words Works Limited in Trowbridge with his own editorial team and produced RAZE under subcontract from Newsfield Publications. The first issue of RAZE appeared in October 1990 and it ran for 12 issues until Newsfield could no longer sustain any more publications. In 1992 Richard, along with another ex-Future Publishing staff member, Dianne Taverner, co-founded Paragon Publishing, holding the title of Managing Director. Key titles published during the 1990s included Sega Pro, Play, XGen and Games World: The Magazine.
Trenton Webb
Staff Writer (AA42–AA59)
Trenton arrived as the new games reviewing guru in June 1989's issue. After 18 issues he left to work on many other Future Publishing titles, including Amiga Format and Your Sinclair. During this time he appeared in the reviews section of Channel 4's GamesMaster video games TV show. He later became Editor of magazines such as Game Zone, Commodore Format and ST Format. He left journalism in the mid-1990s to work in the industry itself, joining Bob Wade at Binary Asylum as a Games Designer. After Binary Asylum closed, he went to work for internet and intranet website design firm Zehuti as Project Manager.
James Leach
Staff Writer (AA60–AA64)
An experienced member of Future Publishing who worked on many magazines. Apart from Amstrad Action, James worked on Your Sinclair, Amiga Format, PC Format and GamesMaster, and as Editor of SNES magazine Super Play. After leaving Future Publishing in the mid-1990s, James went on to work for software company Bullfrog, contributing to many games including Syndicate Wars, Dungeon Keeper and Theme Hospital. Other companies James has worked for include Black & White Studios and Lionhead, holding positions such as Lead Writer and Head of Scripting & Writing respectively, working on such games as Black & White, Fable and Black & White 2. In 2006 James left Lionhead to go freelance, where he now describes his skill and experience as "Writer of game plots, dialogue, websites, ads (ATL and BTL), children's books, sitcoms and more."
Frank O'Connor
Staff Writer (AA65–AA72)
Frank's first job in the industry was the Staff Writer position at Amstrad Action. Frank left AA after issue 71 to work on EMAP's Computer & Video Games (a.k.a. C+VG). After his stint on C+VG, Frank came back to Future Publishing to edit the Nintendo games magazine Total!. He appeared as co-commentator on many GamesMaster episodes during the second and third series, from 1992 to 1994. He later moved into the games industry, working as Editor in Chief of DailyRadar.com, an online video games site, and later holding the position of Executive Editor on the Official Xbox Magazine. He is currently Content Manager for Bungie, the developer of Halo, Myth, Oni, and Marathon.
Adam Waring
Technical Editor (AA50–AA83)
The joint second-longest-serving member of the editorial staff, along with Bob Wade, Adam was the Technical Editor for 34 issues. He reviewed Rick Dangerous 2, the joint highest rated AA game. Adam had written several games himself, including Lost Caves and Ninja Massacre, and if one came up for review upon re-release, he would gracefully be allowed to write a second opinion. He also wrote Your Sinclair's "Spec Tec" column, where readers' technical queries were answered. He left Future Publishing in 1992 to travel around the world, later returning to edit magazines such as Max Magazine. He went on to edit Merricks Media's Spanish Magazine, based in Bath.
Simon Forrester
Staff Writer (AA89–AA106)
One of the last Staff Writers to work on AA, arriving just as Rod Lawton was leaving in 1993. He later shared duties between AA and Commodore Format before taking over the editorship of CF in 1995. He subsequently worked for a Bath-based internet monitoring company called FYI, and their site gamecampaign.com, and then for Bath-based web designers Zehuti Ltd.
Freelance writers
There were many freelance writers, with many producing a regular, monthly column. They included Steve "The Pilgrim" Cooke; Stuart "The Balrog" Whyte; PD columnists Jerry Glenwright, Caroline Lamb (a.k.a. Steve Williams), Tim Blackbond and Keith Wood; fanzine columnist David Crookes; and reviewers Richard Wildey and Angela Cook. David Crookes continues to write about the Amstrad as a freelance writer for Retro Gamer magazine.
One of the most memorable, however, was technical writer and covertape editor Richard Fairhurst, a.k.a. CRTC. The latter name matched the initialism of the CPC's Cathode Ray Tube Controller and was sometimes expanded to ChaRleyTroniC. He ran a public domain library called Robot PD and was also an accomplished computer programmer, producing the fully-fledged utilities PowerPage and RoutePlanner for the CPC as well as contributing to various demos. In the CPC fan community, he wrote articles about demos for CPC Attack, was editor of the Amstrad-centred disczine Better Than Life, and was the final editor of the more professionally oriented fanzine WACCI.
References
External links
TACGR 'The Amstrad Computer Games Resource' – AA list of Mastergames, Raves and all other rated games.
AA magazine cover scans AA cover scans from Nich Campbell's Amstrad CPC web pages.
CPCWIKI Amstrad Action entry
Archived Amstrad Action magazines on the Internet Archive
1985 establishments in the United Kingdom
1995 disestablishments in the United Kingdom
Amstrad CPC
Amstrad magazines
Video game magazines published in the United Kingdom
Defunct computer magazines published in the United Kingdom
Magazines established in 1985
Magazines disestablished in 1995 |
67627545 | https://en.wikipedia.org/wiki/Identity%20documents%20in%20Iran | Identity documents in Iran | Identity documents in Iran are official documents of Iranian identity and citizenship, used for identification and authentication. The most important identity documents in Iran are the Iranian identity booklet and the Iranian identity card. Identity documents for foreign nationals are the resident card and the employment card.
List of identity documents
Postal code in Iran
Postal codes are used to identify places. In Iran, the postal code is a 10-digit number: the first 5 digits represent the departure code (characteristic of the destination city) and the second 5 digits the distribution code (a location characteristic of the destination). Its main use is in sending letters or postal packages via the Iran Post Company. There are currently about 52 million distinct postcodes in Iran.
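The departure/distribution split described above can be sketched in a few lines; a minimal illustration (the sample code value below is hypothetical, not a real Iranian postal code):

```python
def parse_iran_postal_code(code: str) -> dict:
    """Split a 10-digit Iranian postal code into its two 5-digit parts."""
    digits = code.replace("-", "").replace(" ", "")
    if len(digits) != 10 or not digits.isdigit():
        raise ValueError("an Iranian postal code is exactly 10 digits")
    return {
        "departure": digits[:5],     # characteristic of the destination city
        "distribution": digits[5:],  # location characteristic of the destination
    }

# Hypothetical example value, for illustration only:
print(parse_iran_postal_code("1234567890"))
# {'departure': '12345', 'distribution': '67890'}
```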
Postal code in Iran is issued by the General Geographical and Spatial Administration of the country under the supervision of the National Post Company.
The plan to allocate and use 10-digit postal codes was placed on the agenda of the Government and the Islamic Consultative Assembly of Iran in 1991. Over a 5-year period, from 1991 to 1996, statistics and information were collected, and in 1997 10-digit postal codes were announced for all Iranian places and households.
Law of the allocation of national number and postal code
In 1997, the law requiring the allocation of national numbers and postal codes to all Iranian citizens was approved:
Article 1 - The Ministries of the Interior and of Post, Telegraph and Telephone are obliged to assign national numbers and postal codes to all Iranian citizens in accordance with the laws and regulations.
Article 2 - All natural and legal persons, ministries, organizations, companies and government-affiliated institutions, universities, banks, municipalities, institutions of the Islamic Revolution and the Islamic Republic of Iran Armed Forces, wherever the law requires mentioning a person's name, are obliged to use the national number and ten-digit postal code, which will be allocated by the National Organization for Civil Registration of Iran in the form of an identification card, in cooperation with the Post Company of the Islamic Republic of Iran, to identify individuals and their place of work or residence.
Article 3 - The card mentioned in the above article is a document identifying Iranian citizens and is subject to all relevant legal and criminal provisions and must always be with its owner.
Note: If the cardholder changes his / her place of residence or work, he / she must inform the National Organization for Civil Registration as soon as possible.
Article 4 - Issuance of any administrative or trade union identification card or driver's license and the like, without entering the national number and postal code is prohibited.
Note: Identity cards of administrative, trade union or driver's license and the like, which were issued before the approval and implementation of this law, will be valid until the conditions for their replacement in a new form are provided.
Article 5 - The government is obliged to anticipate the costs of implementing the national number and postal code plan in the budget of the relevant executive bodies.
Article 6 - The executive by-law of this law will be prepared within 2 months by the Ministries of the Interior and of Post, Telegraph and Telephone and will be approved by the Cabinet.
See also
Multi-factor authentication
List of passports
References
External links
Iran: Passports, ID and civil status documents
Iran Civil Registration Law text
National Organization for Civil Registration of Iran website
Iran Postal code Finder by address
Authentication methods in Iran
Iranian society
Iran
Government of Iran |
14608785 | https://en.wikipedia.org/wiki/Firewalls%20and%20Internet%20Security | Firewalls and Internet Security | Firewalls and Internet Security: Repelling the Wily Hacker is a 1994 book by William R. Cheswick and Steven M. Bellovin that helped define the concept of a network firewall.
Describing in detail one of the first major firewall deployments at AT&T, the book influenced the formation of the perimeter security model, which became the dominant network security architecture in the mid-1990s.
In 2003, a second edition was published, adding Aviel D. Rubin to its authors.
References
External links
Web page for the second edition
Firewalls and Internet Security at Google Books
Internet security
Computer security books
1994 non-fiction books
Books about the Internet
Works about security and surveillance
Works about computer hacking |
27412298 | https://en.wikipedia.org/wiki/Google%20Cloud%20Storage | Google Cloud Storage | Google Cloud Storage is a RESTful online file storage web service for storing and accessing data on Google Cloud Platform infrastructure. The service combines the performance and scalability of Google's cloud with advanced security and sharing capabilities. It is an Infrastructure as a Service (IaaS) offering, comparable to the Amazon S3 online storage service. Unlike Google Drive, and judging by the two services' specifications, Google Cloud Storage is more suitable for enterprise use.
Feasibility
The service is activated through the API Developer Console. Google Account holders must first log in and agree to the Terms of Service, then enable billing.
Design
Google Cloud Storage stores objects (originally limited to 100 GiB, currently up to 5 TiB) in projects which are organized into buckets. All requests are authorized using Identity and Access Management policies or access control lists associated with a user or service account. Bucket names and keys are chosen so that objects are addressable using HTTP URLs:
https://storage.googleapis.com/bucket/object
http://bucket.storage.googleapis.com/object
https://storage.cloud.google.com/bucket/object
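The three addressing forms above can be generated mechanically from a bucket and object name; a small sketch (the bucket and object names are made up, and object keys should be percent-encoded):

```python
from urllib.parse import quote

def gcs_urls(bucket: str, obj: str) -> list:
    """Build the three public HTTP URL forms for a Cloud Storage object."""
    key = quote(obj, safe="/")  # percent-encode the object key, keeping slashes
    return [
        f"https://storage.googleapis.com/{bucket}/{key}",    # path style
        f"http://{bucket}.storage.googleapis.com/{key}",     # virtual-hosted style
        f"https://storage.cloud.google.com/{bucket}/{key}",  # browser-authenticated
    ]

for url in gcs_urls("example-bucket", "reports/2016 summary.txt"):
    print(url)
```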
Features
Google Cloud Storage offers four storage classes, identical in throughput, latency and durability. The four classes, Multi-Regional Storage, Regional Storage, Nearline Storage, and Coldline Storage, differ in their pricing, minimum storage durations, and availability.
Interoperability - Google Cloud Storage is interoperable with other cloud storage tools and libraries that work with services such as Amazon S3 and Eucalyptus Systems.
Consistency - Upload operations to Google Cloud Storage are atomic, providing strong read-after-write consistency for all upload operations.
Access Control - Google Cloud Storage uses access control lists (ACLs) to manage object and bucket access. An ACL consists of one or more entries, each granting a specific permission to a scope. Permissions define what someone can do with an object or bucket (for example, READ or WRITE). Scopes define who the permission applies to. For example, a specific user or a group of users (such as Google account email addresses, Google Apps domain, public access, etc.)
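An ACL of the kind just described, a list of entries each granting one permission to one scope, can be modelled roughly as below (the scopes, the `allUsers` convention, and the WRITE-implies-READ rule are illustrative simplifications, not the actual API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AclEntry:
    scope: str       # e.g. a user email, a domain, or "allUsers" for public access
    permission: str  # what the scope may do, e.g. "READ" or "WRITE"

def can(acl, who: str, permission: str) -> bool:
    """Check whether a scope holds a permission (WRITE taken to imply READ)."""
    for entry in acl:
        if entry.scope in (who, "allUsers"):
            if entry.permission == permission:
                return True
            if entry.permission == "WRITE" and permission == "READ":
                return True
    return False

acl = [AclEntry("owner@example.com", "WRITE"), AclEntry("allUsers", "READ")]
print(can(acl, "owner@example.com", "WRITE"))  # True
print(can(acl, "someone@else.com", "READ"))    # True (public READ)
print(can(acl, "someone@else.com", "WRITE"))   # False
```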
Resumable Uploads - Google Cloud Storage provides a resumable data transfer feature that allows users to resume upload operations after a communication failure has interrupted the flow of data.
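The resumable-transfer idea, re-sending only the bytes after the last offset the server acknowledged instead of restarting from scratch, can be sketched generically (this models the concept with a fake in-process server, not the real Cloud Storage protocol):

```python
def resumable_upload(data: bytes, send_chunk, committed: int = 0, chunk_size: int = 4):
    """Send data in chunks, resuming from the server's committed byte offset."""
    while committed < len(data):
        chunk = data[committed:committed + chunk_size]
        committed = send_chunk(committed, chunk)  # server returns its new offset
    return committed

received = bytearray()
def fake_server(offset, chunk):
    received.extend(chunk)        # pretend the whole chunk arrived intact
    return offset + len(chunk)

# Suppose a previous attempt got 5 bytes through before a communication failure:
payload = b"hello world"
received.extend(payload[:5])
resumable_upload(payload, fake_server, committed=5)
print(bytes(received))  # b'hello world'
```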
References
External links
Google Cloud Storage Discussion Group
Intro to new Google cloud technologies: Google Cloud Storage, Prediction API, BigQuery slideshare presentation by Chris Schalk (Developer Advocate at Google)
Cloud storage
Web services
File hosting
Network file systems
Cloud computing providers
Cloud platforms |
11137201 | https://en.wikipedia.org/wiki/Fundaci%C3%B3n%20V%C3%ADa%20Libre | Fundación Vía Libre | Fundación Vía Libre of Córdoba, Argentina, is an NGO working on the social implications of information and communication technologies throughout Latin America, with very strong ties to the region's digital rights and software libre community, to academia, to other civil society organizations, and to the global Free Software movement. It was founded in 2000.
Via Libre's work on the role of software libre for the public administration has led to numerous legislative projects demanding the use of free software for all public administration work. The template "free software bill" that Vía Libre together with lawmakers and a large international group of software supporters helped to create, introduce and promote in several Latin American countries was mentioned in the FLOSS report as recommended legislation for all European Union member countries.
Vía Libre took part in the international consortium that executed the FLOSSWorld program within the European Union's 6th Framework Program, led by the University of Maastricht and also was a partner at SELFProject between years 2006/2008. Vía Libre's Support Program for Small and Medium Organizations, co-funded by AVINA Foundation of Switzerland and Argentina's National Agency for Science and Technology, aims at helping small businesses and NGOs introduce libre software in their operations through training, preconfigured software packages and the development of a free ERP tailored to their needs. Vía Libre cooperated heavily with Heinrich-Böll-Stiftung for nine years, working on publications, events and conferences on issues related to copyrights and patents all over Latin America. From February 2008, Vía Libre takes part of FLOSSInclude Project, another European Project within the 7th Framework Program.
Other fields of interest
As a civil rights advocacy group, Vía Libre campaigns for human rights, civil liberties, access to knowledge and the right to privacy in cyberspace. In the last few years, the Foundation has launched campaigns against Electronic Voting, against surveillance and excessive data retention, and for access to knowledge in different fields. From 2008, Vía Libre was also approved by WIPO's General Assembly as an observer at the World Intellectual Property Organization.
References
External links
Fundación Vía Libre
Fundación Vía Libre Information on the website of the Self Project
Heinrich Böll Foundation in Latinamerica
Non-profit organisations based in Argentina
Free and open-source software organizations |
50575792 | https://en.wikipedia.org/wiki/Google%20Daydream | Google Daydream | Daydream is a discontinued virtual reality (VR) platform which was developed by Google, primarily for use with a headset into which a smartphone is inserted. It is available for select phones running the Android mobile operating system (versions "Nougat" 7.1 and later) that meet the platform's software and hardware requirements. Daydream was announced at the Google I/O developer conference in May 2016, and the first headset, the Daydream View, was released on November 10, 2016. To use the platform, users place their phone into the back of a headset, run Daydream-compatible mobile apps, and view content through the viewer's lenses.
Daydream was Google's second foray into VR following Cardboard, a low-cost platform intended to encourage interest in VR. Compared to Cardboard, which was built into compatible apps and offered limited features, Daydream was built into Android itself and included enhanced features, including support for controllers. Daydream was not widely adopted by consumers or developers, and in October 2019, Google announced that the Daydream View headset had been discontinued and that they would no longer certify new devices for Daydream.
History
At the Google I/O developer conference in May 2016, Google announced that a new virtual reality (VR) platform called "Daydream" would be built into the next release of their Android mobile operating system (OS)—Nougat (7.1). Daydream was Google's second foray into VR following Cardboard, which was a low-cost standard that utilized a cardboard viewer with plastic lenses that could hold a smartphone. Whereas Cardboard was used by running compatible apps and was accessible on most smartphones, Daydream was built into the Android OS itself and only worked on select phones that met the platform's standards, such as having specific hardware components. In January 2017, Google opened the Daydream program for all third-party developers.
Software
Android Nougat introduced VR Mode, a low-latency "sustained performance mode" to optimize the VR experience for Daydream. It dedicated a CPU core to the user interface thread to reduce visual issues that could induce nausea. Whereas the GPU normally sends frames to the device display in a "double buffering" mode on Android, VR Mode switched to "single buffering" to avoid an intermediate frame buffer and instead draw frames directly to the display. The mode also allowed asynchronous reprojection, whereby frames were slightly transformed to account for positional changes in the user's head that occurred during the roughly 16 milliseconds each frame took to render and send to the display. VR Mode also tuned the motion sensor pathways, resulting in quicker input from the device's accelerometer and gyroscope, and assisted developers in optimizing apps for a device's thermal profile. Overall, the performance improvements of VR Mode reduced motion-to-photon latency on the Nexus 6P phone from 100 milliseconds on Android Marshmallow to less than 20 milliseconds on Android Nougat.
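The 16-millisecond figure quoted above is simply the per-frame budget of a 60 Hz display, and the latency numbers reduce to quick arithmetic:

```python
# One frame must be produced every 1000/60 ms on a 60 Hz display,
# which is where the ~16 ms per-frame figure comes from.
refresh_hz = 60
frame_budget_ms = 1000 / refresh_hz
print(round(frame_budget_ms, 2))  # 16.67

# Motion-to-photon latency improvement reported for the Nexus 6P:
marshmallow_ms, nougat_ms = 100, 20
print(f"improvement: at least {marshmallow_ms / nougat_ms:.0f}x")  # at least 5x
```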
Daydream also included a new head tracking algorithm that combined the input from various device sensors, as well as integration of system notifications into the VR user interface.
Daydream allows users to interact with VR-enabled apps, including YouTube, Google Maps Street View, Google Play Movies & TV, and Google Photos in an immersive view. Google recruited media companies like Netflix and Ubisoft for entertainment apps.
Headsets
First-generation Daydream View
The first-generation Google Daydream View was announced on October 4, 2016. Daydream-ready smartphones can be placed in the front compartment of the Daydream View and then viewed in VR through the headset's two lenses. The View distinguished itself from previous VR head mounts by being constructed out of a light-weight cloth material, as well as featuring capacitive nubs and an NFC chip to simplify the process of setting up virtual reality viewing. The Daydream View was released on November 10, 2016, initially in a "Slate" color option. Two new color choices, "Crimson" and "Snow", became available on December 8.
In a review of the Google Daydream View, Adi Robertson of The Verge wrote that the headset was the "best mobile headset" she'd ever used, complimenting its "squishy foam-and-fabric body" being "significantly smaller, lighter, and more portable than the Samsung Gear VR", and that its design "keeps the lenses relatively protected during travel". She also liked the device's weight distribution, writing that it "rests more weight on your forehead than your cheeks, an option I've found more comfortable" and that allows her to "wear it easily for hours at a time". She also praised the material, particularly its plastic sliders rather than velcro patches on the head strap, writing that it allows "a wider range of sizes and avoids gathering lint", and that the View's overall design "could almost pass for an airplane sleep mask", meaning that it "avoids looking ostentatiously high-tech or intimidating".
Google Daydream headsets are packaged with a wireless controller. This controller can be used for interacting with the virtual world through button presses or through waving the device. On-board sensors are used to track the orientation of the controller and approximate the position of the user's hand. The Daydream View's controller can be stored inside the headset while not in use. The controller has a touch pad, two circular buttons (one functioning as a home button and one functioning as an app-specific button), and two volume buttons, along with a status light. The controller is rechargeable and charges via USB-C. On its support pages, Google noted that the Daydream View "doesn't include a charger or cables" and instead directs users to purchase those from the Google Store.
Second-generation Daydream View
The second-generation Daydream View was unveiled during the Made by Google 2017 event. It was released in a different set of colors, namely: "Charcoal", "Fog", and "Coral". It is largely similar to the first-generation model, with a few improvements, including a slightly altered design and improved lenses for a wider field of view. It was released on October 19, 2017, with a launch price of US$99.
Lenovo Mirage Solo
Lenovo's Mirage Solo headset, announced at CES 2018, is the first standalone headset running on Google's Daydream platform. It is powered by Qualcomm's Snapdragon 835 system-on-chip, has 4 GB of RAM and 64 GB of internal storage expandable by microSD, dual mics, a 3.5mm headphone jack, a 2560 × 1440 LCD screen and a 4,000 mAh battery. Its highlight feature is support for Google "WorldSense", an improved position tracking technology.
The headset is designed to be coupled with the Mirage Camera, which is a point-and shoot 180-degree 3D VR camera with two lenses that can capture in 4K.
Lenovo released the device in May 2018 at a price of $399.
Compatibility
Daydream only worked on certain newer phones with specific components. Google announced at the Google I/O conference in May 2016 that eight hardware partners would make Daydream-ready phones: Samsung, HTC, LG, Xiaomi, Huawei, ZTE, Asus and Alcatel. Google CEO Sundar Pichai expected 11 Android smartphones supporting Daydream VR to be on sale by the end of 2017.
Discontinuance
In 2019, HBO discontinued its Daydream apps, while Hulu dropped support for the platform from its app.
On October 15, 2019, Google announced that it would no longer sell the Daydream View headset, and that their new flagship phones, the Pixel 4 and Pixel 4 XL, would not be certified for Daydream. No phones released in 2019 were compatible with Daydream, and the company confirmed that no additional devices would be certified for the platform. A spokesperson said, "There hasn't been the broad consumer or developer adoption we had hoped, and we've seen decreasing usage over time of the Daydream View headset." The representative said that the company recognized the potential in smartphone VR but: "we noticed some clear limitations constraining smartphone VR from being a viable long-term solution. Most notably, asking people to put their phone in a headset and lose access to the apps they use throughout the day causes immense friction." Google confirmed that the Daydream app and app store would remain available.
In October 2020, the company announced that it had ended support for the Daydream software, and that Android 11 would drop support for the platform entirely. However, the Daydream app and controller both continue to work on Android 11.
References
External links
Android (operating system)
Daydream
Virtual reality headsets |
23497750 | https://en.wikipedia.org/wiki/Basic4ppc | Basic4ppc | Basic4ppc (pronounced "Basic for PPC") is a programming language for Pocket PC handheld computers running the Windows Mobile operating system, by Anywhere Software. The language is based on a BASIC-like syntax, taking advantage of Microsoft's .NET technology to allow additional libraries, graphical user interface design with Windows Forms, rapid application development (RAD), and .NET-framework-compatible compilation. The language implements a unique way of adding objects to a program without being object-oriented. Its advantages are simplicity, development pace and integration with the .NET framework. A special version of the integrated development environment (IDE) allows developing straight on the Windows Mobile device. With the demise of the Windows Mobile operating system and the devices running it, Basic4ppc came to the end of its life in about 2012. For owners of Basic4ppc it remains a useful Windows-desktop BASIC compiler, as it runs code directly in the Windows environment and can compile a project to a Windows 'exe' file for use as a Windows program.
History (major versions)
Version 1.00 of Basic4ppc was released in 2005. It was targeted mainly for handheld devices, letting users program in a unique device IDE. Basic concepts were introduced there, such as the direct naming reference and the syntax.
Version 2.0 added major improvements to the user interface, controls and optimization.
8/2006 - Version 3.0 released; improved stability and allowed stand-alone compilation for the first time.
12/2006 - Version 4.0 released; introduced the ability to use external libraries for the first time.
5/2007 - Version 5.0 released, with a fully new IDE and support for Smartphones.
12/2007 - Version 6.0 marked a breakthrough, introducing optimized compilation and thus allowing far better performance in both device- and desktop-compiled applications.
10/2008 - Version 6.5 released; introduced support for modules.
06/2009 - Version 6.8 released, with automatic support for different screen resolutions and the addition of two new collection objects.
04/2010 - Version 6.9 released; added support for typed variables and subs.
Android
In 2010 a version for Android phones and tablets was released. This is a separate environment working along the same lines; the language is similarly BASIC-like and can be compiled for Android devices.
Language features
Dual development platform: Basic4ppc allows development directly on the handheld device via a fully compatible Device IDE. Code written in either the device or desktop IDE is identical for both platforms and operating systems. Compilation, however, must target either the device or the desktop, due to the differences between the operating systems.
Compilation available in four modes: Windows executable, Device executable for Pocket PC (with and without AutoScale), Desktop executable, and Smartphone executable (for mobile phones running Windows Mobile OS). Compiled .EXE files require .NET 2.0 framework to be installed on the target machine. This is usually the case with Windows XP SP2 and later, but has to be manually taken care of with earlier versions.
Additional libraries: based on the Microsoft .NET framework, Basic4ppc can use code inside .NET .dll files once they have been adapted for Basic4ppc (this can be done by any programmer using Microsoft development tools). Many such additional libraries exist, most of which are open source, written by users and accessible via the Basic4ppc forum.
Merging: Additional library code is almost always merged into the main executable, so that a single file can be deployed.
Characteristics
Basic4ppc is a procedural, structured language implementing a partial object-oriented programming model. Syntax is similar to common Basic dialects, most influenced by Visual Basic. It supports events. Like most modern languages, the development environment supplies graphical user interface design tools. Users build applications using a drag-and-drop, component-based UI. This is possible on both device and desktop, a unique ability.
Regular flow structures, such as if…then and for…next are supported, as in many other Basic versions.
Reserved words: Basic4ppc includes a vast number of reserved words. This is because of variable declaration scope.
Variables can be local (accessible throughout a subroutine), global (accessible throughout a module) or public (accessible throughout a program). All variables are typeless. This means you can write the following code:
Sub App_Start
numA = "Five "
numB = "5"
numC = 6
SUM1 = numA & numB 'remark: = "Five 5"
SUM2 = numB + numC 'remark: = 11
End Sub
There is no need to declare variables explicitly.
Subroutines (called "Sub") are the most basic unit of code. All code must be written inside subroutines. Subroutines can return a value.
Direct Naming Reference: All internal controls can be accessed directly and passed as parameters to subroutines by specifying their name expressed as a string. This gives the programmer the ability to pass controls as parameters without knowing in advance which control will be passed, and without having to deal with either pointers or object-oriented programming.
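A brief sketch of the idea (the label name lblStatus, the helper sub, and the use of the Control keyword are assumptions for illustration; exact syntax may vary between versions):

```basic
Sub App_Start
    ' Pass the control's name, expressed as a string, to a subroutine.
    SetCaption("lblStatus")
End Sub

Sub SetCaption(ctrlName)
    ' The receiving sub resolves the string name to the actual control,
    ' without pointers or object-oriented syntax.
    Control(ctrlName).Text = "Ready"
End Sub
```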
AutoScale mode allows developing for different screen resolutions, with the language taking care of the adjustments needed to the UI appearance.
Example code
Here is an example of the language:
Code snippet that displays a message box "Hello, World!" as the application starts, without any forms being loaded:
Sub App_Start
MsgBox ("Hello, World!")
End Sub
Libraries
Based on Microsoft's .NET technology, Basic4ppc supports .NET .DLLs with some minor adjustments. This allowed users to create many open-source libraries, downloadable from the Basic4ppc forum, usually with complete source code. As with many other programming languages, additional libraries provide most of the real-world language functionality. Additional libraries cover subjects such as graphics, databases, user interface, GPS, barcode readers and peripheral devices, debugging, connectivity (Bluetooth, Wi-Fi, and data-transfer protocols such as HTTP, FTP and so on), XML, and more.
References
External links
Integrated development environments
Pocket PC software
Articles with example BASIC code
Procedural programming languages
BASIC programming language family
Programming languages created in 2005 |
23392007 | https://en.wikipedia.org/wiki/Document%20capture%20software | Document capture software | Document Capture Software refers to applications that provide the ability and feature set to automate the process of scanning paper documents or importing electronic documents, often for the purposes of feeding advanced document classification and data collection processes. Most scanning hardware, both scanners and copiers, provides the basic ability to scan to any number of image file formats, including: PDF, TIFF, JPG, BMP, etc. This basic functionality is augmented by document capture software, which can add efficiency and standardization to the process.
Typical Features
Typical features of Document Capture Software include:
Barcode recognition
Patch Code recognition
Separation
Optical Character Recognition (OCR)
Optical Mark Recognition (OMR)
Quality Assurance
Indexing
Migration
Goal for Implementation of a Document Capture Solution
The goal for implementing a document capture solution is to reduce the amount of time spent scanning, separating, enhancing, organizing, classifying, normalizing, and collecting information from document collections, and to produce metadata along with an image/PDF file, and/or OCR text. This information is then migrated to a file share, FTP site, database, Document Management or Enterprise Content Management system. These systems often provide a search function, allowing search of the assets based on the produced metadata, and then viewed using document imaging software.
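For illustration, a minimal sketch of the indexing step in Python (the invoice-number field and the regular expression are assumptions for the example, not features of any particular capture product):

```python
import json
import re

def index_document(ocr_text: str, source_file: str) -> dict:
    """Build an index record from OCR text, as a capture tool might
    do before migrating the document to a repository."""
    # Hypothetical index field: pull an invoice number out of the OCR text.
    match = re.search(r"Invoice\s*(?:No\.?|Number)[:\s]*([\w-]+)",
                      ocr_text, re.IGNORECASE)
    return {
        "source_file": source_file,
        "invoice_number": match.group(1) if match else None,
        "ocr_text_length": len(ocr_text),
    }

record = index_document("ACME Corp\nInvoice No: INV-10042\nTotal: $99.00",
                        "scan_0001.pdf")
print(json.dumps(record))  # metadata ready for migration to a repository
```

The resulting metadata record is what a downstream document management system would use for keyword search.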
Document Capture System Solutions - General
Integration with Document Management System
ECM (Enterprise Content Management) systems and their DMS (Document Management System) components are being adopted by many organizations as corporate document management systems for all types of electronic files, e.g. Microsoft Word documents and PDFs. However, much of the information held by organisations is on paper, and this needs to be integrated within the same document repository.
By converting paper documents into digital format through scanning, organizations convert paper into image formats such as TIF, JPG, and PDF, and also extract valuable index information or business data from the document using OCR technology. Digital documents and associated metadata can easily be stored in the ECM in a variety of formats. The most popular of these formats is PDF which not only provides an accurate representation of the document but also allows all the OCR text in the document to be stored behind the PDF image. This format is known as PDF with hidden text or text-searchable PDF. This allows users to search for documents by using keywords in the metadata fields or by searching the content of PDF files across the repository.
Advantages of scanning documents into an ECM/DMS
Information held on paper is usually just as valuable to organisations as the electronic documents that are generated internally. Often this information represents a large proportion of the day to day correspondence with suppliers and customers. Having the ability to manage and share this information internally through a document management system such as SharePoint or a CMIS-compatible repository improves collaboration between departments or employees and also eliminates the risk of losing this information through disasters such as floods or fire.
Organisations adopting an ECM/DMS often implement electronic workflow which allows the information held on paper to be included as part of an electronic business process and incorporated into a customer record file along with other associated office documents and emails.
For business critical documents, such as purchase orders and supplier invoices, digitising documents helps speed up business transactions as well as reduce manual effort involved in keying data into business systems, such as CRM, ERP and Accounting. Scanned invoices can also be routed to managers for payment approval via email or an electronic workflow.
Electronic Document Capture
In the earlier implementations of Document Capture Software, the technology focused solely on the digitization and capture of information from paper documents. Document images were acquired from document scanners via TWAIN/ISIS drivers. Only image-based file formats like TIF, JPG, and BMP were typically compatible with these solutions. But in recent years, as the volume of electronically created documents and the number of proprietary file formats continue to increase at exponential rates, the need to handle documents that already exist in electronic formats has grown. The relevant document capture products have adapted to work with non-image file formats, with the end goal of creating a unified processing workflow capable of handling all incoming documents.
The ability to import files from a variety of sources is one example of such adaptation. Importing documents from ECM/DMS software solutions, email servers, FTP, and EDI is now as much of a requirement of document capture software as is paper capture.
The normalization of output files to text-based PDF format is now another critical factor in long-term archival of proprietary electronic file formats. Normalization expands access and usage of files to users throughout the enterprise, rather than only those that created the original electronic file.
References
Artificial intelligence applications
Optical character recognition
Data management
SharePoint |
855103 | https://en.wikipedia.org/wiki/Olivetti%20S.p.A. | Olivetti S.p.A. | Olivetti S.p.A. is an Italian manufacturer of computers, tablets, smartphones, printers and other such business products as calculators and fax machines. Headquartered in Ivrea, in the Metropolitan City of Turin, the company has been part of the Gruppo TIM since 2003. One of the first commercial programmable desktop calculators, the Programma 101, was produced by Olivetti in 1964 and was a commercial success.
History
Founding
The company was founded as a typewriter manufacturer by Camillo Olivetti in 1908 in Ivrea, near Turin, Italy. The firm was mainly developed by his son Adriano Olivetti. Olivetti opened its first overseas manufacturing plant in 1930, and its Divisumma electric calculator was launched in 1948. Olivetti produced Italy's first electronic computer, the transistorised Elea 9003, in 1959, and purchased the Underwood Typewriter Company that year. In 1964 the company sold its electronics division to the American company General Electric. It continued to develop new computing products on its own; one of these was the Programma 101, one of the first commercially produced programmable calculators. In the 1970s and 1980s, Olivetti was the biggest manufacturer of office machines in Europe and the second-biggest PC vendor in Europe behind IBM.
In 1980, Olivetti began distributing in Indonesia through Dragon Computer & Communication.
In 1981, Olivetti installed the electronic voting systems for the European Parliament in Strasbourg and Luxembourg.
In September 1994, the company launched Olivetti Telemedia chaired by Elserino Piol.
Since 2003, Olivetti has been part of the TIM Group through a merger.
Design
Olivetti was famous for the attention it gave to design: In 1952, the Museum of Modern Art held an exhibit titled "Olivetti: Design in Industry"; today, many Olivetti products are still part of the museum's permanent collection. Another major show, mounted by the Musée des Arts Décoratifs in Paris in 1969, toured five other cities. Olivetti was also renowned for the caliber of the architects it engaged to design its factories and offices, including Le Corbusier, Louis Kahn, Gae Aulenti, Egon Eiermann, Figini-Pollini, Ignazio Gardella, Carlo Scarpa, BBPR, and many others.
From the 1940s to the 1960s, Olivetti industrial design was led by Marcello Nizzoli, responsible for the Lexicon 80 (1948) and the portable Lettera 22 (1950). Later, Mario Bellini and Ettore Sottsass directed design. Bellini designed the Programma 101 (1965), Divisumma 18 (1973) and Logos 68 (1973) calculators and the TCV-250 video display terminal (1966), among others. Sottsass designed the Tekne 3 typewriter (1958), Elea 9003 computer (1959), the Praxis 48 typewriter (1964), the Valentine portable typewriter (1969), and others. Michele De Lucchi designed the Art Jet 10 inkjet printer (1999) (winner of the Compasso d'Oro) and the Gioconda calculator (2001). During the 1970s Olivetti manufactured and sold two ranges of minicomputers. The 'A' series started with the typewriter-sized A4 through to the large A8, and the desk-sized DE500 and DE700 series. George Sowden worked for Olivetti from 1970 until 1990, and designed their first desktop computer, Olivetti L1, in 1978 (following ergonomic research lasting two years). In 1991, Sowden won the prestigious ADI Compasso d'Oro Award for the design of the Olivetti fax OFX420.
Olivetti paid attention to more than product design; graphic and architectural design was also considered pivotal to the company. Giovanni Pintori was hired by Adriano Olivetti in 1936 to work in the publicity department. Pintori was the creator of the Olivetti logo and many promotional posters used to advertise the company and its products. During his activity as Art Director from 1950, Olivetti's graphic design obtained several international awards, and he designed works that created the Olivetti image and became emblematic Italian reference in the history of 20th-century design.
Those designers also created the Olivetti Synthesis office furniture series which mainly were used to be installed in the firm's own headquarters, worldwide branch offices and showrooms. Olivetti also produced some industrial production machinery, including metalworking machines of the Horizon series.
Typewriters
Olivetti began with mechanical typewriters when the company was founded in 1908, and produced them until the mid-1990s. Until the mid-1960s they were fully mechanical, and models such as the portable Olivetti Valentine were designed by Ettore Sottsass.
With the Tekne/Editor series and Praxis 48, some of the first electromechanical typewriters were introduced. The Editor series was used for speed typing championship competition. The Editor 5 from 1969 was the top model of that series, with proportional spacing and the ability to support justified text borders. In 1972 the electromechanical typeball machines of the Lexicon 90 to 94C series were introduced, as competitors to the IBM Selectric typewriters, the top model 94c supported proportional spacing and justified text borders like the Editor 5, as well as lift-off correction.
In 1978 Olivetti was one of the first manufacturers to introduce electronic daisywheel printer-based word processing machines, called TES 401 and TES 501. Later the ET series typewriters, with or without LCD and with different levels of text editing capabilities, were popular in offices. Models in that line were ET 121, ET 201, ET 221, ET 225, ET 231, ET 351, ET 109, ET 110, ET 111, ET 112, ET 115, ET 116, ET 2000, ET 2100, ET 2200, ET 2250, ET 2300, ET 2400 and ET 2500. For home users, in 1982 the Praxis 35, Praxis 40 and 45D were some of the first portable electronic typewriters. Later, Olivetti added the Praxis 20, ET Compact 50, ET Compact 60, ET Compact 70, ET Compact 65/66, the ET Personal series and Linea 101. The top models were 8-line LCD-based portables like the Top 100 and Studio 801, with the possibility of saving the text to a 3.5-inch floppy disk.
The professional line was upgraded with the ETV series video typewriters, based on the CP/M operating system (ETV 240, ETV 250, ETV 300, ETV 350) and later on the MS-DOS operating system (the ETV 260, ETV 500, ETV 2700, ETV 2900 and ETV 4000s word processing systems, with floppy drives or hard disks). Some of them (ETV 300, 350, 500, 2900) were external boxes that could be connected through an optional serial interface to many of the ET series office typewriters; the others were fully integrated, with an external monitor which could be installed on a holder over the desk. Most of the ET/ETV/Praxis series electronic typewriters were designed by Mario Bellini.
By 1994, Olivetti stopped production of typewriters, as most users had transitioned to Personal Computers.
Computers
Between 1955 and 1964 Olivetti developed some of the first transistorized mainframe computer systems, such as the Elea 9003. Although 40 large commercial 9003 and over 100 smaller 6001 scientific machines were completed and leased to customers up to 1964, low sales, the loss of two key managers and financial instability caused Olivetti to withdraw from the field in 1964.
In 1965 Olivetti released the Programma 101, considered one of the first commercial desktop programmable calculators. It was saved from the sale of the computer division to GE thanks to an employee, Gastone Garziera, who spent successive nights changing the internal categorization of the product from "computer" to "calculator", thus keeping the small team at Olivetti and creating some awkward situations in the office, since that space was now owned by GE.
In 1974 the firm released the TC800, an intelligent terminal designed to be attached to a mainframe and used in the finance sector. It was followed in 1977 by the TC1800.
Olivetti's first modern personal computer, the M20, featuring a Zilog Z8000 CPU, was released in 1982.
The M20 was followed in 1983 by the M24, a clone of the IBM PC using DOS and the Intel 8086 processor (at 8 MHz) instead of the Intel 8088 used by IBM (at 4.77 MHz). The M24 was sold in North America as the AT&T 6300. Olivetti also manufactured the AT&T 6300 Plus, which could run both DOS and Unix. The M24 in the US also was sold as Xerox 6060. The Olivetti M28 was the firm's first PC to have the Intel 80286 processor.
The same year Olivetti produced its M10 laptop computer, an 8085-based workalike of the successful Radio Shack TRS-80 Model 100, which it marketed in Europe. These were the first laptops to sell in million-unit quantities, though the M10 itself only attained sales figures in the tens of thousands and went out of production within two years.
During the 1980s and 1990s Olivetti continued to release PC-compatible machines, facing mounting competition from other brands. It turned to laptops, introducing in 1991 the D33, a laptop in a carry case, and continuing with the M111, M211, S20, D33, Philos and Echos series. A notable subnotebook was the Quaderno, about the same size as a sheet of A5 paper; it was the grandfather of the netbooks introduced 20 years later.
Olivetti did attempt to recover its position by introducing the Envision in 1995, a full multimedia PC, to be used in the living room; this project was a failure. Packard Bell managed to successfully introduce a similar product in the U.S. but only some years later.
The company continued to develop personal computers until it sold its PC business in 1997.
End of Olivetti as a separate company
In the 1990s, Olivetti's computer businesses were in great difficulty, reportedly because of competition from US vendors and from new, cheap Taiwanese manufacturers of PC components such as ASUS, MSI and Gigabyte, which allowed local system builders to offer cheaper PCs than Olivetti could with its own designs. It was on the brink of collapse and had needed government support to stay afloat. A company in transition, it had moved out of the typewriter business into personal computers before embracing telecoms between 1997 and 1999. In the process it had lost around three-quarters of its staff.
In 1999, the Luxembourg-based company Bell S.A. acquired a controlling stake in Olivetti, but sold it to a consortium including the Pirelli and Benetton groups two years later. Olivetti then launched a hostile bid for Telecom Italia in February 1999, despite being less than a seventh of the size of its target. In a take-over battle against Deutsche Telekom and other potential bidders, Olivetti won out and controlled 52.12% of former monopoly Telecom Italia, Italy's #1 fixed-line and mobile phone operator. However, the ownership structure of the merged Olivetti/Telecom Italia was complex and multi-layered, with Olivetti taking on around $16 billion of extra debt. The deal was later referred to as the "Olivetti/Telecom Italia affair" because of the opaque dealings behind it.
After a 2003 reorganization, Olivetti became the office equipment and systems services subsidiary of Telecom Italia. In 2003 Olivetti was absorbed into the Telecom Italia group, maintaining a separate identity as Olivetti Tecnost.
Rebirth and resumption of computer production
In 2005, Telecom Italia relaunched the company in the information technology sector, investing €200 million and restoring the original Olivetti brand, which had been replaced by Olivetti Tecnost in 2003. In 2007, Olivetti launched "LINEA_OFFICE", designed by Jasper Morrison for Olivetti; a new line of PCs, notebooks, printers, fax machines and calculators. Olivetti today operates in Italy and Switzerland, and has sales associates in 83 countries. Research and development are located in Agliè, Carsoli and Scarmagno in Italy, and Yverdon, Switzerland.
In March 2011 Olivetti began producing the OliPad, its first tablet computer, featuring a ten-inch screen, 3G, WiFi, Bluetooth connectivity, Nvidia Tegra 2, Android 2.2.2 and a 1024 x 600 display. It also features an application store, with apps specifically designed by Olivetti for 'business & government'. In 2014 the R&D department in Arnad was sold to SICPA.
Smartphones
In 2013, Olivetti launched a series of smartphones called Oliphone:
Olivetti Oliphone M8140
Olivetti Oliphone Q8145
Olivetti Oliphone Q8150
Olivetti Oliphone Q9047
Olivetti Oliphone WG451
Olivetti Oliphone WG501
See also
Olivetti typewriters
Olivetti computers
List of Italian companies
References
External links
History of Olivetti at Telecom Italia (archived 2005)
Picture of a 1983 office featuring an Olivetti M24
"History of Olivetti" - SEQ Corporation, Stockholm, Sweden
Video Olivetti L1 M40 ST Retro Computer museum, Zatec, Czech Republic video
Video Olivetti P6066 Retro Computer museum, Zatec, Czech Republic video
Technology companies established in 1908
Electronics companies established in 1908
Italian companies established in 1908
Italian brands
Mobile phone manufacturers
Mechanical calculator companies
Electromechanical calculator companies
Computer printer companies
Office supply companies
Computer companies of Italy
Computer hardware companies
Electronics companies of Italy
Home computer hardware companies
Netbook manufacturers
Display technology companies
Telecommunications equipment vendors
Mobile phone companies of Italy
Multinational companies headquartered in Italy
Telecom Italia
2003 mergers and acquisitions
Ivrea |
35588872 | https://en.wikipedia.org/wiki/GNOME%20Boxes | GNOME Boxes | GNOME Boxes is an application of the GNOME Desktop Environment, used to access virtual systems. Boxes uses the QEMU, KVM, and libvirt virtualization technologies.
GNOME Boxes requires the CPU to support some form of hardware-assisted virtualization (AMD-V or Intel VT-x, for example).
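On Linux, one simple way to check for this support is to look for the vmx (Intel VT-x) or svm (AMD-V) CPU flags in /proc/cpuinfo. A minimal sketch (note that the flags may be hidden when running inside a virtual machine):

```python
def has_hw_virt(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    """Return True if the CPU advertises Intel VT-x (vmx) or AMD-V (svm)."""
    try:
        with open(cpuinfo_path) as f:
            tokens = f.read().split()
    except OSError:
        return False  # /proc not available (e.g. not a Linux system)
    return "vmx" in tokens or "svm" in tokens

print("hardware-assisted virtualization:", has_hw_virt())
```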
History and functionality
GNOME Boxes was initially introduced as beta software in GNOME 3.3 (development branch for 3.4) as of Dec 2011, and as a preview release in GNOME 3.4. Its primary functions were as a virtual machine manager, remote desktop client (over VNC), and remote filesystem browser, utilizing the libvirt, libvirt-glib, and libosinfo technologies. This enabled the viewing of remote systems and virtual machines on other computers in addition to locally created virtual machines. Boxes possesses the ability to easily create local virtual machines from a standard disk image file, such as an ISO image while requiring minimum user input. As of version 40, the remote connection functionality has been moved to the separate application, GNOME Connections.
People
Boxes was originally developed by Marc-André Lureau, Zeeshan Ali, Alexander Larsson and Christophe Fergeau and is currently being maintained and developed by Felipe Borges.
See also
VirtualBox
Red Hat Virtual Machine Manager (virt-manager)
VMware Workstation
List of GNOME applications
References
External links
Boxes designs on GNOME wiki
Free emulation software
Free software programmed in C
Free software programmed in Vala
Free virtualization software
GNOME Core Applications
Software that uses Meson
Virtualization-related software for Linux
Virtualization-related software that uses GTK |
10060957 | https://en.wikipedia.org/wiki/TV%20Network%20Protocol | TV Network Protocol | The TV Network Protocol, or TVNP as it is more commonly known, is an open network protocol developed to enable CCTV systems from any manufacturer to be integrated into an existing CCTV network. It provides high levels of support for audio routing, video routing and camera control.
The protocol was developed by Philips Projects (now Tyco Integrated Systems) on behalf of the Traffic Control Systems Unit (TCSU), now a part of Transport for London (TfL). Tyco acts as the standards and approvals house for companies who want to implement the protocol.
The protocol's roots can be traced back to the Highways Agency HDLC standard. It is the property of TfL and is independent of any supplier. As of late 2011, there were at least eight manufacturers with a partial or full TVNP interface, including:
BAE Systems (previously Petards)
Chubb (previously Initial Fire and Security)
Honeywell
Infinitronix
Meyertech
Costain (previously Simulation Systems Limited)
Synectics
Tyco (previously Philips Projects).
TVNP layers are broadly based on the OSI model. TVNP Layer 2 and 3 correspond to OSI Layers 2 and 3. When used over RS232 only, TVNP Layer 1 corresponds to OSI Layer 1. TVNP Layer 4 is equivalent to OSI Layer 7.
Structuring the TVNP in such a way means that as future needs and provisions change, aspects of one layer can be enhanced or modified without the need for change to the other layers.
Layer 1 (L1) is the Physical Protocol Layer: for serial RS232 it defines the electrical signals and interconnect requirements at the communication interface port(s) of the CCTV system. V3.0 of the specification allows UDP/IP, typically over Ethernet, to be used for L1; this option is not a physical protocol layer in the OSI sense.
Layer 2 (L2) is the Frame Protocol Layer, sometimes referred to as the Link Layer. Its purpose is to detect and correct errors in the stream of data passing between any two adjacent CCTV systems, so that CCTV network messages are not received in a corrupted form. Layer 2 operates strictly on point-to-point links between adjacent sites and contains no source or destination address information.
Layer 3 (L3) is the Network Protocol Layer, sometimes referred to as the Packet Layer. This is the layer of actual CCTV network messages. The messages have end-to-end significance and contain both source and destination address information.
Layer 4 (L4) is the Application Protocol Layer which makes use of the data network and lower protocol layers to provide services that are required either directly by the users of the system or for system management.
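The layered structure can be sketched generically as follows. This is a hedged illustration of layered encapsulation only; the field layout and checksum below are assumptions for the example, not the actual TVNP frame format:

```python
import struct

def build_l3_packet(src: int, dst: int, message: bytes) -> bytes:
    """Layer 3: a network message with end-to-end source/destination addresses."""
    return struct.pack(">BB", src, dst) + message

def build_l2_frame(packet: bytes) -> bytes:
    """Layer 2: wrap a packet for one point-to-point hop, adding a length
    field and a simple checksum so corruption can be detected."""
    checksum = sum(packet) & 0xFF
    return struct.pack(">H", len(packet)) + packet + bytes([checksum])

def unwrap_l2_frame(frame: bytes) -> bytes:
    """Receiving side of Layer 2: verify the checksum and return the packet."""
    (length,) = struct.unpack(">H", frame[:2])
    packet, checksum = frame[2:2 + length], frame[2 + length]
    if sum(packet) & 0xFF != checksum:
        raise ValueError("corrupted frame")  # error detected at Layer 2
    return packet

# A Layer 4 camera-control message is handed down through the layers
# and recovered intact on the far side of the link.
frame = build_l2_frame(build_l3_packet(src=1, dst=7, message=b"PAN LEFT"))
print(unwrap_l2_frame(frame))
```

Because each layer only touches its own header, one layer can be replaced (as when V3.0 swapped RS232 for UDP/IP at L1) without changing the others.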
See also
OSI model
RS232
Closed-circuit television
High-Level Data Link Control
References
Network protocols |
301915 | https://en.wikipedia.org/wiki/John%20Cocke | John Cocke | John Cocke (May 30, 1925 – July 16, 2002) was an American computer scientist recognized for his large contribution to computer architecture and optimizing compiler design. He is considered by many to be "the father of RISC architecture."
Biography
He was born in Charlotte, North Carolina, US. He attended Duke University, where he received his bachelor's degree in mechanical engineering in 1946 and his Ph.D. in mathematics in 1956. Cocke spent his entire career as an industrial researcher for IBM, from 1956 to 1992.
Perhaps the project where his innovations were most noted was the IBM 801 minicomputer, where he realized that matching the design of the architecture's instruction set to the relatively simple instructions actually emitted by compilers could allow high performance at low cost.
He is one of the inventors of the CYK algorithm (C for Cocke). He was also involved in the pioneering speech recognition and machine translation work at IBM in the 1970s and 1980s, and is credited by Frederick Jelinek with originating the idea of using a trigram language model for speech recognition.
Cocke was appointed IBM Fellow in 1972. He won the Eckert-Mauchly Award in 1985, ACM Turing Award in 1987, the National Medal of Technology in 1991 and the National Medal of Science in 1994, IEEE John von Neumann Medal in 1984, The Franklin Institute's Certificate of Merit in 1996, the Seymour Cray Computer Engineering Award in 1999, and The Benjamin Franklin Medal in 2000. He was a member of the American Academy of Arts and Sciences, the American Philosophical Society, and the National Academy of Sciences.
In 2002, he was made a Fellow of the Computer History Museum "for his development and implementation of reduced instruction set computer architecture and program optimization technology."
He died in Valhalla, New York, US.
References
External links
IBM obituary
Duke profile from 1988 By Eileen Bryn
Interview transcript
IEEE John von Neumann Medal Recipients
1925 births
2002 deaths
American computer scientists
Computer hardware engineers
Computer designers
Duke University alumni
20th-century American mathematicians
21st-century American mathematicians
Turing Award laureates
National Medal of Science laureates
National Medal of Technology recipients
IBM Research computer scientists
Seymour Cray Computer Engineering Award recipients
IBM employees
IBM Fellows
People from Charlotte, North Carolina
People from Valhalla, New York
Mathematicians from New York (state)
Duke University Pratt School of Engineering alumni
Members of the United States National Academy of Sciences
Members of the American Philosophical Society |
17713642 | https://en.wikipedia.org/wiki/Saints%20Row%3A%20The%20Third | Saints Row: The Third | {{Infobox video game
| title = Saints Row: The Third
| image = Saints Row The Third box art.jpg
| developer = Volition
| publisher = THQ
| director = Scott Phillips
| producer = Greg Donovan
| designer = Bryan Dillow
| programmer = Nick Lee
| artist = Frank Marquart
| writer = Steve Jaros
| composer = Malcolm Kirby Jr.
| series = Saints Row
| platforms = Microsoft Windows, PlayStation 3, Xbox 360, Linux, Nintendo Switch, PlayStation 4, Xbox One, Stadia, PlayStation 5, Xbox Series X/S, Luna
| released = PS3, X360, Windows; Linux; Nintendo Switch; PlayStation 4, Xbox One; Google Stadia; PlayStation 5, Xbox Series X/S; Amazon Luna
| genre = Action-adventure
| modes = Single-player, multiplayer
}}
Saints Row: The Third is a 2011 action-adventure game developed by Volition and published by THQ. It is the sequel to 2008's Saints Row 2 and the third installment in the Saints Row series. It was released on November 15, 2011 for Microsoft Windows, PlayStation 3, and Xbox 360, and on May 10, 2019 for the Nintendo Switch. A remastered version, titled Saints Row: The Third Remastered, was released by Deep Silver on May 22, 2020 for Windows, PlayStation 4, and Xbox One; on March 5, 2021 for Stadia; on May 25, 2021 for Xbox Series X and S and PlayStation 5; and on July 29, 2021 for Luna.
The game is set in the fictional city of Steelport (based on New York City), and continues the story of the 3rd Street Saints, once again putting players in the role of the gang's leader, who is fully customizable. Five years after the events of Saints Row 2, the Saints have grown from their humble origins as a street gang into a large media and consumer empire with their own brand, while many of the gang's members have become celebrities and pop culture icons. After being stranded in Steelport, which is firmly ruled by an international crime organization known as the Syndicate, the Saints must rebuild their forces once more to take over the city and defeat the Syndicate, as well as S.T.A.G., a violent paramilitary force tasked with restoring order to Steelport.
Development of Saints Row: The Third began in late 2008, shortly after the release of Saints Row 2. There was high staff turnover from the previous Saints Row team, with only one-fifth of the final 100-person staff having worked on a previous title in the series. They aimed to improve on the series by giving the game a coherent tone, and found it in films such as Hot Fuzz and the game's signature sex toy bat. Saints Row: The Third was built using a proprietary engine known as Core Technology Group and the Havok physics engine.
The game received generally positive reviews from critics, who praised its general zaniness and customization options. Criticism was aimed at the new setting, which many critics felt was flat and underdeveloped compared to the previous games' Stilwater, and at the lackluster humor. Conversely, others thought the game perfected the Saints Row formula. It was a nominee for Best Narrative at the 2012 Game Developers Conference, an IGN Editor's Choice, and a recipient of perfect scores from GamesRadar and G4. A complete edition including the three downloadable content packs was released in 2012, and its planned Enter the Dominatrix expansion became the game's sequel, Saints Row IV, released in 2013.
Gameplay

Saints Row: The Third is an action-adventure game played from the third-person perspective in an open world, such that players explore an unrestricted environment. Similar to the premise of the previous Saints Row games, the player's goal is to lead the 3rd Street Saints gang to overtake its rival gangs in the city turf war. While the protagonist is the same, the game introduces a new setting, the city of Steelport, with its own three gangs: the Morningstar, Luchadores, and Deckers, together known as the Syndicate. To further complicate matters, the government's Special Tactical Anti-Gang unit (STAG) is summoned to quell both organizations. The Third is the first game in the series to intertwine the narratives of its three gangs, and also presents the player with story-altering decisions.
The series has historically been considered a clone of Grand Theft Auto that later positioned itself as more "gleefully silly" in comparison. In combat, players select weapons from a weapon selection wheel, including regular pistols, submachine guns, shotguns, and rocket launchers alongside special weapons such as UAV drones and a fart-in-a-jar stun grenade. Player melee attacks include running attacks such as DDTs, as well as a purple dildo bat. Players may use vehicles to navigate the city, including a hover jet (known as the F-69 VTOL) and a pixelated retrogame tank that are unlocked through story missions. Once special vehicles are unlocked, they are in unlimited supply and can be delivered directly to the player-character's location. Player actions are intensified with what Volition calls the "awesome button", whereby, for example, the player will divekick through the windshield into the driver's seat of a car. The main story campaign missions can be played alone, or cooperatively either online or offline via System Link. Some elements are added to the campaign for the second player. There is no competitive multiplayer, but there is a "wave-based survival mode" called Whored Mode that supports up to two players.
Players customize their characters after the introductory mission. Player-character bodies, dress, and vehicles can be customized, as well as home properties. Players can additionally share their character designs in a Saints Row online community. Apart from the main story missions, there are optional diversions to make money and earn reputation, such as Insurance Fraud, where players hurt themselves in traffic to maximize self-injury before a timer expires, or Mayhem, where players maximize property destruction before a timer expires. Some of these diversions were introduced in previous Saints Row games. Activities serve the plot and are positioned as training the player-character or damaging the Syndicate. They can also be repeated. Outside of structured diversions, players are free to make their own fun by purchasing property, shopping for items, finding hidden sex doll and money cache collectibles, and wreaking unsolicited havoc. There are also "flashpoint" gang operations that grant respect when disrupted. Attacking others increases the player's notoriety level, as depicted with stars.

Saints Row: The Third introduced experience levels and weapon upgrades to the series. Most actions in the game come with incentives in the form of money and respect (reputation). Money buys land, weapons, and other upgrades, and respect is a kind of experience point that can unlock player abilities like "no damage from falling" or "infinite sprint", as well as upgrades to the player's computer-controlled gang member support. In turn, players receive further incentive to nearly miss car collisions, streak naked through the streets, shoot others in the groin, blow up Smart cars, and kill mascots in ambient challenges to earn more respect. Lack of respect does not hinder story progress, as it has in previous games. Player progress and unlocks are managed by an in-game cell phone menu that also lets the player call for vehicle deliveries and non-player character backup. Computer-controlled gang members also banter with one another.
Plot
Five years after the events of Saints Row 2, the 3rd Street Saints have merged with the Ultor Corporation to become a media and consumer empire with their own brand. While robbing a bank in Stilwater to promote an upcoming film about themselves, the Boss (Troy Baker, Kenn Michael, Robin Atkin Downes, Laura Bailey, Tara Platt, Rebecca Sanabria, or Steve Blum) and their top lieutenants, Shaundi (Danielle Nicolet) and Johnny Gat (Daniel Dae Kim), encounter unanticipated resistance on the job, which ultimately leads to them being arrested. The group are turned over to Phillipe Loren (Jacques Hennequet), head of an international criminal enterprise known as "the Syndicate". After refusing his deal to give the Syndicate most of their profits in exchange for their lives, the Saints stage a breakout, though Gat is forced to sacrifice himself to allow the Boss and Shaundi to escape. In response to the incident, Loren orders the Syndicate to attack the Saints, and ensure that their empire is destroyed.
Shaundi and the Boss land in the city of Steelport, firmly ruled over by the Syndicate's three main gangs: the Morningstar, a sophisticated gang led by Loren and his lieutenants, sisters Viola (Sasha Grey) and Kiki DeWynter (Megan Hollingshead), who dominate the sex trade; the Luchadores, a Mexican wrestler-themed gang led by Eddie "Killbane" Pryor (Rick D. Wasserman), who operate their own casino; and the Deckers, a hacker gang led by Matt Miller (Yuri Lowenthal), who dominate the city's cyber black-market. After Saints lieutenant Pierce Washington (Arif S. Kinchen) arrives with backup, the Saints secure a hideout, and go after Loren's operations, ultimately killing him in his own building. In the process, they rescue Oleg Kirrlov (Mark Allen Stuart), a former KGB agent being forcefully cloned to provide brutes for the Syndicate, who helps them to track down other allies: ex-FBI agent Kinzie Kensington (Natalie Lander), who seeks to disrupt the Deckers; veteran pimp Zimos (Alex Désert), who lost his business to the Morningstar; and Angel de la Muerte (Hulk Hogan), Killbane's embittered former wrestling partner.
With Loren dead, a power struggle ensues amongst the Syndicate, culminating in Killbane taking over after he kills Kiki in a jealous rage. Out of anger, Viola defects to the Saints and helps them finish off the Morningstar. Meanwhile, the lawlessness in Steelport leads the federal government to approve the creation of a task force to combat it: the Special Tactical Anti-Gang unit (STAG), led by Cyrus Temple (Tim Thomerson) and supervised by Senator Monica Hughes (Tasia Valenza). Armed with highly advanced technology, STAG puts the city under martial law until order can be restored. During this time, the Saints focus on the Deckers, with the Boss acquiring items needed by Kinzie to allow them to access the Deckers' network with a virtual avatar. Once inside, the Boss battles Matt's avatar and defeats it, forcing Matt to retire his gang and leave the city. With only the Luchadores left, Angel suggests humiliating Killbane during his next major wrestling match, resulting in him going on a rampage across Steelport after he loses.
While pursuing Killbane amidst the chaos, the Boss is informed that Shaundi, Viola, and Mayor Burt Reynolds (himself) have been kidnapped by STAG and taken to Steelport's most prominent monument, which has been rigged with explosives. At this point, the player must choose between continuing their pursuit of Killbane, or trying to stop STAG. In the canon ending, the Boss rescues their gang members and Reynolds and prevents the monument's destruction, resulting in the Saints being hailed as heroes and STAG being forced to leave Steelport after their actions become severely questioned by the federal government. The Saints decide not to pursue Killbane, who has fled Steelport, and instead resume their consumer activities, focusing on a new film called Gangstas In Space that stars the Boss. If the player alternatively chooses to pursue Killbane, they ultimately kill him, but Shaundi, Viola, and Reynolds die when STAG destroys the monument, which the Saints are framed for. The Boss exacts revenge and destroys STAG's flying aircraft carrier, before declaring Steelport an independent nation under the Saints' rule.
Development

Saints Row 2's design philosophy was to "put everything ... into the game", which made for a disjointed title with varied tone. Design director Scott Phillips said the series' legacy of lightheartedness made the sequel's tone hard to define. The development team underwent high turnover between the two releases, with only a fifth of the final 100-person team having worked on a title in the series before. Saints Row: The Third was in development by September 2008 as Saints Row 3. For its first six months of development, the team tested a choice-based adventure concept featuring an undercover agent infiltrating the Saints, which was dropped for not aligning with the spirit of the series. Now without a vision, the team made a "tone video" with film segments and songs that would define the new title. The final version featured bits from Bad Boys II, Shoot 'Em Up, Hot Fuzz, and Mötley Crüe's "Kickstart My Heart". The team worked in this direction to find a personality for Saints Row: The Third, which it found in its signature "dildo bat". The idea started as a one-off, mission-specific weapon, and the artists ran with the concept. Their design mantra became "Embrace the Crazy; Fun Trumps All".
They came to the conclusion that everything had to be "over the top" this time around, so as to distinguish Saints Row: The Third from other open world titles and to make the franchise into a AAA title. The team increased playtesting to check for the action's pacing and "setpiece moments" within its overall flow. Producer Greg Donovan considered Saints Row: The Third a reboot of the franchise, "cohesive" in a way the prior two "semi-serious" entries were not. Other than "over the top" themes, the team wanted "holy shit" "water cooler moments" that players would remember forever and want to share. Phillips also "didn't want the player to be a dick".
The city of Steelport was designed such that the player could identify locations without needing a minimap, with a spatially recognizable skyline and iconic gang vehicles in specific regions.
The title was not shown at the 2010 Electronic Entertainment Expo (E3) with the explanation that the company had spent the year "rebuilding the technology", but a tie-in movie was mentioned as in production and a Saints Row 3 announcement was expected at the December Spike VGAs. Saints Row: The Third was officially announced in March 2011. The team wanted to include many different features and items, so scoping the final product became an issue. They laid out their ideas on a schedule and began to cut until over "4000 man-days of scheduled work" were removed, including features such as free-running (called "freegunning") and a cover system. Competitive multiplayer was removed due to its lack of popularity in the previous series entries. In retrospect, Phillips said he wanted to remove more. The studio borrowed people from other parts of the company to finish the project. Writer Drew Holmes expressed the difficulty in determining what was too risqué for the game. In keeping with series advertising, Saints Row: The Third included sex symbol Sasha Grey in the production as a character voice. Other celebrity voice actors include Hulk Hogan and Daniel Dae Kim.
The development team also pre-visualized rough drafts to sketch ideas for others to advance. For example, the introductory airplane level was pre-visualized two years prior to its creation as a demonstration for the development team and publisher. Levels were built in Volition's Core Technology Group (CTG) editor, which was continually built in the four years preceding release. Like the other two titles, Saints Row: The Third was built in the Havok physics engine with customizations. The engine let the team build vehicle drifting physics and the VTOL aircraft. The studio considered the Red Faction series' Geo-Mod 2 engine but chose against it due to the implementation's difficulty and not wanting that degree of destruction. Phillips gave a game development postmortem at the 2012 Game Developers Conference, where he advised studios to let development team members run with their ideas. Volition began to add modding support to the title and series in mid 2013.
Audio

Saints Row: The Third has a licensed soundtrack available as radio stations when driving in vehicles. Players can switch between the playlists, which range from classical to electronic, hip hop, and rock, or customize their own station based on their preferences. The original soundtrack was composed by Malcolm Kirby Jr., who had previously worked on The Love Guru soundtrack. It was released through Sumthing Else Music Works alongside the game via compact disc and digital download. Kirby said the series' over-the-top nature influenced the score, and that he was a huge fan of the series before he received the opportunity. In his composition, each gang has a theme and specific characteristics that range from "menacing orchestral to gangster hip hop to heavy metal".
Marketing and release
The game was released for Microsoft Windows, PlayStation 3, and Xbox 360 simultaneously on November 15, 2011, in the United States and Australia, and three days later in the United Kingdom. The November 17, 2011, Japan release had the veins removed from the Penetrator weapon (the three-foot long phallus bat) due to regulatory restrictions on depictions of genitalia. In lieu of exclusive game content scheduled for the PlayStation 3 version that did not ship with the game, early North American and European players who purchased that version received a complimentary download code for Saints Row 2. The summer before Saints Row: The Third's release, THQ pledged to support it with a year's worth of downloadable content. Around the time of release, Danny Bilson of THQ announced that Saints Row IV was already in planning.
Those who preordered the game received Professor Genki's Hyper Ordinary Preorder Pack, which included Genki-themed downloadable content (a costume, a vehicle, and a weapon). A North American limited edition box set release called the Platinum Pack included the preorder content, the soundtrack, and a custom headset. Australia and New Zealand received two limited editions: the Smooth Criminal pack from EB Games and the Maximum Pleasure bundle from JB Hi-Fi, each of which included tie-in items along with the game and preorder content.
Though the game was not shown at E3 2010, THQ spoke of extensive tie-in merchandising (collectible card game, books) and a Saints Row film in production as part of a "robust transmedia play". Instead, THQ announced Saints Row: Drive By, a tie-in game for the Nintendo 3DS and Xbox Live Arcade that would unlock content in Saints Row 3. After the game was announced in March 2011, it was featured on the cover of Game Informer's April issue. Closer to release, THQ sent rap group the Broken Pixels a development kit with a pre-release version of the game and asked them to record a track about "all the wacky things" to do in the game. The group wrote the rap in a day and later produced a YouTube video set to clips from the game. THQ hosted an event in Redfern, Australia where women in skintight clothes pumped free gas for three hours, which generated an estimated 35 times return on investment. Eurogamer recalled that the game was "marketed almost exclusively on the basis of all the wacky stuff it will let you do" from the costumes to the sex toy weapons, and Edge described Saints Row: The Third as "marketed by sex toys and porn stars".
Two weeks before the game's release, Saints Row: The Third had four times the preorder count of Saints Row 2 at its comparable point. By January 2012, the game had shipped 3.8 million units worldwide, which THQ cited as an example for its business model change to focus on the big franchises. THQ President and CEO Brian Farrell expected to ship five to six million copies of the game in its lifetime. It had reached four million by April, and 5.5 million by the end of the year. Saints Row: The Third was an unexpected continued success for the company. It was featured in promotions with Humble Bundle, PlayStation Plus, and Xbox Live Games with Gold over the next several years.
Volition released a Linux port of the game in 2016, and made the Xbox 360 release compatible with its successor, the Xbox One, the next year. In August 2018, Deep Silver announced a Nintendo Switch port, developed by Fishlabs, which was released in May 2019. Deep Silver announced Saints Row: The Third Remastered in April 2020. The remastered version was developed by Sperasoft and features remodeled assets for high-definition, enhanced graphics and lighting, and includes all of the game's downloadable content. The title was released on PlayStation 4, Xbox One, and Microsoft Windows on May 22, 2020, with it later being released on Google Stadia on March 5, 2021, Steam on May 22, 2021, and Amazon Luna on July 29, 2021.
Downloadable content
Downloadable content for Saints Row: The Third includes additional story missions, weapons, and characters. A "definitive edition", Saints Row: The Third – The Full Package, contains all post-release downloadable content, including all three mission packs ("Genkibowl VII", "Gangstas in Space", and "The Trouble with Clones") and bonus items (clothes, vehicles, and weapons), in addition to the main game. The Full Package was announced in September 2012 for release two months later on PC, PlayStation 3, and Xbox 360.
THQ announced an Enter the Dominatrix standalone expansion as an April Fool's joke in 2012. It was confirmed as in development the next month. In Enter the Dominatrix, the alien commander Zinyak imprisons the Saints' leader in a simulation of Steelport called The Dominatrix so as to prevent interference when he takes over the planet. The expansion also added superpowers for the player-character. In June, THQ said the expansion would be wrapped into a full sequel, tentatively titled "The Next Great Sequel in the Saints Row Franchise" and scheduled for a 2013 release. Parts of Enter the Dominatrix that were not incorporated into the sequel, Saints Row IV, were later released as downloadable content for the new title, under the same name.
Reception
The game received "generally favorable" reviews, according to video game review score aggregator Metacritic. Some said the game did not try to be more than a good time, and described it as a variant of "ridiculous", "zany", or "absurd"; others, less charitably, called it "juvenile". Critics praised the degree of customization options, and had mixed views of the array of activities, but found Professor Genki's Super Ethical Reality Climax a high point. Some found the game's ironic sexism to verge on misogyny, and that its other humor sometimes fell flat. Several critics referred to the game as the perfection of the Saints Row formula. It was a nominee for Best Narrative at the 2012 Game Developers Conference, an IGN Editor's Choice, and a recipient of perfect scores from GamesRadar and G4.

Edge said that the series "wants to be the WarioWare of open-city games", "a cartoon flipbook of anything-goes extremity" to Grand Theft Auto's "ostentatious crime drama". They wrote that the game's "single-minded" "puerile imagination" demanded respect and noted the game's escalation of video game tropes and cultural references from Japanese game shows to text adventures to zombie apocalypses to lucha libre. IGN's Daemon Hatfield called the game "an open world adult theme park". He said that calling it "a good time would be a severe understatement" and praised its method of incentivizing almost every action in the game as "fantastic game design". Hatfield was "addicted" to efficiently expanding his in-game hourly income. GamesRadar's Michael Grimm wrote that Saints Row: The Third was nearly surreal, and praised the player-character's running attacks.
Referring to the historical comparison between the Saints Row and Grand Theft Auto series, Dan Whitehead of Eurogamer wrote that Grand Theft Auto IV's serious turn let the Saints Row series be a "gleeful silly sandbox game", and noted that Saints Row: The Third was "marketed almost exclusively" based on its wackiness, from the costumes to the sex toy weapons. He felt that the "wacky hijinks" quickly became "predictable and repetitive" and the activities felt "sanitized and generic"; Edge wrote that they were "one-off gags". Whitehead added that the tiger escort Guardian Angel missions appeared to draw from Will Ferrell's Talladega Nights, and that the Prof. Genki's Super Ethical Reality Climax shooting gallery drew from Bizarre Creations' shooter The Club. Eurogamer and PC Gamer both found the game easy.
Ryan McCaffrey of Official Xbox Magazine thought that the game resolved some of the problems of open world design and thus allowed for an experience with good times and no filler, such as Burnout-style arrows on the streets instead of hidden in the minimap GPS. He added that this was the game Volition "was born to make". Grimm from GamesRadar similarly praised Volition for their "http://deckers.die" mission, which was "so insanely creative and funny that it single handedly makes the game worth playing". He added that the game's unrealistic driving made the game more fun. IGN's Hatfield was "really won ... over" by his character, both convinced that she cared about her friends and impressed by her voice actress. Whitehead of Eurogamer found Zimos, the pimp who speaks in Autotune, to be the game's best character. Edge found some of the writing "sharp" and executed well by the voice actors. PC Gamer's Tom Senior found the major story missions to be a highlight. Hatfield of IGN thought the single-player game fell apart at the end and called the two endings either "a super downer" or nonsense. He found the cooperative mode easy to set up, but felt like the game's missions were not designed well for multiple players, and that the visiting player became a "third wheel". On the other hand, CBS News's Christina Santiago called the cooperative mode "near perfect" and exemplary.
IGN's Hatfield considered the game's graphics average for the age. He "loved the neon-lit towering skyscrapers of Steelport" but thought the streets were sometimes "lifeless", as the game may be "open world" but not a "living world". Edge added that the city was easy enough to navigate, but that it was missing character. Grimm of GamesRadar said it didn't look bad, but wasn't interesting. Multiple reviewers complained of "pop-in", or of graphical errors. 1UP.com reported the PC version's graphics to be more stable, and Eurogamer's Digital Foundry face-off recommended the PlayStation 3 release for its lack of screen tearing.

Eurogamer's Whitehead felt that the game crept closer "from ironic sexism to outright misogyny" in missions such as "Trojan Whores" and set pieces like "Tits n' Grits" and "Stikit Inn", even by the series' "gloriously lowbrow" standards. Edge added that the intent of humor in the sex trafficking-related mission "The Ho Boat" did not come across well, and the mission seemed to be included only for shock value. Hatfield of IGN related that some of the game's more juvenile aspects made him cringe, and Edge wrote that the game felt "largely meaningless" in response to the desensitizing barrage of "context-free frippery". PC Gamer's Tom Senior said he was almost offended during much of the game but stayed more happy than disgusted, adding that while the game has a "huge purple dildo", it doesn't have the prostitute-killing liberties or "other moments of nastiness" associated with the Grand Theft Auto franchise.
Whitehead of Eurogamer wrote in conclusion that the game doesn't propose "anything particularly inventive" and instead ends up with a toy box of gadgets. Edge felt that the game was weakest where it leaned on Grand Theft Auto precedent without adding social commentary; Whitehead added that Saints Row: The Third missed an opportunity to separate from "the GTA formula", which Edge thought was done well in the last third of the game. IGN, however, felt the game was explicitly not a Grand Theft Auto clone, and G4 called it "a knockoff no more".
During an interview on the future of THQ in June 2012, its president, Jason Rubin, responded to the interviewer's concerns that Saints Row: The Third was not a game he wanted to play in front of his family. Rubin said that, while he would not say there is no place in the company "for a game that features a purple dildo", Volition chose that route because of their limited options and "environment at the time", and he was looking to push the publisher and its studios to do better.
2011 video games
Action-adventure games
LGBT-related video games
Self-reflexive video games
Open-world video games
Organized crime video games
Video games about cloning
Lua (programming language)-scripted video games
PlayStation 3 games
PlayStation 4 games
PlayStation 5 games
Nintendo Switch games
Saints Row
THQ games
Censored video games
Video game sequels
Video games scored by Jake Kaufman
Video games developed in the United States
Video games featuring protagonists of selectable gender
Video games set in the United States
Video games with alternate endings
Video games with downloadable content
Video games with expansion packs
Windows games
Linux games
Xbox 360 games
Xbox One games
Xbox Series X and Series S games
Video games set in 2011
Video games using Havok
Video games with customizable avatars |
5805995 | https://en.wikipedia.org/wiki/Ecology%20of%20Banksia | Ecology of Banksia | The ecology of Banksia refers to all the relationships and interactions between the plant genus Banksia and its environment. Banksia has a number of adaptations that have so far enabled the genus to survive despite dry, nutrient-poor soil, low rates of seed set, high rates of seed predation and low rates of seedling survival. These adaptations include proteoid roots and lignotubers; specialised floral structures that attract nectarivorous animals and ensure effective pollen transfer; and the release of seed in response to bushfire.
The arrival of Europeans in Australia has brought new ecological challenges. European colonisation of Australia has directly affected Banksia through deforestation, exploitation of flowers and changes to the fire regime. In addition, the accidental introduction and spread of plant pathogens such as Phytophthora cinnamomi (dieback) pose a serious threat to the genus's habitat and biodiversity. Various conservation measures have been put in place to mitigate these threats, but a number of taxa remain endangered.
Background
Banksia is a genus of around 170 species in the plant family Proteaceae. An iconic Australian wildflower and popular garden plant, Banksias are most commonly associated with their elongate flower spikes and fruiting "cones", although less than half of Banksia species possess this feature. They grow in forms varying from prostrate woody shrubs to trees up to 30 metres tall, and occur in all but the most arid areas of Australia.
Pollination
The pollination ecology of Banksia has been well studied, because the large showy inflorescences make it easy to conduct pollination experiments, and the pollination roles of nectarivorous birds and mammals make the genus a popular subject for zoologists.
Visits to Banksia inflorescences by western honeybees and nectarivorous birds are often observed and are obviously important to pollination. Also important are visits by nectarivorous mammals, although such visits are rarely observed because these mammals are usually nocturnal and reclusive. Studies have found that Banksia inflorescences are foraged by a variety of small mammals, including marsupials (such as honey possums and yellow-footed antechinus, Antechinus flavipes) and rodents (such as the pale field rat, Rattus tunneyi). Studies in New South Wales and in Western Australia found that small mammals can carry pollen loads comparable to those of nectarivorous birds, likely making them effective pollinators of many Banksia species. Other studies have shown that the relative importance of vertebrates and invertebrates for pollination may vary from species to species, with some Banksia species exhibiting reduced fruit set when vertebrate pollinators are excluded, while others are unaffected by the exclusion of vertebrates and set some fruit even when all pollinators are excluded.
Almost all Banksia species studied so far have shown outcrossing rates among the highest ever recorded for plants; that is, very few Banksia plants are found to occur as a result of self-fertilisation. There are a number of potential reasons for this:
One possibility is that Banksia flowers are simply not exposed to their own pollen. This is highly unlikely for two reasons. Firstly, the morphology of the Banksia flower makes it virtually inevitable that the stigma will be exposed to its own pollen, since it functions also as a "pollen-presenter". It has been suggested that this problem would be avoided if the flowers were strongly protandrous, but the evidence so far supports only partial protandry. Moreover, the question of protandry of individual flowers is probably irrelevant, because the sequential anthesis of flowers means that each inflorescence will typically contain flowers in both male and female stages at the same time. Observations of foraging patterns in pollinators have shown that transfer of pollen between different flowers in the same inflorescence is inevitable.
Another possibility is that the high outcrossing rate is due to self-incompatibility, due either to a failure to fertilise or abortion of self-fertilised fruit. Studies have shown self-compatibility of pollen to vary between Banksia species, with some but not all species inhibiting the growth of pollen tubes for pollen from their own flowers. A more likely form of self-incompatibility is the spontaneous abortion of fruits that have been self-fertilised. These could be caused either by the expression of lethal genes, or the expression of genes that, while not lethal, cause the maternal plant to abort. Genetic causes are thought to be a common form of self-incompatibility, because of the high genetic load of the genus. However abortion rates are difficult to assess because the ovaries are deeply embedded in the "rhachis" (woody spine) of the inflorescence.
Finally, there is the mechanism of "facultative" abortion of fruits, where a maternal plant without the resources to mature all fruit aborts the least vigorous ones. This is thought to be common in those taxa that are generally self-compatible, since even these have high outcrossing rates. For example, Banksia spinulosa var. neoanglica, one of the most self-compatible Banksia species, has been shown to set far more cross-pollinated than self-pollinated fruit.
A few species, such as B. brownii, are exceptional in having low outcrossing rates. In all cases these are rare species that occur in very small populations, which increases the probability of self-fertilisation, and may discourage visits by pollinators.
Response to fire
Banksia plants are naturally adapted to the presence of regular bushfires. About half of Banksia species typically survive bushfires, either because they have very thick bark that protects the trunk from fire, or because they have lignotubers from which they can resprout after fire. In addition, fire triggers the release of seed stored in the aerial seed bank — an adaptation known as serotiny. In ecological literature, the species that are killed by fire but regenerate from seed are referred to as "fire-sensitive" or "seeders", while those that typically survive by resprouting from a trunk or underground lignotuber are called "fire-tolerant" or "sprouters".
All Banksia exhibit serotiny to some extent. Most retain all of their seed until release is triggered by fire, but a few species release a small amount of seed spontaneously. Serotiny is achieved through the mechanism of thick, woody follicles, which are held tightly closed by resin. Seeds retained in follicles are protected from granivores and the heat of bushfire, and remain viable for around ten years. Follicles require a critical heat to melt the resin, so that the follicles may begin opening; for B. elegans, for example, this is 2 minutes at 500 °C. Those species with high heat requirements typically retain their old withered florets. These are highly combustible and thus help ensure the critical heat is reached.
With some exceptions, each follicle contains two seeds plus a winged "separator". While the separator remains in the follicle, it holds the seeds in position. In some species, the separator remains in the follicle until it has cooled; once cooled, the separator loosens and falls out, and the seeds follow. In this way the separator ensures that the seeds fall onto cool ground. In other species, the separator does not loosen until it has been wet. In these species, the seeds do not fall to the ground until the first rains after the bushfire. Seed is typically released over a period of about 90 days.
Immediately after bushfire, granivorous birds move in to extract seed from newly open follicles, and to eat seeds that have fallen to the ground. Those seeds that escape the granivores are soon buried by wind and surface water. Nearly all buried seeds germinate.
Establishment of seedlings
Most Banksia seedlings do not survive to adulthood. A major reason for this is a lack of water. Competition for soil moisture can be intense, especially during drought. In one study, an estimated 13,680 seedlings were counted over June–October following an experimental bushfire, but by January only eleven plants remained. Other threats to seedling establishment include predation by invertebrates such as grasshoppers and mites; and by vertebrates such as kangaroos and bandicoots.
Diseases, predation and other symbioses
Banksia seed is predated by a variety of birds and insects. Insects also feed on stems, leaves, flowers and cones. Some insects cause galls. Many species of fungi live on Banksia plants, including Banksiamyces. Most Banksia species are highly susceptible to Phytophthora cinnamomi dieback.
Conservation
The biodiversity of Banksia is impacted by a range of processes. Major threats include disease; changes in fire frequency and intensity; clearing of land for agriculture, mining, urban development and roads; and exploitation of flowers, seeds and foliage by the cut flower and other industries. Three Banksia species are currently declared endangered under Australia's Environment Protection and Biodiversity Conservation Act 1999, and a further two are considered vulnerable.
Disease
The most severe disease threat to Banksia is the introduced plant pathogen Phytophthora cinnamomi, commonly known as "dieback". This is a water mould that attacks the roots of plants, destroying the structure of the root tissues, "rotting" the root, and preventing the plant from absorbing water and nutrients. Banksia's proteoid roots make it highly susceptible to this disease, with infected plants typically dying within a few years of exposure.
The threat of exposure to dieback is greatest in southwest Western Australia, where dieback infestation has reached epidemic proportions. This area holds the greatest species diversity for Banksia, with all species considered susceptible to infection. Consequently, a number of southwestern species are considered under threat from dieback. Nearly every known wild population of B. brownii shows some signs of dieback infection, and it is said that this species would be extinct within a decade if it were not protected. Other vulnerable species include B. cuneata, B. goodii, B. oligantha and B. verticillata.
Infested areas of Banksia forest in southwest Western Australia typically have less than 30% of the cover of uninfested areas. Plant deaths in such large proportions can have a profound influence on the makeup of plant communities. For example, in southwestern Australia Banksia often occurs as an understory to forests of jarrah (Eucalyptus marginata), another species highly vulnerable to dieback. Infestation kills both the jarrah overstory and the Banksia understory, and over time these may be replaced by a more open woodland consisting of an overstory of the resistant marri (Corymbia calophylla), and an understory of the somewhat resistant Parrotbush (Dryandra sessilis).
Dieback is notoriously difficult to manage. A number of protective measures have been implemented to slow the spread of disease and boost the survival rates of infected plants; these include restricting access to infected and susceptible sites, the collection and cold-storage of seed, and the treatment of plants with phosphite. Phosphite boosts the resistance of both infected and uninfected plants, and also acts as a direct fungicide. Aerial spraying of phosphite boosts plant survival and slows the spread of infection, but must be carefully managed as studies have shown that foliar spraying of phosphite adversely affects root and shoot growth. Direct injection of phosphite into tree stems appears to lack this disadvantage, but is costly to administer and restricted to known plants.
Because dieback thrives in moist soil conditions, it can be a severe problem for Banksias that are watered, such as in the cut flower industry and urban gardens. In some species this problem can be countered by grafting onto a rootstock of an eastern species, many of which demonstrate at least some resistance to dieback.
Other diseases to which Banksia species are vulnerable include the aerial canker fungus Zythiostroma and the parasitic fungus Armillaria.
Fire regime
The frequency and intensity of bushfires are important factors in the population health of Banksias. The ideal time interval between bushfires varies from species to species, but twenty years is a typical figure. If bushfires occur too frequently, plants are killed before they reach fruiting age or before they have developed a substantial seed bank. This can seriously reduce or even eliminate populations in some areas. Longer time intervals also reduce populations, as more plants die of natural attrition between fires. Unlike some other Proteaceae, Banksias do not release their seed when they die, and dead plants usually release much less seed in response to fire than live plants do, so long fire intervals cause seed wastage. Fire intensity is also important. If a fire is not intense enough to promote the release of seed, then the effective interval between seed release will be further increased by the loss of fire fuel.
Fire intervals are not as critical for resprouters, as adults typically survive fire. Fire does kill seedlings, however, as most resprouters do not develop a lignotuber until they reach fruiting age. Thus overly frequent fires prevent the recruitment of new adults, and populations decline at the rate that adults die.
It is widely accepted that colonisation by Europeans has led to an increase in fire frequency. This is especially the case near urban areas, where bushland is subject to both arson and prescribed burns. The proximity of urban areas creates a need to manage the ferocity and rate of occurrence of bushfires, resulting in pressure to prescribe regular low-intensity burns. This is at odds with the conservation needs of Banksia, which requires intense fires at long intervals.
Land clearing
The distribution of Banksia habitat coincides with areas of high population density, and large areas of Banksia woodland have been cleared for agriculture, mining, urban development and roads. As well as the direct loss of population and habitat, this has led to an increased spread of weeds and disease. As Banksia occurs on the poorest soils, the areas in which they are most abundant have been the last to be cleared for agriculture. Nonetheless, it is estimated that 55% of Banksia woodland had been cleared by 1986. Species threatened by clearing include B. hookeriana and the endangered species B. cuneata and B. goodii.
Exploitation by wildflower industry
Banksias are highly favoured by Australia's wildflower industry, with commercial picking of blooms especially prevalent in southwest Western Australia. Blooms are harvested from around 29 Banksia species, the most popular being B. hookeriana, B. coccinea and B. baxteri. As of 1990 there were around 1,000 licensed commercial pickers operating in the state, and in that year around 675,000 blooms were harvested from B. hookeriana alone. Heavy harvesting of blooms substantially reduces flower head production, resulting in a smaller seed bank. Estimated population sizes for the next generation are likely to be around half the current populations at picking sites.
Threatened species
Nineteen Banksia taxa are currently declared rare. All are endemic to Western Australia. Protection is afforded to them under the Australian Environment Protection and Biodiversity Conservation Act 1999 (EPBC Act), and the Western Australian Wildlife Conservation Act 1950. The Department of Environment and Conservation also provides for taxa to be declared "Priority Flora", either because they are poorly known, or because they are rare but not threatened. The following is a list of threatened and priority Banksia taxa:
See also
References
Banksia
Banksia, Ecology of |
1014669 | https://en.wikipedia.org/wiki/Overlay%20network | Overlay network | An overlay network is a computer network that is layered on top of another network.
Structure
Nodes in the overlay network can be thought of as being connected by virtual or logical links, each of which corresponds to a path, perhaps through many physical links, in the underlying network. For example, distributed systems such as peer-to-peer networks and client–server applications are overlay networks because their nodes run on top of the Internet.
The Internet was originally built as an overlay upon the telephone network, while today (through the advent of VoIP), the telephone network is increasingly turning into an overlay network built on top of the Internet.
Uses
Enterprise networks
Enterprise private networks were first overlaid on telecommunication networks such as frame relay and Asynchronous Transfer Mode packet switching infrastructures, but migration from these (now legacy) infrastructures to IP-based MPLS networks and virtual private networks began around 2001–2002.
From a physical standpoint, overlay networks are quite complex (see Figure 1) as they combine various logical layers that are operated and built by various entities (businesses, universities, government etc.) but they allow separation of concerns that over time permitted the buildup of a broad set of services that could not have been proposed by a single telecommunication operator (ranging from broadband Internet access, voice over IP or IPTV, competitive telecom operators etc.).
Internet
This layering is made possible by the availability of digital circuit switching equipment and optical fiber. Telecommunication transport networks and IP networks (which combined make up the broader Internet) are all overlaid with at least an optical fiber layer, a transport layer and an IP or circuit switching layer (in the case of the PSTN).
Over the Internet
Nowadays the Internet is the basis for more overlaid networks that can be constructed in order to permit routing of messages to destinations not specified by an IP address. For example, distributed hash tables can be used to route messages to a node having a specific logical address, whose IP address is not known in advance.
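The consistent-hashing idea behind such lookups can be sketched in a few lines of Python. The node names and key below are hypothetical, and a real DHT such as Chord also distributes the routing state across the nodes themselves, whereas this sketch assumes every node is known to the caller:

```python
import hashlib

def ring_position(name: str, bits: int = 16) -> int:
    """Map a name (node ID or message key) onto a 2^bits identifier ring."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % (1 << bits)

def lookup(nodes: list[str], key: str) -> str:
    """Route a key to its successor: the first node at or after the key's
    ring position, wrapping around the ring if necessary."""
    positions = sorted((ring_position(n), n) for n in nodes)
    k = ring_position(key)
    for pos, node in positions:
        if pos >= k:
            return node
    return positions[0][1]  # wrap around the ring

nodes = ["node-a.example", "node-b.example", "node-c.example"]
owner = lookup(nodes, "some-logical-address")
print(owner)  # always the same node for this key, independent of any IP address
```

The point of the sketch is that the destination is determined entirely by the logical address, so the overlay can route to it even though the responsible node's IP address was not known in advance.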
Overlay networks have also been proposed as a way to improve Internet routing, such as through quality of service guarantees to achieve higher-quality streaming media. Previous proposals such as IntServ, DiffServ, and IP multicast have not seen wide acceptance, largely because they require modification of all routers in the network. On the other hand, an overlay network can be incrementally deployed on end-hosts running the overlay protocol software, without cooperation from ISPs. The overlay has no control over how packets are routed in the underlying network between two overlay nodes, but it can control, for example, the sequence of overlay nodes a message traverses before reaching its destination.
For example, Akamai Technologies manages an overlay network which provides reliable, efficient content delivery (a kind of multicast). Academic research includes End System Multicast and Overcast, which is multicasting on an overlay network; RON (Resilient Overlay Network) for resilient routing; and OverQoS for quality of service guarantees, among others.
Internet of Things
The dispersed nature of the Internet of things (IoT) presents a major operational challenge that is uncommon in the traditional Internet or enterprise networks. Devices that are managed together (say, a fleet of railcars) are not physically colocated. Instead, they are widely geographically distributed. The operational approaches for management and security used in enterprise networks, where most hosts are densely contained in buildings or campuses, do not translate to the IoT. IoT devices operate outside of the enterprise network security and operational perimeter, and the corporate LAN firewall cannot protect them. Dispatching technicians is expensive, so manual provisioning and configuration does not scale. Devices connect to the Internet via a variety of last-mile ISPs, so many devices will not share a common IP prefix and addresses will change at arbitrary times. Any configuration based on these IPs will require continued upkeep and will often be out-of-date, exposing hosts and devices to external threats.
Advantages and Benefits
Resilience
Resilient Overlay Networks (RON) are architectures that allow distributed Internet applications to detect and recover from disconnection or interference. This application-layer overlay improves on current wide-area routing protocols, which can take several minutes to recover from a failure. RON nodes monitor the Internet paths among themselves and decide whether to route packets directly over the Internet or through other RON nodes, thus optimizing application-specific metrics.
The Resilient Overlay Network has a relatively simple conceptual design. RON nodes are deployed at various locations on the Internet and form an application-layer overlay that cooperates in routing packets. Each RON node monitors the quality of the Internet paths between itself and the others, and uses this information to accurately and automatically select paths for each packet, thus reducing the amount of time required to recover from poor quality of service.
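The path-selection step can be illustrated with a minimal sketch. The node names and latency figures below are invented for illustration; in a real RON deployment these measurements come from continuous active probing between the overlay nodes:

```python
# Hypothetical measured round-trip latencies (ms) between overlay nodes.
latency = {
    ("A", "B"): 180,   # direct Internet path, currently degraded
    ("A", "C"): 40,
    ("C", "B"): 50,
    ("A", "D"): 90,
    ("D", "B"): 120,
}

def best_path(src, dst, nodes):
    """Choose between the direct path and a single-intermediate overlay
    path, using measured latency as the application-specific metric."""
    candidates = [([src, dst], latency[(src, dst)])]
    for via in nodes:
        if via not in (src, dst):
            hop1 = latency.get((src, via))
            hop2 = latency.get((via, dst))
            if hop1 is not None and hop2 is not None:
                candidates.append(([src, via, dst], hop1 + hop2))
    return min(candidates, key=lambda c: c[1])

path, cost = best_path("A", "B", ["A", "B", "C", "D"])
print(path, cost)  # ['A', 'C', 'B'] 90 — the overlay detour beats the direct path
```

This is the essence of RON routing: the overlay cannot change how the underlying network carries each hop, but by choosing which overlay nodes a packet traverses it can route around a degraded direct path within seconds rather than minutes.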
Multicast
Overlay multicast is also known as End System or Peer-to-Peer Multicast. High bandwidth multi-source multicast among widely distributed nodes is a critical capability for a wide range of applications, including audio and video conferencing, multi-party games and content distribution. Throughout the last decade, a number of research projects have explored the use of multicast as an efficient and scalable mechanism to support such group communication applications. Multicast decouples the size of the receiver set from the amount of state kept at any single node and potentially avoids redundant communication in the network.
The limited deployment of IP Multicast, a best effort network layer multicast protocol, has led to considerable interest in alternate approaches that are implemented at the application layer, using only end-systems. In an overlay or end-system multicast approach, participating peers organize themselves into an overlay topology for data delivery. Each edge in this topology corresponds to a unicast path between two end-systems or peers in the underlying Internet. All multicast-related functionality is implemented at the peers instead of at routers, and the goal of the multicast protocol is to construct and maintain an efficient overlay for data transmission.
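The peer-side forwarding logic can be sketched as follows. The tree shape and peer names are hypothetical; real protocols such as End System Multicast build and repair this overlay topology dynamically:

```python
# A hypothetical overlay multicast tree: each edge is a unicast
# connection between two end-systems; routers are not involved.
tree = {
    "source": ["peer1", "peer2"],
    "peer1": ["peer3", "peer4"],
    "peer2": ["peer5"],
    "peer3": [], "peer4": [], "peer5": [],
}

def deliver(node, packet, log):
    """Each peer forwards the packet down its own subtree — multicast
    functionality implemented entirely at the end-systems."""
    for child in tree[node]:
        log.append((node, child))   # one unicast send per overlay edge
        deliver(child, packet, log)

sends = []
deliver("source", "payload", sends)
print(len(sends))  # 5 unicast transmissions reach all 5 receivers,
# but the source itself sends only 2 copies; the remaining forwarding
# load is spread across the intermediate peers.
```

Compared with naive unicast, where the source would transmit one copy per receiver, the tree caps each peer's outbound load at its number of children, which is what makes the approach scale to large receiver sets.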
Disadvantages
Slow data propagation.
Long latency.
Duplicate packets at certain points.
List of overlay network protocols
Overlay network protocols based on TCP/IP include:
http / https
Distributed hash tables (DHTs) based on the Chord protocol
JXTA
XMPP: the routing of messages based on an endpoint Jabber ID (Example: nodeId_or_userId@domainId\resourceId) instead of by an IP Address
Many peer-to-peer protocols including Gnutella, Gnutella2, Freenet, I2P and Tor.
PUCC
Solipsis: a France Télécom system for a massively shared virtual world
Overlay network protocols based on UDP/IP include:
Distributed hash tables (DHTs) based on Kademlia algorithm, such as KAD, etc.
Real Time Media Flow Protocol – Adobe Flash
See also
Darknet
Mesh network
Net
Peercasting
Virtual Private Network
References
External links
List of overlay network implementations, July 2003
Resilient Overlay Networks
Overcast: reliable multicasting with an overlay network
OverQoS: An overlay based architecture for enhancing Internet QoS
RFC 3170
Multicast over TCP/IP HOWTO
End System Multicast
Overlay networks
Anonymity networks
Network architecture
Computer networking |
447288 | https://en.wikipedia.org/wiki/Project%20Athena | Project Athena | Project Athena was a joint project of MIT, Digital Equipment Corporation, and IBM to produce a campus-wide distributed computing environment for educational use. It was launched in 1983, and research and development ran until June 30, 1991. , Athena is still in production use at MIT. It works as software (currently a set of Debian packages) that makes a machine a thin client, that will download educational applications from the MIT servers on demand.
Project Athena was important in the early history of desktop and distributed computing. It created the X Window System, Kerberos, and Zephyr Notification Service. It influenced the development of thin computing, LDAP, Active Directory, and instant messaging.
Description
Leaders of the $50 million, five-year project at MIT included Michael Dertouzos, director of the Laboratory for Computer Science; Jerry Wilson, dean of the School of Engineering; and Joel Moses, head of the Electrical Engineering and Computer Science department. DEC agreed to contribute more than 300 terminals, 1600 microcomputers, 63 minicomputers, and five employees. IBM agreed to contribute 500 microcomputers, 500 workstations, software, five employees, and grant funding.
History
In 1979 Dertouzos proposed to university president Jerome Wiesner that the university network mainframe computers for student use. At that time MIT used computers throughout its research, but undergraduates did not use computers except in Course VI (computer science) classes. With no interest from the rest of the university, the School of Engineering in 1982 approached DEC for equipment for itself. President Paul E. Gray and the MIT Corporation wanted the project to benefit the rest of the university, and IBM agreed to donate equipment to MIT except to the engineering school.
Project Athena began in May 1983. Its initial goals were to:
Develop computer-based learning tools that are usable in multiple educational environments
Establish a base of knowledge for future decisions about educational computing
Create a computational environment supporting multiple hardware types
Encourage the sharing of ideas, code, data, and experience across MIT
The project intended to extend computer power into fields of study outside computer science and engineering, such as foreign languages, economics, and political science. To implement these goals, MIT decided to build a Unix-based distributed computing system. Unlike those at Carnegie Mellon University, which also received the IBM and DEC grants, students did not have to own their own computer; MIT built computer labs for their users, although the goal was to put networked computers into each dormitory. Students were required to learn FORTRAN and Lisp, and would have access to sophisticated graphical workstations, capable of 1 million instructions per second and with 1 megabyte of RAM and a 1 megapixel display.
Although IBM and DEC computers were incompatible, Athena's designers intended that software would run similarly on both. MIT did not want to be dependent on one vendor at the end of Athena. Sixty-three DEC VAX-11/750 servers were the first timesharing clusters. "Phase II" began in September 1987, with hundreds of IBM RT PC workstations replacing the VAXes, which became fileservers for the workstations. The DEC-IBM division between departments no longer existed. Upon logging into a workstation, students would have immediate access to a universal set of files and programs via central services. Because the workstation used a thin client model, the user interface would be consistent despite the use of different hardware vendors for different workstations. A small staff could maintain hundreds of clients.
The project spawned many technologies that are widely used today, such as the X Window System and Kerberos. Among the other technologies developed for Project Athena were the Zephyr Notification Service and the Hesiod name and directory service.
MIT had 722 workstations in 33 private and public clusters on and off campus, including student living groups and fraternities. A survey found that 92% of undergraduates had used the Athena workstations at least once, and 25% used them every day. The project received an extension of three years in January 1988. Developers who had focused on creating the operating system and courseware for various educational subjects now worked to improve Athena's stability and make it more user friendly. When Project Athena ended in June 1991, MIT's IT department took it over and extended it into the university's research and administrative divisions. the MIT campus had more than 1300 Athena workstations, and more than 6000 Athena users logged into the system daily. Athena is still used by many in the MIT community through the computer labs scattered around the campus. It is also now available for installation on personal computers, including laptops.
Educational computing environment
Athena continues in use, providing a ubiquitous computing platform for education at MIT; plans are to continue its use indefinitely.
Athena was designed to minimize the use of labor in its operation, in part through the use of (what is now called ) "thin client" architecture and standard desktop configurations. This not only reduces labor content in operations but also minimizes the amount of training for deployment, software upgrade, and trouble-shooting. These features continue to be of considerable benefit today.
In keeping with its original intent, access to the Athena system has been greatly enlarged in the last several years. Whereas in 1991 much of the access was in public "clusters" (computer labs) in academic buildings, access has been extended to dormitories, fraternities and sororities, and independent living groups. All dormitories have officially supported Athena clusters. In addition, most dormitories have "quick login" kiosks: standup workstations with a timer that limits access to ten minutes. The dormitories have "one port per pillow" Internet access.
Originally, the Athena release used Berkeley Software Distribution (BSD) as the base operating system for all hardware platforms. Public clusters consisted of Sun SPARC and SGI Indy workstations. SGI hardware was dropped in anticipation of the end of IRIX production in 2006. Linux-Athena was introduced in version 9, with the Red Hat Enterprise Linux operating system running on cheaper x86 or x86-64 hardware. Athena 9 also replaced the internally developed "DASH" menu system and Motif Window Manager (mwm) with a more modern GNOME desktop. Athena 10 is based on Ubuntu Linux (derived from Debian) only. Support for Solaris is expected to be dropped almost entirely.
Educational software
The original concept of Project Athena was that there would be course-specific software developed to use in conjunction with teaching. Today, computers are most frequently used for "horizontal" applications such as e-mail, word processing, communications, and graphics.
The big impact of Athena on education has been the integration of third party applications into courses. MATLAB and Maple (especially the former) are integrated into large numbers of science and engineering classes. Faculty expect that their students have access to and know how to use these applications for projects and homework assignments, and some have used the MATLAB platform to rebuild the courseware that they had originally built using the X Window System.
More specialized third-party software are used on Athena for more discipline-specific work. Rendering software for architecture and computer graphics classes, molecular modeling software for chemistry, chemical engineering, and material science courses, and professional software used by chemical engineers in industry, are important components of a number of MIT classes in various departments.
Contributing to the development of distributed systems
Athena was not a research project, and the development of new models of computing was not a primary objective of the project. Indeed, quite the opposite was true. MIT wanted a high-quality computing environment for education. The only apparent way to obtain one was to build it internally, using existing components where available, and augmenting those components with software to create the desired distributed system. However, the fact that this was a leading edge development in an area of intense interest to the computing industry worked strongly to the favor of MIT by attracting large amounts of funding from industrial sources.
Long experience has shown that advanced development directed at solving important problems tends to be much more successful than advanced development promoting technology that must look for a problem to solve. Athena is an excellent example of advanced development undertaken to meet a need that was both immediate and important. The need to solve a "real" problem kept Athena on track to focus on important issues and solve them, and to avoid getting side-tracked into academically interesting but relatively unimportant problems. Consequently, Athena made very significant contributions to the technology of distributed computing, but as a side-effect to solving an educational problem.
The leading edge system architecture and design features pioneered by Athena, using current terminology, include:
Client–server model of distributed computing using three-tier architecture (see Multitier architecture)
Thin client (stateless) desktops
System-wide security system (Kerberos encrypted authentication and authorization)
Naming service (Hesiod)
X Window System, widely used within the Unix community
X tool kit for easy construction of human interfaces
Instant messaging (Zephyr real time notification service)
System-wide use of a directory system
Integrated system-wide maintenance system (Moira Service Management System)
On-Line Help system (OLH)
Public bulletin board system (Discuss)
Many of the design concepts developed in the "on-line consultant" now appear in popular help desk software packages.
Because the functional and system management benefits provided by the Athena system were not available in any other system, its use extended beyond the MIT campus. In keeping with the established policy of MIT, the software was made available at no cost to all interested parties. Digital Equipment Corp. "productized" the software as DECAthena to make it more portable, and offered it along with support services to the market. A number of academic and industrial organizations installed the Athena software. As of early 1992, 20 universities worldwide were using DECathena, with a reported 30 commercial organisations evaluating the product.
The architecture of the system also found use beyond MIT. The architecture of the Distributed Computing Environment (DCE) software from the Open Software Foundation was based on concepts pioneered by Athena. Subsequently, the Windows NT network operating system from Microsoft incorporates Kerberos and several other basic architecture design features first implemented by Athena.
Use outside MIT
Pixar Animation Studios, the computer graphics and animation company (then the Lucasfilm Computer Graphics Project, now owned by Walt Disney Pictures), used most of the first fifty Project Athena systems before they went into general use rendering The Adventures of André and Wally B.
Iowa State University runs an implementation of Athena named "Project Vincent", named after John Vincent Atanasoff, the inventor of the Atanasoff–Berry Computer.
North Carolina State University also runs a variation of Athena named "Eos/Unity".
Carnegie Mellon University began a similar system a year earlier than MIT called Project Andrew which spawned AFS, Athena's current filesystem.
University of Maryland, College Park also ran a variation of Athena on the WAM (Workstations at Maryland) and Glue, now renamed '"TerpConnect".
See also
tkWWW, a defunct web browser developed for the project by Joseph Wang
References
Sources
External links
Athena at MIT
TerpConnect (formerly Project Glue) at UMD College Park
Guide to the Ellen McDaniel Collection of Project Athena and Project Vincent Manuals and Other Materials 1986-1993
Massachusetts Institute of Technology
Software projects
Athena, Project |
8608911 | https://en.wikipedia.org/wiki/Wii%20Shop%20Channel | Wii Shop Channel | The Wii Shop Channel is a defunct digital distribution service for the Wii video game console. The service allowed users to purchase and play additional software for the Wii (called Channels), including exclusive games (branded WiiWare), and games from prior generations of video games (marketed with the Virtual Console brand). The Wii Shop Channel launched on November 19, 2006, and ceased service operations worldwide on January 30, 2019. As of February 1, 2019, all previously purchased content can still be re-downloaded indefinitely or Wii data can be transferred from a Wii to a Wii U (via the Wii U Transfer Tool).
Succeeded by the Nintendo eShop, the Wii Shop Channel was accessible on the original Wii and on the Wii U console via Wii Mode, supporting the download of WiiWare titles, as well as legacy Virtual Console titles that were not yet available via the Nintendo eShop.
The Channel's music theme has become popular and well-received on the internet, and is often used in internet memes.
Wii Points
Wii Points were the currency used in transactions on the Wii Shop Channel. They were obtained either by redeeming Wii Points Cards purchased from retail outlets (100-2,000 points in the USA; 1,000-3,000 in Japan) or directly through the Wii Shop Channel using a Mastercard or Visa credit card (in increments of 1,000, 2,000, 3,000, 4,000, or 5,000 Wii Points). In 2008, Club Nintendo in Europe began offering Wii Points in exchange for "stars" received from registering games and consoles on the website. To purchase and play WiiWare and Virtual Console games, Wii Shop Channel users had to fund their account with Wii Points. On March 26, 2018, the ability to add Wii Points (with either a credit card or a Wii Points Card) was removed worldwide following a temporary maintenance notice. This prevented users from purchasing WiiWare or Virtual Console games unless they already had enough Wii Points in their account balance. Previously purchased software could still be downloaded, and any existing Wii Points credit remained redeemable until January 30, 2019.
Virtual Console
Virtual Console was a brand that included games from past video game consoles, which ran under emulation. There were over 300 games available in North America and, as of December 31, 2007, over 10 million games had been downloaded worldwide. All games were exact replicas of the originals with no updated features or graphics, with the exception of Pokémon Snap, which was updated to allow in-game pictures to be posted to the Wii Message Board. New games were added weekly at 9 A.M. Pacific Time every Thursday (previously every Monday) in North America, Tuesdays in Japan and South Korea, and Fridays in Europe, Australia and New Zealand.
In Europe and North America, the Virtual Console featured several import titles which were not previously made available in those respective territories, such as Mario's Super Picross. These games cost 100-300 more points than the normal price due to their import status and some translation work.
Consoles included both Nintendo systems, such as the Nintendo Entertainment System, Super Nintendo Entertainment System and Nintendo 64, and non-Nintendo systems, such as the Sega Genesis/Mega Drive, Sega Master System, PC Engine/TurboGrafx-16, MSX, Neo Geo and Commodore 64 (Europe and North America only). Each system had a base starting price for games on that system. All titles ranged from 500 to 1200 Wii Points.
If a person using the now defunct Connection Ambassador Programme reached Gold status (helped 10 people to connect), they could download any Nintendo-published NES game free of charge. Additionally, if they reached Platinum status (helped 20 people to connect), they could download any NES, SNES and N64 game on the Virtual Console free of charge.
WiiWare
The WiiWare section featured original games specifically designed for Wii. Games were priced between 500 and 1500 points. To decrease the size of the games, instruction manuals were hosted on each game's Wii Shop Channel page. Some titles featured additional downloadable content, priced from 100 to 800 points, that could be purchased using Wii Points in game or from the game's page.
The first WiiWare games were made available on March 25, 2008 in Japan, on May 12, 2008 in North America, and on May 20, 2008 in Europe.
Wii Channels
The Wii Channels section featured additional non-game channels that can be downloaded and used on Wii.
Before the WiiConnect24 service was discontinued, there were three free Channels offered worldwide: the Everybody Votes Channel, the Check Mii Out Channel (Mii Contest Channel in Europe), and the Nintendo Channel. An update to the Photo Channel (Photo Channel 1.1) is also available, if not preinstalled. A fourth Channel, the Internet Channel, a web browser based on Opera, was available worldwide originally for 500 Wii Points but was free as of September 1, 2009. Anyone who paid the 500 Wii Points for the Internet Channel has been refunded. There were also two exclusive free Japanese channels: the Television Friend Channel, which provides channel listing and recording reminder features, and the Digicam Print Channel, which allows users to order business cards and photo albums using photos stored on SD cards or the Photo Channel. Previously, a preview channel for Metroid Prime 3: Corruption was available for free in the fall of 2007 for North America and PAL regions before it was removed from the Wii Shop Channel several months after the game's launch. In North America and Europe, the Netflix channel was available in the Wii Channels section, along with Crunchyroll.
The Wii Channels section in the Wii Shop Channel was originally under the name of WiiWare in North America and Wii Software in Europe, before moving to its own dedicated space when WiiWare launched. These Wii Channels were unavailable on the Wii U console.
Downloading
Software downloaded from the Wii Shop Channel is saved onto the Wii console's internal memory. After a download is complete, the new software appears on the Wii Menu as a channel. Software can be copied to SD cards or re-downloaded for free. Wii consoles with system software version 4.0 can download software directly to SD cards.
On December 10, 2007, a gift feature was added to the Wii Shop Channel, allowing users to purchase and send games and channels to others as gifts. The receiving user was given the option to download or reject the gift upon opening the Wii Shop Channel, with a notification being sent to the sender if it was accepted. If the recipient already had the game, or did not accept the gift within 45 days, the gift expired and the Wii Points were returned to the sender. The feature was region-locked and incompatible with the Wii U's Nintendo eShop.
Game updates
Downloaded games can receive updates from the Wii Shop Channel. This has been done four times so far to update Military Madness, Star Fox 64/Lylat Wars, Kirby 64: The Crystal Shards (in North America and Europe), and Mario Kart 64 (in Europe and Australia). Several NES and SNES games released before March 30, 2007 have also been given updates in Europe and Australia to fix previous problems with the Wii component cables. These updates are free of charge to those who have downloaded a previous version of the game. Some WiiWare games have also featured free updates for the purposes of fixing bugs. These games include Dr. Mario Online Rx and Alien Crush Returns.
Connection Ambassador Promotion
In 2009, Nintendo of Europe announced the "Connection Ambassador Promotion", a scheme designed to reward users for helping new users get connected online and to the Wii Shop Channel. Both users (the ambassador and the person who was helped) received 500 Wii Points each time the ambassador helped someone get online. If the ambassador assisted 20 people, they would have accumulated 10,000 Wii Points from the programme, attained Platinum status, and become able to download all NES, SNES and N64 titles from the Virtual Console section of the Wii Shop Channel free of charge. The service also launched in New Zealand and Australia. After its 2009 launch the scheme proved hugely popular, with many sites appearing online dedicated to helping connect users and share system codes. The service remained exclusive to PAL-region Wii consoles.
The programme ended on November 21, 2012.
Discontinuation
On September 29, 2017, Nintendo announced that the Wii Shop Channel would close on January 30, 2019, after which the service would be limited to re-downloading previously purchased content, and would eventually become entirely inaccessible at an unspecified later date. To prepare for the closure, Nintendo also announced that the ability to add Wii Points with a credit card or a Wii Points Card would be removed on March 26, 2018.
The ability to add Wii Points was duly removed at 1 P.M. PST on March 26, 2018. From that date, users could no longer purchase WiiWare or Virtual Console software unless they already had enough Wii Points in their account balance. The Wii Shop Channel itself remained online until January 29, 2019.
On January 30, 2019, at 6 A.M. PST, Nintendo shut down the Wii Shop Channel, removing all WiiWare, Virtual Console games, and other Wii Channels from sale and initial download. The only exceptions are the save data update channel for The Legend of Zelda: Skyward Sword, the Wii U Transfer Tool channel (on Wii consoles), and the Wii System Transfer channel (on Wii U consoles). Users can still re-download content purchased before the shutdown and transfer Wii data from a Wii to a Wii U via the Wii U Transfer Tool; Nintendo has not announced when this ability will end. On the day of the closure, the shop's main UI was updated to show its original 2006 layout as it appeared when the channel first launched on November 19, 2006, with the WiiWare option removed entirely.
Japanese users were able to transfer or refund any remaining Wii Points after the shutdown date from February 21, 2019 until August 31, 2019. The refunded points could be transferred to a local bank account or received as a refund from a convenience store.
See also
Nintendo eShop
Xbox Games Store
PlayStation Store
Steam
Lists of Virtual Console games
Lists of PS one Classics
List of WiiWare games
List of downloadable PlayStation games
WiiWare
Xbox Live Arcade
References
Online-only retailers of video games
Wii
Retail companies established in 2006
Internet properties established in 2006
Products and services discontinued in 2019
Retail companies disestablished in 2019
Internet properties disestablished in 2019
Video games scored by Kazumi Totaka
es:Wii Channels#Wii Shop Channel
sv:Wii Channels#Wii Shop Channel
ts:Wii Channels#Wii Shop Channel |
28244092 | https://en.wikipedia.org/wiki/Schmidt-Samoa%20cryptosystem | Schmidt-Samoa cryptosystem | The Schmidt-Samoa cryptosystem is an asymmetric cryptographic technique, whose security, like Rabin depends on the difficulty of integer factorization. Unlike Rabin this algorithm does not produce an ambiguity in the decryption at a cost of encryption speed.
Key generation
Choose two large distinct primes p and q and compute N = p^2 · q.
Compute d = N^(−1) mod lcm(p − 1, q − 1).
Now N is the public key and d is the private key.
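A minimal key-generation sketch in Python (function names and the toy primes are illustrative only; real keys use large random primes, and the modular-inverse form `pow(x, -1, m)` requires Python 3.8+):

```python
from math import gcd

def lcm(a, b):
    # Least common multiple, the modulus for the private exponent.
    return a * b // gcd(a, b)

def keygen(p, q):
    # Public key: N = p^2 * q.
    N = p * p * q
    # Private key: d = N^(-1) mod lcm(p - 1, q - 1).
    d = pow(N, -1, lcm(p - 1, q - 1))
    return N, d

# Toy primes for illustration: p = 7, q = 11 gives N = 539, d = 29.
N, d = keygen(7, 11)
```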
Encryption
To encrypt a message m (with m < pq) we compute the ciphertext as c = m^N mod N.
Decryption
To decrypt a ciphertext c we compute the plaintext as m = c^d mod (pq), which, as with Rabin and RSA, can be computed efficiently with the Chinese remainder theorem.
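The two operations can be sketched in Python as follows (toy parameters only; a production implementation would also use the Chinese remainder theorem for decryption and proper message encoding):

```python
def encrypt(m, N):
    # c = m^N mod N; the message m must be smaller than p*q.
    return pow(m, N, N)

def decrypt(c, d, p, q):
    # m = c^d mod (p*q), valid because N*d ≡ 1 (mod lcm(p-1, q-1)).
    return pow(c, d, p * q)

# Toy keys (p = 7, q = 11): N = p^2 * q = 539, d = N^(-1) mod lcm(6, 10) = 29.
p, q, N, d = 7, 11, 539, 29
c = encrypt(32, N)       # ciphertext for m = 32
m = decrypt(c, d, p, q)  # recovers the plaintext 32
```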
Example: with the deliberately small primes p = 5 and q = 3 (for illustration only), the public key is N = 5^2 · 3 = 75 and the private key is d = 75^(−1) mod lcm(4, 2) = 3, since 75 · 3 = 225 ≡ 1 (mod 4). Encrypting m = 2 gives c = 2^75 mod 75 = 68.
Now to verify: c^d mod pq = 68^3 mod 15 = 2 = m, recovering the plaintext (since 68 ≡ 8 and 8^3 = 512 ≡ 2 (mod 15)).
Security
The algorithm, like Rabin, is based on the difficulty of factoring the modulus N, which is a distinct advantage over RSA.
That is, it can be shown that if there exists an algorithm that can decrypt arbitrary messages, then this algorithm can be used to factor N.
Efficiency
The algorithm decrypts as fast as Rabin and RSA; however, encryption is much slower, since the sender must compute a full exponentiation with the large exponent N.
Since encryption uses a fixed known exponent an addition chain may be used to optimize the encryption process. The cost of producing an optimal addition chain can be amortized over the life of the public key, that is, it need only be computed once and cached.
References
A New Rabin-type Trapdoor Permutation Equivalent to Factoring and Its Applications
Public-key encryption schemes |
2869181 | https://en.wikipedia.org/wiki/M32R | M32R | The M32R is a 32-bit RISC instruction set architecture (ISA) developed by Mitsubishi Electric for embedded microprocessors and microcontrollers. The ISA is now owned by Renesas Electronics Corporation, and the company designs and fabricates M32R implementations. M32R processors are used in embedded systems such as Engine Control Units, digital cameras and PDAs. The ISA was supported by Linux and the GNU Compiler Collection but was dropped in Linux kernel version 4.16.
References
External links
M32R homepage
Linux/M32R homepage
Interface (CQ Publishing Co.,Ltd.)
Computer-related introductions in 1997
Instruction set architectures
Renesas microcontrollers |
9090516 | https://en.wikipedia.org/wiki/LoCo%20team | LoCo team | A Local Community Team, or LoCo Team, is a group of local Linux advocates. The main focus of a LoCo team is to advocate the use of the Linux operating system as well as the use of open source/free software products.
Ubuntu & LoCos
The Ubuntu operating system is credited with promoting the use of LoCo teams. Ubuntu provides an assortment of materials and media to help each LoCo team with its goals.
Approved LoCo Teams
New LoCo Teams
Cyprus
See also
Linux User Group
Ubuntu Community Council
Jono Bacon -- Ubuntu Community Manager
External links
Ubuntu LoCo List
Ubuntu LoCo Main
Ubuntu LoCo Howto
Ubuntu LoCo FAQ
Ubuntu LoCo List (Wiki)
Ubuntu tries to go LoCo in all 50 states
Linux user groups
ru:Группа пользователей Linux#LoCo |
58158408 | https://en.wikipedia.org/wiki/Gina%20Cody | Gina Cody | Gina Parvaneh Cody is a Canadian-Iranian engineer and business leader.
In 1989, Cody became the first woman in Concordia University's history to earn a PhD in building engineering. In 2018, following her donation of $15 million, Concordia University renamed its faculty of engineering and computer science after her (the Gina Cody School of Engineering and Computer Science), making it the first university engineering and computer science faculty in Canada, and one of the first internationally, to be named after a woman.
Early life and education
Cody was born in Iran in 1956. Her father owned a boy's high school, where Cody would teach during the summer. Cody's three brothers became engineers, while her sister became a dentist. In 1978, Cody completed a Bachelor of Science degree in structural engineering at Aryamehr University of Technology (now called Sharif University of Technology).
She left Iran for Canada in 1979 with $2,000. At this point, Cody's brother had completed a Bachelor of Engineering at Concordia University in Montreal, and arranged for her to meet with the engineering professor Cedric Marsh. Through this meeting, Cody received a scholarship in engineering to attend Concordia University, where she completed a master's degree in engineering in 1981.
In 1989, Cody became the first woman in Concordia University's history to earn a PhD in building engineering.
Career
Following her PhD, Cody moved to Toronto, where she worked for a year on provincial building codes for Ontario's Ministry of Housing (now the Ministry of Municipal Affairs and Housing). Cody then moved to the private sector, where she performed tower crane inspections at Construction Control Inc. (CCI; an Ontario-based engineering consulting company), making her the first woman to climb Toronto construction cranes as an inspector. Cody later became the president of CCI.
Until her retirement, Cody was the executive chair and principal shareholder of CCI Group Inc. Cody sold CCI Group Inc. and retired in 2016. Following a 2016 merger, the CCI Group Inc. now operates as McIntosh Perry Consulting Engineers.
In 2018, Cody donated $15 million to Concordia University, and had its faculty of engineering and computer science named after her (the Gina Cody School of Engineering and Computer Science), making it the first university engineering faculty to be named after a woman in Canada, and one of the first internationally. Her support will foster gender equity, diversity and inclusion through 100 undergraduate and 40 graduate entrance scholarships across the faculty. In addition, it will support the Canada Excellence Research Chair in Smart and Resilient Cities and Communities, and enable next-generation work of three new research chairs in the internet of things, artificial intelligence, and industry 4.0.
Advocate for equity, diversity and inclusion
Since retiring and making her landmark gift to Concordia University, Cody has become a vocal advocate for equity, diversity and inclusion in science, technology, engineering and math (STEM) fields.
Cody has been invited to speak at dozens of events for major corporations, universities, government, conferences and women’s groups. These include keynote presentations for the Association québécoise des technologies, Autodesk, City of Markham, CCWESTT 2020, Bombardier, Broadcom, Engineers Without Borders Canada, IEEE, Pratt & Whitney, PwC, Qualcomm, Ryerson University, SAP, Siemens, Sunlife, UCLA and Western University.
Board memberships
Concordia University
Canadian Apartment Properties REIT
European Residential REIT
Other roles and responsibilities
Honorary Lieutenant-Colonel of 34 Combat Engineer Regiment (CER)
Co-chair of the Campaign for Concordia. Next-Gen. Now.
Chair of Concordia University’s Real Estate Planning Committee
Chair of the Gina Cody School Advisory Board
Awards
For her contributions to engineering, business and her community, Cody has received numerous awards, including an Award of Merit from the Canadian Standards Association, a Volunteer Service Award from the Government of Ontario and Officer of the Order of Honour of Professional Engineers Ontario. The Financial Post recognized Construction Control (then under Cody's management) as one of Canada's best managed companies. In 2010, Profit magazine named Cody one of Canada's Top Women Entrepreneurs, and listed CCI as the ninth most profitable Canadian company owned by a woman. The following year, the Concordia University Alumni Association named her Alumna of the Year.
In 2019, Cody was named to the Order of Montreal, and was inducted as a Fellow of the Canadian Academy of Engineering. In 2020, Cody was named one of Canada's Top 25 Women of Influence. In 2021, she was appointed as a Member of the Order of Canada.
Personal life
Cody currently resides in Toronto and is married to Thomas Cody, a Concordia MBA graduate and retired Bank of America Canada senior vice-president. Cody met her husband at Concordia University. The couple have two daughters: Roya and Tina Cody.
References
1957 births
Iranian emigrants to Canada
Concordia University alumni
Sharif University of Technology alumni
Canadian women engineers
Living people
21st-century women engineers
20th-century women engineers
Members of the Order of Canada |
656230 | https://en.wikipedia.org/wiki/Boxer%20%28armoured%20fighting%20vehicle%29 | Boxer (armoured fighting vehicle) | The Boxer is a multirole armoured fighting vehicle designed by an international consortium to accomplish a number of operations through the use of installable mission modules. The governments participating in the Boxer program have changed as the program has developed. The Boxer vehicle is produced by the ARTEC GmbH (armoured vehicle technology) industrial group, and the programme is being managed by OCCAR (Organisation for Joint Armament Cooperation). ARTEC GmbH is based in Munich; its parent companies are Krauss-Maffei Wegmann GmbH and Rheinmetall Military Vehicles GmbH on the German side, and Rheinmetall Defence Nederland B.V. for the Netherlands. Overall, Rheinmetall has a 64% stake in the joint venture.
A distinctive and unique feature of the vehicle is its composition of a drive platform module and interchangeable mission modules which allow several configurations to meet different operational requirements.
Other names in use or previously used for Boxer are GTK (; armoured transport vehicle) Boxer and MRAV (multi role armoured vehicle). Confirmed Boxer customers as of February 2020 are Germany, the Netherlands, Lithuania, Australia and the UK. The Boxer has been produced and seen service in A0, A1 and A2 configurations. The UK Boxer will be A3 configuration. Australian deliveries are an A2/A3 hybrid.
Production history
With exceptions for style and ease of reading, the following development and production history is presented in as near-chronological order as possible.
The Boxer started in 1993 as a joint venture design project between Germany and France, with the UK joining the project in 1996. In November 1999 a £70 million contract for eight prototype vehicles (four each, Germany and the UK) was awarded. France left the programme in 1999 to pursue its own design, the Véhicule Blindé de Combat d'Infanterie (VBCI). In February 2001, the Netherlands joined the programme and an additional four prototypes were built for the Netherlands. Boxer, then known as GTK/MRAV/PWV, was unveiled on 12 December 2002. The name Boxer was announced when the second prototype appeared. At this time the first production run was to have been 200 for each country.
The UK Ministry of Defence announced its intention to withdraw from the Boxer programme and focus on the Future Rapid Effect System (FRES) in July 2003. In October 2003, the first Dutch prototype was delivered. In October 2006 the Netherlands confirmed the procurement of 200 Boxer to replace the M577 and the support variants of the YPR-765 in the Royal Netherlands Army. Deliveries were scheduled to run from 2013 through to 2018, and within the RNLA the baseline Boxer is called the Pantserwielvoertuig (PWV).
On 13 December 2006 the German parliament approved the procurement of 272 Boxers for the German Army, to replace some of its M113 and Fuchs TPz 1 vehicles. Production of Boxer had been scheduled to commence in 2004, but production was delayed and the first production example was delivered to the German Army in September 2009. Over seven years, prototypes accrued over 90,000 km of reliability trials and over 90,000 km of durability trials. There are three production facilities for Boxer, one in the Netherlands (Rheinmetall) and two in Germany (Krauss-Maffei Wegmann and Rheinmetall).
2010s
In December 2015 it was announced that Germany had ordered an additional 131 Boxers worth EUR476 million and that Lithuania had selected the Boxer.
In August 2016 a EUR385.6 million production contract was placed by Lithuania for the supply of 88 Boxers, and at this time it was stated that 53 Boxer would be manufactured by KMW and the remaining 35 by Rheinmetall, with deliveries running 2017–2021. In Lithuanian service the vehicle is designated as Vilkas (Wolf). The precise mix/number of Lithuanian vehicles was initially unclear but according to Janes, Lithuania will receive 91 Boxer in A2 configuration, 89 as variants of the baseline IFV configuration, plus two driver training vehicles. The exact IFV breakdown is: 55 IFV squad leader, 18 IFV platoon leader; 12 IFV company leader; 4 IFV command post. A single IFV will be used for maintenance training. The first two vehicles (driver training configuration) were delivered in December 2017. The first two Boxer in IFV configuration were delivered in June 2019 and at this time the Lithuanian MoD stated that 15 vehicles would be delivered in 2019, and that all 89 IFV variants would be delivered by the end of 2021.
Most of the original German Army Boxer order was delivered in A1 configuration; however, 40 APCs and 16 command posts were delivered in A0 configuration and subsequently upgraded to A1. In June 2017 it was announced that the Bundeswehr's Boxer A1 fleet would be upgraded to A2 standard. The first A2 Boxer was delivered in June 2015. The differences between the A1 and A2 configurations are relatively minor electrical and mechanical engineering changes. The A2 standard resulted from operations in Afghanistan and incorporates changes in the drive and mission modules that include preparation for the integration of a driver vision system, changes to the stowage concept in both modules, changes to the gearbox, integration of a fire suppression system, modification of the RCWS, an interface for an IED jammer, a satellite communication system and other minor modifications. The latest Boxer variant is the A3, with the UK the first customer for the A3 in its entirety.
In July 2017 ARTEC awarded the then Rheinmetall MAN Military Vehicles (RMMV) a €21 million contract to upgrade 38 Bundeswehr Boxer command vehicles to A2 configuration with work scheduled for completion in mid 2020. At this time the Bundeswehr also had 124 Boxer APCs, 72 ambulances and twelve driver training vehicles to upgrade to A2 status.
In February 2018 it was announced that Slovenia had selected the Boxer as the basis for two new mechanised infantry battle groups. In November it was revealed that pricing issues had impacted the Slovenian procurement timeline and that a new proposal from industry was pending. According to the Slovenian MoD's initial release on the subject, funding had been allocated for the procurement of 48 vehicles in 2018-2020 for the first battle group, which was expected to become operational by 2022, followed by the second in 2025. The desired total was reported to be 112 Boxer (96 IFV, 16 mortar) plus a small number of driver training vehicles. It was reported mid-2019 that the planned Boxer procurement had been suspended, the MoD deciding to conduct research and draw up a new comprehensive tactical study relating to the formation of a medium infantry battalion group, this likely to affect the procurement of 8×8 wheeled armoured vehicles. The ministry will then re-examine options available and make a decision on how to build a medium infantry battalion group capability.
In July 2016 it had been announced that the Boxer was one of two vehicle types (from four) down-selected to take part in the 12-month Risk Mitigation Activity for Australia's Land 400 Phase 2 project, and in March 2018 it was announced that Rheinmetall Defense Australia (RDA) had been selected as the preferred tenderer for that project, which at the time called for 211 vehicles, with a roll out of initial vehicles by 2021 and deliveries scheduled to be complete by 2026. In Australian service the Boxer will replace the army's ageing fleet of 257 Australian Light Armoured Vehicles (ASLAV) that reach their life-of-type around 2021. Under Rheinmetall's offering, the first batch of 20 to 25 vehicles will be built in Germany with Australians embedded into teams to learn the necessary skills before transferring back to Australia for the build of the remaining vehicles. RDA's Military Vehicle Centre of Excellence (MILVEHCOE) in Brisbane, Queensland, will be the hub for the production of the majority of the vehicles, the local build programme including about 40 local suppliers. These industrial opportunities will create up to 1,450 jobs across Australia. The acquisition and sustainment of the vehicles is costed at AUD15.7 billion (US$12.2 billion), acquisition worth AUD5.2 billion, the remaining AUD10.5 billion costed for sustainment over the vehicles' 30-year life.
In March 2018 it was announced by the UK government that it was re-joining the Boxer programme, and in April 2018 it was announced that Boxer had been selected by the British Army to meet its Mechanised Infantry Vehicle (MIV) requirement. No details relating to quantity, cost, timeline or any contractual status was given. It was first reported in October 2016 that the British Ministry of Defence had taken its first formal step towards government-to-government acquisition of Boxer. At DSEI 2017, a Boxer in a Union Jack paint scheme was shown by Rheinmetall to promote the vehicle for the MIV requirement. In November 2017, a company of German army mechanised infantry equipped with 11 Boxers exercised with British Army units on Salisbury Plain. British Army sources denied that the exercise was linked to any decision on a procurement process for its MIV project. In February 2018 it was reported that Artec had signed agreements with UK suppliers, this contributing to the fact that 60% by value of the MIV contract will be done in Britain, along with final assembly of the MIVs at facilities already owned by the consortium.
In July 2018 there were three Boxer-related announcements made over a period of three days. On 17 July the Dutch MoD announced that the last Dutch Boxer had rolled off the production line, this being a cargo variant. On 18 July the Lithuanian MoD announced that the country's first two Boxer prototypes had entered trials in Germany. On 19 July 2018 the UK MoD disclosed its intent to order between 400 and 600 Boxers in four variants plus driver training vehicles, reference vehicles and support, with the first vehicles to be in-service by 2023. The contract will contain options to increase the quantity of vehicles by up to an additional 900.
In March 2019 the Australian Ambassador to Germany inspected the first Boxer being delivered to the Australian Government under the LAND 400 Phase 2 program prior to its shipping to Australia, and in July 2019 the first two of the 25 Boxers being built in Germany arrived in Australia. The 25 vehicles delivered from Germany are split between 13 reconnaissance platforms and 12 multi-purpose vehicles (MPVs). Once in Australia, these vehicles will receive a number of Australia-specific modifications prior to final delivery to the Army. The first vehicles were in use for training purposes by October 2020. Rheinmetall will deliver 211 Boxers to the Australian Army, and in service the Boxer will fill seven different roles on the battlefield: reconnaissance, command and control, joint fires, surveillance, multi-purpose, battlefield repair and recovery. The reconnaissance variant will account for 133 of the 211 vehicles and is equipped with Rheinmetall's Lance turret system and armed with a 30 mm automatic cannon.
Also in July 2019 the first two Boxer (Vilkas) IFVs ordered by Lithuania were officially handed over to the MoD. The MoD stated that 15 Vilkas would be delivered in 2019 and all 89 vehicles would be delivered by the end of 2021.
In September 2019 there were three Boxer-related announcements. On 10 September it was revealed that the target date for the UK's MIV programme to receive its main gate approval was 22 October 2019. It was reported that the business case for the purchase of an initial batch of 508 vehicles, valued at about GBP1.2 billion (US$1.48 billion), was under scrutiny by financial, commercial, and technical experts before receiving final approval by ministers. UK MoD officials submitted their final business case for the purchase of the Boxer MIVs on 9 September 2019 to meet the British Army's target of getting its first Boxer in service by 2023. At the 2019 Defence and Security Equipment International exhibition (DSEI 2019) in London, Germany's Flensburger Fahrzeugbau Gesellschaft (FFG) presented an armoured recovery mission module (ARM) for the Boxer. Christoph Jehn, FFG's project manager, stated the ARM was developed as a private venture from 2017. The company noticed Boxer users struggling to recover stranded vehicles with the aid of other Boxers and so decided to develop the bespoke mission module for the purpose. The ARM has an approximate weight of 13 tonnes, is manned by two personnel and connects to the Boxer using standard mechanical interfaces. On 24 September 2019 it was announced that the first Boxer for the Australian Army had formally been handed over. The turretless vehicle was the first of 25 Boxers – 13 multipurpose and 12 reconnaissance variants – that are being manufactured in Germany through to 2021 to meet an early Australian capability requirement for familiarisation and training purposes. Production of the other 186 platforms will begin in late 2020/early 2021 at a military vehicle centre of excellence constructed by Rheinmetall at Ipswich, southwest of Brisbane, which formally opened in October 2020. This is the company's largest facility outside Germany.
Also in September 2019 reports emerged that Algeria had selected the Boxer and that production would commence shortly. As of Q1 2021 this had not been confirmed by ARTEC.
In November 2019 the UK Ministry of Defence awarded ARTEC a US$2.97 billion (GBP2.3 billion) contract to deliver more than 520 Boxer vehicles in multiple configurations.
2020s
In January 2020 in an interview with Shaun Connors of Jane's, Stefan Lishka, MD of ARTEC, stated that only 8% of UK Boxers would be manufactured in Germany with the remainder being assembled at and delivered from two sites in the UK, Rheinmetall BAE Systems Land (RBSL) at Telford and KMW subsidiary WFEL at Stockport. Deliveries of series examples should start very early in 2023.
In November 2020 it was announced that ARTEC consortium partners Rheinmetall Landsysteme and Krauss-Maffei Wegmann (KMW) had awarded two separate subcontracts to Rheinmetall BAE Systems Land (RBSL) and WFEL respectively for the local production of Boxer for the UK. RBSL and WFEL were selected by Rheinmetall and KMW respectively to be the UK Tier 1 suppliers and will operate one Boxer production line each. The value of KMW's contract has not been announced but is known to involve at least 480 drive modules being produced by WFEL in the UK, with under half of them being assembled by WFEL into full vehicles covering the Infantry Carrier, Specialist Carrier and Ambulance variants. The remaining drive modules being produced by WFEL will be shipped to RBSL to construct the other full vehicles in a number of variants, including the Specialist Carrier. Rheinmetall's contract with RBSL is worth US$1.15 billion (GBP860 million) and involves the manufacture of 262 Boxer vehicles at RBSL's assembly line in Telford, UK. All of these vehicles will be either Specialist Carrier or Command vehicles.
The Bundesamt für Ausrüstung, Informationstechnik und Nutzung der Bundeswehr (BAAINBw), Germany's Federal Office for Bundeswehr Equipment, Information Technology and In-Service Support, awarded Rheinmetall a contract at the end of January 2021 to upgrade 27 more Boxer command vehicles to the A2 standard, this award bringing all the Bundeswehr's Boxer command vehicles up to A2 standard.
Design
The Boxer is an eight-wheeled multirole vehicle that at the time of its development easily exceeded most comparable vehicles in weight and dimensions. In recent years the size and weight differences between the Boxer and its contemporaries have narrowed considerably: the Boxer was quoted in 2016 to have a combat weight of 36,500 kg in A1 and A2 configurations, while ST Kinetics' Terrex 3 had a quoted combat weight of 35 tonnes, and Nexter's VBCI, Patria's AMV and General Dynamics' Piranha V all weigh in around the 32 to 33 tonne mark. The current combat weight of the Boxer in A3 configuration is quoted as up to 38.5 tonnes.
Boxer consists of two key elements: the platform/drive-line (the drive module) and the removable mission module. The A iterations applied to Boxer are specific to the drive module. Initial production examples were A0 and fewer than 60 were delivered. Main production was A1, followed in 2015 by A2. Current production standard, depending on user, is either A2 or A3. Australia is receiving an A2/A3 hybrid, in that it will receive the latest A3 drive module (rated at 38,500 kg) but with the A2 standard engine rating.
The platform/drive module has the driver located front right, with the powerpack to the left. The MTU/Allison powerpack can be replaced under field conditions in about 30 minutes and can, if required, be run outside of the vehicle for test purposes. Boxer has full-time all-wheel drive, with the front four wheels steering. Suspension is by double-wishbone coil springs, independent all round. Tyres are either 415/80 R27 or 415/80 R685, and a central tyre inflation system and run-flat inserts are fitted.
The mission module is a key (and unique) feature of Boxer. Mission modules are interchangeable pod-like units that are fitted to drive modules to form a complete mission variant vehicle. Mission modules are attached by four points and can be swapped within an hour under field conditions. The driver can access their compartment through the mission module or in an emergency via the large single-piece power-operated hatch above this position.
Armament
Production Boxers are fitted with a variety of armament ranging from a 7.62 mm light machine gun in a remote weapon station to a 30 mm cannon in a turret. Numerous armament options are offered.
Most in-service Boxers are equipped with a remote weapon station for self-defense. Dutch vehicles are fitted with the Protector M151 RWS from Kongsberg fitted with a 12.7 mm heavy machine gun. German vehicles are usually fitted with the FLW-200 from KMW, which can be fitted with either a 7.62 mm MG3 machine gun, a 12.7 mm M3M HMG or a 40 mm GMW automatic grenade launcher. The FLW-200 has dual-axis stabilization and incorporates a laser rangefinder and a thermal imager.
Lithuanian Boxers are fitted with the Israeli-made RAFAEL Advanced Defense Systems Samson Mk II RCT turret, mounting a fully stabilised Orbital ATK Mk 44 30 mm dual-feed cannon, 7.62 mm co-axial MG, and Spike-LR missiles. The turret is fitted with an independent commander's sight with both commander and gunner provided with thermal and daylight channels.
Australian Boxer CRVs mount the Rheinmetall Lance 30 mm two-man turret, fitted with the Rheinmetall Mauser MK30-2/ABM (air-bursting munition) dual-feed stabilised cannon and 7.62 mm coaxial MG. Turret traverse is all electric through a full 360° with weapon elevation from -15° to +45°. A Rheinmetall computerised fire-control system is installed, which allows stationary and moving targets to be engaged. The gunner has a Rheinmetall Stabilised Electro-Optical Sighting System (SEOSS), which typically has day/thermal channels and an eye-safe laser rangefinder. The commander has a Rheinmetall SEOSS panoramic sighting system, which allows hunter/killer target engagements to take place.
Protection
The Boxer is constructed from rolled all-welded steel armour to which the AMAP-B module-based appliqué armour kit can be fitted as required by mission threat estimates. AMAP-B modules are taken from the IBD Deisenroth AMAP modular armour package and are fitted to the vehicle with shock-absorbing mountings.
Exact details of Boxer protection levels are now classified. According to ARTEC, the vehicle will withstand anti-personnel and large anti-tank mines of an undisclosed type detonating under a wheel or the platform, as well as side attack. It has previously been stated that Boxer's baseline armour is all-round resistant to 14.5 mm armour-piercing ammunition in accordance with STANAG 4569 Level 4.
To increase survivability in case of armour penetration, the crew compartment is completely covered by an AMAP-L spall liner. The spall liner stops most of the fragments of the armour and projectile produced by hull penetration. To further enhance crew protection, the seats are decoupled from the floor, preventing the shock of a mine detonation from being transmitted directly to the crew. The roof armour of the Boxer is designed to withstand artillery fragments and top attack weapons such as bomblets fitted with a High-Explosive Anti-Tank (HEAT) warhead.
The Boxer drive module A1 (as designated by the German BWB) is an upgraded version of the baseline A0 version of the Boxer drive module, with the primary difference being the installation of a mine protection package fitted to the belly and wheel stations of the vehicle. The vehicle is fitted with an additional armour package focused on protecting against side and underbody blast threats. This consists of the AMAP-M and AMAP-IED packages. An unspecified electronic countermeasure (ECM) system was also fitted to counter IEDs. These changes result in a 1,058 kg weight increase for the A1 over the baseline A0 APC variant. For the A2 Boxer, protection is reported to have been increased further.
Mobility and transport
The powerpack of Boxer consists of a MTU 8V199 TE20 diesel engine developing (originally) 720 hp and coupled to an Allison HD4070 fully automatic transmission with seven forward and three reverse gears. The powerpack can be replaced under field conditions in approximately 20 minutes. The MTU 8V199 TE20 engine is a militarised development of the Mercedes-Benz OM 500 truck engine, modified by MTU to produce increased power via changes to the turbocharger, fuel injection and cooling systems. To maintain mobility levels at increased weights, the 8V199 TE20 is now also available in higher power ratings; when the drive module is fitted with the 600 kW version of this engine it is designated A3. Boxer is fitted with three fuel tanks containing a total of 562 litres, divided between a 280-litre front tank, 238-litre rear tank, and a 44-litre reserve tank.
Boxer has full-time 8×8 drive with differential locks on all axles and the front four wheels steer. Tyres are 415/80R 27 Michelin XML on German and Dutch Boxers. The Land 400 prototypes were fitted with 415/80R 685 Michelin XForce 2 tyres, these having a 500 kg per wheel greater load rating than the XML and being more 'all-terrain' in design than the rocks/mud-optimised XML. Standard tyre fit for Australian and UK Boxers will be 415/80R 685 Michelin XForce ZL rated to carry 5,600 kg each.
A central tire inflation system (CTIS) is fitted, and run-flat inserts allow for 30 km travel at up to 50 km/h in the event of a puncture. Braking is provided by Knott pneumatic ABS on all wheels with main braking power actuated on the front two axles. Suspension is fully independent double wishbone with coil springs.
Boxer can be transported in the Airbus A400M tactical airlifter, albeit not in one piece. Although the A400M has a payload capacity of around 32 tonnes, its loading ramp cannot accommodate a complete Boxer, so the drive and mission modules need to be separated. Two Boxers can be transported by three A400Ms: two carrying the drive modules and a third the mission modules.
Boxer variants and mission modules overview
Armoured Personnel Carrier — The armoured personnel carrier (APC) variant can be considered a baseline configuration for Boxer. The German Army received 125 APC modules as part of the initial 272-vehicle order. All 131 vehicles from the second German Army order are in a new configuration of the armoured personnel carrier (Gepanzertes Transportfahrzeug) and in A2 configuration.
Command Post — The command post variants of Boxer are used for command and control in theatre, acting as a centre for tactical communications. Secured communication, displays for situational awareness and instruments for network-enabled warfare are key characteristics of this variant. In standard configuration the command post module offers room for four workstations and the vehicle crew consists of driver, commander/weapon operator, two staff officers, one staff assistant and one additional crew member. The German Army received 65 command post modules as part of the initial 272-vehicle order; the Dutch Army ordered 60 command post modules originally, but later reduced this to 36 modules. Australia and the UK will also receive command post variants of Boxer. Lithuania's command post variants will be based around the IFV. The UK has a requirement for a command and control mission module, designated Mechanised Infantry Vehicle Command and Control (MIV-CC), and Australia has a requirement for a command and control mission module, plus a specialist surveillance mission module.
Ambulance — The German Army received 72 ambulance modules as part of the initial 272-vehicle order; the Dutch Army ordered 52 ambulance modules. The German and Dutch Boxer ambulance variants utilise a mission module with a raised roofline providing an internal height of 1.85 m and a volume of 17.5 m3. In Dutch service the Boxer ambulance replaced the YPR-765 prgwt variant of the AIFV (Armored Infantry Fighting Vehicle) casualty transport. It can accommodate seven seated casualties, three casualties lying on stretchers, or one of the following combinations: three seated and two lying down, or four seated and one lying down. The crew consists of driver, commander and a single medic. The Dutch vehicle, a medical evacuation vehicle, differs from the German medical treatment vehicle. Australia and the UK have ordered ambulance modules, the UK variant to be known as Mechanised Infantry Vehicle Ambulance (MIV-A).
Combat Reconnaissance Vehicle — The combat reconnaissance vehicle (CRV) is a development of the baseline Boxer designed to fulfil the Australian Land 400 Phase 2 requirement. It mounts the Rheinmetall Defence Lance modular turret system (MTS) fitted with the MK30-2/ABM cannon. Other variants being developed for Australia include Ambulance, Command & Control, Joint Fires, Surveillance, and Repair & Recovery variants.
Vilkas (Wolf) — 89 of the 91 Lithuanian Vilkas/Wolf vehicles will be fitted with the Rafael Advanced Defense Systems Samson Mk II RCT turret mounting a fully stabilised Orbital ATK Mk 44 30 mm dual-feed cannon, a 7.62 mm co-axial MG, and Spike-LR missiles. A range of turret options was bid, including the unmanned Lance turret from the PSM Puma IFV. Lithuania will receive four variants of the IFV: 55 IFV squad leader, 18 IFV platoon leader, 12 IFV company leader, and 4 IFV command post. The variants differ in mission fit primarily in the areas of additional voice and data communication equipment as well as modified BMS. Two driver training vehicles are also included in the Lithuanian order.
Geniegroep – The Boxer Geniegroep (GNGP) is a Dutch-specific engineering and logistics support vehicle that is deployed for the transport of troops and engineer group equipment. It provides seating for six dismounts with space available for their personal equipment and an additional separate stowage section for munitions. It may be deployed as a support vehicle with other units or used for independent assignments such as route clearance, or as a protected work location during mine clearance or demolition operations. The Boxer GNGP replaces the YPR-765 prgm/PRCO-C3 variant of the AIFV (Armored Infantry Fighting Vehicle). The Royal Netherlands Army initially ordered 53 GNGP, later revised to 92, and has subsequently converted 12 of the 92 GNGP vehicles ordered to Boxer Battle Damage Repair (BDR) configuration. The BDR variant is able to accommodate the special equipment, tools, and expendable and non-expendable supplies needed to carry out diagnoses, maintenance and minor repairs if required. The crew consists of an engineer commander, driver, observing commander, gunner, and five engineers.
Cargo — The Boxer Cargo is a Dutch-specific variant that replaces the YPR-765 prv variant of the AIFV (Armored Infantry Fighting Vehicle). It is equipped with a special loading floor to secure cargo during transport and can transport a maximum of two standard one-tonne army pallets (max. load 2.5 t). The interior design of the vehicle allows adaptation as necessary for different kinds of missions. For conducting peacekeeping missions or other peacetime operations the set of vehicle equipment can be changed and tailored to suit as required. The crew consists of commander/gunner and driver. 27 cargo examples were originally ordered, later revised to 12. A cargo variant was the final Dutch Boxer produced.
Driver Training Vehicle — This driver training vehicle (DTV) variant is equipped with a training module. The driver sits in the conventional driver's station and the instructor is seated in an elevated position in the driver training cabin. Active occupant protection is designed to protect the crew sitting exposed in the driver training cabin. In the event of a roll-over accident, the instructor and upper occupant seats are electronically retracted into the Driver Training Module. In normal use, the instructor can monitor the trainee driver via a duplicated control and display unit and override gear selector, brake and accelerator pedal of the driver's station. Steering override is available as an option. Crew consists of a trainee driver, instructor, plus up to two additional trainee passengers. The Australian, Dutch (8), German (10) and Lithuanian (2) armies operate driver training vehicles.
Repair and Recovery - Australia and the UK will receive a repair and recovery mission module, the details of which have yet to be released. The UK designation for this variant is Mechanised Infantry Vehicle Repair and Recovery (MIV-REC).
Other identified modules — Of the four Boxer build configurations currently proposed for the UK's Mechanised Infantry Vehicle (MIV) requirement there is also the generic Mechanised Infantry Vehicle Protected Mobility (MIV-PM), believed to be a reconfigurable personnel carrier. Of the seven Boxer variants required by Australia there are also a joint fires mission module and a surveillance mission module. Germany's BAAINBw ordered 10 Boxer C-UAV (counter-UAV) systems in December 2019, placing a EUR24 million contract with Kongsberg and Hensoldt, with delivery to be completed within 24 months. By June 2020 all elements of the system had passed the critical design review and a live firing had been conducted. It was aimed to deliver the first systems to the Bundeswehr by the close of 2020. The initial operational capability requires only a single sensor, which provides 120° coverage in azimuth.
During an interview with Jane's at IAV 2020, Stefan Lishka, MD of ARTEC, commented that the term "configuration" had superseded "variant" for Boxers and Boxer modules. The reason for this was that some current/planned variants (build configurations) are interchangeable by crew members.
Other variants including prototypes, concepts and developmental platforms
Boxer JODAA — Boxer JODAA (Joint Operational Demonstrator for Advanced Applications) is a technology demonstrator used by the German Army and Rheinmetall Landsysteme to carry out R&D studies around potential Boxer improvements. It is based on the Boxer armoured medical treatment vehicle variant and is regularly refitted for a range of purposes and roles.
Boxer fitted with Oerlikon Skyranger - Boxer has been shown fitted with the Oerlikon Skyranger air defence system turret. This is armed with Rheinmetall's 35 mm x 228 calibre Revolver Gun, which has the option of a dual ammunition feeding system that allows the choice of two types of shell. It would primarily fire the 35 mm Advanced Hit Efficiency And Destruction (AHEAD) ammunition, which although optimised for the air defence role is also effective against ground targets including lightly protected vehicles. The secondary ammunition would be Frangible Armour-Piercing Discarding Sabot (FAPDS). The gun has a cyclic rate of fire of 1,000 rounds a minute, with a typical aerial target being engaged by a burst of 20 to 24 rounds.
Boxer IFV Demonstrator/RCT30 — Boxer IFV Demonstrator is a technology demonstrator used by Rheinmetall Landsysteme to demonstrate, market, and test the company's preferred configuration for an IFV variant of the Boxer platform. Boxer RCT30 IFV is a technology demonstrator used by KMW for the same purposes. The vehicle mounts the unmanned Rheinmetall RCT, the latest development of the turret fitted to the German Army's PSM Puma IFV. The turret is installed on the forward part of the rear Boxer mission module and armed with the stabilised Orbital ATK Armament Systems 30 mm MK44 dual-feed cannon, with the option of a coaxial 7.62 mm MG. On top of this is the KMW FLW 200 RCWS armed with a 12.7 mm MG that can be replaced by a 5.56 mm or 7.62 mm MG or a 40 mm AGL. It was stated that an alternate primary armament, the Mauser MK30-2/ABM dual-feed 30 mm cannon, could be installed if requested.
Boxer Armoured Recovery Module (ARM) — The Boxer ARM is a repair and recovery mission module developed by FFG to provide Boxer users with a recovery and maintenance capability as well as an operational means to mount mission modules onto drive modules.
Boxer RCH155 — Boxer RCH155 mounts a version of the KMW Artillery Gun Module (AGM), a further development of the PzH 2000 155 mm 52-calibre artillery system. The system was developed to meet potential requirements of export customers, as a wheeled Boxer-type platform has greater strategic mobility than the tracked and heavier PzH 2000-type system. Initial firing trials have taken place. In December 2020 Krauss-Maffei Wegmann (KMW) announced in a press release that it plans to begin developmental testing of the Remote Controlled Howitzer (RCH) 155 in 2021, essentially a remotely controlled version of the system.
Boxer, direct fire support — In April 2020 John Cockerill Defense revealed that it was supplying a C3105 two-person turret armed with a 105 mm rifled gun to KMW so that it could be incorporated onto the Boxer. The company stated that the development was funded by internal R&D budgets and that firing trials were anticipated to take place within the course of 2020. Firing trials are now planned to take place in Germany or the UK when COVID-19 restrictions are lifted. The vehicle was to be shown for the first time at the Eurosatory exhibition in Paris in June 2020, but that event was cancelled due to the pandemic.
Boxer WFEL bridging module concept — The Boxer WFEL bridging module concept is a variant designed by WFEL and KMW as a private venture, to meet the need to integrate the Leguan bridging system onto medium-sized vehicles.
Boxer ARTHUR — At the 2020 Omega Future Indirect Fires/Mortar Systems conference in the UK Saab displayed a concept of its ARTHUR Mod D mounted onto the mission module of a Boxer. Saab said ARTHUR Mod D was its “answer to the requirements for a highly mobile, agile, and long range WLR, supporting high tempo brigade and divisional manoeuvre operations. The technology is drawing on [both] existing and evolutions of Saab in-house sensor technologies”, and can be seen “as a spiral development” of ARTHUR.
Boxer Mobile LWS — The Boxer Mobile LWS (laser weapon system) demonstrator was a version of the Boxer armoured medical treatment vehicle that was fitted with an RWS coupled to a Rheinmetall RMG 12.7 mm HMG, integrated with an unmanned protected turret and fitted with a fully automated MANTIS turret. No further development or production has taken place.
Gallery
Operators
Current operators
: Australian Army – 211 vehicles on order, with deliveries expected until 2026. Vehicles to be delivered under the Land 400 Phase 2 program. The first of 25 Boxers – 13 multipurpose and 12 turreted reconnaissance variants – that are being manufactured in Germany through to 2021 to meet an early Australian capability requirement for familiarisation and training, were formally handed over to the army in September 2019. Prior to delivery the Boxers were modified locally with Australian-specific communications and battlefield management systems and fitted temporarily with the Kongsberg Protector RWS that previously equipped Australian ASLAVs deployed to Iraq and Afghanistan. Training with the first vehicles delivered had commenced by October 2020. Production of the balance of 186 platforms – a mix of reconnaissance, command-and-control, joint fires, surveillance, ambulance, and battlefield repair and recovery variants – will begin in late 2022 at RDA's AUD170 million Military Vehicle Centre of Excellence (MILVEHCOE). Located at Ipswich, southwest of Brisbane, this is Rheinmetall's biggest facility outside Germany and represents the largest single infrastructure investment made by the company in its 131-year history. The facility formally opened on 11 October 2020. To reduce integration risk, fitting the Australian-designed and produced Electro Optic Systems R400 Mk 2 RWS to the 133 turreted reconnaissance variants is not expected to begin until after domestically produced 30 mm Lance turrets become available from the MILVEHCOE facility, probably sometime in 2023. In the Boxer's selection process, protection received a higher priority than lethality in the overall evaluation, lethality had a higher priority than mobility, and mobility had a higher priority than sustainability or C4ISR considerations.
: German Army – 403 vehicles, deliveries until 2020. The first German order consisted of 272 drive modules and 272 accompanying mission modules encompassing 125 APCs, 72 armoured medical treatment vehicles, 10 driver training vehicles, and 65 command vehicles.
: Lithuanian Land Force – 91 vehicles, deliveries until 2021. Lithuania will receive Boxer in A2 configuration, 89 as variants of the baseline IFV configuration, plus two driver training vehicles. The IFV breakdown is: 55 IFV squad leader, 18 IFV platoon leader, 12 IFV company leader, and 4 IFV command post. A single IFV will be used for maintenance training. The first two vehicles (driver training configuration) were delivered to Lithuania in December 2017. The first two Boxers in IFV configuration were delivered on 25 June 2019, at which time the Lithuanian MoD stated that 15 vehicles would be delivered to Lithuania in 2019 and that all 89 IFV variants would be delivered by the end of 2021. In Lithuanian service these vehicles are known as IFV Wolf (Vilkas being Lithuanian for wolf). It is reported that the shipment of Spike LR missiles for the Vilkas was completed in June 2021.
: Royal Netherlands Army – 200 vehicles, deliveries from 2013 until 2018. The last Dutch Boxer was produced in July 2018. The variant breakdown following a 2016 contract modification was 12 cargo, 92 engineer (12 of which were subsequently converted to Battle Damage Repair (BDR) configuration), 36 command post, 8 driver training, and 52 ambulance.
Future operators
: British Army - 528 vehicles from 2023. Following an announcement on 31 March 2018 by the UK government that it was re-joining the Boxer programme, the UK government announced on 3 April that the Boxer had been selected by the British Army to meet its Mechanised Infantry Vehicle (MIV) requirement. On 19 July the UK MoD disclosed its intent to order between 400 and 600 Boxers with options for a further 900, leading to a potential maximum procurement of 1,500 vehicles. The first vehicles are currently due to enter service in 2023. As a result of the UK's intended larger order and its return to being a programme partner, an option to build and export Boxer from the UK will be explored. In January 2019 Rheinmetall announced that, subject to government approvals, the company would buy a 55% share of UK-based BAE Systems' land business for £28.6 million. The joint venture (JV) is called Rheinmetall BAE Systems Land (RBSL) and is headquartered at BAE's existing facility in Telford, Shropshire. On 5 November 2019 it was announced that a £2.3 billion deal for Boxer had been signed, covering four variants for a total of 528 units, with deliveries starting in 2023. A contract to make "threat-detection technology" for the Army's new Boxer vehicles has been awarded to Thales UK's site in Glasgow, Scotland. The UK's Ministry of Defence said the Remote Weapons Stations contract was worth £180 million and would last 10 years.
: Algerian Army – production under license was reportedly to start in 2020, with 500 units to be produced by 2023.
: Slovenian Ground Force – In February 2018 the Slovenian Ministry of Defence selected the Boxer as the base vehicle around which to form two new mechanised infantry battlegroups. The procurement was to proceed through OCCAR and a 'kick-off meeting' was held on 13 March 2018. The contract was expected to be signed in Q4 2018 and the first series vehicle was planned to be delivered by the end of 2020. It was reported in early 2019 that Slovenia's accession to OCCAR, alongside a contract for the vehicles, had been suspended, the MoD deciding to conduct a new tactical study likely to affect the procurement of 8×8 wheeled armoured vehicles. According to the latest information, the vehicles are still to be purchased, but in smaller numbers than originally planned. Following a new study, in October 2021 Slovenia decided to purchase only the Boxer armoured vehicles necessary to form the Slovenian Army's central battalion battlegroup committed to NATO. It plans to purchase 45 vehicles, equipped to the Lithuanian model (Vilkas/Wolf) with a 30 mm cannon.
Possible future operators
See also
Comparable vehicles
Stryker
LAV III/LAV AFV/LAV-25/ASLAV
K808 Armored Personnel Carrier
Tusan AFV
Freccia IFV
BTR-90
CM-32
Type 96 Armored Personnel Carrier
Type 16 maneuver combat vehicle
Patria AMV
BTR-4
Saur 2
VBCI
KTO Rosomak
FNSS Pars
MOWAG Piranha
References
External links
Boxer interview with Stefan Lishka, MD of ARTEC
Boxer offer for UK – 10 min detailed interview with Rheinmetall Defence UK
Artec Boxer
Boxer – Infantry fighting vehicle – Rheinmetall Defence
Boxer at ThinkDefence.co.uk
Infantry fighting vehicles
Armoured fighting vehicles of the post–Cold War period
Armoured personnel carriers of Germany
Rheinmetall
Wheeled infantry fighting vehicles
Armoured fighting vehicles of the Netherlands
Military vehicles introduced in the 2000s |
62154752 | https://en.wikipedia.org/wiki/Monticello%20Independent%20Schools | Monticello Independent Schools | Monticello Independent Schools was a school district headquartered in Monticello, Kentucky. It operated Monticello Elementary School and Monticello Middle / High School.
The district was established in 1905. After a wave of school consolidations swept the state in the 1960s and 1970s, it was one of the smallest public school districts in Kentucky. In 2013 the district had 850 students. The school district became insolvent in 2012, and Bill Estep of the Lexington Herald-Leader described the district as "troubled".
On December 17, 2012, the Monticello school board voted to place its schools under Kentucky Department of Education management, which was expected to result in a merger with Wayne County Schools.
On June 30, 2013, it closed and was merged into Wayne County Schools.
Athletics
Monticello High School's boys and girls basketball teams, nicknamed the Trojans and Lady Trojans, were notable, having participated in several Kentucky High School Athletic Association state tournaments and produced numerous All-State players. The 1915 boys team was undefeated and claimed the state championship. The 1921 team was coached by Hall of Fame coach Edgar Diddle, who led them to the state tournament semi-finals. From 1957 until 1980 the Trojans were coached by KHSAA Hall of Fame coach Joe Harper, who led them to seven district championships, six regional titles, and the state championship game in 1960. The Trojans made their final appearance in the KHSAA State Tournament in 1987 and in the Kentucky Class A State Tournament in 1992; the Lady Trojans made their last appearances in the KHSAA State Tournament in 1992 and the Kentucky Class A State Tournament in 2009.
References
External links
Former school districts in Kentucky
Wayne County, Kentucky
2013 disestablishments in Kentucky
Educational institutions disestablished in 2013
1905 establishments in Kentucky
School districts established in 1905 |
517385 | https://en.wikipedia.org/wiki/Dasher%20%28software%29 | Dasher (software) | Dasher is an input method and computer accessibility tool which enables users to compose text without using a keyboard, by entering text on a screen with a pointing device such as a mouse, touch screen, or mice operated by the foot or head. Such instruments could serve as prosthetic devices for disabled people who cannot use standard keyboards, or where the use of one is impractical.
Dasher is free and open-source software, subject to the requirements of the GNU General Public License (GPL), version 2. Dasher is available for operating systems with GTK+ support, i.e. Linux, the BSDs and other Unix-like systems, as well as macOS, Microsoft Windows, Pocket PC, iOS and Android.
Dasher was invented by David J. C. MacKay and developed by David Ward and other members of MacKay's Cambridge research group. The Dasher project is supported by the Gatsby Charitable Foundation and by the EU aegis-project.
Design
To compose text, the writer selects a letter from those displayed on the screen using a pointer, whereupon the system uses a probabilistic predictive model to anticipate the likely character combinations for the next piece of text, and gives these higher priority by displaying them more prominently than less likely letter combinations. This saves the user effort and time as they proceed to choose the next letter from those offered. The process of composing text in this way has been likened to an arcade game, as users zoom through characters that fly across the screen and select them in order to compose text. The system learns from experience which letter combinations are the most popular, and changes its display over time to reflect this.
Features
The Dasher package contains various architecture-independent data files:
alphabet descriptions for over 150 languages
letter-colour settings
training files in all supported languages
References
External links
User interfaces
User interface techniques
Pointing-device text input
Disability software
Free software programmed in C
Free software programmed in C++
Free software programmed in Java (programming language)
GNOME Accessibility
Cross-platform free software
Free and open-source Android software |
56651332 | https://en.wikipedia.org/wiki/ONAP | ONAP | ONAP (Open Network Automation Platform) is an open-source orchestration and automation framework. It is hosted by The Linux Foundation.
History
On February 23, 2017, ONAP was announced as a result of a merger of the OpenECOMP and Open-Orchestrator (Open-O) projects. The goal of the project is to develop a widely used platform for orchestrating and automating physical and virtual network elements, with full lifecycle management.
ONAP was formed as a merger of OpenECOMP, the open source version of AT&T's ECOMP project, and the Open-Orchestrator project, a project begun under the aegis of the Linux Foundation with China Mobile, Huawei and ZTE as lead contributors. The merger brought together both sets of source code and their developer communities, who then elaborated a common architecture for the new project.
The first release of the combined ONAP architecture, code named "Amsterdam", was announced on November 20, 2017. The next release ("Beijing") was released on June 12, 2018.
In January 2018, ONAP became a project within the LF Networking Fund, which consolidated membership across multiple projects into a common governance structure. Most ONAP members became members of the new LF Networking Fund.
Overview
ONAP provides a platform for real-time, policy-driven orchestration and automation of physical and virtual network functions that will enable software, network, IT and cloud providers and developers to rapidly automate new services and support complete lifecycle management.
ONAP incorporates or collaborates with other open-source projects, including OpenDaylight, FD.io, OPNFV and others.
Contributing organizations include AT&T, Samsung, Nokia, Ericsson, Orange, Huawei, Intel, IBM and more.
Architecture
References
External links
Computer networking
Linux Foundation projects |
38201252 | https://en.wikipedia.org/wiki/Samsung%20Ativ%20Tab | Samsung Ativ Tab | The Samsung Ativ Tab is a tablet manufactured by Samsung. The Ativ Tab was announced on August 29, 2012 at IFA 2012, incorporates a dual-core 1.2 GHz Qualcomm Snapdragon S4 processor, and runs the Windows RT operating system.
Despite the mixed reception that its Windows RT operating system has received in comparison to Windows 8, the Ativ Tab itself received positive reviews for its lightweight design, its ability to use USB peripherals, and its overall performance for a first generation Windows RT device. While the Ativ Tab was released in December 2012 in the United Kingdom, its release in Germany and the United States was cancelled due to the lukewarm reception and unclear positioning of Windows RT.
Hardware
The design of the Ativ Tab is relatively similar to that of its Android-based counterparts (such as the Galaxy Note 10.1), built using a mixture of plastic and glass. A micro HDMI port, MicroSD slot, and a full-size USB port are incorporated into the design, as well as a volume rocker, power button, and headphone jack located on the top. A physical Windows button is located directly below the screen. A charging port and dock connector are located on the bottom. The Ativ Tab uses an IPS display with a resolution of 1366x768. The tablet is available with either 32 GB or 64 GB of internal storage.
Availability
The Ativ Tab was originally scheduled for a release in the United Kingdom in November 2012 alongside its Windows Phone 8 counterpart, the Samsung Ativ S, but was delayed into mid-December. The release of both devices was eventually held on December 14, 2012.
In January 2013, Samsung announced that it had cancelled the American release of the Ativ Tab, citing the unclear positioning of the Windows RT operating system, "modest" demand for Windows RT devices, plus the effort and investment required to educate consumers on the differences between Windows 8 and RT as reasons for the move. Mike Abary, senior vice president of Samsung's U.S. PC and tablet businesses, also stated that the company was unable to build the Ativ Tab to meet its target price point, considering that lower cost was intended to be a selling point for Windows RT devices. Samsung has also reportedly planned to pull the Ativ Tab from Germany and other unspecified European markets for similar reasons.
Reception
Whilst demoing the device at IFA, TechRadar praised the Ativ Tab's "crisp" screen, lightweight design, and the ability to expand its functionality and storage with its USB port and MicroSD card slot. However, it was also said that while its processor was relatively responsive, it "certainly wasn't in the same league as the Galaxy Note 2."
Anandtech said that despite not being the best substitute for an actual notebook, due to the lagging performance of ARM-based processors and the rushed nature of the OS, the Ativ Tab was "well executed" for a first-generation Windows RT device. The tablet's relatively "snappy" performance, battery life, and lightweight design were regarded as positive aspects, despite the design itself being considered "nothing particularly new or exciting." The Qualcomm APQ8060A chipset used in the Ativ Tab was also judged to be the best processor for Windows RT so far, with its performance noted as sufficient and "surprisingly competitive" in comparison to the chipsets used in competing Windows RT devices. The rear camera was considered to be neither "horrible" nor "great", and the lack of a keyboard accessory for its dock connector was also noted.
See also
List of Windows RT devices
Samsung Ativ S
References
Ativ Tab
Windows RT devices
Tablet computers
Tablet computers introduced in 2012
849237 | https://en.wikipedia.org/wiki/Contiki | Contiki | Contiki is an operating system for networked, memory-constrained systems with a focus on low-power wireless Internet of Things devices. Extant uses for Contiki include systems for street lighting, sound monitoring for smart cities, radiation monitoring, and alarms. It is open-source software released under the BSD-3-Clause license.
Contiki was created by Adam Dunkels in 2002 and has been further developed by a worldwide team of developers from Texas Instruments, Atmel, Cisco, ENEA, ETH Zurich, Redwire, RWTH Aachen University, Oxford University, SAP, Sensinode, Swedish Institute of Computer Science, ST Microelectronics, Zolertia, and many others. Contiki gained popularity because of its built-in TCP/IP stack and its lightweight preemptive scheduling over an event-driven kernel, features that make it well suited to the Internet of Things. The name Contiki comes from Thor Heyerdahl's famous Kon-Tiki raft.
Contiki provides multitasking and a built-in Internet Protocol Suite (TCP/IP stack), yet needs only about 10 kilobytes of random-access memory (RAM) and 30 kilobytes of read-only memory (ROM). A full system, including a graphical user interface, needs about 30 kilobytes of RAM.
A new branch, known as Contiki-NG ("The OS for Next Generation IoT Devices"), has recently been created.
Hardware
Contiki is designed to run on types of hardware devices that are severely constrained in memory, power, processing power, and communication bandwidth. A typical Contiki system has memory on the order of kilobytes, a power budget on the order of milliwatts, processing speed measured in megahertz, and communication bandwidth on the order of hundreds of kilobits/second. Such systems include many types of embedded systems, as well as old 8-bit computers.
Networking
Contiki provides three network mechanisms: the uIP TCP/IP stack, which provides IPv4 networking, the uIPv6 stack, which provides IPv6 networking, and the Rime stack, which is a set of custom lightweight networking protocols designed for low-power wireless networks. The IPv6 stack was contributed by Cisco and was, when released, the smallest IPv6 stack to receive the IPv6 Ready certification. The IPv6 stack also contains the Routing Protocol for Low power and Lossy Networks (RPL) routing protocol for low-power lossy IPv6 networks and the 6LoWPAN header compression and adaptation layer for IEEE 802.15.4 links.
Rime is an alternative network stack, for use when the overhead of the IPv4 or IPv6 stacks is prohibitive. The Rime stack provides a set of communication primitives for low-power wireless systems. The default primitives are single-hop unicast, single-hop broadcast, multi-hop unicast, network flooding, and address-free data collection. The primitives can be used on their own or combined to form more complex protocols and mechanisms.
Low-power operation
Many Contiki systems are severely power-constrained. Battery operated wireless sensors may need to provide years of unattended operation and with little means to recharge or replace batteries. Contiki provides a set of mechanisms to reduce the power consumption of systems on which it runs. The default mechanism for attaining low-power operation of the radio is called ContikiMAC. With ContikiMAC, nodes can be running in low-power mode and still be able to receive and relay radio messages.
Simulation
The Contiki system includes a sensor network simulator called Cooja, which simulates networks of Contiki nodes. The nodes belong to one of the three following classes: a) emulated nodes, where the node hardware is emulated; b) Cooja nodes, where the Contiki code is compiled for and executed on the simulation host; or c) Java nodes, where the behavior of the node must be reimplemented as a Java class. One Cooja simulation may contain a mix of sensor nodes from any of the three classes. Emulated nodes can also be used to include non-Contiki nodes in a simulated network.
In Contiki 2.6, platforms with the TI MSP430 and Atmel AVR microcontrollers can be emulated.
Programming model
To run efficiently on small-memory systems, the Contiki programming model is based on protothreads. A protothread is a memory-efficient programming abstraction that shares features of both multithreading and event-driven programming to attain a low memory overhead of each protothread. The kernel invokes the protothread of a process in response to an internal or external event. Examples of internal events are timers that fire or messages being posted from other processes. Examples of external events are sensors that trigger or incoming packets from a radio neighbor.
Protothreads are cooperatively scheduled. Thus, a Contiki process must always explicitly yield control back to the kernel at regular intervals. Contiki processes may use a special protothread construct to block waiting for events while yielding control to the kernel between each event invocation.
Features
Contiki supports per-process optional preemptive multithreading, inter-process communication using message passing through events, as well as an optional graphical user interface (GUI) subsystem with either direct graphic support for locally connected terminals or networked virtual display with Virtual Network Computing (VNC) or over Telnet.
A full installation of Contiki includes the following features:
Multitasking kernel
Optional per-application preemptive multithreading
Protothreads
Internet Protocol Suite (TCP/IP) networking, including IPv6
Windowing system and GUI
Networked remote display using Virtual Network Computing
A web browser (claimed to be the world's smallest)
Personal web server
Simple telnet client
Screensaver
Contiki is supported by popular SSL/TLS libraries such as wolfSSL, which includes a port in its 3.15.5 release.
Ports
The Contiki operating system is ported to the following systems:
Microcontrollers
Atmel – ARM, AVR
NXP Semiconductors – LPC1768, LPC2103, MC13224
Microchip – dsPIC, PIC32 (PIC32MX795F512L)
Texas Instruments – MSP430, CC2430, CC2538 (boards include the RE-Mote, Firefly, and Zoul, which combines the CC2538 and CC1200 in a single module), CC2630, CC2650
STMicroelectronics – STM32 W
Computers
Apple – II series
Atari – 8-bit, ST, Portfolio
Casio – Pocket Viewer
Commodore – PET, VIC-20, 64, 128
Tangerine Computer Systems – Oric
NEC – PC-6001
Sharp – Wizard
Intel, AMD, VIA, many others – x86-based Unix-like systems, atop GTK+, or more directly using an X Window System
Game consoles
Atari – Jaguar
Game Park – GP32
Nintendo – Game Boy, Game Boy Advance, Entertainment System (NES)
NEC – TurboGrafx-16 Entertainment SuperSystem (PC Engine)
See also
BeRTOS
ERIKA Enterprise
RIOT
SymbOS
TinyOS
Wheels (operating system)
Comparison of real-time operating systems
Notes
References
External links
Embedded operating systems
Free web browsers
Free software operating systems
TRS-80 Color Computer
Commodore 64 software
Commodore 128 software
Apple II software
Atari 8-bit family software
Atari ST software
Commodore VIC-20 software
ARM operating systems
MIPS operating systems
Software using the BSD license |
152106 | https://en.wikipedia.org/wiki/Inter-process%20communication | Inter-process communication | In computer science, inter-process communication or interprocess communication (IPC) refers specifically to the mechanisms an operating system provides to allow processes to manage shared data. Typically, applications using IPC are categorized as clients and servers, where the client requests data and the server responds to client requests. Many applications are both clients and servers, as commonly seen in distributed computing.
IPC is very important to the design process for microkernels and nanokernels, which reduce the number of functionalities provided by the kernel. Those functionalities are then obtained by communicating with servers via IPC, leading to a large increase in communication when compared to a regular monolithic kernel.
An IPC mechanism is either synchronous or asynchronous. Synchronization primitives may be used to have synchronous behavior with an asynchronous IPC mechanism.
Approaches
Different approaches to IPC have been tailored to different software requirements, such as performance, modularity, and system circumstances such as network bandwidth and latency.
Applications
Remote procedure call interfaces
Java's Remote Method Invocation (RMI)
ONC RPC
XML-RPC or SOAP
JSON-RPC
Message Bus (Mbus) (specified in RFC 3259)
.NET Remoting
gRPC
Platform communication stack
The following are messaging and information systems that use IPC mechanisms but don't implement IPC themselves:
KDE's Desktop Communications Protocol (DCOP) deprecated by D-Bus
D-Bus
OpenWrt uses ubus micro bus architecture
MCAPI Multicore Communications API
SIMPL The Synchronous Interprocess Messaging Project for Linux (SIMPL)
9P (Plan 9 Filesystem Protocol)
Distributed Computing Environment (DCE)
Thrift
ZeroC's Internet Communications Engine (ICE)
ØMQ
Enduro/X Middleware
YAMI4
Enlightenment E16 uses eesh as an IPC mechanism
Operating system communication stack
The following are platform- or programming-language-specific APIs:
Linux Transparent Inter Process Communication (TIPC)
Apple Computer's Apple events, previously known as Interapplication Communications (IAC)
Enea's LINX for Linux (open source) and various DSP and general-purpose processors under OSE
The Mach kernel's Mach Ports
Microsoft's ActiveX, Component Object Model (COM), Microsoft Transaction Server (COM+), Distributed Component Object Model (DCOM), Dynamic Data Exchange (DDE), Object Linking and Embedding (OLE), anonymous pipes, named pipes, Local Procedure Call, MailSlots, Message loop, MSRPC, .NET Remoting, and Windows Communication Foundation (WCF)
Novell's SPX
POSIX mmap, message queues, semaphores, and shared memory
RISC OS's messages
Solaris Doors
System V's message queues, semaphores, and shared memory
OpenBinder Open binder
QNX's PPS (Persistent Publish/Subscribe) service
Distributed object models
The following are platform- or programming-language-specific APIs that use IPC, but do not themselves implement it:
Libt2n for C++ under Linux only, handles complex objects and exceptions
PHP's sessions
Distributed Ruby
Common Object Request Broker Architecture (CORBA)
Electron's asynchronous IPC, shares JSON objects between a main and a renderer process
See also
Computer network programming
Communicating Sequential Processes (CSP paradigm)
Data Distribution Service
Protected procedure call
References
Stevens, W. Richard. UNIX Network Programming, Volume 2, Second Edition: Interprocess Communications. Prentice Hall, 1999.
Ramachandran, U.; Solomon, M.; Vernon, M. "Hardware support for interprocess communication". Proceedings of the 14th Annual International Symposium on Computer Architecture, Pittsburgh, Pennsylvania, 1987, pp. 178–188.
Crovella, M.; Bianchini, R.; LeBlanc, T.; Markatos, E.; Wisniewski, R. "Using communication-to-computation ratio in parallel program design and performance prediction". 1–4 December 1992, pp. 238–245.
External links
Linux IPC with sub-microsecond latencies
Linux ipc(5) man page describing System V IPC
Windows IPC
IPC available using Qt
Unix Network Programming (Vol 2: Interprocess Communications) by W. Richard Stevens
Interprocess Communication and Pipes in C
DIPC, Distributed System V IPC |
24029481 | https://en.wikipedia.org/wiki/K.%20J.%20Somaiya%20Institute%20of%20Engineering%20and%20Information%20Technology | K. J. Somaiya Institute of Engineering and Information Technology | K. J. Somaiya Institute of Engineering and Information Technology (KJSIEIT) was established by the Somaiya Trust in 2001 at the Ayurvihar campus, Sion, Mumbai, India. It is an autonomous institute affiliated to the University of Mumbai.
The institute was set up to impart education in the field of Information Technology and allied branches of Engineering and Technology.
The institute is approved by the All India Council for Technical Education, New Delhi, and DTE Mumbai, is permanently affiliated to the University of Mumbai, and is accredited by Tata Consultancy Services. In 2017 it was placed on the NAAC grade A college list, and ISTE Maharashtra and Goa named it best engineering college for 2017-18. The campus covers 85 acres. In December 2018, KJSIEIT was accredited by the NBA for its UG programs for three years.
Departments
The college's engineering departments are:
Department of Electronics & Telecommunications Engineering
Department of Computer Engineering
Department of Information Technology
Department of Artificial Intelligence and Data Science
Department of Electronics Engineering
Department of Science and Humanities
Campus
The institute is located at Sion, Mumbai. It is part of the 85-acre Somaiya Ayurvihar campus, which also houses the K. J. Somaiya Hospital, Medical College, Cancer Research Centre and K. J. Somaiya College of Physiotherapy. The campus grounds host open cricket and football matches. The campus has one volleyball court, three lawn tennis courts, one rink football court, two half-size football grounds, one turf football ground and an open gym. The campus is located off the Eastern Express Highway near Sion.
The college building is an eight-story structure and houses all the departments of the college, as well as a canteen, an auditorium, and separate girls' and boys' rooms, which have a table tennis table for leisure.
Campus activities
Software Development Cell which is actively involved in Research & Development and consultancy projects with industries and academic institutes.
Research Innovation Incubation Design Lab Somaiya Vidyavihar, at Somaiya Campus focuses on technology and startup incubation.
Affiliation to various professional bodies for Student chapters and Faculty chapters
The Institute of Electrical and Electronics Engineers is the world's largest professional association dedicated to advancing technological innovation and excellence for the benefit of humanity. It inspires a global community through IEEE's highly cited publications, conferences, technology standards, and professional and educational activities.
The Institute of Electronics and Telecommunication Engineers is one of the oldest technical bodies in India. Its KJSIEIT chapter started in 2009.
The students chapter of the Computer Society of India was formed in 2009. Technical and non-technical activities are held throughout the year. The annual technical festival Renaissance is organized jointly by the committee, along with many events through the year.
The Institution of Engineering and Technology is a professional society for the engineering and technology community, with more than 150,000 members in 127 countries. The institute's chapter began in 2013.
Entrepreneurship Cell, beginning in March 2009, is an association managed and driven by students to promote entrepreneurship among students. It is associated with the National Entrepreneurship Network to interact with like-minded people. The Cell organises events like E-Week, Campus Company, Techno-preneur workshop and E-Movies.
Active Training and Placement Cell
Student enrichment activities through various student clubs and cells, including:
Robocon Cell
Cyber Security & Research Cell
IOT Cell
Programming Club
Hobby Club
Street Play Team
Marathi Bhasha Vangmay Mandal, etc.
Interactive/Smart Board learning
Open Gymnasium
Rankings and achievements
Conferred with autonomous status under the University Grants Commission's (UGC) regulations.
Accredited with an A grade by the National Assessment & Accreditation Council of India, with a CGPA of 3.21, in the first cycle.
Winner of the Lander Mission Design Contest "Touch the Jovian Moon" conducted by LPSC ISRO as part of its Pearl Jubilee celebrations.
"Best Engineering College Principal Award 2017" by ISTE Maharashtra & Goa Section in 14thISTE Annual State Convention held at Bharti Vidyapeeth University COE,Pune on 17 February 2017.
KJSIEIT was named best engineering college for 2017-18 by ISTE Maharashtra and Goa.
AA+ by Careers360.
KJSIEIT received "An Active Local Chapter Award" by National Program on Technology Enhanced Learning.
The institute received 3rd rank in India in March 2018, 9th rank in 2017 and 12th rank in 2015 at the national-level Robocon contest, India.
NBA accreditation for UG programs for 3 years.
Placements
To provide appropriate career opportunities to the students, the Training and Placement Cell interacts continuously with different industries and training organizations. Workshops and seminars are organized for academic and overall development of the students.
Leading Recruiters Include:
Avaya Global Connect
Accenture
Computer Sciences Corporation
Patni Compu
NSE IT
VSNL Global
Fortune Infotech
SMG Convonix
Syntel
Mastek
i Flex
Citos
Tata Elxsi
HSBC Global Technology
Tech Mahindra
Infosys
L & T Infotech
ATOS Origin
MU SIGMA
Alumni
KJSIEIT Alumni Association comprises students who have completed their final year of the four-year degree course. The Alumni Association is a platform for ex-students with the desire to make contributions to their Alma Mater. The association arranges alumni meets every year in February.
References
Engineering colleges in Mumbai
Affiliates of the University of Mumbai
Educational institutions established in 2001
2001 establishments in Maharashtra |
34077190 | https://en.wikipedia.org/wiki/Command%20and%20control%20regulation | Command and control regulation | Command and Control (CAC) regulation finds common usage in academic literature and beyond. The relationship between CAC and environmental policy is considered in this article, an area that demonstrates the application of this type of regulation. However, CAC is not limited to the environmental sector and encompasses a variety of different fields.
Definition
Command and Control (CAC) Regulation can be defined as “the direct regulation of an industry or activity by legislation that states what is permitted and what is illegal”. This approach differs from other regulatory techniques, e.g. the use of economic incentives, which frequently includes the use of taxes and subsidies as incentives for compliance.
The ‘command’ is the presentation of quality standards/targets by a government authority that must be complied with. The ‘control’ part signifies the negative sanctions that may result from non-compliance e.g. prosecution.
CAC encompasses a variety of methods. Influencing behaviour through: laws, incentives, threats, contracts and agreements. In CAC, there is a perception of a problem and the solution for its control is developed and subsequently implemented.
In the case of environmental policy and regulation, the CAC approach relies strongly on the use of standards to ensure improvements in the quality of the environment. The CAC approach uses three main types of standards: ambient standards, emission standards, and technology standards. Although these standards can be used individually, it is also possible to use them in combination; in fact, most pollution control programs implement a combination of standards.
Although environmental policy has a long history, a proliferation of policy making in this area occurred in the 1970s and has continued to the present day. The CAC approach dominated policy in industrial nations during that decade because the general focus was on remedial policies rather than more comprehensive prevention techniques. Whilst many view CAC negatively, direct regulatory control is still used in many countries' environmental policy.
Enforcement and compliance
To deliver its objectives, direct regulation must ensure the highest level of compliance possible, which can be achieved through appropriate implementation and enforcement. Non-compliance with CAC regulation presents a serious challenge to its effectiveness. The manner in which CAC is enforced differs between countries. For example, in the US, some regulators tasked with implementing CAC techniques are given rule-making powers, whereas in the UK regulatory standards are more commonly set by departments of government, through both primary and secondary legislation that is subsequently enforced by regulatory bureaucracies. Regulation also differs within countries: in the UK, the current regulatory sanctioning system shows variations in powers and practices among regulators.
Enforcement of CAC often involves the use of uniform sanctions, this can result in small businesses feeling the burdens of regulation more severely than companies of a larger size.
Strengths and weaknesses of approach
A CAC approach in policy is used for several reasons. It has been proposed that by imposing fixed standards with the force of law behind them, CAC can respond more quickly to activities which do not abide by the set standards. It also has benefits politically as the regulator (often the government) is seen to be acting swiftly and decisively.
It is far from a problem-free form of regulation; the 1980s in particular saw CAC subjected to widespread criticism. A good number of the critics tend to favour market-based strategies and are often dubious of the merits of governmental regulatory approaches.
Some issues highlighted include:
Regulatory capture: The concern here is that the relationship between regulators and the regulated may lead to the interests of the public being neglected. In this situation it is possible for the relationship to become too close, leading to capture. This may result in the regulator protecting the interests of the regulated.
Legalism: Command and control has been accused of stifling competition and enterprise. It has been posited that this is an inevitable consequence of the inflexible and complicated rules that can be created by the approach. Over-regulation can result, which in turn can lead to ‘over-inclusive’ regulation.
Standard-setting: Selecting the appropriate standards when implementing a CAC regime is crucial if the regulation is to avoid causing detriment to those that it regulates. This is a challenging obstacle to overcome as the amount of information required can be severe.
Enforcement: This constitutes a very significant dilemma for a CAC regulatory approach. One of the key issues is the expense of enforcement, especially when a complex system of rules has been developed. There are also problems of scope.
Critics of CAC often point to incentive-based regulation as an alternative with terms used such as smart regulation, management-based regulation, responsive regulation and meta-regulation. Possible benefits of this approach may include cheaper administration costs and a reduction in the risk of regulatory capture. However the view that incentive-based regulation is radically different from CAC has been scrutinised. The advantages can be exaggerated, a complex system of rules is often necessary to allow an effective system, this can cause many incentive-based schemes to appear to replicate some of the characteristics of CAC. Inspection and enforcement may also be essential to prevent evasion of liability, again resembling CAC and possibly removing the posited benefits in terms of cost. While practices may be changed at a superficial level through the use of CAC, it may not be able to achieve the changes of behaviour necessary for more sustainable environmental practices.
There are some commentators on the topic who prefer to use ‘direct regulatory instrument’ instead of ‘command and control’ instrument because of the negative connotations surrounding the term.
Efficiency
Much of the literature on regulatory instruments considers efficiency in terms of monetary costs. CAC has been labelled by many critics as "inefficient", a system that spends resources but generates little revenue. The cost of compliance is perceived to be high, which can result in costs that are higher than the sanctions for non-compliance. A summary of 10 studies demonstrated significant differences in cost between CAC and least-cost alternatives. Empirical data suggest that CAC regulations, especially government subsidies in agriculture, often fuel environmental damage, deforestation and overfishing in particular.
Some have moved to defend certain aspects of a CAC approach, arguing against the commonly held belief that these regimes are inherently inefficient. Economic incentives are frequently referred to as a considerably more efficient approach to regulation. The most commonly used incentives in this method relate to tax. The administrative costs of tax collection can be understated. Advocates of incentives have been accused of making simplifying assumptions and not fully taking into account the costs of administering tax systems. In some circumstances, CAC regulation can end up being a less costly option. Whilst economic instruments may act to reduce compliance costs, in certain cases their total costs may actually be higher. This may stem from the high level of monitoring that is required to make an incentivised method viable and successful.
Environmental regulation
Application
Command and control regulation involves the government or a similar body “commanding” the reduction of pollution levels (e.g. by setting emissions limits) and “controlling” the manner in which that reduction is achieved (e.g. by requiring the installation of pollution-control technologies). It has been argued that CAC has the potential to be effective under certain conditions. Its effectiveness can often be determined by whether the problem has a diffuse or a point source: a CAC approach is relatively well suited to point sources, and regulation of these can often achieve success, whereas CAC struggles to appropriately tackle issues that have a diffuse, non-point source. Evans draws on the following example: “it is relatively easy to regulate the emissions from 10 large coal burning power stations in a single country, but far less easy to monitor the emissions caused by millions of motorists or the effluent discharges from tens of thousands of farms across the world.”
In environmental policy, CAC is characterised by three different types of standards. The choice of standard is determined by various factors, including the nature of the environmental problem and the administrative capacities of the governing body:
Environmental Standards. These are centrally driven standards. A legally enforceable numerical limit is often used to determine the 'standard', but the term can be used more broadly, describing more general rules about acceptability.
Target Standards. The condition of the environment into which the pollutant enters is central to these standards. They can be subdivided into ambient and receptor standards: ambient standards set targets that apply to regulators and policy makers, while receptor standards apply to the regulated and state that a specified maximum level is not to be exceeded.
Performance Standards. These determine what releases of a pollutant into the environment are acceptable.
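For illustration only, the three standard types above can be captured as a small enumeration. The class name and description strings below are this sketch's own shorthand, not an official taxonomy:

```python
from enum import Enum

class CACStandard(Enum):
    """The three standard types used in CAC environmental policy (see above)."""
    ENVIRONMENTAL = "centrally driven; often a legally enforceable numerical limit"
    TARGET = "keyed to the receiving environment; ambient or receptor sub-types"
    PERFORMANCE = "defines which releases of a pollutant are acceptable"
```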
It has been suggested that if compliance reaches appropriate levels, there may be a good degree of certainty of environmental results. CAC regulation has the potential to lead to a more rapid resolution of certain environmental policy objectives. It may also provide clarity to those that are subject to the regulation. There may be a clearer understanding of what is required and how to meet those requirements.
It has been argued that the use of the CAC approach to solve environmental problems can result in unexpected consequences if the application is conducted uncritically. Much of environmental policy to date has been associated with the term Disjointed Incrementalism. This term was coined by Lindblom and describes the small and often unplanned changes that have occurred in the field of environmental regulation. These changes in regulation often address small-scale problems with laws tuned towards the particular area of concern. This approach is criticised on the grounds that it does not take into account the wider causes of environmental issues.
International environmental agreements
Montreal Protocol
The 1987 Montreal Protocol is commonly cited as a CAC success story at the international level. The aim of the agreement was to limit the release of chlorofluorocarbons (CFCs) into the atmosphere and thereby halt the depletion of ozone (O3) in the stratosphere.
A number of factors contributed to Montreal’s success, including:
The problem and solution were both clearly defined and supported by industry (albeit not initially)
The Ozone hole was easily measurable
There was an effective scientific lobbying alliance that played a key role in convincing the US Government and the commercial sector (in particular DuPont, then one of the largest manufacturers of CFCs)
Defining this agreement as a CAC approach is slightly problematic, as the agreement does not directly instruct states how to meet their targets. However, because the aim of the Montreal Protocol has been to eliminate the source of CFC emissions, the only really feasible way for a state to achieve this is a ban on ozone-depleting substances. Montreal is therefore considered by some to be a 'special case' of a successful CAC approach.
Climate change
The traditional model of command and control typically involved areas of environmental concern being dealt with by national governments. In recent decades, transboundary environmental problems have risen in prominence. This shift has exposed many of the limitations of a command and control approach when it is applied to a larger and more complex arena.
Climate change is often used to exemplify the perceived failings of this regulatory approach. It is a good example of a concern that is complex, full of uncertainties and difficult for many people to understand, which may go some way towards explaining the apparent incompatibility of climate change and a CAC approach. Mitigating climate change requires action of a much more proactive nature than traditional CAC models are able to deliver.
One reason for the lack of compatibility with many international environmental agreements is the manner in which the international community is organised. International law cannot be implemented in the same way as law at the national level. The CAC approach's heavy reliance on prohibiting certain activities and then enforcing those prohibitions through sanctions makes scaling up to the international level problematic. Without a strong international enforcement body, it is unlikely that CAC will be an effective tool for dealing with most transboundary environmental issues, climate change included.
Future of command and control regulation in environmental policy
The international nature of many contemporary environmental issues makes CAC regulatory approaches difficult. Since the 1970s, enthusiasm for the implementation of economic incentives for regulation has been on the increase, due in part to disenchantment with command and control. The shift away from CAC shows no sign of slowing, and the increased participation of a variety of actors may be the answer. The role of environmental NGOs in policy making has changed drastically in recent decades: their numbers, and the influence they exert over national governments and in negotiations at the international level, have risen. The involvement of NGOs has assisted the development of international policy in a number of ways. A great deal of environmental policy has been influenced by research collected by these organisations. They also act as whistleblowers, updating regulators on progress and compliance. A blend of different approaches, involving a range of actors and regulatory types, may be the best answer. However, many governments are likely to persist with CAC because of its political benefits and because it is not always as inflexible and inefficient as many economists suggest.
References
Economics of regulation |
12565894 | https://en.wikipedia.org/wiki/Public%20Interest%20Declassification%20Board | Public Interest Declassification Board | The Public Interest Declassification Board (PIDB) is an advisory committee established by the United States Congress with the official mandate of promoting the fullest possible public access to a thorough, accurate, and reliable documentary record of significant U.S. national security decisions and activities. The Board is composed of nine individuals: five appointed by the President of the United States and one each appointed by the Speaker of the House, House Minority Leader, Senate Majority Leader, and Senate Minority Leader. Appointees must be U.S. citizens preeminent in the fields of history, national security, foreign policy, intelligence policy, social science, law, or archives.
Established by the Public Interest Declassification Act of 2000 (Title VII of P.L. 106-567, 114 Stat. 2856), the board advises the President of the United States regarding issues pertaining to national classification and declassification policy. Section 1102 of the Intelligence Reform and Terrorism Prevention Act of 2004 extended and modified the Board.
The director of the Information Security Oversight Office (ISOO) serves as the executive secretary of the PIDB, and ISOO staff provides support on a reimbursable basis. In December 2020, President Donald Trump appointed Acting Under Secretary of Defense for Intelligence Ezra Cohen-Watnick to chair the Public Interest Declassification Board.
Functions
Advises the President and other executive branch officials on the systematic, thorough, coordinated, and comprehensive identification, collection, review for declassification, and release of declassified records and materials that are of archival value, including records and materials of extraordinary public interest.
Promotes public access to thorough, accurate, and reliable documentary records of significant U.S. national security decisions and significant U.S. national security activities in order to: support the oversight and legislative functions of Congress; support the policymaking role of the executive branch; respond to the interest of the public in national security matters; and promote reliable historical analysis and new avenues of historical study in national security matters.
Provides recommendations to the President for the identification, collection, and review for declassification of information of public interest that would not undermine U.S. national security.
Advises executive branch officials on policies deriving from Executive orders regarding the classification and declassification of national security information.
Makes recommendations to the President regarding congressional committee requests to declassify certain records or to reconsider a declination to declassify specific records.
Board members
Martin Faga - Appointed to a four-year term by President George W. Bush in October 2004 and reappointed for a three-year term in January 2009. In 2005, he was appointed to the President's Foreign Intelligence Advisory Board. Faga was president and chief executive officer of the Mitre Corporation from 2000 to 2006 and is currently a member of its board of trustees. Before joining Mitre, Faga served from 1989 until 1993 as Assistant Secretary of the Air Force for Space with primary emphasis on policy, strategy, and planning. At the same time, he served as Director of the National Reconnaissance Office (NRO). Faga's career included service as a staff member of the House Permanent Select Committee on Intelligence, where he headed the program and budget staff; as an engineer at the Central Intelligence Agency; and as a research and development officer in the Air Force. Faga received bachelor's and master's degrees in electrical engineering from Lehigh University in 1963 and 1964.
Herbert O. Briick - Appointed to a three-year term by President George W. Bush in October 2008. Briick is currently a senior analyst for a subsidiary of General Dynamics. Briick retired from the Central Intelligence Agency in January 2008, following a 33-year career which included service in every directorate of the Agency. For the last five years of his career he was responsible for the management of the CIA declassification program. In that capacity he took part in a wide variety of declassification issues involving the National Security Council, the National Archives and Records Administration, the presidential libraries, the Office of the Historian in the Department of State, other members of the Intelligence Community, the Congress, and non-governmental organizations. He promoted a number of successful initiatives to release previously classified National Intelligence Estimates and other CIA records of historic significance. Briick was awarded the Career Intelligence Medal in recognition of his service to the CIA. Briick graduated from the University of Notre Dame in 1973 with a Bachelor of Arts in history and received his Master of Arts in Law and Diplomacy in international security studies from the Fletcher School of Law and Diplomacy at Tufts University in 1975.
Elizabeth Rindskopf Parker - Appointed to a three-year term by President George W. Bush in October 2004 and reappointed for 3 years on October 23, 2008. She joined McGeorge School of Law as its eighth dean in 2002 from her position as general counsel for the University of Wisconsin System. Previously, she served as general counsel for the CIA; Principal Deputy Legal Adviser, U.S. Department of State; general counsel, National Security Agency; and as Acting Assistant Director (Mergers and Acquisitions) at the Federal Trade Commission. Parker also served as the director of the New Haven Legal Assistance Association. Early in her career, Parker gained significant experience in the federal courts with a variety of litigation involving discrimination and civil liberties issues, including two successful oral arguments before the Supreme Court of the United States and numerous arguments before various courts of appeal. Parker graduated cum laude from the University of Michigan in 1965 and received her J.D. from the University of Michigan Law School in 1968.
Jennifer Sims - Appointed to a three-year term by President George W. Bush in December 2008. Sims is Visiting Professor in the Security Studies Program and Director of Intelligence Studies at Georgetown University. Prior to this, she taught as a professorial lecturer at School of Advanced International Studies at Johns Hopkins University. Sims served as Senior Intelligence Advisor to the Under Secretary of State for Management from December 1998 to May 2001 and as Deputy Assistant Secretary for Intelligence Policy and Coordination in the Bureau of Intelligence and Research from 1994 to 1998. From November 1990 to April 1994, she served as a professional staff member on the Senate Select Committee on Intelligence and as foreign affairs and defense advisor to Senator John Danforth. In 1998, Sims received the National Intelligence Distinguished Service Medal for her work on developing intelligence support for diplomatic operations. She has written extensively on nuclear arms control and intelligence, including Icarus Restrained: An Intellectual History of American Arms Control, 1945-1960 (Westview Press, 1991) and, most recently, co-edited volumes with Burton Gerber, Transforming US Intelligence (Georgetown University Press, 2005) and Vaults Mirrors and Masks: Problems in US Counterintelligence Policy (Georgetown University Press, 2008). Sims received her Bachelor of Arts from Oberlin College and her Master of Arts (1978) and Ph.D (1985) from the School of Advanced International Studies at Johns Hopkins University.
David E. Skaggs - David Skaggs was appointed to the PIDB for a 2-year term by the Minority Leader of the U.S. House of Representatives in January 2005. He was reappointed for a second term in July 2007, and then reappointed for a third term in June 2009. He is Chairman of the board of the Office of Congressional Ethics and the former executive director of the Colorado Department of Higher Education (2007-2009). He served 12 years in Congress (1987–1999) as the Representative from the 2nd Congressional District in Colorado, including 8 years on the House Appropriations Committee and 6 years on the House Permanent Select Committee on Intelligence, where he devoted particular attention to classification and information security issues. After leaving Congress, he was the founding executive director of the Center for Democracy and Citizenship at the Council for Excellence in Government (1999-2006), counsel to a Washington, DC–based law firm, and 3 years as an adjunct professor at the University of Colorado. Mr. Skaggs was a Colorado State Representative (1981–1987), including two terms as Minority Leader, and was chief of staff for Congressman Timothy E. Wirth of Colorado from 1974 to 1977. Before serving in elected office, Mr. Skaggs practiced law in Boulder, CO; as a judge advocate in the United States Marine Corps; and briefly in New York City. He has a B.A. in philosophy from Wesleyan University (1964) and an LL.B from Yale Law School (1967).
William O. (Bill) Studeman - Appointed to a three-year term by Speaker of the House Dennis Hastert in June 2006, and reappointed for a three-year term in June 2009. Studeman is a retired United States Navy admiral. He is a distinguished graduate of both the Naval War College and the National War College, and as a restricted line naval intelligence officer, his flag tours included Director of Long Range Navy Planning in the Office of the Chief of Naval Operations, director of the National Security Agency, and Deputy Director of Central Intelligence (DDCI) with two extended periods as acting Director of Central Intelligence (DCI). As DDCI, he served in both the George H. W. Bush and Clinton administrations under DCIs Robert Gates, R. James Woolsey, Jr., and John M. Deutch. Studeman retired from the Navy in 1995 after almost 35 years of service and later became vice president of Northrop Grumman and deputy general manager of Mission Systems. He was recently a commissioner on the Presidential Commission on Weapons of Mass Destruction, and is currently serving on the National Science Advisory Board for Biosecurity. He is a member of the Defense Science Board, the board of the Defense Intelligence Agency's Joint Military Intelligence College, and other advisory boards. Studeman holds a B.A. in history from Sewanee: The University of the South and an M.A. in public and international affairs from George Washington University, as well as several honorary doctorates.
Sanford J. Ungar - Appointed to a three-year term by Senate Majority Leader Harry Reid in March 2008. He is the tenth president of Goucher College in Baltimore, Maryland. Ungar obtained his B.A. in government from Harvard College and a master's degree in international history from the London School of Economics. In May 1999 he was awarded an honorary Doctorate of Humane Letters by Wilkes University in his hometown of Wilkes-Barre, Pennsylvania. Prior to assuming his position at Goucher, Ungar was Director of the Voice of America for two years. From 1986 until 1999, he was dean of the American University School of Communication. The author of many magazine and newspaper articles on topics of political and international interest, Ungar has spoken frequently around the United States and in other countries on issues of American foreign policy and domestic politics, free expression, human rights, and immigration. Sanford Ungar has been Washington editor of The Atlantic, managing editor of Foreign Policy magazine, and a staff writer for The Washington Post. He was a correspondent for UPI in Paris and for Newsweek in Nairobi, and for many years contributed to The Economist, as well as The New York Times Magazine.
By-Laws
The Board is assigned functions and membership by the Public Interest Declassification Act of 2000 (P.L. 106-567, December 27, 2000) as amended by the Intelligence Reform and Terrorism Prevention Act of 2004, notably section 703.
The U.S. President selects the Chairperson from among the members. The members may also elect from among themselves a Vice Chairperson, who fills in when the Chairperson is not present.
Meetings of the Board are only official when a quorum is present, which by law is a majority of the members. Such meetings of the board are by law generally open to the public. In those instances where the Board finds it necessary to conduct business at a closed meeting, attendance at meetings of the Board shall be limited to those persons necessary for the Board to fulfill its functions in a complete and timely manner, as determined by the Chairperson. The Executive Secretary is responsible for the preparation of each meeting's minutes and the distribution of draft minutes to members. Approved minutes will be maintained among the records of the Board.
Decisions can be made by Board votes at meetings or by the membership outside the context of a formal Board meeting. The Executive Secretary shall record and retain such votes in a documentary form and immediately report the results to the Chairperson and other members.
The staff of NARA's Information Security Oversight Office (ISOO) provide program and administrative support for the Board and the office director serves as Executive Secretary to the Board. The Board may seek detailees from its member agencies to augment the staff of the Information Security Oversight Office in support of the Board.
Board records are maintained by the Executive Secretary. Freedom of Information Act requests and other requests for a document that originated within an agency other than the Board are referred to that agency.
Article VIII sets forth the procedures for considering a proper request under the Act from a committee of jurisdiction in the Congress for the Board to make a recommendation to the President regarding the declassification of certain records.
Standards for decision. A recommendation to declassify a record in whole or in part requires a determination by the Board, after careful consideration of the views of the original classifying authority, that declassification is in the public interest. A decision to recommend declassification in whole or in part requires the affirmative vote of a majority of a quorum of the Board, and of no less than four members of the Board, and the vote of each member present shall be recorded.
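Under one reading of this standard, the pass/fail arithmetic can be sketched as follows. The function name is hypothetical, and interpreting "majority of a quorum" as a majority of the five-member quorum is an assumption of this sketch, not a statement of the bylaws:

```python
def recommendation_passes(votes_for: int, members_present: int, total_members: int = 9) -> bool:
    """Illustrative check of the Board's declassification-vote standard."""
    quorum = total_members // 2 + 1    # a quorum is a majority of the nine members: 5
    if members_present < quorum:
        return False                   # no quorum, no valid meeting vote
    # a majority of a quorum, but never fewer than four affirmative votes
    needed = max(quorum // 2 + 1, 4)
    return votes_for >= needed
```

With nine members, the threshold works out to four affirmative votes, matching the "no less than four members" floor stated in the bylaws.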
Resolution of Requests. The Board may recommend that the President: (1) take no action pursuant to the request; (2) declassify the record(s) in whole or in part, pursuant to action taken in accordance with paragraph C; or (3) remand the matter to the agency responsible for the record(s) for further consideration and a timely response to the Board.
Notification. The Chair shall promptly convey to the President, through the Assistant to the President for National Security Affairs and to the agency head responsible for the record(s), the Board's recommendation, including a written justification for its recommendation.
The approval and amendment of these bylaws shall require the affirmative vote of at least five of the Board's members. The Executive Secretary shall submit the approved bylaws and their amendments for publication in the Federal Register.
Protection of Classified Information. Any classified information contained in the request file shall be handled and protected in accordance with the Order and its implementing directives. Information that is subject to a request for declassification under this section shall remain classified unless and until a final decision is made by the President or by the agency head responsible for the record(s) to declassify it.
Decisions to declassify and release information rest with the President or the agency responsible for the records, not this Board.
The Board reports annually to Congress. Amendments to the Board's bylaws are published in the Federal Register.
Meetings
Declassification Policy Forum
On May 27, 2009, President Barack Obama signed a presidential memorandum ordering the review of Executive Order 12958, as amended, "Classified National Security Information". The review of the Order was to be completed within 90 days. On June 2, 2009, the National Security Advisor asked the PIDB to assist in this review by soliciting recommendations for revisions to the Order to ensure adequate public input as the review moved forward.
The PIDB solicited these recommendations via an online Declassification Policy Forum.
The four topics of discussion were:
Declassification Policy
Creation of a National Declassification Center
Classification Policy
Technology Challenges and Opportunities
The forum received more than 150 comments from members of the public. The Public Interest Declassification Board sent a letter and a summary of the comments to the National Security Advisor.
The forum ran from June 29 through July 19, 2009.
Reports
"Improving Declassification" (2007)
"Transforming the Security Classification System" (November 2012)
"Setting Priorities: An Essential Step in Transforming Declassification" (December 2014)
See also
Classified information in the United States
Controlled Unclassified Information
Interagency Security Classification Appeals Panel
References
External links
Official website
Independent agencies of the United States government
United States government secrecy |
245926 | https://en.wikipedia.org/wiki/Self-driving%20car | Self-driving car | A self-driving car, also known as an autonomous vehicle (AV), driverless car, or robotic car (robo-car), is a car incorporating vehicular automation, that is, a ground vehicle that is capable of sensing its environment and moving safely with little or no human input. The future of this technology may have an impact on multiple industries and other circumstances.
Self-driving cars combine a variety of sensors to perceive their surroundings, such as thermographic cameras, radar, lidar, sonar, GPS, odometry and inertial measurement units. Advanced control systems interpret sensory information to identify appropriate navigation paths, as well as obstacles and relevant signage.
Possible implementations of the technology include personal self-driving vehicles, shared robotaxis, and connected vehicle platoons. Several projects to develop a fully self-driving commercial car are in various stages of development, but there are no self-driving cars available for everyday consumers.
Autonomy in vehicles is often categorized in six levels, according to a system developed by SAE International (SAE J3016, revised periodically). The SAE levels can be roughly understood as: Level 0 - no automation; Level 1 - hands on/shared control; Level 2 - hands off; Level 3 - eyes off; Level 4 - mind off; and Level 5 - steering wheel optional.
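The shorthand above can be captured in a small lookup table. The helper function and its binary supervise/no-supervise split are an illustrative simplification of this sketch, not part of SAE J3016:

```python
# Shorthand for the six SAE J3016 driving-automation levels described above.
SAE_LEVELS = {
    0: "no automation",
    1: "hands on / shared control",
    2: "hands off",
    3: "eyes off",
    4: "mind off",
    5: "steering wheel optional",
}

def human_must_supervise(level: int) -> bool:
    # At Levels 0-2 the human driver must supervise at all times; from
    # Level 3 upward the system handles the driving task in its domain.
    return level <= 2
```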
Vehicles operating at Level 3 and above remain a marginal portion of the market. Waymo became the first service provider to offer driverless taxi rides to the general public, in a part of Phoenix, Arizona, in 2020. However, while there is no driver in the car, the vehicles still have remote human overseers.
In March 2021, Honda became the first manufacturer to provide a legally approved Level 3 vehicle, and Toyota operated a potentially Level 4 service around the Tokyo 2020 Olympic Village. In 2021, Nuro was allowed to start autonomous commercial delivery operations in California.
In December 2021, Mercedes-Benz became the second manufacturer to receive legal approval for a Level 3 vehicle complying with legal requirements.
In China, two publicly accessible trials of robotaxis have been launched, in 2020 in Shenzhen's Pingshan District by Chinese firm AutoX and in 2021 at Shougang Park in Beijing by Baidu, a venue for the 2022 Winter Olympics.
History
Experiments have been conducted on automated driving systems (ADS) since at least the 1920s; trials began in the 1950s. The first semi-automated car was developed in 1977 by Japan's Tsukuba Mechanical Engineering Laboratory, and required specially marked streets that were interpreted by two cameras on the vehicle and an analog computer. The vehicle reached speeds up to with the support of an elevated rail.
A landmark autonomous car appeared in the 1980s, with Carnegie Mellon University's Navlab and ALV projects funded by the United States' Defense Advanced Research Projects Agency (DARPA) starting in 1984 and Mercedes-Benz and Bundeswehr University Munich's EUREKA Prometheus Project in 1987. By 1985, the ALV had demonstrated self-driving speeds on two-lane roads of , with obstacle avoidance added in 1986, and off-road driving in day and night time conditions by 1987. A major milestone was achieved in 1995, with CMU's NavLab 5 completing the first autonomous coast-to-coast drive of the United States. Of the between Pittsburgh, Pennsylvania and San Diego, California, were autonomous (98.2%), completed with an average speed of . From the 1960s through the second DARPA Grand Challenge in 2005, automated vehicle research in the United States was primarily funded by DARPA, the US Army, and the US Navy, yielding incremental advances in speeds, driving competence in more complex conditions, controls, and sensor systems. Companies and research organizations have developed prototypes.
The US allocated US$650 million in 1991 for research on the National Automated Highway System, which demonstrated automated driving through a combination of automation embedded in the highway with automated technology in vehicles, and cooperative networking between the vehicles and with the highway infrastructure. The program concluded with a successful demonstration in 1997 but without clear direction or funding to implement the system on a larger scale. Partly funded by the National Automated Highway System and DARPA, the Carnegie Mellon University Navlab drove across America in 1995, or 98% of it autonomously. Navlab's record achievement stood unmatched for two decades until 2015, when Delphi improved it by piloting an Audi, augmented with Delphi technology, over through 15 states while remaining in self-driving mode 99% of the time. In 2015, the US states of Nevada, Florida, California, Virginia, and Michigan, together with Washington, DC, allowed the testing of automated cars on public roads.
From 2016 to 2018, the European Commission funded an innovation strategy development for connected and automated driving through the Coordination Actions CARTRE and SCOUT. Moreover, the Strategic Transport Research and Innovation Agenda (STRIA) Roadmap for Connected and Automated Transport was published in 2019.
In November 2017, Waymo announced that it had begun testing driverless cars without a safety driver in the driver position; however, there was still an employee in the car. An October 2017 report by the Brookings Institution found that $80 billion had been reported as invested in all facets of self-driving technology up to that point, but that it was "reasonable to presume that total global investment in autonomous vehicle technology is significantly more than this."
In October 2018, Waymo announced that its test vehicles had travelled in automated mode for over , increasing by about per month. In December 2018, Waymo was the first to commercialize a fully autonomous taxi service in the US, in Phoenix, Arizona. In October 2020, Waymo launched a geo-fenced driverless ride hailing service in Phoenix. The cars are being monitored in real-time by a team of remote engineers, and there are cases where the remote engineers need to intervene.
In March 2019, ahead of the autonomous racing series Roborace, Robocar set the Guinness World Record for being the fastest autonomous car in the world. In pushing the limits of self-driving vehicles, Robocar reached 282.42 km/h (175.49 mph) – an average confirmed by the UK Timing Association at Elvington in Yorkshire, UK.
In 2020, a National Transportation Safety Board chairman stated that no self-driving cars (SAE Level 3+) were available for consumers to purchase in the US.
On 5 March 2021, Honda began leasing in Japan a limited edition of 100 Legend Hybrid EX sedans equipped with newly approved Level 3 automated driving equipment, whose autonomous "Traffic Jam Pilot" driving technology had been granted safety certification by the Japanese government and legally allows drivers to take their eyes off the road.
Definitions
There is some inconsistency in the terminology used in the self-driving car industry. Various organizations have proposed to define an accurate and consistent vocabulary.
In 2014, such confusion has been documented in SAE J3016 which states that "some vernacular usages associate autonomous specifically with full driving automation (Level 5), while other usages apply it to all levels of driving automation, and some state legislation has defined it to correspond approximately to any ADS [automated driving system] at or above Level 3 (or to any vehicle equipped with such an ADS)."
Terminology and safety considerations
Modern vehicles provide features such as keeping the car within its lane, speed control, or emergency braking. Those features alone are considered driver-assistance technologies because they still require a human driver in control, whereas fully automated vehicles drive themselves without human driver input.
According to Fortune, some newer vehicles' technology names—such as AutonoDrive, PilotAssist, Full-Self Driving or DrivePilot—might confuse the driver, who may believe no driver input is expected when in fact the driver needs to remain involved in the driving task. According to the BBC, confusion between those concepts leads to deaths.
For this reason, some organizations such as the AAA try to provide standardized naming conventions for features such as ALKS, which aim to have the capacity to manage the driving task but are not yet approved as automated vehicles in any country. The Association of British Insurers considers the use of the word autonomous in marketing for modern cars to be dangerous, because car ads make motorists think 'autonomous' and 'autopilot' mean a vehicle can drive itself, when such cars still rely on the driver to ensure safety. Technology able to drive a car is still in its beta stage.
Some car makers suggest or claim that vehicles are self-driving when they are not able to manage some driving situations. Despite its product being called Full Self-Driving, Tesla stated that its offering should not be considered a fully autonomous driving system. This risks drivers becoming excessively confident and engaging in distracted driving behaviour, leading to crashes. In Great Britain, a fully self-driving car is only a car registered on a specific list. There have also been proposals to apply aviation automation safety knowledge to discussions of the safe implementation of autonomous vehicles, given the experience the aviation sector has gained on safety topics over the decades.
According to the SMMT, "There are two clear states – a vehicle is either assisted with a driver being supported by technology or automated where the technology is effectively and safely replacing the driver."
Autonomous vs. automated
Autonomous means self-governing. Many historical projects related to vehicle automation have been automated (made automatic) only through heavy reliance on artificial aids in their environment, such as magnetic strips. Autonomous control implies satisfactory performance under significant uncertainties in the environment and the ability to compensate for system failures without external intervention.
One approach is to implement communication networks both in the immediate vicinity (for collision avoidance) and farther away (for congestion management). Such outside influences in the decision process reduce an individual vehicle's autonomy, while still not requiring human intervention.
, most commercial projects focused on automated vehicles that did not communicate with other vehicles or with an enveloping management regime. EuroNCAP defines autonomous in "Autonomous Emergency Braking" as: "the system acts independently of the driver to avoid or mitigate the accident", which implies the autonomous system is not the driver.
In Europe, the words automated and autonomous might be used together. For instance, Regulation (EU) 2019/2144 of the European Parliament and of the Council of 27 November 2019 on type-approval requirements for motor vehicles (...) defines "automated vehicle" and "fully automated vehicle" based on their autonomous capacity:
"automated vehicle" means a motor vehicle designed and constructed to move autonomously for certain periods of time without continuous driver supervision but in respect of which driver intervention is still expected or required;
"fully automated vehicle" means a motor vehicle that has been designed and constructed to move autonomously without any driver supervision;
In British English, the word automated alone may have several meanings, as in the sentence: "Thatcham also found that the automated lane keeping systems could only meet two out of the twelve principles required to guarantee safety, going on to say they cannot, therefore, be classed as 'automated driving', instead it claims the tech should be classed as 'assisted driving'." The first occurrence of the word "automated" refers to a UNECE automated system, while the second refers to the British legal definition of an automated vehicle. British law interprets the meaning of "automated vehicle" based on the interpretation section relating to a vehicle "driving itself" and an insured vehicle.
Autonomous versus cooperative
To enable a car to travel without any driver embedded within the vehicle, some companies use a remote driver.
According to SAE J3016,
Classifications
Self-driving car
PC Magazine defines a self-driving car as "a computer-controlled car that drives itself." The Union of Concerned Scientists states that self-driving cars are "cars or trucks in which human drivers are never required to take control to safely operate the vehicle. Also known as autonomous or 'driverless' cars, they combine sensors and software to control, navigate, and drive the vehicle."
The British Automated and Electric Vehicles Act 2018 considers a vehicle to be "driving itself" if the vehicle "is operating in a mode in which it is not being controlled, and does not need to be monitored, by an individual".
SAE classification
A classification system with six levels – ranging from fully manual to fully automated systems – was published in 2014 by the standardization body SAE International as J3016, Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems; the details are revised periodically. This classification is based on the amount of driver intervention and attentiveness required, rather than the vehicle's capabilities, although these are loosely related. In 2013, the National Highway Traffic Safety Administration (NHTSA) in the United States had released its own formal classification system. After SAE updated its classification in 2016, as J3016_201609, NHTSA adopted the SAE standard, and the SAE classification became widely accepted.
Levels of driving automation
In SAE's automation level definitions, "driving mode" means "a type of driving scenario with characteristic dynamic driving task requirements (e.g., expressway merging, high speed cruising, low speed traffic jam, closed-campus operations, etc.)"
Level 0: The automated system issues warnings and may momentarily intervene but has no sustained vehicle control.
Level 1 ("hands on"): The driver and the automated system share control of the vehicle. Examples are systems where the driver controls steering and the automated system controls engine power to maintain a set speed (cruise control) or engine and brake power to maintain and vary speed (adaptive cruise control, or ACC); and parking assistance, where steering is automated while speed is under manual control. The driver must be ready to retake full control at any time. Lane Keeping Assistance (LKA) Type II is a further example of Level 1 self-driving. Automatic emergency braking, which alerts the driver to a crash and permits full braking capacity, is also a Level 1 feature, according to Autopilot Review magazine.
Level 2 ("hands off"): The automated system takes full control of the vehicle: accelerating, braking, and steering. The driver must monitor the driving and be prepared to intervene immediately at any time if the automated system fails to respond properly. The shorthand "hands off" is not meant to be taken literally – contact between hand and wheel is often mandatory during SAE 2 driving, to confirm that the driver is ready to intervene, and the driver's eyes might be monitored by cameras to confirm that their attention stays on traffic. Literal hands-off driving is sometimes called level 2.5, although there are no official half levels. A common example is adaptive cruise control combined with lane keeping assist technology, so that the driver simply monitors the vehicle, such as "Super Cruise" in the Cadillac CT6 by General Motors or Ford's F-150 BlueCruise.
Level 3 ("eyes off"): The driver can safely turn their attention away from the driving tasks, e.g. the driver can text or watch a film. The vehicle will handle situations that call for an immediate response, like emergency braking. The driver must still be prepared to intervene within some limited time, specified by the manufacturer, when called upon by the vehicle to do so. The automated system can be thought of as a co-driver that alerts the driver in an orderly fashion when it is their turn to drive. An example would be a traffic jam chauffeur; another would be a car satisfying the international Automated Lane Keeping System (ALKS) regulations.
Level 4 ("mind off"): As Level 3, but no driver attention is ever required for safety, e.g. the driver may safely go to sleep or leave the driver's seat. However, self-driving is supported only in limited spatial areas (geofenced) or under special circumstances. Outside of these areas or circumstances, the vehicle must be able to safely abort the trip, e.g. slow down and park the car, if the driver does not retake control. An example would be a robotic taxi or a robotic delivery service that covers selected locations in an area at specific times.
Level 5 ("steering wheel optional"): No human intervention is required at all. An example would be a robotic vehicle that works on all kinds of surfaces, all over the world, all year round, in all weather conditions.
In the formal SAE definition below, an important transition is from SAE Level 2 to SAE Level 3 in which the human driver is no longer expected to monitor the environment continuously. At SAE 3, the human driver still has responsibility to intervene when asked to do so by the automated system. At SAE 4 the human driver is always relieved of that responsibility and at SAE 5 the automated system will never need to ask for an intervention.
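The monitoring transition described above can be sketched as a small lookup table. The level names and responsibilities summarize the SAE descriptions in this section; the dictionary structure and the helper function are illustrative assumptions, not part of the J3016 standard:

```python
# Illustrative summary of the SAE J3016 levels as described above.
# The values paraphrase this section; the helper function is a
# hypothetical convenience, not defined by the standard itself.
SAE_LEVELS = {
    0: ("no automation", "system may warn or momentarily intervene only"),
    1: ("hands on", "driver and system share control; driver ready to retake control"),
    2: ("hands off", "system steers and controls speed; driver must monitor continuously"),
    3: ("eyes off", "driver may look away but must intervene within a limited time"),
    4: ("mind off", "no driver attention needed within geofenced areas or circumstances"),
    5: ("steering wheel optional", "no human intervention required anywhere"),
}

def driver_must_monitor(level: int) -> bool:
    """At SAE 0-2 the human driver must monitor the environment
    continuously; from SAE 3 upward that continuous duty is lifted."""
    return level <= 2
```

Under this sketch, the important SAE 2 to SAE 3 transition is exactly the point where `driver_must_monitor` flips from `True` to `False`.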
Criticism of SAE
The SAE Automation Levels have been criticized for their technological focus. It has been argued that the structure of the levels suggests that automation increases linearly and that more automation is better, which may not always be the case. The SAE Levels also do not account for changes that may be required to infrastructure and road user behaviour.
Technology
The characteristics of autonomous vehicles, as digital technology, are distinguishable from other types of technologies and vehicles. These characteristics mean autonomous vehicles are able to be more transformative and agile to possible changes. The characteristics include hybrid navigation, homogenization and decoupling, vehicle communication systems, reprogrammable and smart, digital traces and modularity.
Hybrid navigation
There are different systems that help the self-driving car control the car, including the car navigation system, the location system, the electronic map, the map matching, the global path planning, the environment perception, the laser perception, the radar perception, the visual perception, the vehicle control, the perception of vehicle speed and direction, and the vehicle control method.
Driverless car designers are challenged with producing control systems capable of analysing sensory data in order to provide accurate detection of other vehicles and the road ahead. Modern self-driving cars generally use Bayesian simultaneous localization and mapping (SLAM) algorithms, which fuse data from multiple sensors and an offline map into current location estimates and map updates. Waymo has developed a variant of SLAM with detection and tracking of other moving objects (DATMO), which also handles obstacles such as cars and pedestrians. Simpler systems may use roadside real-time locating system (RTLS) technologies to aid localization. Typical sensors include lidar (light detection and ranging), stereo vision, GPS and IMU. Control systems on automated cars may use sensor fusion, an approach that integrates information from a variety of sensors on the car to produce a more consistent, accurate, and useful view of the environment. Heavy rainfall, hail, or snow could impede the car's sensors.
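The core idea of sensor fusion can be illustrated in one dimension by inverse-variance weighting, the building block of Kalman-style estimators: two noisy estimates of the same quantity are combined into one estimate that is more certain than either input. This is a minimal sketch; the sensor values and variances below are made-up numbers, not data from any real vehicle:

```python
# Minimal one-dimensional sensor fusion: combine two noisy position
# estimates (e.g. a GPS fix and a lidar-based localization fix) by
# inverse-variance weighting. The more certain sensor gets more weight.

def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Return the fused estimate and its (smaller) variance."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Hypothetical readings: a GPS fix with high uncertainty and a
# lidar fix with low uncertainty.
position, variance = fuse(est_a=12.0, var_a=4.0, est_b=10.0, var_b=1.0)
# The fused estimate lies closer to the trustworthy lidar reading
# (10.4), and its variance (0.8) is smaller than either input's.
```

The same principle, extended to state vectors and covariance matrices, underlies the probabilistic filters used in practical SLAM systems.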
Driverless vehicles require some form of machine vision for visual object recognition. Automated cars are being developed with deep neural networks, a type of deep learning architecture with many computational stages, or levels, in which simulated neurons are activated by input from the environment. The neural network depends on an extensive amount of data extracted from real-life driving scenarios, enabling it to "learn" how to execute the best course of action.
In May 2018, researchers from the Massachusetts Institute of Technology announced that they had built an automated car that can navigate unmapped roads. Researchers at their Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a new system, called MapLite, which allows self-driving cars to drive on roads that they have never been on before, without using 3D maps. The system combines the GPS position of the vehicle, a "sparse topological map" such as OpenStreetMap, (i.e. having 2D features of the roads only), and a series of sensors that observe the road conditions.
Homogenization
During the ongoing evolution of the digital era, certain industry standards have been developed on how to store digital information and in what type of format. This concept of homogenization also applies to autonomous vehicles. In order for autonomous vehicles to perceive their surroundings, they have to use different techniques each with their own accompanying digital information (e.g. radar, GPS, motion sensors and computer vision). Homogenization requires that the digital information from these different sources is transmitted and stored in the same form. This means their differences are decoupled, and digital information can be transmitted, stored, and computed in a way that the vehicles and their operating system can better understand and act upon it.
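Homogenization as described above can be sketched as converting heterogeneous sensor outputs into a single common record format. The field names and converter functions here are illustrative assumptions, not any standardized schema:

```python
# Sketch of homogenization: radar and GPS readings, which differ in
# content, are wrapped in the same record envelope so downstream
# software can store, transmit, and process them uniformly.

def from_radar(range_m: float, bearing_deg: float, timestamp: int) -> dict:
    return {"source": "radar", "t": timestamp,
            "observation": {"range_m": range_m, "bearing_deg": bearing_deg}}

def from_gps(lat: float, lon: float, timestamp: int) -> dict:
    return {"source": "gps", "t": timestamp,
            "observation": {"lat": lat, "lon": lon}}

# Every reading, whatever its origin, now shares the same envelope:
readings = [from_radar(42.5, 13.0, 1000), from_gps(52.52, 13.40, 1001)]
assert all({"source", "t", "observation"} <= set(r) for r in readings)
```

The sensor-specific details stay inside `observation`, which is the decoupling the section describes: the common envelope is what the rest of the system depends on.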
In the international standardization field, ISO/TC 204 is in charge of information, communication and control systems in the field of urban and rural surface transportation within intelligent transport systems (ITS). International standards have been actively developed in the domains of AD/ADAS functions, connectivity, human interaction, in-vehicle systems, management/engineering, dynamic maps and positioning, and privacy and security.
Vehicle communication systems
Individual vehicles can benefit from information obtained from other vehicles in the vicinity, especially information relating to traffic congestion and safety hazards. Vehicular communication systems use vehicles and roadside units as the communicating nodes in a peer-to-peer network, providing each other with information. As a cooperative approach, vehicular communication systems can allow all cooperating vehicles to be more effective. According to a 2010 study by the US National Highway Traffic Safety Administration, vehicular communication systems could help avoid up to 79% of all traffic accidents.
There has so far been no complete implementation of peer-to-peer networking on the scale required for traffic.
In 2012, computer scientists at the University of Texas at Austin began developing smart intersections designed for automated cars. The intersections will have no traffic lights and no stop signs, instead using computer programs that communicate directly with each car on the road. For autonomous vehicles, it is essential to connect with other 'devices' in order to function most effectively. Autonomous vehicles are equipped with communication systems that allow them to communicate with other autonomous vehicles and roadside units to provide them, amongst other things, with information about road work or traffic congestion. In addition, scientists believe that the future will have computer programs that connect and manage each individual autonomous vehicle as it navigates through an intersection. These characteristics drive and further develop the ability of autonomous vehicles to understand and cooperate with other products and services (such as intersection computer systems) in the autonomous vehicle market. This could lead to a network of autonomous vehicles all using the same network and the information available on that network. Eventually, this can lead to more autonomous vehicles using the network because the information has been validated through the usage of other autonomous vehicles. Such movements strengthen the value of the network and are called network externalities.
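The peer-to-peer sharing described above can be sketched as a toy broadcast network in which vehicles and roadside units are interchangeable nodes. The class and method names are illustrative assumptions, not any real V2X protocol:

```python
# Toy sketch of a vehicular communication network: every node that
# joins the network receives hazard reports broadcast by any peer,
# so vehicles learn about hazards they have not themselves observed.

class Network:
    def __init__(self):
        self.nodes = []

    def join(self, node):
        self.nodes.append(node)

    def broadcast(self, sender, message):
        for node in self.nodes:
            if node is not sender:
                node.receive(message)

class Vehicle:
    def __init__(self, name, network):
        self.name = name
        self.known_hazards = []
        self.network = network
        network.join(self)

    def report_hazard(self, hazard):
        self.network.broadcast(self, hazard)

    def receive(self, hazard):
        self.known_hazards.append(hazard)

net = Network()
a, b, c = Vehicle("A", net), Vehicle("B", net), Vehicle("C", net)
a.report_hazard("icy road at junction 4")
# b and c now know about the hazard without having sensed it; a,
# as the sender, does not receive its own report.
```

Each additional node makes every other node's information richer, which is the network externality the section describes.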
Among connected cars, an unconnected one is the weakest link and, as the Helsinki think tank Nordic Communications Corporation predicted in January 2016, will increasingly be banned from busy high-speed roads.
In 2017, researchers from Arizona State University developed a 1/10-scale intersection and proposed an intersection management technique called Crossroads. It was shown that Crossroads is very resilient to network delay in both V2I communication and the worst-case execution time of the intersection manager. In 2018, a robust approach was introduced which is resilient to both model mismatch and external disturbances such as wind and bumps.
Vehicle networking may be desirable due to difficulty with computer vision being able to recognize brake lights, turn signals, buses, and similar things. However, the usefulness of such systems would be diminished by the fact current cars are not equipped with them; they may also pose privacy concerns.
Re-programmable
Another characteristic of autonomous vehicles is that the core product places greater emphasis on the software and its possibilities than on the chassis and engine. This is because autonomous vehicles have software systems that drive the vehicle, meaning that updates through reprogramming or editing the software can enhance the benefits for the owner (e.g. an update that better distinguishes a blind pedestrian from a sighted one, so that the vehicle takes extra caution when approaching a blind person). A characteristic of this re-programmable aspect of autonomous vehicles is that updates need not come only from the supplier: through machine learning, smart autonomous vehicles can generate certain updates and install them accordingly (e.g. new navigation maps or new intersection computer systems). These reprogrammable characteristics of the digital technology, and the possibility of smart machine learning, give manufacturers of autonomous vehicles the opportunity to differentiate themselves on software. This also implies that autonomous vehicles are never finished, because the product can be continuously improved.
Digital traces
Autonomous vehicles are equipped with different sorts of sensors and radars. As said, this allows them to connect and interoperate with computers from other autonomous vehicles and/or roadside units. This implies that autonomous vehicles leave digital traces when they connect or interoperate. The data that comes from these digital traces can be used to develop new (to be determined) products or updates to enhance autonomous vehicles' driving ability or safety.
Modularity
Traditional vehicles and their accompanying technologies are manufactured as a complete product and, unlike autonomous vehicles, can only be improved if they are redesigned or reproduced. As noted, autonomous vehicles are produced but, due to their digital characteristics, never finished. This is because autonomous vehicles are more modular: they are made up of several modules, explained hereafter through a layered modular architecture. The layered modular architecture extends the architecture of purely physical vehicles by incorporating four loosely coupled layers of devices, networks, services and contents into autonomous vehicles. These loosely coupled layers can interact through certain standardized interfaces.
(1) The first layer of this architecture is the device layer, which consists of two parts: logical capability and physical machinery. The physical machinery refers to the actual vehicle itself (e.g. chassis and bodywork). In digital technologies, the physical machinery is accompanied by a logical capability layer in the form of operating systems that help guide the vehicle itself and make it autonomous. The logical capability provides control over the vehicle and connects it with the other layers;
(2) On top of the device layer comes the network layer, which also consists of two parts: physical transport and logical transmission. The physical transport part refers to the radars, sensors and cables of the autonomous vehicle which enable the transmission of digital information. In addition, the network layer has a logical transmission part containing the communication protocols and network standards used to exchange digital information with other networks and platforms or between layers. This increases the accessibility of the autonomous vehicle and enables the computational power of a network or platform;
(3) The service layer contains the applications and their functionalities that serve the autonomous vehicle (and its owners) as they extract, create, store and consume content regarding, for example, their own driving history, traffic congestion, roads or parking abilities; and
(4) The final layer of the model is the contents layer. This layer contains the sounds, images and videos that autonomous vehicles store, extract and use to act upon and improve their driving and their understanding of the environment. The contents layer also provides metadata and directory information about the content's origin, ownership, copyright, encoding methods, content tags, geo-time stamps, and so on (Yoo et al., 2010).
The consequence of layered modular architecture of autonomous vehicles (and other digital technologies) is that it enables the emergence and development of platforms and ecosystems around a product and/or certain modules of that product. Traditionally, automotive vehicles were developed, manufactured and maintained by traditional manufacturers. Nowadays app developers and content creators can help to develop more comprehensive product experience for the consumers which creates a platform around the product of autonomous vehicles.
Challenges
The potential benefits from increased vehicle automation described may be limited by foreseeable challenges such as disputes over liability, the time needed to turn over the existing stock of vehicles from non-automated to automated, and thus a long period of humans and autonomous vehicles sharing the roads, resistance by individuals to forfeiting control of their cars, concerns about safety, and the implementation of a legal framework and consistent global government regulations for self-driving cars.
Other obstacles could include de-skilling and lower levels of driver experience for dealing with potentially dangerous situations and anomalies, ethical problems where an automated vehicle's software is forced during an unavoidable crash to choose between multiple harmful courses of action ('the trolley problem'), concerns about making large numbers of people currently employed as drivers unemployed, the potential for more intrusive mass surveillance of location, association and travel as a result of police and intelligence agency access to large data sets generated by sensors and pattern-recognition AI, and possibly insufficient understanding of verbal sounds, gestures and non-verbal cues by police, other drivers or pedestrians.
Possible technological obstacles for automated cars are:
Artificial Intelligence is still not able to function properly in chaotic inner-city environments.
A car's computer could potentially be compromised, as could a communication system between cars.
Susceptibility of the car's sensing and navigation systems to different types of weather (such as snow) or deliberate interference, including jamming and spoofing.
Avoidance of large animals requires recognition and tracking, and Volvo found that software suited to caribou, deer, and elk was ineffective with kangaroos.
Autonomous cars may require high-definition maps to operate properly. Where these maps are out of date, the cars need to be able to fall back to reasonable behaviours.
Competition for the radio spectrum desired for the car's communication.
Field programmability for the systems will require careful evaluation of product development and the component supply chain.
Current road infrastructure may need changes for automated cars to function optimally.
Validation challenge of Automated Driving and need for novel simulation-based approaches comprising digital twins and agent-based traffic simulation.
Social challenges include:
Uncertainty about potential future regulation may delay deployment of automated cars on the road.
Employment – Companies working on the technology have an increasing recruitment problem in that the available talent pool has not grown with demand. As such, education and training by third-party organizations such as providers of online courses and self-taught community-driven projects such as DIY Robocars and Formula Pi have quickly grown in popularity, while university level extra-curricular programmes such as Formula Student Driverless have bolstered graduate experience. Industry is steadily increasing freely available information sources, such as code, datasets and glossaries to widen the recruitment pool.
Human factor
Developers of self-driving cars are already exploring the difficulties of determining the intentions of pedestrians, bicyclists, and animals, and models of their behavior must be programmed into driving algorithms. Human road users face the corresponding challenge of determining the intentions of autonomous vehicles, where there is no driver with whom to make eye contact or exchange hand signals. Drive.ai is testing a solution to this problem that involves LED signs mounted on the outside of the vehicle, announcing statuses such as "going now, don't cross" vs. "waiting for you to cross".
Two human-factor challenges are important for safety. One is the handoff from automated driving to manual driving, which may become necessary due to unfavorable or unusual road conditions, or if the vehicle has limited capabilities. A sudden handoff could leave a human driver dangerously unprepared in the moment. In the long term, humans who have less practice at driving might have a lower skill level and thus be more dangerous in manual mode. The second challenge is known as risk compensation: as a system is perceived to be safer, instead of benefiting entirely from the increased safety, people engage in riskier behavior and enjoy other benefits. Semi-automated cars have been shown to suffer from this problem; for example, users of Tesla Autopilot have ignored the road and used electronic devices or engaged in other activities, against the advice of the company that the car is not capable of being completely autonomous. In the near future, pedestrians and bicyclists may travel in the street in a riskier fashion if they believe self-driving cars are capable of avoiding them.
In order for people to buy self-driving cars and vote for governments to allow them on roads, the technology must be trusted as safe. Self-driving elevators were invented in 1900, but widespread refusal to use them slowed adoption for several decades, until operator strikes increased demand and trust was built with advertising and features like the emergency stop button. There are three types of trust between humans and automation: dispositional trust, the trust between the driver and the company's product; situational trust, which arises from different scenarios; and learned trust, which is built up across similar events.
Moral issues
With the emergence of automated automobiles, various ethical issues arise. While the introduction of automated vehicles to the mass market is said to be inevitable due to a presumed but untestable potential to reduce crashes by "up to" 90%, and their potentially greater accessibility to disabled, elderly, and young passengers, a range of ethical issues has been posed.
There are different opinions on who should be held liable in case of a crash, especially when people are hurt. Besides the fact that the car manufacturer would be the source of the problem in a situation where a car crashes due to a technical issue, there is another important reason why car manufacturers could be held responsible: it would encourage them to innovate and invest heavily in fixing those issues, not only to protect the brand image but also because of financial and criminal consequences. However, there are also voices arguing that those using or owning the vehicle should be held responsible, since they know the risks involved in using such a vehicle. One study suggests requiring the owners of self-driving cars to sign end-user license agreements (EULAs), assigning to them accountability for any accidents. Other studies suggest introducing a tax or insurance that would protect owners and users of automated vehicles from claims made by victims of an accident. Other possible parties that could be held responsible in case of a technical failure include the software engineers who programmed the code for the automated operation of the vehicles, and the suppliers of components of the AV.
Setting aside the questions of legal liability and moral responsibility, the question arises how automated vehicles should be programmed to behave in an emergency situation where either passengers or other traffic participants, like pedestrians, bicyclists and other drivers, are endangered. A moral dilemma that a software engineer or car manufacturer might face in programming the operating software is described in an ethical thought experiment, the trolley problem: the conductor of a trolley has the choice of staying on the planned track and running over five people, or turning the trolley onto a track where it would kill only one person, assuming there is no traffic on it. Consider a self-driving car in the following scenario: it is driving with passengers when a person suddenly appears in its way. The car has to decide between two options: run the person over, or avoid hitting the person by swerving into a wall, killing the passengers. Two main considerations need to be addressed. First, what moral basis would an automated vehicle use to make decisions? Second, how could that basis be translated into software code? Researchers have suggested, in particular, two ethical theories as applicable to the behavior of automated vehicles in cases of emergency: deontology and utilitarianism. Asimov's Three Laws of Robotics are a typical example of deontological ethics. This theory suggests that an automated car needs to follow strict written-out rules in any situation. Utilitarianism suggests that any decision must be made with the goal of maximizing utility, which requires a definition of utility; one candidate is maximizing the number of people surviving a crash. Critics suggest that automated vehicles should adopt a mix of multiple theories in order to respond in a morally right way in the instance of a crash.
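The utilitarian rule discussed above, with utility defined as the number of people expected to survive, can be reduced to a toy decision function. This is purely illustrative: the outcome estimates are hypothetical inputs, and no real system reduces the decision to a list like this:

```python
# Toy sketch of a purely utilitarian rule: pick the action whose
# predicted outcome maximizes the number of survivors.

def choose_action(outcomes: dict) -> str:
    """`outcomes` maps an action name to the expected number of
    survivors; return the action with the highest expected count."""
    return max(outcomes, key=outcomes.get)

# The trolley-style scenario from the text: stay on course and hit
# the pedestrian (two passengers survive) or swerve into the wall
# (the pedestrian survives, the passengers do not).
scenario = {"stay_on_course": 2, "swerve_into_wall": 1}
decision = choose_action(scenario)
# A purely utilitarian controller stays on course here, which is
# precisely the kind of outcome that motivates mixing in other
# ethical theories, as the critics cited above suggest.
```

The sketch also makes the two open questions concrete: the moral basis is whatever `outcomes` encodes, and translating that basis into code is exactly the step this function glosses over.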
Recently, specific ethical frameworks (utilitarianism, deontology, relativism, absolutism (monism), and pluralism) have been investigated empirically with respect to the acceptance of self-driving cars in unavoidable accidents.
Many 'trolley' discussions skip over the practical problems involved: how a probabilistic machine-learning vehicle AI could be sophisticated enough to understand, from instant to instant, that a deep problem of moral philosophy is presenting itself while using a dynamic projection into the near future; what sort of moral problem it would actually be, if any; what relevant weightings in human-value terms should be given to all the other humans involved, who will probably be unreliably identified; and how reliably it can assess the probable outcomes. These practical difficulties, and those around testing and assessing solutions to them, may present as much of a challenge as the theoretical abstractions.
While most trolley conundrums involve hyperbolic and unlikely fact patterns, it is inevitable that mundane ethical decisions and risk calculations, such as the precise millisecond at which a car should yield to a yellow light or how closely to drive to a bike lane, will need to be programmed into the software of autonomous vehicles. Mundane ethical situations may even be more relevant than rare fatal circumstances because of their specificity and their large scope: situations involving drivers and pedestrians are so prevalent that, in the aggregate, they produce large numbers of injuries and deaths. Hence, even incremental changes to moral algorithms can have a notable effect when considered in their entirety.
Privacy-related issues arise mainly from the interconnectivity of automated cars, making it just another mobile device that can gather any information about an individual (see data mining). This information gathering ranges from tracking of the routes taken, voice recording, video recording, preferences in media that is consumed in the car, behavioural patterns, to many more streams of information. The data and communications infrastructure needed to support these vehicles may also be capable of surveillance, especially if coupled to other data sets and advanced analytics.
The implementation of automated vehicles in the mass market might cost up to 5 million jobs in the US alone, almost 3% of the workforce. Those jobs include drivers of taxis, buses, vans, trucks, and e-hailing vehicles. Many industries, such as the auto insurance industry, are indirectly affected; that industry alone generates annual revenue of about US$220 billion and supports 277,000 jobs, roughly comparable to the number of mechanical engineering jobs. The potential loss of a majority of those jobs would have a tremendous impact on the individuals involved.
The Massachusetts Institute of Technology (MIT) has animated the trolley problem in the context of autonomous cars on a website called the Moral Machine. The Moral Machine generates random scenarios in which autonomous cars malfunction and forces the user to choose between two harmful courses of action. MIT's Moral Machine experiment has collected over 40 million decisions from people in 233 countries to ascertain people's moral preferences. The study shows that ethical preferences vary among cultures and demographics and likely correlate with modern institutions and geographic traits.
Global trends of the MIT study highlight that, overall, people prefer to save the lives of humans over other animals, prioritize the lives of many rather than few, and spare the lives of young rather than old. Men are slightly more likely to spare the lives of women, and religious affiliates are slightly more likely to prioritize human life. The lives of criminals were prioritized more than the lives of cats, but the lives of dogs were prioritized more than the lives of criminals. The lives of the homeless were spared more often than those of the elderly, but less often than those of the obese.
People overwhelmingly express a preference for autonomous vehicles to be programmed with utilitarian ideas, that is, to generate the least harm and minimize driving casualties. Yet while people want others to purchase utilitarian vehicles, they themselves prefer to ride in vehicles that prioritize the lives of those inside the vehicle at all costs. This is a paradox: people want others to drive utilitarian vehicles designed to maximize the lives preserved in a fatal situation, but want their own cars to protect passengers above all. Accordingly, people disapprove of regulations that mandate utilitarian behavior and would be less willing to purchase a self-driving car that may sacrifice its passengers for the greater good.
Bonnefon et al. conclude that the regulation of autonomous vehicle ethical prescriptions may be counterproductive to societal safety. This is because, if the government mandates utilitarian ethics and people prefer to ride in self-protective cars, it could prevent the large scale implementation of self-driving cars. Delaying the adoption of autonomous cars vitiates the safety of society as a whole because this technology is projected to save so many lives. This is a paradigmatic example of the tragedy of the commons, in which rational actors cater to their self-interested preferences at the expense of societal utility.
Testing
The testing of vehicles with varying degrees of automation can be carried out either physically, in a closed environment or, where permitted, on public roads (typically requiring a license or permit, or adhering to a specific set of operating principles), or in a virtual environment, i.e. using computer simulations.
When driven on public roads, automated vehicles require a person to monitor their proper operation and "take over" when needed. For example, New York state imposes strict requirements on the test driver, such that the vehicle can be corrected at all times by a licensed operator, as highlighted by Cardian Cube Company's application and discussions with New York State officials and the NYS DMV.
Apple is testing self-driving cars, and has increased its fleet of test vehicles from three in April 2017, to 27 in January 2018, and 45 by March 2018.
The Russian internet company Yandex started to develop self-driving cars in early 2017 and launched its first driverless prototype in May of that year. In November 2017, Yandex released a video of its AV winter tests, in which the car drove successfully along snowy Moscow roads. In June 2018, a Yandex self-driving vehicle completed a 485-mile (780 km) trip on a federal highway from Moscow to Kazan in autonomous mode. In August 2018, Yandex launched Europe's first robotaxi service with no human driver behind the wheel in the Russian town of Innopolis; by the beginning of 2020, over 5,000 autonomous passenger rides had been made in the city. At the end of 2018, Yandex obtained a license to operate autonomous vehicles on public roads in the US state of Nevada. In 2019 and 2020, Yandex cars carried out demo rides for Consumer Electronics Show visitors in Las Vegas, circulating the streets of the city without any human control. In 2019, Yandex started testing its self-driving cars on the public roads of Israel. In October 2019, Yandex became one of the companies selected by the Michigan Department of Transportation (MDOT) to provide autonomous passenger rides to visitors to the Detroit Auto Show 2020. At the end of 2019, Yandex announced that its self-driving cars had passed 1 million miles in fully autonomous mode in Russia, Israel, and the United States; by February 2020, this mileage had doubled to 2 million miles. In 2020, Yandex started testing its self-driving cars in Michigan.
The progress of automated vehicles can be assessed by computing the average distance driven between "disengagements", when the automated system is switched off, typically by the intervention of a human driver. In 2017, Waymo reported 63 disengagements over of testing, an average distance of between disengagements, the highest among companies reporting such figures. Waymo also travelled a greater total distance than any of the other companies. Their 2017 rate of 0.18 disengagements per was an improvement over the 0.2 disengagements per in 2016, and 0.8 in 2015. In March 2017, Uber reported an average of just per disengagement. In the final three months of 2017, Cruise (now owned by GM) averaged per disengagement over a total distance of . In July 2018, the first electric driverless racing car, "Robocar", completed a 1.8-kilometer track, using its navigation system and artificial intelligence.
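The disengagement metric used in these filings is a simple ratio of distance driven to the number of human interventions, and can be reported either way around. A minimal sketch (the figures below are illustrative placeholders, not values from any specific company's report):

```python
# Sketch of the disengagement metric: average distance driven between
# disengagements, and its inverse (disengagements per 1,000 miles).
# The example figures are hypothetical, not taken from any real filing.

def miles_per_disengagement(total_miles: float, disengagements: int) -> float:
    """Average distance driven between human interventions."""
    return total_miles / disengagements

def disengagements_per_1000_miles(total_miles: float, disengagements: int) -> float:
    """Inverse form often used in annual reports."""
    return 1000 * disengagements / total_miles

# Hypothetical fleet: 100,000 miles of testing with 20 disengagements.
total_miles, disengagements = 100_000, 20
print(miles_per_disengagement(total_miles, disengagements))       # 5000.0
print(disengagements_per_1000_miles(total_miles, disengagements)) # 0.2
```

A higher miles-per-disengagement figure (equivalently, a lower rate per 1,000 miles) indicates the automated system ran longer without human intervention, which is why the metric is used to compare progress across years and companies.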
In October 2021, L3Pilot, Europe's first comprehensive pilot test of automated driving on public roads demonstrated automated systems for cars in Hamburg, Germany, in conjunction with ITS World Congress 2021. SAE Level 3 and 4 functions were tested on ordinary roads.
Applications
Autonomous trucks and vans
Companies such as Otto and Starsky Robotics have focused on autonomous trucks. Automation of trucks is important, not only due to the improved safety aspects of these very heavy vehicles, but also due to the ability of fuel savings through platooning. Autonomous vans are being used by online grocers such as Ocado.
Research has also indicated that goods distribution on the macro (urban distribution) and micro level (last mile delivery) could be made more efficient with the use of autonomous vehicles thanks to the possibility of smaller vehicle sizes.
Transport systems
China trialled its first automated public bus in Henan province in 2015, on a highway linking Zhengzhou and Kaifeng. Baidu and King Long produce an automated minibus, a vehicle with 14 seats but no driver's seat. With 100 vehicles produced, 2018 was expected to be the first year with commercial automated service in China.
In Europe, cities in Belgium, France, Italy and the UK are planning to operate transport systems for automated cars, and Germany, the Netherlands, and Spain have allowed public testing in traffic. In 2015, the UK launched public trials of the LUTZ Pathfinder automated pod in Milton Keynes. Beginning in summer 2015, the French government allowed PSA Peugeot-Citroen to conduct trials in real conditions in the Paris area. The experiments were planned to be extended to other cities such as Bordeaux and Strasbourg by 2016. The alliance between the French companies THALES and Valeo (provider of the first self-parking car system, which equips premium Audi and Mercedes models) is testing its own system. New Zealand is planning to use automated vehicles for public transport in Tauranga and Christchurch.
Incidents
Tesla Autopilot
In mid-October 2015, Tesla Motors rolled out version 7 of its software in the US, which included the Tesla Autopilot capability. On 9 January 2016, Tesla rolled out version 7.1 as an over-the-air update, adding a new "summon" feature that allows cars to self-park and retrieve themselves at parking locations without the driver in the car. Tesla's automated driving features are classified as a Level 2 driver assistance system under the Society of Automotive Engineers' (SAE) vehicle automation levels. At this level the car can act automatically but requires the full attention of the driver, who must be prepared to take control at a moment's notice; Autopilot will sometimes fail to detect lane markings and disengage itself while alerting the driver.
On 20 January 2016, the first of five known fatal crashes of a Tesla with Autopilot occurred in China's Hubei province. According to China's 163.com news channel, this marked "China's first accidental death due to Tesla's automatic driving (system)". Initially, Tesla pointed out that the vehicle was so badly damaged from the impact that its recorder could not conclusively prove that the car had been on Autopilot at the time; however, 163.com pointed out that other factors, such as the car's failure to take any evasive action prior to the high-speed crash and the driver's otherwise good driving record, seemed to indicate a strong likelihood that the car was on Autopilot at the time. A similar fatal crash occurred four months later in Florida. In 2018, in a subsequent civil suit between the father of the driver killed and Tesla, Tesla did not deny that the car had been on Autopilot at the time of the accident, and sent the victim's father evidence documenting that fact.
The second known fatal accident involving a vehicle being driven by itself took place in Williston, Florida on 7 May 2016 while a Tesla Model S electric car was engaged in Autopilot mode. The occupant was killed in a crash with an 18-wheel tractor-trailer. On 28 June 2016 the US National Highway Traffic Safety Administration (NHTSA) opened a formal investigation into the accident working with the Florida Highway Patrol. According to NHTSA, preliminary reports indicate the crash occurred when the tractor-trailer made a left turn in front of the Tesla at an intersection on a non-controlled access highway, and the car failed to apply the brakes. The car continued to travel after passing under the truck's trailer. NHTSA's preliminary evaluation was opened to examine the design and performance of any automated driving systems in use at the time of the crash, which involved a population of an estimated 25,000 Model S cars. On 8 July 2016, NHTSA requested Tesla Motors provide the agency detailed information about the design, operation and testing of its Autopilot technology. The agency also requested details of all design changes and updates to Autopilot since its introduction, and Tesla's planned updates schedule for the next four months.
According to Tesla, "neither autopilot nor the driver noticed the white side of the tractor-trailer against a brightly lit sky, so the brake was not applied." The car attempted to drive full speed under the trailer, "with the bottom of the trailer impacting the windshield of the Model S". Tesla also claimed that this was its first known Autopilot death in over driven by its customers with Autopilot engaged; however, by this statement, Tesla was apparently refusing to acknowledge claims that the January 2016 fatality in Hubei, China had also been the result of an Autopilot system error. According to Tesla, there is a fatality every among all types of vehicles in the US. However, this number also includes fatalities from crashes involving, for instance, motorcycles and pedestrians.
In July 2016, the US National Transportation Safety Board (NTSB) opened a formal investigation into the fatal accident while the Autopilot was engaged. The NTSB is an investigative body that has the power to make only policy recommendations. An agency spokesman said "It's worth taking a look and seeing what we can learn from that event, so that as that automation is more widely introduced we can do it in the safest way possible." In January 2017, NHTSA released a report concluding that Tesla was not at fault; the investigation found that for Tesla cars, the crash rate had dropped by 40 percent after Autopilot was installed.
In 2021, the NTSB Chair called on Tesla to change the design of its Autopilot to ensure it cannot be misused by drivers, according to a letter sent to the company's CEO.
Waymo
Waymo originated as a self-driving car project within Google. In August 2012, Google announced that their vehicles had completed over 300,000 automated-driving miles (500,000 km) accident-free, typically involving about a dozen cars on the road at any given time, and that they were starting to test with single drivers instead of in pairs. In late May 2014, Google revealed a new prototype that had no steering wheel, gas pedal, or brake pedal and was fully automated. By then, Google had test-driven their fleet in automated mode a total of . In December 2016, Google announced that its technology would be spun off to a new company called Waymo, with both Google and Waymo becoming subsidiaries of a new parent company called Alphabet.
According to Google's accident reports as of early 2016, their test cars had been involved in 14 collisions, of which other drivers were at fault 13 times, although in 2016 the car's software caused a crash.
In June 2015, Google co-founder Sergey Brin confirmed that 12 vehicles had suffered collisions as of that date: eight rear-end collisions at a stop sign or traffic light, two in which the vehicle was side-swiped by another driver, one in which another driver rolled through a stop sign, and one in which a Google employee was controlling the car manually. In July 2015, three Google employees suffered minor injuries when their vehicle was rear-ended by a car whose driver failed to brake at a traffic light. This was the first time that a collision resulted in injuries. On 14 February 2016, a Google vehicle attempted to avoid sandbags blocking its path. During the maneuver it struck a bus. Google stated, "In this case, we clearly bear some responsibility, because if our car hadn't moved, there wouldn't have been a collision." Google characterized the crash as a misunderstanding and a learning experience. No injuries were reported in the crash.
Uber ATG
In March 2017, an Uber Advanced Technologies Group test vehicle was involved in a crash in Tempe, Arizona when another car failed to yield, flipping the Uber vehicle. There were no injuries in the accident.
By 22 December 2017, Uber had completed in automated mode.
In March 2018, Elaine Herzberg became the first pedestrian to be killed by a self-driving car in the United States after being hit by an Uber vehicle, also in Tempe. Herzberg was crossing outside of a crosswalk, approximately 400 feet from an intersection. The incident, the first known fatality caused by an autonomous vehicle, raised questions about regulation of the burgeoning self-driving car industry. Some experts say a human driver could have avoided the fatal crash. Arizona Governor Doug Ducey later suspended the company's ability to test and operate its automated cars on public roadways, citing an "unquestionable failure" of the expectation that Uber make public safety its top priority. Uber pulled out of all self-driving-car testing in California as a result of the accident. On 24 May 2018, the US National Transportation Safety Board issued a preliminary report.
In September 2020, according to the BBC, the backup driver was charged with negligent homicide because she did not watch the road for several seconds while streaming the television show The Voice via Hulu.
Uber does not face criminal charges because in the US there is no basis for criminal liability of the corporation. The driver is assumed to be responsible for the accident, because she was in the driver's seat with the capacity to avoid an accident (as in Level 3). The trial was planned for February 2021.
Navya Arma driving system
On 9 November 2017, a Navya Arma automated self-driving bus with passengers was involved in a crash with a truck. The truck was found to be at fault for the crash, having reversed into the stationary automated bus. The automated bus did not take evasive action or apply defensive driving techniques such as flashing its headlights or sounding the horn. As one passenger commented, "The shuttle didn't have the ability to move back. The shuttle just stayed still."
Toyota e-Palette operation
On 26 August 2021, a Toyota e-Palette, a mobility vehicle used to support mobility within the Athletes' Village at the Olympic and Paralympic Games Tokyo 2020, collided with a visually impaired pedestrian about to cross a pedestrian crossing.
Operation of the vehicles was suspended after the accident and restarted on 31 August with improved safety measures.
Public opinion surveys
In a 2011 online survey of 2,006 US and UK consumers by Accenture, 49% said they would be comfortable using a "driverless car".
A 2012 survey of 17,400 vehicle owners by J.D. Power and Associates found 37% initially said they would be interested in purchasing a "fully autonomous car". However, that figure dropped to 20% if told the technology would cost US$3,000 more.
In a 2012 survey of about 1,000 German drivers by automotive researcher Puls, 22% of the respondents had a positive attitude towards these cars, 10% were undecided, 44% were sceptical and 24% were hostile.
A 2013 survey of 1,500 consumers across 10 countries by Cisco Systems found 57% "stated they would be likely to ride in a car controlled entirely by technology that does not require a human driver", with Brazil, India and China the most willing to trust automated technology.
In a 2014 US telephone survey by Insurance.com, over three-quarters of licensed drivers said they would at least consider buying a self-driving car, rising to 86% if car insurance were cheaper. 31.7% said they would not continue to drive once an automated car was available instead.
In a February 2015 survey of top auto journalists, 46% predicted that either Tesla or Daimler would be the first to market with a fully autonomous vehicle, while Daimler (at 38%) was predicted to produce the most functional, safe, and in-demand autonomous vehicle.
In 2015, a questionnaire survey by Delft University of Technology explored the opinions of 5,000 people from 109 countries on automated driving. Results showed that respondents, on average, found manual driving the most enjoyable mode of driving. 22% of the respondents did not want to spend any money on a fully automated driving system. Respondents were most concerned about software hacking/misuse, and were also concerned about legal issues and safety. Finally, respondents from more developed countries (in terms of lower accident statistics, higher education, and higher income) were less comfortable with their vehicle transmitting data. The survey also found that 37% of surveyed car owners were either "definitely" or "probably" interested in purchasing an automated car.
In 2016, a survey in Germany examined the opinion of 1,603 people, who were representative in terms of age, gender, and education for the German population, towards partially, highly, and fully automated cars. Results showed that men and women differ in their willingness to use them. Men felt less anxiety and more joy towards automated cars, whereas women showed the exact opposite. The gender difference towards anxiety was especially pronounced between young men and women but decreased with participants' age.
In 2016, a PwC survey of 1,584 people in the United States highlighted that "66 percent of respondents said they think autonomous cars are probably smarter than the average human driver". People are still worried about safety, particularly about the car being hacked. Nevertheless, only 13% of the interviewees saw no advantages in this new kind of car.
In 2017, Pew Research Center surveyed 4,135 US adults from 1–15 May and found that many Americans anticipate significant impacts from various automation technologies in the course of their lifetimes—from the widespread adoption of automated vehicles to the replacement of entire job categories with robot workers.
In 2019, results from two opinion surveys of 54 and 187 US adults respectively were published. A new standardised questionnaire, the autonomous vehicle acceptance model (AVAM) was developed, including additional description to help respondents better understand the implications of different automation levels. Results showed that users were less accepting of high autonomy levels and displayed significantly lower intention to use highly autonomous vehicles. Additionally, partial autonomy (regardless of level) was perceived as requiring uniformly higher driver engagement (usage of hands, feet and eyes) than full autonomy.
Regulation
The Geneva Convention on Road Traffic, subscribed to by over 101 countries worldwide, requires the driver to be 18 years old.
The 1968 Vienna Convention on Road Traffic, subscribed to by 83 countries worldwide, establishes principles to govern traffic laws. One of the fundamental principles of the convention had been the concept that a driver is always fully in control and responsible for the behavior of a vehicle in traffic.
In 2016, a reform of the convention opened possibilities for automated features in countries that have ratified it.
In February 2018, UNECE's Inland Transport Committee (ITC) acknowledged the importance of WP.29 activities related to automated, autonomous and connected vehicles and requested WP.29 to consider establishing a dedicated working Party. Following the request, WP.29, at its June 2018 session, decided to convert the Working Party on Brakes and Running Gear (GRRF) into a new Working Party on Automated/Autonomous and Connected Vehicles (GRVA).
In June 2020, a WP.29 virtual meeting approved reports from GRVA's fifth session on "automated/autonomous and connected vehicles" and sixth session on "cyber security and software updates", establishing the UN regulation for Level 3 automation.
UNECE Regulation 157 was due to enter into force for cars in some countries on 22 January 2022.
Amendments to Article 1 and a new Article 34 bis of the 1968 Convention on Road Traffic were due to enter into force on 14 July 2022, unless rejected before 13 January 2022.
Legislation and regulation in Japan
Japan is a non-signatory country to the Vienna Convention. In 2019, Japan amended two laws, the "Road Traffic Act" and the "Road Transport Vehicle Act", which came into effect in April 2020. The former act allows Level 3 self-driving cars on public roads. The latter act legally defined the process for designating vehicle types for safety certification of Level 3 Automated Driving System (ADS) functions, and the certification process for those types.
Through the amendment process, the achievements of the national project "SIP-adus", led by the Cabinet Office since 2014, were fully considered and accepted.
In 2020, the next-stage national roadmap, which considered the social deployment and acceptability of Level 4, was officially issued.
At the end of 2020, the Ministry of Land, Infrastructure, Transport and Tourism (MLIT) amended its "Safety Regulation for Road Transport Vehicle" to reflect the work of UNECE WP.29 GRVA on cyber security and software updates; the regulation came into effect in January 2021.
In April 2021, the National Police Agency (NPA) published its expert committee's FY 2020 report summarizing the issues in research toward realizing Level 4 mobility services, including required legal amendments.
During the summer of 2021, the Ministry of Economy, Trade and Industry (METI) prepared with MLIT to launch the project "RoAD to the L4", covering R&D and social deployment to realize acceptable Level 4 mobility services, and updated its public information in September. As part of this project, civil-law liability questions reflecting the changed roles will be clarified.
Regarding misleading representation in marketing, Article 5 of the "Act against Unjustifiable Premiums and Misleading Representations" applies.
In 2022, the NPA plans to submit an amendment bill to the "Road Traffic Act" in the next ordinary Diet session, including an approval scheme for Level 4 services.
Legal status in the United States
In the United States, a non-signatory country to the Vienna Convention, state vehicle codes generally do not envisage, but do not necessarily prohibit, highly automated vehicles. To clarify the legal status of and otherwise regulate such vehicles, several states have enacted or are considering specific laws. By 2016, seven states (Nevada, California, Florida, Michigan, Hawaii, Washington, and Tennessee), along with the District of Columbia, had enacted laws for automated vehicles. Incidents such as the first fatal accident involving Tesla's Autopilot system have led to discussion about revising laws and standards for automated cars.
Federal policies
In September 2016, the US National Economic Council and US Department of Transportation (USDOT) released the Federal Automated Vehicles Policy, which are standards that describe how automated vehicles should react if their technology fails, how to protect passenger privacy, and how riders should be protected in the event of an accident. The new federal guidelines are meant to avoid a patchwork of state laws, while avoiding being so overbearing as to stifle innovation. Since then, USDOT has released multiple updates:
Automated Driving Systems: A Vision for Safety 2.0 (12 September 2017)
Preparing for the Future of Transportation: Automated Vehicles 3.0 (4 October 2018)
Ensuring American Leadership in Automated Vehicle Technologies: Automated Vehicles 4.0 (8 January 2020)
The National Highway Traffic Safety Administration released for public comment the Occupant Protection for Automated Driving System on 30 March 2020, followed by the Framework for Automated Driving System Safety on 3 December 2020. Occupant Protection is intended to modernize the Federal Motor Vehicle Safety Standards considering the removal of manual controls with automated driving systems, while the Framework document is intended to provide an objective way to define and assess automated driving system competence to ensure motor vehicle safety while also remaining flexible to accommodate the development of features to improve safety.
State policies
Nevada
In June 2011, the Nevada Legislature passed a law to authorize the use of automated cars. Nevada thus became the first jurisdiction in the world where automated vehicles might be legally operated on public roads. According to the law, the Nevada Department of Motor Vehicles is responsible for setting safety and performance standards and the agency is responsible for designating areas where automated cars may be tested. This legislation was supported by Google in an effort to legally conduct further testing of its Google driverless car. The Nevada law defines an automated vehicle to be "a motor vehicle that uses artificial intelligence, sensors and global positioning system coordinates to drive itself without the active intervention of a human operator". The law also acknowledges that the operator will not need to pay attention while the car is operating itself. Google had further lobbied for an exemption from a ban on distracted driving to permit occupants to send text messages while sitting behind the wheel, but this did not become law. Furthermore, Nevada's regulations require a person behind the wheel and one in the passenger's seat during tests.
Florida
In April 2012, Florida became the second state to allow the testing of automated cars on public roads.
California
California became the third state to allow automated car testing when Governor Jerry Brown signed SB 1298 into law in September 2012 at Google Headquarters in Mountain View.
On 19 February 2016, Assembly Bill 2866 was introduced in California; it would allow automated vehicles to operate on public roads, including those without a driver, steering wheel, accelerator pedal, or brake pedal. The bill states that the California Department of Motor Vehicles would need to comply with these regulations by 1 July 2018 for the rules to take effect. The bill has yet to pass its house of origin. California published discussions on the proposed federal automated vehicles policy in October 2016.
In December 2016, the California Department of Motor Vehicles ordered Uber to remove its self-driving vehicles from the road in response to two red-light violations. Uber immediately blamed the violations on human error and suspended the drivers.
Washington, DC
In Washington, DC's district code:
In the same district code, it is considered that:
Michigan and others
In December 2013, Michigan became the fourth state to allow testing of driverless cars on public roads. In July 2014, the city of Coeur d'Alene, Idaho adopted a robotics ordinance that includes provisions to allow for self-driving cars.
Legislation in the United Kingdom
In 2013, the government of the United Kingdom permitted the testing of automated cars on public roads. Before this, all testing of robotic vehicles in the UK had been conducted on private property.
In March 2019, the UK became a signatory country to the Vienna Convention.
The UK is working on a new law proposal to allow self-driving automated lane keeping systems (ALKS) at up to 37 mph (60 km/h), after a mixed reaction from experts during the consultation launched in summer 2020. Such a system would be allowed to hand control back to the driver when "unplanned events" such as road construction or inclement weather occur. The Centre for Connected and Autonomous Vehicles (CCAV) has asked the Law Commission of England and Wales and the Scottish Law Commission to undertake a far-reaching review of the legal framework for "automated" vehicles and their use as part of public transport networks and on-demand passenger services. The teams developed policy, and the full analysis report was published in January 2022.
Regarding misleading representation in marketing, the Society of Motor Manufacturers and Traders (SMMT) published the following guiding principles:
An automated driving feature must be described sufficiently clearly so as not to mislead, including setting out the circumstances in which that feature can function.
An automated driving feature must be described sufficiently clearly so that it is distinguished from an assisted driving feature.
Where both automated driving and assisted driving features are described, they must be clearly distinguished from each other.
An assisted driving feature should not be described in a way that could convey the impression that it is an automated driving feature.
The name of an automated or assisted driving feature must not mislead by conveying that it is the other – ancillary words may be necessary to avoid confusion – for example for an assisted driving feature, by making it clear that the driver must be in control at all times.
Legislation in Europe
In 2014, the Government of France announced that testing of automated cars on public roads would be allowed in 2015. Some 2,000 km of road would be opened across the national territory, especially in Bordeaux, in Isère, in Île-de-France and in Strasbourg. At the 2015 ITS World Congress, a conference dedicated to intelligent transport systems, the first demonstration of automated vehicles on open roads in France was carried out in Bordeaux in early October 2015.
In 2015, a preemptive lawsuit against various automobile companies such as GM, Ford, and Toyota accused them of "Hawking vehicles that are vulnerable to hackers who could hypothetically wrest control of essential functions such as brakes and steering."
In spring of 2015, the Federal Department of Environment, Transport, Energy and Communications in Switzerland (UVEK) allowed Swisscom to test a driverless Volkswagen Passat on the streets of Zurich.
As of April 2017, public road tests of development vehicles are permitted in Hungary; furthermore, construction of a closed test track suitable for testing highly automated functions, the ZalaZone test track, is under way near the city of Zalaegerszeg.
Regulation (EU) 2019/2144 of the European Parliament and of the Council of 27 November 2019 on type-approval requirements for motor vehicles defines specific requirements relating to automated vehicles and fully automated vehicles. This law is applicable from 2022 and is based on uniform procedures and technical specifications for the systems and other items.
In July 2021 in Germany, the Federal Act Amending the Road Traffic Act and the Compulsory Insurance Act (Autonomous Driving Act) came into effect.
The Act allows motor vehicles with autonomous driving capabilities, meaning vehicles that can perform driving tasks independently without a person driving, in specified operating areas on public roads. Provisions about autonomous driving in appropriate operating areas correspond to Level 4.
Legislation in Asia
In 2016, the Singapore Land Transport Authority, in partnership with UK automotive supplier Delphi Automotive, began launch preparations for a test run of a fleet of automated taxis for an on-demand automated cab service to take effect in 2017.
In 2017, the South Korean government stated that the lack of universal standards is preventing its own legislation from pushing new domestic rules. However, once the international standards are settled, South Korea's legislation will resemble the international standards.
Regulation in China
In 2018, China introduced regulations governing autonomous cars, covering conditional automation, high automation and full automation (SAE Levels 3, 4 and 5).
Chinese regulation gives the Ministry of Industry and Information Technology (MIIT), the Ministry of Public Security (MPS) and the Ministry of Transport (MOT) regulatory competence.
Chinese regulation mandates remote monitoring capability and the capacity to record, analyze and reconstruct incidents involving test vehicles.
A test driver is required to have at least three years of unblemished driving experience.
Automated vehicles are required to automatically record and store information from the 90 seconds before an accident or malfunction, and those data must be retained for at least three years.
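The 90-second pre-incident recording requirement maps naturally onto a circular buffer that retains only the most recent window of samples. The following is a minimal Python sketch, assuming an illustrative 10 Hz sampling rate and generic samples; the regulation itself does not prescribe any implementation:

```python
from collections import deque

# Illustrative sketch (not from any regulation text) of a pre-incident
# recorder: a fixed-size ring buffer keeps only the most recent 90 seconds
# of samples; on an incident, its contents are snapshotted for retention.
SAMPLE_HZ = 10  # assumed sampling rate, for illustration only

class PreIncidentRecorder:
    def __init__(self, seconds=90, hz=SAMPLE_HZ):
        self.buf = deque(maxlen=seconds * hz)  # oldest samples drop off

    def record(self, sample):
        self.buf.append(sample)

    def snapshot(self):
        """Return the retained window at the moment of an incident."""
        return list(self.buf)

rec = PreIncidentRecorder()
for t in range(2000):        # 200 seconds of samples at 10 Hz
    rec.record(t)
window = rec.snapshot()      # only the last 90 s (900 samples) survive
```

A real recorder would store timestamped sensor frames and persist the snapshot to durable storage for the mandated three years.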
In 2021, China plans to add highways to the list of roads where provincial and city-level authorities can authorize automated cars.
As of 2021, NIO manufactures cars with an autonomous driving system at a level similar to Tesla's, and is working on Level 2 and Level 4 vehicles.
Regulation in Australia
Australia also has ongoing trials of automated vehicles.
Liability
Self-driving car liability is a developing area of law and policy that will determine who is liable when an automated car causes physical damage to persons or breaks road rules. As automated cars shift the control of driving from humans to automated car technology, drivers will need to consent to share operational responsibility, which will require a legal framework. Existing liability laws may need to evolve in order to fairly identify the parties responsible for damage and injury, and to address the potential for conflicts of interest between human occupants, system operators, insurers, and the public purse. Increases in the use of automated car technologies (e.g. advanced driver-assistance systems) may prompt incremental shifts in this responsibility for driving. Proponents claim the technology has the potential to reduce the frequency of road accidents, although it is difficult to assess this claim in the absence of data from substantial actual use. If there were a dramatic improvement in safety, operators might seek to shift their liability for the remaining accidents onto others as part of their reward for the improvement. However, there is no obvious reason why they should escape liability if any such effects were found to be modest or nonexistent, since part of the purpose of such liability is to give the party controlling something an incentive to do whatever is necessary to avoid causing harm. Potential users may be reluctant to trust an operator that seeks to pass its normal liability on to others.
In any case, a well-advised person who is not controlling a car at all (Level 5) would understandably be reluctant to accept liability for something out of their control. Where some sharing of control is possible (Level 3 or 4), a well-advised person would be concerned that the vehicle might try to pass back control in the last seconds before an accident, passing responsibility and liability back too, in circumstances where the potential driver has no better prospect of avoiding the crash than the vehicle: they have not necessarily been paying close attention, and if the situation is too hard for a very smart car, it may be too hard for a human. Since operators, especially those familiar with trying to ignore existing legal obligations (under a motto like "seek forgiveness, not permission"), such as Waymo or Uber, could normally be expected to try to avoid responsibility to the maximum degree possible, there is potential for operators to evade being held liable for accidents that occur while their systems are in control.
As higher levels of automation are commercially introduced (Level 3 and 4), the insurance industry may see a greater proportion of commercial and product liability lines while personal automobile insurance shrinks.
When it comes to fully autonomous car liability, torts cannot be ignored. In any car accident the issue of negligence usually arises. In the case of autonomous cars, negligence would most likely fall on the manufacturer, because it would be hard to pin a breach of duty of care on a user who is not in control of the vehicle. The only time negligence was raised in an autonomous car lawsuit, there was a settlement between the person struck by the autonomous vehicle and the manufacturer (General Motors). Second, product liability would most likely cause liability to fall on the manufacturer; for an accident to fall under product liability, there must be a defect, a failure to provide adequate warnings, or foreseeability by the manufacturer. Third is strict liability, which in this case is similar to product liability based on a design defect. Based on a Nevada Supreme Court ruling (Ford v. Trejo), the plaintiff needs to prove that the manufacturer failed the consumer expectation test. That is potentially how the three major torts could function when it comes to autonomous car liability.
Anticipated launch of cars
Between manually driven vehicles (SAE Level 0) and fully autonomous vehicles (SAE Level 5), there are a variety of vehicle types that can be described to have some degree of automation. These are collectively known as semi-automated vehicles. As it could be a while before the technology and infrastructure are developed for full automation, it is likely that vehicles will have increasing levels of automation. These semi-automated vehicles could potentially harness many of the advantages of fully automated vehicles, while still keeping the driver in charge of the vehicle.
Anticipated Level 2
Tesla vehicles are equipped with hardware that Tesla claims will allow full self-driving in the future. In October 2020, Tesla released a "beta" version of its "Full Self-Driving" software to a small group of testers in the United States; however, this "Full Self-Driving" corresponds to Level 2 autonomy.
Anticipated Level 3
In December 2021, Mercedes-Benz became the world's second manufacturer to receive legal approval for a Level 3 system. Its type approval was granted under UN-R157 for automated lane keeping, the first approval of that type, as Honda's approval for its Traffic Jam Pilot was granted under a different framework. Mercedes-Benz says that customers will be able to buy an S-Class with the Drive Pilot technology in the first half of 2022, enabling them to drive in conditionally automated mode at speeds of up to 60 km/h (37 mph) in heavy traffic or congested situations on suitable stretches of motorway in Germany.
In 2017, BMW was expected to trial the 7 Series as an automated car on public urban motorways in the United States, Germany and Israel before commercializing such vehicles in 2021. Although this was not realized, BMW is still preparing the 7 Series to become the next model to reach Level 3 in the second half of 2022.
Although Audi unveiled an A8 sedan said to feature Level 3 technology in 2017, regulatory hurdles prevented it from realizing Level 3 operation as of 2020.
In September 2021, Stellantis presented its findings from a pilot program testing Level 3 autonomous vehicles on public Italian highways. Stellantis's Highway Chauffeur claims Level 3 capabilities and was tested on Maserati Ghibli and Fiat 500X prototypes.
Anticipated Level 4
In August 2021, Toyota operated a potentially Level 4 service around the Tokyo 2020 Olympic Village.
In October 2021, at the World Congress on Intelligent Transport Systems, Honda announced that it is already testing Level 4 technology on a modified Legend Hybrid EX. At the end of the month, Honda explained that it is conducting a verification project of Level 4 technology on a test course in Tochigi Prefecture, and plans to test on public roads in early 2022.
See also
Automated guideway transit
Automatic train operation
Automobile safety
Automotive navigation system
Autopilot
Advanced Driver Assistance Systems
Computer vision
Connected car
DARPA Grand Challenge (2004, 2007)
DARPA Robotics Challenge (2012)
Dutch Automated Vehicle Initiative
Death by GPS
Driverless tractor
Hybrid navigation
Intelligent transportation system
List of self-driving system suppliers
Machine vision
Mobility as a service (transport)
Personal rapid transit
Platoon (automobile)
Retrofitting
Smart camera
Technological unemployment
Unmanned ground vehicle
Unmanned aerial vehicle
Vehicle infrastructure integration
Vehicle safety technology
Vision processing unit
Measurement of Assured Clear Distance Ahead
Electronic stability control
Precrash system
Deep learning
Artificial intelligence
Self-driving truck
References
Further reading
Gereon Meyer, Sven Beiker (Eds.), Road Vehicle Automation, Springer International Publishing 2014, and following issues: Road Vehicle Automation 2 (2015), Road Vehicle Automation 3 (2016), Road Vehicle Automation 4 (2017), Road Vehicle Automation 5 (2018), Road Vehicle Automation 6 (2019). These books are based on presentations and discussions at the Automated Vehicles Symposium organized annually by TRB and AUVSI.
10969 | https://en.wikipedia.org/wiki/Field-programmable%20gate%20array | Field-programmable gate array | A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturinghence the term field-programmable. The FPGA configuration is generally specified using a hardware description language (HDL), similar to that used for an application-specific integrated circuit (ASIC). Circuit diagrams were previously used to specify the configuration, but this is increasingly rare due to the advent of electronic design automation tools.
FPGAs contain an array of programmable logic blocks, and a hierarchy of reconfigurable interconnects allowing blocks to be wired together. Logic blocks can be configured to perform complex combinational functions, or act as simple logic gates like AND and XOR. In most FPGAs, logic blocks also include memory elements, which may be simple flip-flops or more complete blocks of memory. Many FPGAs can be reprogrammed to implement different logic functions, allowing flexible reconfigurable computing as performed in computer software.
FPGAs play a notable role in embedded system development due to their capability to start system software (SW) development simultaneously with hardware (HW), enable system performance simulations at a very early phase of the development, and allow various system partitioning (SW and HW) trials and iterations before the final freezing of the system architecture.
History
The FPGA industry sprouted from programmable read-only memory (PROM) and programmable logic devices (PLDs). PROMs and PLDs both had the option of being programmed in batches in a factory or in the field (field-programmable). However, programmable logic was hard-wired between logic gates.
Altera was founded in 1983 and delivered the industry's first reprogrammable logic device in 1984 – the EP300 – which featured a quartz window in the package that allowed users to shine an ultra-violet lamp on the die to erase the EPROM cells that held the device configuration.
Xilinx co-founders Ross Freeman and Bernard Vonderschmitt invented the first commercially viable field-programmable gate array in 1985 – the XC2064. The XC2064 had programmable gates and programmable interconnects between gates, the beginnings of a new technology and market. The XC2064 had 64 configurable logic blocks (CLBs), with two three-input lookup tables (LUTs). More than 20 years later, Freeman was entered into the National Inventors Hall of Fame for his invention.
In 1987, the Naval Surface Warfare Center funded an experiment proposed by Steve Casselman to develop a computer that would implement 600,000 reprogrammable gates. Casselman was successful and a patent related to the system was issued in 1992.
Altera and Xilinx continued unchallenged and quickly grew from 1985 to the mid-1990s when competitors sprouted up, eroding a significant portion of their market share. By 1993, Actel (now Microsemi) was serving about 18 percent of the market.
The 1990s were a period of rapid growth for FPGAs, both in circuit sophistication and the volume of production. In the early 1990s, FPGAs were primarily used in telecommunications and networking. By the end of the decade, FPGAs found their way into consumer, automotive, and industrial applications.
By 2013, Altera (31 percent), Actel (10 percent) and Xilinx (36 percent) together represented approximately 77 percent of the FPGA market.
Companies like Microsoft have started to use FPGAs to accelerate high-performance, computationally intensive systems (like the data centers that operate their Bing search engine), due to the performance per watt advantage FPGAs deliver. Microsoft began using FPGAs to accelerate Bing in 2014, and in 2018 began deploying FPGAs across other data center workloads for their Azure cloud computing platform.
The following timelines indicate progress in different aspects of FPGA design:
Gates
1987: 9,000 gates, Xilinx
1992: 600,000, Naval Surface Warfare Center
Early 2000s: millions
2013: 50 million, Xilinx
Market size
1985: First commercial FPGA: Xilinx XC2064
1987: $14 million
: >$385 million
2005: $1.9 billion
2010 estimates: $2.75 billion
2013: $5.4 billion
2020 estimate: $9.8 billion
Design starts
A design start is a new custom design for implementation on an FPGA.
2005: 80,000
2008: 90,000
Design
Contemporary FPGAs have large resources of logic gates and RAM blocks to implement complex digital computations. As FPGA designs employ very fast I/O rates and bidirectional data buses, it becomes a challenge to verify correct timing of valid data within setup time and hold time.
Floor planning enables resource allocation within FPGAs to meet these time constraints. FPGAs can be used to implement any logical function that an ASIC can perform. The ability to update the functionality after shipping, partial re-configuration of a portion of the design and the low non-recurring engineering costs relative to an ASIC design (notwithstanding the generally higher unit cost), offer advantages for many applications.
Some FPGAs have analog features in addition to digital functions. The most common analog feature is a programmable slew rate on each output pin, allowing the engineer to set low rates on lightly loaded pins that would otherwise ring or couple unacceptably, and to set higher rates on heavily loaded pins on high-speed channels that would otherwise run too slowly. Also common are quartz-crystal oscillators, on-chip resistance-capacitance oscillators, and phase-locked loops with embedded voltage-controlled oscillators used for clock generation and management as well as for high-speed serializer-deserializer (SERDES) transmit clocks and receiver clock recovery. Fairly common are differential comparators on input pins designed to be connected to differential signaling channels. A few "mixed signal FPGAs" have integrated peripheral analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) with analog signal conditioning blocks allowing them to operate as a system-on-a-chip (SoC). Such devices blur the line between an FPGA, which carries digital ones and zeros on its internal programmable interconnect fabric, and field-programmable analog array (FPAA), which carries analog values on its internal programmable interconnect fabric.
Logic blocks
The most common FPGA architecture consists of an array of logic blocks (called configurable logic blocks, CLBs, or logic array blocks, LABs, depending on vendor), I/O pads, and routing channels. Generally, all the routing channels have the same width (number of wires). Multiple I/O pads may fit into the height of one row or the width of one column in the array.
An application circuit must be mapped into an FPGA with adequate resources. While the number of CLBs/LABs and I/Os required is easily determined from the design, the number of routing tracks needed may vary considerably even among designs with the same amount of logic.
For example, a crossbar switch requires much more routing than a systolic array with the same gate count. Since unused routing tracks increase the cost (and decrease the performance) of the part without providing any benefit, FPGA manufacturers try to provide just enough tracks so that most designs that will fit in terms of lookup tables (LUTs) and I/Os can be routed. This is determined by estimates such as those derived from Rent's rule or by experiments with existing designs. More recently, network-on-chip architectures for routing and interconnection have also been developed.
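Rent's rule, mentioned above, is the empirical power law T = t·g^p relating a block's external terminal count T (and hence its routing demand) to its gate count g. A small Python sketch follows; the coefficients are illustrative assumptions, not measured values for any real device:

```python
# Hedged illustration of Rent's rule, T = t * g**p. FPGA vendors can use
# such estimates to budget routing tracks. The defaults below are purely
# illustrative assumptions.

def rent_terminals(gates, t=4.0, p=0.6):
    """Estimate external terminal count for a block of `gates` gates."""
    return t * gates ** p

# Because the Rent exponent p < 1, a design four times larger needs only
# about 4**0.6 ~= 2.3 times the terminals, not 4 times as many.
small = rent_terminals(1000)
large = rent_terminals(4000)
```

The sub-linear exponent is why manufacturers can provision "just enough" routing: terminal demand grows more slowly than logic capacity.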
In general, a logic block consists of a few logical cells (called an ALM, LE, slice, etc., depending on vendor). A typical cell consists of a 4-input LUT, a full adder (FA) and a D-type flip-flop. The LUT might be split into two 3-input LUTs. In normal mode those are combined into a 4-input LUT through the first multiplexer (mux). In arithmetic mode, their outputs are fed to the adder. The selection of mode is programmed into the second mux. The output can be either synchronous or asynchronous, depending on the programming of the third mux. In practice, the entire adder or parts of it are stored as functions in the LUTs in order to save space.
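The lookup-table behavior of such a cell can be sketched in software: an SRAM-based k-input LUT is simply a 2^k-entry truth table addressed by its inputs. The following is a hedged Python model; the class and names are illustrative, not any vendor's representation:

```python
# Minimal behavioral sketch of a k-input lookup table (LUT), assuming the
# common SRAM-based scheme: the 2**k configuration bits form a truth
# table, and the k inputs select one bit of it.

class LUT:
    def __init__(self, k, truth_table_bits):
        assert len(truth_table_bits) == 2 ** k
        self.k = k
        self.bits = truth_table_bits  # configuration loaded at program time

    def evaluate(self, inputs):
        assert len(inputs) == self.k
        index = 0
        for i, bit in enumerate(inputs):  # inputs address the truth table
            index |= (bit & 1) << i
        return self.bits[index]

# Configure a 4-input LUT as a 4-way XOR: entry i holds the parity of i.
xor4 = LUT(4, [bin(i).count("1") % 2 for i in range(16)])
```

Reprogramming the FPGA amounts to rewriting `truth_table_bits` (and the routing configuration), which is why the same fabric can implement any 4-input function per LUT.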
Hard blocks
Modern FPGA families expand upon the above capabilities to include higher level functionality fixed in silicon. Having these common functions embedded in the circuit reduces the area required and gives those functions increased speed compared to building them from logical primitives. Examples of these include multipliers, generic DSP blocks, embedded processors, high speed I/O logic and embedded memories.
Higher-end FPGAs can contain high speed multi-gigabit transceivers and hard IP cores such as processor cores, Ethernet medium access control units, PCI/PCI Express controllers, and external memory controllers. These cores exist alongside the programmable fabric, but they are built out of transistors instead of LUTs so they have ASIC-level performance and power consumption without consuming a significant amount of fabric resources, leaving more of the fabric free for the application-specific logic. The multi-gigabit transceivers also contain high performance analog input and output circuitry along with high-speed serializers and deserializers, components which cannot be built out of LUTs. Higher-level physical layer (PHY) functionality such as line coding may or may not be implemented alongside the serializers and deserializers in hard logic, depending on the FPGA.
Soft core
An alternate approach to using hard-macro processors is to make use of soft processor IP cores that are implemented within the FPGA logic. Nios II, MicroBlaze and Mico32 are examples of popular softcore processors. Many modern FPGAs are programmed at "run time", which has led to the idea of reconfigurable computing or reconfigurable systems – CPUs that reconfigure themselves to suit the task at hand. Additionally, new, non-FPGA architectures are beginning to emerge. Software-configurable microprocessors such as the Stretch S5000 adopt a hybrid approach by providing an array of processor cores and FPGA-like programmable cores on the same chip.
Integration
In 2012 the coarse-grained architectural approach was taken a step further by combining the logic blocks and interconnects of traditional FPGAs with embedded microprocessors and related peripherals to form a complete "system on a programmable chip". This work mirrors the architecture created by Ron Perloff and Hanan Potash of Burroughs Advanced Systems Group in 1982, which combined a reconfigurable CPU architecture on a single chip called the SB24. Examples of such hybrid technologies can be found in the Xilinx Zynq-7000 All Programmable SoC, which includes a 1.0 GHz dual-core ARM Cortex-A9 MPCore processor embedded within the FPGA's logic fabric, or in the Altera Arria V FPGA, which includes an 800 MHz dual-core ARM Cortex-A9 MPCore. The Atmel FPSLIC is another such device, which uses an AVR processor in combination with Atmel's programmable logic architecture. The Microsemi SmartFusion devices incorporate an ARM Cortex-M3 hard processor core (with up to 512 kB of flash and 64 kB of RAM) and analog peripherals such as multi-channel analog-to-digital converters and digital-to-analog converters in their flash memory-based FPGA fabric.
Clocking
Most of the circuitry built inside of an FPGA is synchronous circuitry that requires a clock signal. FPGAs contain dedicated global and regional routing networks for clock and reset so they can be delivered with minimal skew. Also, FPGAs generally contain analog phase-locked loop and/or delay-locked loop components to synthesize new clock frequencies as well as attenuate jitter. Complex designs can use multiple clocks with different frequency and phase relationships, each forming separate clock domains. These clock signals can be generated locally by an oscillator or they can be recovered from a high-speed serial data stream. Care must be taken when building clock domain crossing circuitry to avoid metastability. FPGAs generally contain blocks of RAM that are capable of working as dual-port RAMs with different clocks, aiding in the construction of FIFOs and dual-port buffers that connect differing clock domains.
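Dual-clock FIFOs of the kind described above conventionally pass their read and write pointers between clock domains in Gray code, so that a pointer sampled mid-transition is off by at most one count rather than arbitrarily wrong. A minimal Python sketch of this standard encoding follows (illustrative only, not tied to any vendor's RAM blocks):

```python
# Gray-code pointer encoding used in typical dual-clock FIFOs: successive
# counter values differ in exactly one bit, so a pointer sampled in the
# other clock domain can be off by at most one count.

def to_gray(n):
    return n ^ (n >> 1)

def one_bit_apart(a, b):
    return bin(a ^ b).count("1") == 1

# Every increment of a 4-bit counter changes exactly one bit in Gray code,
# including the wrap from 15 back to 0.
codes = [to_gray(i) for i in range(16)]
safe = all(one_bit_apart(codes[i], codes[(i + 1) % 16]) for i in range(16))
```

Because only one bit toggles per increment, any metastability on that bit resolves to either the old or the new pointer value, both of which are safe for full/empty flag generation.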
3D architectures
To shrink the size and power consumption of FPGAs, vendors such as Tabula and Xilinx have introduced 3D or stacked architectures. Following the introduction of its 28 nm 7-series FPGAs, Xilinx said that several of the highest-density parts in those FPGA product lines will be constructed using multiple dies in one package, employing technology developed for 3D construction and stacked-die assemblies.
Xilinx's approach stacks several (three or four) active FPGA dies side by side on a silicon interposer – a single piece of silicon that carries passive interconnect. The multi-die construction also allows different parts of the FPGA to be created with different process technologies, as the process requirements are different between the FPGA fabric itself and the very high speed 28 Gbit/s serial transceivers. An FPGA built in this way is called a heterogeneous FPGA.
Altera's heterogeneous approach involves using a single monolithic FPGA die and connecting other die/technologies to the FPGA using Intel's embedded multi-die interconnect bridge (EMIB) technology.
Programming
To define the behavior of the FPGA, the user provides a design in a hardware description language (HDL) or as a schematic design. The HDL form is more suited to work with large structures because it's possible to specify high-level functional behavior rather than drawing every piece by hand. However, schematic entry can allow for easier visualization of a design and its component modules.
Using an electronic design automation tool, a technology-mapped netlist is generated. The netlist can then be fit to the actual FPGA architecture using a process called place-and-route, usually performed by the FPGA company's proprietary place-and-route software. The user will validate the map, place and route results via timing analysis, simulation, and other verification and validation methodologies. Once the design and validation process is complete, the binary file generated, typically using the FPGA vendor's proprietary software, is used to (re-)configure the FPGA. This file is transferred to the FPGA/CPLD via a serial interface (JTAG) or to an external memory device like an EEPROM.
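The placement step in the flow described above is classically driven by simulated annealing: blocks are swapped between grid sites, and swaps that lengthen the wiring are accepted with decreasing probability so the search can escape local minima. The following is a toy Python sketch, with the netlist model, half-perimeter cost and cooling schedule all simplified assumptions rather than any vendor's algorithm:

```python
import math
import random

# Toy simulated-annealing placer: placement maps block -> (x, y) site,
# nets are lists of blocks, cost is half-perimeter wirelength.

def wirelength(placement, nets):
    total = 0
    for net in nets:
        xs = [placement[b][0] for b in net]
        ys = [placement[b][1] for b in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def anneal(placement, nets, steps=2000, t0=5.0):
    blocks = list(placement)
    cost = wirelength(placement, nets)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-6   # simple linear cooling
        a, b = random.sample(blocks, 2)
        placement[a], placement[b] = placement[b], placement[a]
        new = wirelength(placement, nets)
        if new > cost and random.random() >= math.exp((cost - new) / t):
            placement[a], placement[b] = placement[b], placement[a]  # undo
        else:
            cost = new                       # accept the swap
    return cost

random.seed(1)
placement = {i: (i % 3, i // 3) for i in range(9)}   # 9 blocks on a 3x3 grid
nets = [[i, i + 1] for i in range(8)]                # a chain of 2-pin nets
final_cost = anneal(placement, nets)
```

Production tools add timing-driven cost terms and congestion estimates, but the accept/reject structure is the same.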
The most common HDLs are VHDL and Verilog, as well as extensions such as SystemVerilog. However, in an attempt to reduce the complexity of designing in HDLs, which have been compared to the equivalent of assembly languages, there are moves to raise the abstraction level through the introduction of alternative languages. National Instruments' LabVIEW graphical programming language (sometimes referred to as "G") has an FPGA add-in module available to target and program FPGA hardware. Verilog was created to simplify the process, making HDL more robust and flexible, and is currently the most popular HDL. Verilog creates a level of abstraction that hides the details of its implementation, and has a C-like syntax, unlike VHDL.
To simplify the design of complex systems in FPGAs, there exist libraries of predefined complex functions and circuits that have been tested and optimized to speed up the design process. These predefined circuits are commonly called intellectual property (IP) cores, and are available from FPGA vendors and third-party IP suppliers. They are rarely free, and typically released under proprietary licenses. Other predefined circuits are available from developer communities such as OpenCores (typically released under free and open source licenses such as the GPL, BSD or similar license), and other sources. Such designs are known as "open-source hardware."
In a typical design flow, an FPGA application developer will simulate the design at multiple stages throughout the design process. Initially the RTL description in VHDL or Verilog is simulated by creating test benches to simulate the system and observe results. Then, after the synthesis engine has mapped the design to a netlist, the netlist is translated to a gate-level description where simulation is repeated to confirm the synthesis proceeded without errors. Finally the design is laid out in the FPGA at which point propagation delays can be added and the simulation run again with these values back-annotated onto the netlist.
More recently, OpenCL (Open Computing Language) is being used by programmers to take advantage of the performance and power efficiencies that FPGAs provide. OpenCL allows programmers to develop code in the C programming language and target FPGA functions as OpenCL kernels using OpenCL constructs. For further information, see high-level synthesis and C to HDL.
Most FPGAs rely on an SRAM-based approach to be programmed. These FPGAs are in-system programmable and re-programmable, but require external boot devices: for example, flash memory or EEPROM devices often load contents into internal SRAM that controls routing and logic. The SRAM approach is based on CMOS.
Rarer alternatives to the SRAM approach include:
Fuse: One-time programmable. Bipolar. Obsolete.
Antifuse: One-time programmable. CMOS. Examples: Actel SX and Axcelerator families; Quicklogic Eclipse II family.
PROM: Programmable Read-Only Memory technology. One-time programmable because of plastic packaging. Obsolete.
EPROM: Erasable Programmable Read-Only Memory technology. One-time programmable but with window, can be erased with ultraviolet (UV) light. CMOS. Obsolete.
EEPROM: Electrically Erasable Programmable Read-Only Memory technology. Can be erased, even in plastic packages. Some but not all EEPROM devices can be in-system programmed. CMOS.
Flash: Flash-erase EPROM technology. Can be erased, even in plastic packages. Some but not all flash devices can be in-system programmed. Usually, a flash cell is smaller than an equivalent EEPROM cell and is therefore less expensive to manufacture. CMOS. Example: Actel ProASIC family.
Major manufacturers
In 2016, long-time industry rivals Xilinx (now part of AMD) and Altera (now an Intel subsidiary) were the FPGA market leaders. At that time, they controlled nearly 90 percent of the market.
Both Xilinx (now AMD) and Altera (now Intel) provide proprietary electronic design automation software for Windows and Linux (ISE/Vivado and Quartus) which enables engineers to design, analyze, simulate, and synthesize (compile) their designs.
Other manufacturers include:
Microchip:
Microsemi (previously Actel), producing antifuse, flash-based, mixed-signal FPGAs; acquired by Microchip in 2018
Atmel, a second source of some Altera-compatible devices; also FPSLIC mentioned above; acquired by Microchip in 2016
Lattice Semiconductor, which manufactures low-power SRAM-based FPGAs featuring integrated configuration flash, instant-on and live reconfiguration
SiliconBlue Technologies, which provides extremely low power SRAM-based FPGAs with optional integrated nonvolatile configuration memory; acquired by Lattice in 2011
QuickLogic, which manufactures ultra-low-power sensor hubs and extremely low-power, low-density SRAM-based FPGAs, with display bridges (MIPI and RGB inputs; MIPI, RGB and LVDS outputs)
Achronix, manufacturing SRAM-based FPGAs with 1.5 GHz fabric speed
In March 2010, Tabula announced an FPGA technology that uses time-multiplexed logic and interconnect, claiming potential cost savings for high-density applications. On March 24, 2015, Tabula officially shut down.
On June 1, 2015, Intel announced it would acquire Altera for approximately $16.7 billion and completed the acquisition on December 30, 2015.
On October 27, 2020, AMD announced it would acquire Xilinx; the acquisition was completed in February 2022.
Applications
An FPGA can be used to solve any problem which is computable. This is trivially proven by the fact that FPGAs can be used to implement a soft microprocessor, such as the Xilinx MicroBlaze or Altera Nios II. Their advantage lies in being significantly faster for some applications, because of their parallel nature and their optimality in terms of the number of gates used for certain processes.
FPGAs originally began as competitors to CPLDs to implement glue logic for printed circuit boards. As their size, capabilities, and speed increased, FPGAs took over additional functions to the point where some are now marketed as full systems on chips (SoCs). Particularly with the introduction of dedicated multipliers into FPGA architectures in the late 1990s, applications which had traditionally been the sole reserve of digital signal processor hardware (DSPs) began to incorporate FPGAs instead.
Another trend in the use of FPGAs is hardware acceleration, where one can use the FPGA to accelerate certain parts of an algorithm and share part of the computation between the FPGA and a generic processor. The search engine Bing is noted for adopting FPGA acceleration for its search algorithm in 2014. FPGAs are also seeing increased use as AI accelerators, including in Microsoft's so-termed "Project Catapult", and for accelerating artificial neural networks in machine learning applications.
Traditionally, FPGAs have been reserved for specific vertical applications where the volume of production is small. For these low-volume applications, the premium that companies pay in hardware cost per unit for a programmable chip is more affordable than the development resources spent on creating an ASIC. New cost and performance dynamics have since broadened the range of viable applications.
The company Gigabyte Technology created an i-RAM card which used a Xilinx FPGA, although a custom-made chip would have been cheaper in large quantities. The FPGA was chosen to bring the product to market quickly, and since the initial run was only to be 1,000 units, an FPGA was the best choice. The device allows people to use computer RAM as a hard drive.
Other uses for FPGAs include:
Space (i.e. with radiation hardening)
Hardware security modules
Security
FPGAs have both advantages and disadvantages as compared to ASICs or secure microprocessors, concerning hardware security. FPGAs' flexibility makes malicious modifications during fabrication a lower risk. Previously, for many FPGAs, the design bitstream was exposed while the FPGA loads it from external memory (typically on every power-on). All major FPGA vendors now offer a spectrum of security solutions to designers such as bitstream encryption and authentication. For example, Altera and Xilinx offer AES encryption (up to 256-bit) for bitstreams stored in an external flash memory.
FPGAs that store their configuration internally in nonvolatile flash memory, such as Microsemi's ProAsic 3 or Lattice's XP2 programmable devices, do not expose the bitstream and do not need encryption. In addition, flash memory for a lookup table provides single event upset protection for space applications. Customers wanting a higher guarantee of tamper resistance can use write-once, antifuse FPGAs from vendors such as Microsemi.
With its Stratix 10 FPGAs and SoCs, Altera introduced a Secure Device Manager and physical unclonable functions to provide high levels of protection against physical attacks.
In 2012, researchers Sergei Skorobogatov and Christopher Woods demonstrated that FPGAs can be vulnerable to hostile intent. They discovered that a critical backdoor vulnerability had been manufactured into the silicon of the Actel/Microsemi ProASIC3, making it vulnerable on many levels, such as reprogramming crypto and access keys, accessing the unencrypted bitstream, modifying low-level silicon features, and extracting configuration data.
Similar technologies
Historically, FPGAs have been slower, less energy efficient and generally achieved less functionality than their fixed ASIC counterparts. A study from 2006 showed that designs implemented on FPGAs need on average 40 times as much area, draw 12 times as much dynamic power, and run at one third the speed of corresponding ASIC implementations. More recently, FPGAs such as the Xilinx Virtex-7 or the Altera Stratix 5 have come to rival corresponding ASIC and ASSP ("application-specific standard part", such as a standalone USB interface chip) solutions by providing significantly reduced power usage, increased speed, lower materials cost, minimal implementation real estate, and increased possibilities for reconfiguration 'on-the-fly'. A design that once required 6 to 10 ASICs can now be achieved using only one FPGA. Advantages of FPGAs include the ability to re-program them after deployment (i.e. "in the field") to fix bugs, along with often shorter time to market and lower non-recurring engineering costs. Vendors can also take a middle road via FPGA prototyping: developing prototype hardware on FPGAs, but manufacturing the final version as an ASIC so that it can no longer be modified after the design has been committed. This is often also the case with new processor designs. Some FPGAs have the capability of partial re-configuration that lets one portion of the device be re-programmed while other portions continue running.
The primary differences between complex programmable logic devices (CPLDs) and FPGAs are architectural. A CPLD has a comparatively restrictive structure consisting of one or more programmable sum-of-products logic arrays feeding a relatively small number of clocked registers. As a result, CPLDs are less flexible, but have the advantage of more predictable timing delays. FPGA architectures, on the other hand, are dominated by interconnect. This makes them far more flexible (in terms of the range of designs that are practical for implementation on them) but also far more complex to design for, or at least requiring more complex electronic design automation (EDA) software. In practice, the distinction between FPGAs and CPLDs is often one of size, as FPGAs are usually much larger in terms of resources than CPLDs. Typically only FPGAs contain more complex embedded functions such as adders, multipliers, memory, and serializers/deserializers. Another common distinction is that CPLDs contain embedded flash memory to store their configuration, while FPGAs usually (but not always) require external non-volatile memory. When a design requires simple instant-on (logic is already configured at power-up), CPLDs are generally preferred. For most other applications FPGAs are generally preferred. Sometimes both CPLDs and FPGAs are used in a single system design. In those designs, CPLDs generally perform glue logic functions and are responsible for "booting" the FPGA as well as controlling the reset and boot sequence of the complete circuit board. Therefore, depending on the application, it may be judicious to use both FPGAs and CPLDs in a single design.
See also
FPGA Mezzanine Card
FPGA prototyping
List of HDL simulators
List of Xilinx FPGAs
Verilog
SystemVerilog
VHDL
Hardware acceleration
References
Further reading
Mencer, Oskar et al. (2020). "The history, status, and future of FPGAs". Communications of the ACM. ACM. Vol. 63, No. 10. doi:10.1145/3410669
External links
Integrated circuits
Semiconductor devices
American inventions
Hardware acceleration

MyHeritage

MyHeritage is an online genealogy platform with web, mobile, and software products and services, introduced by the Israeli company MyHeritage in 2003. Users of the platform can obtain their family trees, upload and browse through photos, and search through over 14 billion historical records, among other features. As of 2020, the service supports 42 languages and has more than 50 million users worldwide who have built around 52 million family trees. In 2016, it launched a genetic testing service called MyHeritage DNA. The company is headquartered in Or Yehuda, Israel, with additional offices in Tel Aviv, Israel, Lehi, Utah, Kyiv, Ukraine, and Burbank, California.
History
2003–2007: Foundation and early years
MyHeritage was founded in 2003 by Israeli entrepreneur Gilad Japhet (who continues to serve as the company's CEO). Japhet started the company from his living room in Bnei Atarot, Israel. For a long time, the company's headquarters were located in a family farmhouse in Bnei Atarot. In its infancy, MyHeritage was almost completely self-funded but had received funds from angel investors by 2005. It switched from a free service to a freemium business model. As of 2020, the company and its subsidiaries control a major share of the world's genealogy data.
Early on, MyHeritage required users to upload genealogical information from desktop software. The information could be viewed online, but could not be altered. In 2006, MyHeritage introduced new features including facial recognition software that recognized facial features from a database of photographs to link individuals together. In December 2006, the company acquired Pearl Street Software which was the creator of family tree software (Family Tree Legends) and a family tree submission site (GenCircles) with over 160 million profiles and 400 million public records.
By 2007, MyHeritage had 150,000 family trees, 180 million people profiles, 100 million photos, and 17.2 million users worldwide. The service was available in 17 languages. The company also began offering a new web-based feature that allowed users to upload genealogical information directly to the MyHeritage site. MyHeritage had also received a total of US$9 million in investor funding, half of which had come from Accel.
2008–2012: Acquisitions and expansion
In 2008, MyHeritage raised US$15 million from an investment group including Index Ventures and Accel. At that time, the website had grown to 260 million people profiles, 25 million users, 230 million photos, and 25 supported languages. Soon after securing funding, MyHeritage acquired Kindo, a UK-based family tree building service. In 2009, the company released a new version of their free genealogy software, Family Tree Builder, which included the ability to sync between the software and the website.
In 2010, the company acquired Germany-based OSN Group, a family tree website network with seven genealogy sites under its name. Some websites in the OSN network included Verwandt.de in Germany, Moikrewni.pl in Poland, and Dynastree.com in the United States. The acquisition provided MyHeritage with several new features (including coats of arms, family tree merging, and an option to venture into mobile applications) and a total of 540 million people profiles, 47 million active users, and 13 million family trees. In 2011, those numbers increased to 760 million people profiles and 56 million users after MyHeritage acquired Poland-based Bliscy.pl, another genealogy website.
Other 2011 acquisitions included the Dutch family network, Zooof; BackupMyTree, a backup service designed to protect up to 9 terabytes of offline family history data; and FamilyLink, a developer of family history content sites and owner of a large database of historical records (WorldVitalRecords.com, which included census, birth, death, and marriage records along with an archive of historical newspapers). By the end of 2011, MyHeritage had 60 million users, 900 million people profiles, 21 million family trees, and was available in 38 different languages. The company also released the first version of its mobile app for iOS and Android devices.
In 2012, MyHeritage surpassed 1 billion people profiles and launched several new features including SuperSearch, a search engine for billions of historical records, and Record Matching, a technology that automatically compares MyHeritage's historical records to the profiles on the site and alerts users whenever a match is found for a relative in their family tree.
In November 2012, MyHeritage acquired one of its primary competitors, Geni.com. The company kept all of Geni's employees and operated the company as a separate brand in Los Angeles, California, and, as of 2016, MyHeritage and Geni were still separate. Founded by David O. Sacks in 2007, Geni is a genealogy website with the goal of "creating a family tree of the whole world" whereas MyHeritage focuses on records and collecting non-merged individual family trees. The acquisition added 7 million new users to MyHeritage, bringing the total number of members to 72 million. At the time, MyHeritage also had 27 million family trees and 1.5 billion profiles and was available in 40 languages. In addition to the acquisition of Geni, MyHeritage also raised US$25 million in a funding round led by Bessemer Venture Partners.
2013–present: Partnerships, further growth, and beyond
In 2013, MyHeritage entered into a strategic partnership to allow FamilySearch to use its technologies. At the time of the deal, MyHeritage had 75 million registered users and 1.6 billion people profiles. The company also gained access to all United States census records from 1790 to 1940. In April 2013, MyHeritage released Family Tree Builder 7.0 which included new features like sync, Unicode, and Record Matches. MyHeritage also introduced a web feature called Record Detective that automatically makes connections between different historical records.
In 2014, MyHeritage announced partnerships and collaborations with other companies and entities. In February 2014, the company partnered with BillionGraves to digitize and document graves and cemeteries worldwide. In October 2014, the company partnered with EBSCO Information Services to provide educational institutions (libraries, universities, etc.) with free access to MyHeritage's database of historical records. In December 2014, MyHeritage entered into an agreement with the Danish National Archives to index Census records and Parish registers from 1646 to 1930 (a total of around 120 million records). The company also surpassed 5 billion historical records in their database in 2014 and launched the Instant Discoveries feature, which enables users to add whole branches of relatives to their family tree at once.
In 2015, MyHeritage reached 6.3 billion historical records, 200 million photographs, 80 million registered users, and availability in 42 languages. It also released the Global Name Translation technology which automatically translates names from different languages to make searching for ancestors more efficient.
In March 2016, employees of MyHeritage recorded and preserved the family history of remote peoples in the Highlands Region of Papua New Guinea.
In August 2017, the company acquired Millennia Corp. and its Legacy Family Tree software and Legacy webinars program.
In 2018, the company announced its sponsorship of Eurovision Song Contest 2019. It also announced that, as of October 2018, the total number of historical records reached over 9 billion. Also in 2018, chief science officer Yaniv Erlich received media attention for creating a family tree of 13 million people using data from Geni.com.
On 7 September 2019, MyHeritage announced that it had acquired both SNPedia and Promethease. All non-European raw genetic data files previously uploaded to Promethease, and not deleted by users by 1 November 2019, were to be copied to the MyHeritage website into new user accounts created for them; these accounts would receive free services such as ethnicity estimates and DNA matching for relatives.
In April 2019, MyHeritage changed its autosomal DNA microarray chip from the Infinium OmniExpress chip to the Infinium GSA chip, with 642,824 markers.
In early 2021, MyHeritage was acquired by Francisco Partners for a reported 600 million dollars.
Security incidents
In June 2018, it was announced that MyHeritage had experienced a security breach that leaked the data of over 92 million users. According to the company, the breach occurred on October 26, 2017. The leak compromised users' email addresses and hashed passwords. MyHeritage stated that family trees, DNA profiles and credit card information are stored on separate systems and were not part of the leak. The company notified customers on the day it discovered the breach and, in response, implemented two-factor authentication with support for text-message or authenticator-app codes. In February 2019, the leaked data appeared for sale on multiple dark web sites, revealing the incident to be broader in scope than previously known; data from several other compromised websites surfaced in the same dark web marketplace.
Products and services
MyHeritage's products and services exist in the spheres of web, mobile, and downloadable software. The company's website, MyHeritage.com, works on a freemium business model. It is free to sign up and begin building family trees and making matches. The website will provide excerpts from historical records and newspapers, or from other family trees, but in order to read full versions of those documents, or confirm relationships, the user must have a paid subscription. Members of The Church of Jesus Christ of Latter-day Saints are eligible for free accounts due to the aforementioned partnership between MyHeritage and FamilySearch. Additionally, only paid users can contact other members.
As of 2015, the MyHeritage online database contains 6.3 billion historical records, including census, birth, marriage, death, military, and immigration documents along with historical newspapers. In 2020, the number of historical records reached 12 billion. The SuperSearch feature allows users to search through the site's entire catalog of historical records to find information about potential family members. Users may also upload photos to their family trees. MyHeritage's mobile app is available for iOS and Android devices and offers a range of similar features including the ability to view and edit family trees, research historical databases, and capture and share photos.
Matching technologies
MyHeritage uses several matching technologies for family history research. These include Smart Matching, Record Matching, Record Detective, Instant Discoveries, Global Name Translation, and Search Connect. Smart Matching is used to cross-reference one user's family tree with the family trees of all other users. The feature allows users to utilize information about their families from other, possibly related users. Record Matching is similar except that it matches and compares family trees to historical records rather than other family trees.
Record Detective is a technology that links related historical records based on information from one historical record. It also uses existing family trees to make connections between records (for instance, a death certificate and a marriage license). Instant Discoveries is a feature that compares users' family trees to other family trees and records, and then instantly shows them information about their family found in these sources, packaged as a new branch they can add to their trees. Global Name Translation allows users to search for a relative in their preferred language but get historical documents with their relative's name in other languages.
Search Connect is a feature announced by MyHeritage in July 2015 and released in November that same year. The feature indexes search queries along with their metadata (dates, places, relatives, etc.) and then displays them in search results when others perform a similar search. The feature allows users performing similar searches to connect with each other for collaboration.
Family Tree Builder
Family Tree Builder is downloadable software that allows users to build family trees, upload photos, view charts and statistics, and more.
MyHeritage DNA
MyHeritage DNA is a genetic testing service launched by MyHeritage in 2016. DNA results are obtained from home test kits, allowing users to use cheek swabs to collect samples. The results provide DNA matching and ethnicity estimates. In 2018, the company offered 5,000 of these kits as part of an initiative to reunite migrant families separated at the United States–Mexico border. The company also offered 15,000 DNA kits as part of a pro bono initiative called DNA Quest, which connected adoptees with biological parents. In 2016, MyHeritage launched a project to help children of Yemenite Jewish immigrants who had been forcefully separated from their families reunite with their biological families. As of 2019, about 2.5 million MyHeritage DNA kits had been sold, making it the third most popular genealogical DNA testing company. In April 2019, MyHeritage changed its autosomal DNA microarray chip from the Infinium OmniExpress to the Infinium GSA chip, with 642,824 markers, and began releasing data from the new chip. In May 2019, MyHeritage launched the MyHeritage DNA Health test, a test that provides comprehensive health reports to consumers. In December 2020, MyHeritage launched a feature called "Genetic Groups", which pinpoints precise ancestor locations and complements the Ethnicity Estimate. An update to the Ethnicity Estimate was originally planned for 2021 but has since been delayed.
Deep Nostalgia
Deep Nostalgia is an AI-powered service that allows users to create lifelike animations of faces in still photos. Released in February 2021, it quickly went viral and was used millions of times.
Other projects
The Tribal Quest Expedition project is MyHeritage's pro bono project to record the family histories of tribal peoples. It also has a program to match descendants of Holocaust survivors with property taken from their family.
Awards and recognition
In 2013, MyHeritage was selected by Globes as the most promising Israeli startup for 2013–2014. The company was ranked number one out of a possible 4,800 startups. Also in 2013, Deloitte ranked MyHeritage among the top 10 fastest-growing companies from Europe, the Middle East, and Africa (EMEA) on the Deloitte Fast 500 list.
On April 18, 2018, MyHeritage was ranked 6th on a list of the 50 most promising startups in Israel published by the business newspaper Calcalist.
See also
23andMe
Ancestry.com
Geneanet
Genographic Project
WikiTree
References
External links
2003 establishments in Israel
Genealogy websites
Internet properties established in 2003
Privately held companies of Israel
Software companies of Israel
2021 mergers and acquisitions
Private equity portfolio companies

C++

C++ is a general-purpose programming language created by Bjarne Stroustrup as an extension of the C programming language, or "C with Classes". The language has expanded significantly over time, and modern C++ now has object-oriented, generic, and functional features in addition to facilities for low-level memory manipulation. It is almost always implemented as a compiled language, and many vendors provide C++ compilers, including the Free Software Foundation, LLVM, Microsoft, Intel, Oracle, and IBM, so it is available on many platforms.
C++ was designed with an orientation toward systems programming and embedded, resource-constrained software and large systems, with performance, efficiency, and flexibility of use as its design highlights. C++ has also been found useful in many other contexts, with key strengths being software infrastructure and resource-constrained applications, including desktop applications, video games, servers (e.g. e-commerce, web search, or databases), and performance-critical applications (e.g. telephone switches or space probes).
C++ is standardized by the International Organization for Standardization (ISO), with the latest standard version ratified and published by ISO in December 2020 as ISO/IEC 14882:2020 (informally known as C++20). The C++ programming language was initially standardized in 1998 as ISO/IEC 14882:1998, which was then amended by the C++03, C++11, C++14, and C++17 standards. The current C++20 standard supersedes these with new features and an enlarged standard library. Before the initial standardization in 1998, C++ was developed by Danish computer scientist Bjarne Stroustrup at Bell Labs since 1979 as an extension of the C language; he wanted an efficient and flexible language similar to C that also provided high-level features for program organization. Since 2012, C++ has been on a three-year release schedule with C++23 as the next planned standard.
History
In 1979, Bjarne Stroustrup, a Danish computer scientist, began work on "C with Classes", the predecessor to C++. The motivation for creating a new language originated from Stroustrup's experience in programming for his PhD thesis. Stroustrup found that Simula had features that were very helpful for large software development, but the language was too slow for practical use, while BCPL was fast but too low-level to be suitable for large software development. When Stroustrup started working in AT&T Bell Labs, he had the problem of analyzing the UNIX kernel with respect to distributed computing. Remembering his PhD experience, Stroustrup set out to enhance the C language with Simula-like features. C was chosen because it was general-purpose, fast, portable and widely used. As well as C and Simula's influences, other languages also influenced this new language, including ALGOL 68, Ada, CLU and ML.
Initially, Stroustrup's "C with Classes" added features to the C compiler, Cpre, including classes, derived classes, strong typing, inlining and default arguments.
In 1982, Stroustrup started to develop a successor to C with Classes, which he named "C++" (++ being the increment operator in C) after going through several other names. New features were added, including virtual functions, function name and operator overloading, references, constants, type-safe free-store memory allocation (new/delete), improved type checking, and BCPL style single-line comments with two forward slashes (//). Furthermore, Stroustrup developed a new, standalone compiler for C++, Cfront.
In 1984, Stroustrup implemented the first stream input/output library. The idea of providing an output operator rather than a named output function was suggested by Doug McIlroy (who had previously suggested Unix pipes).
In 1985, the first edition of The C++ Programming Language was released, which became the definitive reference for the language, as there was not yet an official standard. The first commercial implementation of C++ was released in October of the same year.
In 1989, C++ 2.0 was released, followed by the updated second edition of The C++ Programming Language in 1991. New features in 2.0 included multiple inheritance, abstract classes, static member functions, const member functions, and protected members. In 1990, The Annotated C++ Reference Manual was published. This work became the basis for the future standard. Later feature additions included templates, exceptions, namespaces, new casts, and a Boolean type.
In 1998, C++98 was released, standardizing the language, and a minor update (C++03) was released in 2003.
After C++98, C++ evolved relatively slowly until, in 2011, the C++11 standard was released, adding numerous new features, enlarging the standard library further, and providing more facilities to C++ programmers. After a minor C++14 update released in December 2014, various new additions were introduced in C++17. After becoming finalized in February 2020, a draft of the C++20 standard was approved on 4 September 2020 and officially published on 15 December 2020.
On January 3, 2018, Stroustrup was announced as the 2018 winner of the Charles Stark Draper Prize for Engineering, "for conceptualizing and developing the C++ programming language".
C++ ranked fourth on the TIOBE index, a measure of the popularity of programming languages, after Python, C and Java.
Etymology
According to Stroustrup, "the name signifies the evolutionary nature of the changes from C". This name is credited to Rick Mascitti (mid-1983) and was first used in December 1983. When Mascitti was questioned informally in 1992 about the naming, he indicated that it was given in a tongue-in-cheek spirit. The name comes from C's ++ operator (which increments the value of a variable) and a common naming convention of using "+" to indicate an enhanced computer program.
During C++'s development period, the language had been referred to as "new C" and "C with Classes" before acquiring its final name.
Philosophy
Throughout C++'s life, its development and evolution has been guided by a set of principles:
It must be driven by actual problems and its features should be immediately useful in real world programs.
Every feature should be implementable (with a reasonably obvious way to do so).
Programmers should be free to pick their own programming style, and that style should be fully supported by C++.
Allowing a useful feature is more important than preventing every possible misuse of C++.
It should provide facilities for organising programs into separate, well-defined parts, and provide facilities for combining separately developed parts.
No implicit violations of the type system (but allow explicit violations; that is, those explicitly requested by the programmer).
User-created types need to have the same support and performance as built-in types.
Unused features should not negatively impact created executables (e.g. in lower performance).
There should be no language beneath C++ (except assembly language).
C++ should work alongside other existing programming languages, rather than fostering its own separate and incompatible programming environment.
If the programmer's intent is unknown, allow the programmer to specify it by providing manual control.
Standardization
C++ is standardized by an ISO working group known as JTC1/SC22/WG21. So far, it has published six revisions of the C++ standard and is currently working on the next revision, C++23.
In 1998, the ISO working group standardized C++ for the first time as ISO/IEC 14882:1998, which is informally known as C++98. In 2003, it published a new version of the C++ standard called ISO/IEC 14882:2003, which fixed problems identified in C++98.
The next major revision of the standard was informally referred to as "C++0x", but it was not released until 2011. C++11 (14882:2011) included many additions to both the core language and the standard library.
In 2014, C++14 (also known as C++1y) was released as a small extension to C++11, featuring mainly bug fixes and small improvements. The Draft International Standard ballot procedures completed in mid-August 2014.
After C++14, a major revision C++17, informally known as C++1z, was completed by the ISO C++ Committee in mid July 2017 and was approved and published in December 2017.
As part of the standardization process, ISO also publishes technical reports and specifications:
ISO/IEC TR 18015:2006 on the use of C++ in embedded systems and on performance implications of C++ language and library features,
ISO/IEC TR 19768:2007 (also known as the C++ Technical Report 1) on library extensions mostly integrated into C++11,
ISO/IEC TR 29124:2010 on special mathematical functions, integrated into C++17,
ISO/IEC TR 24733:2011 on decimal floating-point arithmetic,
ISO/IEC TS 18822:2015 on the standard filesystem library, integrated into C++17,
ISO/IEC TS 19570:2015 on parallel versions of the standard library algorithms, integrated into C++17,
ISO/IEC TS 19841:2015 on software transactional memory,
ISO/IEC TS 19568:2015 on a new set of library extensions, some of which are already integrated into C++17,
ISO/IEC TS 19217:2015 on C++ concepts, integrated into C++20,
ISO/IEC TS 19571:2016 on library extensions for concurrency, some of which are already integrated into C++20,
ISO/IEC TS 19568:2017 on a new set of general-purpose library extensions,
ISO/IEC TS 21425:2017 on library extensions for ranges, integrated into C++20,
ISO/IEC TS 22277:2017 on coroutines, integrated into C++20,
ISO/IEC TS 19216:2018 on the networking library,
ISO/IEC TS 21544:2018 on modules, integrated into C++20,
ISO/IEC TS 19570:2018 on a new set of library extensions for parallelism,
ISO/IEC TS 23619:2021 on new extensions for reflection.
More technical specifications are in development and pending approval, including a new set of concurrency extensions.
Language
The C++ language has two main components: a direct mapping of hardware features provided primarily by the C subset, and zero-overhead abstractions based on those mappings. Stroustrup describes C++ as "a light-weight abstraction programming language [designed] for building and using efficient and elegant abstractions"; and "offering both hardware access and abstraction is the basis of C++. Doing it efficiently is what distinguishes it from other languages."
C++ inherits most of C's syntax. The following is Bjarne Stroustrup's version of the Hello world program that uses the C++ Standard Library stream facility to write a message to standard output:
#include <iostream>

int main()
{
    std::cout << "Hello, world!\n";
}
Object storage
As in C, C++ supports four types of memory management: static storage duration objects, thread storage duration objects, automatic storage duration objects, and dynamic storage duration objects.
Static storage duration objects
Static storage duration objects are created before main() is entered (see exceptions below) and destroyed in reverse order of creation after main() exits. The exact order of creation is not specified by the standard (though some rules are defined below) to allow implementations some freedom in how they organize initialization. More formally, objects of this type have a lifespan that "shall last for the duration of the program".
Static storage duration objects are initialized in two phases. First, "static initialization" is performed, and only after all static initialization is performed, "dynamic initialization" is performed. In static initialization, all objects are first initialized with zeros; after that, all objects that have a constant initialization phase are initialized with the constant expression (i.e. variables initialized with a literal or constexpr). Though it is not specified in the standard, the static initialization phase can be completed at compile time and saved in the data partition of the executable. Dynamic initialization involves all object initialization done via a constructor or function call (unless the function is marked with constexpr, in C++11). The dynamic initialization order is defined as the order of declaration within the compilation unit (i.e. the same file). No guarantees are provided about the order of initialization between compilation units.
Thread storage duration objects
Variables of this type are very similar to static storage duration objects. The main difference is the creation time is just prior to thread creation and destruction is done after the thread has been joined.
Automatic storage duration objects
The most common variable types in C++ are local variables inside a function or block, and temporary variables. The common feature of automatic variables is that they have a lifetime limited to the scope of the variable. They are created and potentially initialized at the point of declaration (see below for details) and destroyed in the reverse order of creation when the scope is left. This is implemented by allocation on the stack.
Local variables are created as the point of execution passes the declaration point. If the variable has a constructor or initializer this is used to define the initial state of the object. Local variables are destroyed when the local block or function that they are declared in is closed. C++ destructors for local variables are called at the end of the object lifetime, allowing a discipline for automatic resource management termed RAII, which is widely used in C++.
Member variables are created when the parent object is created. Array members are initialized from 0 to the last member of the array in order. Member variables are destroyed when the parent object is destroyed, in the reverse order of creation; i.e., if the parent is an "automatic object", then it will be destroyed when it goes out of scope, which triggers the destruction of all its members.
Temporary variables are created as the result of expression evaluation and are destroyed when the statement containing the expression has been fully evaluated (usually at the ; at the end of a statement).
Dynamic storage duration objects
These objects have a dynamic lifespan and can be created directly with a call to new and destroyed explicitly with a call to delete. C++ also supports malloc and free, from C, but these are not compatible with new and delete. Use of new returns an address to the allocated memory. The C++ Core Guidelines advise against using new directly for creating dynamic objects in favor of smart pointers (std::unique_ptr for single ownership and std::shared_ptr for reference-counted shared ownership), which were introduced in C++11.
Templates
C++ templates enable generic programming. C++ supports function, class, alias, and variable templates. Templates may be parameterized by types, compile-time constants, and other templates. Templates are implemented by instantiation at compile-time. To instantiate a template, compilers substitute specific arguments for a template's parameters to generate a concrete function or class instance. Some substitutions are not possible; these are eliminated by an overload resolution policy described by the phrase "Substitution failure is not an error" (SFINAE). Templates are a powerful tool that can be used for generic programming, template metaprogramming, and code optimization, but this power implies a cost. Template use may increase code size, because each template instantiation produces a copy of the template code: one for each set of template arguments; however, this is the same or a smaller amount of code than would be generated if the code were written by hand. This is in contrast to run-time generics seen in other languages (e.g., Java) where at compile-time the type is erased and a single template body is preserved.
Templates are different from macros: while both of these compile-time language features enable conditional compilation, templates are not restricted to lexical substitution. Templates are aware of the semantics and type system of their companion language, as well as all compile-time type definitions, and can perform high-level operations including programmatic flow control based on evaluation of strictly type-checked parameters. Macros are capable of conditional control over compilation based on predetermined criteria, but cannot instantiate new types, recurse, or perform type evaluation and in effect are limited to pre-compilation text-substitution and text-inclusion/exclusion. In other words, macros can control compilation flow based on pre-defined symbols but cannot, unlike templates, independently instantiate new symbols. Templates are a tool for static polymorphism (see below) and generic programming.
In addition, templates are a compile-time mechanism in C++ that is Turing-complete, meaning that any computation expressible by a computer program can be computed, in some form, by a template metaprogram prior to runtime.
In summary, a template is a compile-time parameterized function or class written without knowledge of the specific arguments used to instantiate it. After instantiation, the resulting code is equivalent to code written specifically for the passed arguments. In this manner, templates provide a way to decouple generic, broadly applicable aspects of functions and classes (encoded in templates) from specific aspects (encoded in template parameters) without sacrificing performance due to abstraction.
Objects
C++ introduces object-oriented programming (OOP) features to C. It offers classes, which provide the four features commonly present in OOP (and some non-OOP) languages: abstraction, encapsulation, inheritance, and polymorphism. One distinguishing feature of C++ classes compared to classes in other programming languages is support for deterministic destructors, which in turn provide support for the Resource Acquisition is Initialization (RAII) concept.
Encapsulation
Encapsulation is the hiding of information to ensure that data structures and operators are used as intended and to make the usage model more obvious to the developer. C++ provides the ability to define classes and functions as its primary encapsulation mechanisms. Within a class, members can be declared as either public, protected, or private to explicitly enforce encapsulation. A public member of the class is accessible to any function. A private member is accessible only to functions that are members of that class and to functions and classes explicitly granted access permission by the class ("friends"). A protected member is accessible to members of classes that inherit from the class in addition to the class itself and any friends.
The object-oriented principle ensures the encapsulation of all and only the functions that access the internal representation of a type. C++ supports this principle via member functions and friend functions, but it does not enforce it. Programmers can declare parts or all of the representation of a type to be public, and they are allowed to make public entities not part of the representation of a type. Therefore, C++ supports not just object-oriented programming, but other decomposition paradigms such as modular programming.
It is generally considered good practice to make all data private or protected, and to make public only those functions that are part of a minimal interface for users of the class. This can hide the details of data implementation, allowing the designer to later fundamentally change the implementation without changing the interface in any way.
Inheritance
Inheritance allows one data type to acquire properties of other data types. Inheritance from a base class may be declared as public, protected, or private. This access specifier determines whether unrelated and derived classes can access the inherited public and protected members of the base class. Only public inheritance corresponds to what is usually meant by "inheritance". The other two forms are much less frequently used. If the access specifier is omitted, a "class" inherits privately, while a "struct" inherits publicly. Base classes may be declared as virtual; this is called virtual inheritance. Virtual inheritance ensures that only one instance of a base class exists in the inheritance graph, avoiding some of the ambiguity problems of multiple inheritance.
Multiple inheritance is a C++ feature allowing a class to be derived from more than one base class; this allows for more elaborate inheritance relationships. For example, a "Flying Cat" class can inherit from both "Cat" and "Flying Mammal". Some other languages, such as C# or Java, accomplish something similar (although more limited) by allowing inheritance of multiple interfaces while restricting the number of base classes to one (interfaces, unlike classes, provide only declarations of member functions, no implementation or member data). An interface as in C# and Java can be defined in C++ as a class containing only pure virtual functions, often known as an abstract base class or "ABC". The member functions of such an abstract base class are normally explicitly defined in the derived class, not inherited implicitly. C++ virtual inheritance exhibits an ambiguity resolution feature called dominance.
Operators and operator overloading
C++ provides more than 35 operators, covering basic arithmetic, bit manipulation, indirection, comparisons, logical operations and others. Almost all operators can be overloaded for user-defined types, with a few notable exceptions such as member access (. and .*) as well as the conditional operator. The rich set of overloadable operators is central to making user-defined types in C++ seem like built-in types.
Overloadable operators are also an essential part of many advanced C++ programming techniques, such as smart pointers. Overloading an operator does not change the precedence of calculations involving the operator, nor does it change the number of operands that the operator uses (any operand may, however, be ignored by the operator, though it will be evaluated prior to execution). Overloaded "&&" and "||" operators lose their short-circuit evaluation property.
Polymorphism
Polymorphism enables one common interface for many implementations, and for objects to act differently under different circumstances.
C++ supports several kinds of static (resolved at compile-time) and dynamic (resolved at run-time) polymorphisms, supported by the language features described above. Compile-time polymorphism does not allow for certain run-time decisions, while runtime polymorphism typically incurs a performance penalty.
Static polymorphism
Function overloading allows programs to declare multiple functions having the same name but with different arguments (i.e. ad hoc polymorphism). The functions are distinguished by the number or types of their formal parameters. Thus, the same function name can refer to different functions depending on the context in which it is used. The type returned by the function is not used to distinguish overloaded functions and differing return types would result in a compile-time error message.
When declaring a function, a programmer can specify for one or more parameters a default value. Doing so allows the parameters with defaults to optionally be omitted when the function is called, in which case the default arguments will be used. When a function is called with fewer arguments than there are declared parameters, explicit arguments are matched to parameters in left-to-right order, with any unmatched parameters at the end of the parameter list being assigned their default arguments. In many cases, specifying default arguments in a single function declaration is preferable to providing overloaded function definitions with different numbers of parameters.
Templates in C++ provide a sophisticated mechanism for writing generic, polymorphic code (i.e. parametric polymorphism). In particular, through the curiously recurring template pattern, it's possible to implement a form of static polymorphism that closely mimics the syntax for overriding virtual functions. Because C++ templates are type-aware and Turing-complete, they can also be used to let the compiler resolve recursive conditionals and generate substantial programs through template metaprogramming. Contrary to some opinion, template code will not generate bulk code after compilation with the proper compiler settings.
Dynamic polymorphism
Inheritance
Variable pointers and references to a base class type in C++ can also refer to objects of any derived classes of that type. This allows arrays and other kinds of containers to hold pointers to objects of differing types (references cannot be directly held in containers). This enables dynamic (run-time) polymorphism, where the referred objects can behave differently, depending on their (actual, derived) types.
C++ also provides the dynamic_cast operator, which allows code to safely attempt conversion of an object, via a base reference/pointer, to a more derived type: downcasting. The attempt is necessary as often one does not know which derived type is referenced. (Upcasting, conversion to a more general type, can always be checked/performed at compile-time via static_cast, as ancestral classes are specified in the derived class's interface, visible to all callers.) dynamic_cast relies on run-time type information (RTTI), metadata in the program that enables differentiating types and their relationships. If a dynamic_cast to a pointer fails, the result is the nullptr constant, whereas if the destination is a reference (which cannot be null), the cast throws an exception. Objects known to be of a certain derived type can be cast to that type with static_cast, bypassing RTTI and the safe runtime type-checking of dynamic_cast, so this should be used only if the programmer is very confident the cast is, and will always be, valid.
Virtual member functions
Ordinarily, when a function in a derived class overrides a function in a base class, the function to call is determined by the type of the object. A given function is overridden when there exists no difference in the number or type of parameters between two or more definitions of that function. Hence, at compile time, it may not be possible to determine the type of the object and therefore the correct function to call, given only a base class pointer; the decision is therefore put off until runtime. This is called dynamic dispatch. Virtual member functions or methods allow the most specific implementation of the function to be called, according to the actual run-time type of the object. In C++ implementations, this is commonly done using virtual function tables. If the object type is known, this may be bypassed by prepending a fully qualified class name before the function call, but in general calls to virtual functions are resolved at run time.
In addition to standard member functions, operator overloads and destructors can be virtual. An inexact rule based on practical experience states that if any function in the class is virtual, the destructor should be as well. As the type of an object at its creation is known at compile time, constructors, and by extension copy constructors, cannot be virtual. Nonetheless a situation may arise where a copy of an object needs to be created when a pointer to a derived object is passed as a pointer to a base object. In such a case, a common solution is to create a clone() (or similar) virtual function that creates and returns a copy of the derived class when called.
A member function can also be made "pure virtual" by appending it with = 0 after the closing parenthesis and before the semicolon. A class containing a pure virtual function is called an abstract class. Objects cannot be created from an abstract class; they can only be derived from. Any derived class inherits the virtual function as pure and must provide a non-pure definition of it (and all other pure virtual functions) before objects of the derived class can be created. A program that attempts to create an object of a class with a pure virtual member function or inherited pure virtual member function is ill-formed.
Lambda expressions
C++ provides support for anonymous functions, also known as lambda expressions, with the following form:
[capture](parameters) -> return_type { function_body }
Since C++20, template parameters can be specified on a lambda without the keyword template:
[capture]<template_parameters>(parameters) -> return_type { function_body }
If the lambda takes no parameters, and no return type or other specifiers are used, the () can be omitted, that is,
[capture] { function_body }
The return type of a lambda expression can be automatically inferred, if possible, e.g.:
[](int x, int y) { return x + y; } // inferred
[](int x, int y) -> int { return x + y; } // explicit
The [capture] list supports the definition of closures. Such lambda expressions are defined in the standard as syntactic sugar for an unnamed function object.
Exception handling
Exception handling is used to communicate the existence of a runtime problem or error from where it was detected to where the issue can be handled. It permits this to be done in a uniform manner and separately from the main code, while detecting all errors. Should an error occur, an exception is thrown (raised), which is then caught by the nearest suitable exception handler. The exception causes the current scope to be exited, and also each outer scope (propagation) until a suitable handler is found, calling in turn the destructors of any objects in these exited scopes. At the same time, an exception is presented as an object carrying the data about the detected problem.
Some C++ style guides, such as Google's, LLVM's, and Qt's, forbid the use of exceptions.
The exception-causing code is placed inside a try block. The exceptions are handled in separate catch blocks (the handlers); each try block can have multiple exception handlers, as is visible in the example below.
#include <iostream>
#include <vector>
#include <stdexcept>

int main() {
    try {
        std::vector<int> vec{3, 4, 3, 1};
        int i{vec.at(4)}; // Throws an exception, std::out_of_range (indexing for vec is from 0-3 not 1-4)
    }
    // An exception handler, catches std::out_of_range, which is thrown by vec.at(4)
    catch (std::out_of_range &e) {
        std::cerr << "Accessing a non-existent element: " << e.what() << '\n';
    }
    // To catch any other standard library exceptions (they derive from std::exception)
    catch (std::exception &e) {
        std::cerr << "Exception thrown: " << e.what() << '\n';
    }
    // Catch any unrecognised exceptions (i.e. those which don't derive from std::exception)
    catch (...) {
        std::cerr << "Some fatal error\n";
    }
}
It is also possible to raise exceptions purposefully, using the throw keyword; these exceptions are handled in the usual way. In some cases, exceptions cannot be used due to technical reasons. One such example is a critical component of an embedded system, where every operation must be guaranteed to complete within a specified amount of time. This cannot be determined with exceptions as no tools exist to determine the maximum time required for an exception to be handled.
Unlike signal handling, in which the handling function is called from the point of failure, exception handling exits the current scope before the catch block is entered, which may be located in the current function or any of the previous function calls currently on the stack.
Enumerated types
Standard library
The C++ standard consists of two parts: the core language and the standard library. C++ programmers expect the latter on every major implementation of C++; it includes aggregate types (vectors, lists, maps, sets, queues, stacks, arrays, tuples); algorithms (find, for_each, binary_search, random_shuffle, etc.); input/output facilities (iostream, for reading from and writing to the console and files); a filesystem library; localisation support; smart pointers for automatic memory management; regular expression support; a multi-threading library; atomics support (allowing a variable to be read or written to by at most one thread at a time without any external synchronisation); time utilities (measurement, getting current time, etc.); a system for converting error reporting that doesn't use C++ exceptions into C++ exceptions; a random number generator; and a slightly modified version of the C standard library (to make it comply with the C++ type system).
A large part of the C++ library is based on the Standard Template Library (STL). Useful tools provided by the STL include containers as the collections of objects (such as vectors and lists), iterators that provide array-like access to containers, and algorithms that perform operations such as searching and sorting.
Furthermore, (multi)maps (associative arrays) and (multi)sets are provided, all of which export compatible interfaces. Therefore, using templates it is possible to write generic algorithms that work with any container or on any sequence defined by iterators. As in C, the features of the library are accessed by using the #include directive to include a standard header. The C++ Standard Library provides 105 standard headers, of which 27 are deprecated.
The standard incorporates the STL that was originally designed by Alexander Stepanov, who experimented with generic algorithms and containers for many years. When he started with C++, he finally found a language where it was possible to create generic algorithms (e.g., STL sort) that perform even better than, for example, the C standard library qsort, thanks to C++ features like using inlining and compile-time binding instead of function pointers. The standard does not refer to it as "STL", as it is merely a part of the standard library, but the term is still widely used to distinguish it from the rest of the standard library (input/output streams, internationalization, diagnostics, the C library subset, etc.).
Most C++ compilers, and all major ones, provide a standards-conforming implementation of the C++ standard library.
C++ Core Guidelines
The C++ Core Guidelines are an initiative led by Bjarne Stroustrup, the inventor of C++, and Herb Sutter, the convener and chair of the C++ ISO Working Group, to help programmers write 'Modern C++' by using best practices for the language standards C++11 and newer, and to help developers of compilers and static checking tools to create rules for catching bad programming practices.
The main aim is to efficiently and consistently write type and resource safe C++.
The Core Guidelines were announced in the opening keynote at CppCon 2015.
The Guidelines are accompanied by the Guideline Support Library (GSL), a header-only library of types and functions to implement the Core Guidelines, and static checker tools for enforcing Guideline rules.
Compatibility
To give compiler vendors greater freedom, the C++ standards committee decided not to dictate the implementation of name mangling, exception handling, and other implementation-specific features. The downside of this decision is that object code produced by different compilers is expected to be incompatible. There were, however, attempts to standardize compilers for particular machines or operating systems (for example C++ ABI), though they seem to be largely abandoned now.
With C
C++ is often considered to be a superset of C but this is not strictly true. Most C code can easily be made to compile correctly in C++ but there are a few differences that cause some valid C code to be invalid or behave differently in C++. For example, C allows implicit conversion from void* to other pointer types but C++ does not (for type safety reasons). Also, C++ defines many new keywords, such as new and class, which may be used as identifiers (for example, variable names) in a C program.
Some incompatibilities have been removed by the 1999 revision of the C standard (C99), which now supports C++ features such as line comments (//) and declarations mixed with code. On the other hand, C99 introduced a number of new features that C++ did not support that were incompatible or redundant in C++, such as variable-length arrays, native complex-number types (however, the std::complex class in the C++ standard library provides similar functionality, although not code-compatible), designated initializers, compound literals, and the restrict keyword. Some of the C99-introduced features were included in the subsequent version of the C++ standard, C++11 (out of those which were not redundant). However, the C++11 standard introduces new incompatibilities, such as disallowing assignment of a string literal to a character pointer, which remains valid C.
To intermix C and C++ code, any function declaration or definition that is to be called from/used both in C and C++ must be declared with C linkage by placing it within an extern "C" {/*...*/} block. Such a function may not rely on features depending on name mangling (i.e., function overloading).
Criticism
Despite its widespread adoption, some notable programmers have criticized the C++ language, including Linus Torvalds, Richard Stallman, Joshua Bloch, Ken Thompson and Donald Knuth.
One of the most often criticized points of C++ is its perceived complexity as a language, with the criticism that a large number of non-orthogonal features in practice necessitates restricting code to a subset of C++, thus eschewing the readability benefits of common style and idioms. As expressed by Joshua Bloch: I think C++ was pushed well beyond its complexity threshold, and yet there are a lot of people programming it. But what you do is you force people to subset it. So almost every shop that I know of that uses C++ says, "Yes, we're using C++ but we're not doing multiple-implementation inheritance and we're not using operator overloading." There are just a bunch of features that you're not going to use because the complexity of the resulting code is too high. And I don't think it's good when you have to start doing that. You lose this programmer portability where everyone can read everyone else's code, which I think is such a good thing.
Donald Knuth (1993, commenting on pre-standardized C++), who said of Edsger Dijkstra that "to think of programming in C++" "would make him physically ill": The problem that I have with them today is that... C++ is too complicated. At the moment, it's impossible for me to write portable code that I believe would work on lots of different systems, unless I avoid all exotic features. Whenever the C++ language designers had two competing ideas as to how they should solve some problem, they said "OK, we'll do them both". So the language is too baroque for my taste.
Ken Thompson, who was a colleague of Stroustrup at Bell Labs, gives his assessment: It certainly has its good points. But by and large I think it's a bad language. It does a lot of things half well and it's just a garbage heap of ideas that are mutually exclusive. Everybody I know, whether it's personal or corporate, selects a subset and these subsets are different. So it's not a good language to transport an algorithm—to say, "I wrote it; here, take it." It's way too big, way too complex. And it's obviously built by a committee.
Stroustrup campaigned for years and years and years, way beyond any sort of technical contributions he made to the language, to get it adopted and used. And he sort of ran all the standards committees with a whip and a chair. And he said "no" to no one. He put every feature in that language that ever existed. It wasn't cleanly designed—it was just the union of everything that came along. And I think it suffered drastically from that.
However Brian Kernighan, also a colleague at Bell Labs, disputes this assessment: C++ has been enormously influential. ... Lots of people say C++ is too big and too complicated etc. etc. but in fact it is a very powerful language and pretty much everything that is in there is there for a really sound reason: it is not somebody doing random invention, it is actually people trying to solve real world problems. Now a lot of the programs that we take for granted today, that we just use, are C++ programs.
Stroustrup himself comments that C++ semantics are much cleaner than its syntax: "within C++, there is a much smaller and cleaner language struggling to get out".
Other complaints may include a lack of reflection or garbage collection, long compilation times, perceived feature creep, and verbose error messages, particularly from template metaprogramming.
See also
Comparison of programming languages
List of C++ compilers
Outline of C++
:Category:C++ libraries
References
Further reading
External links
JTC1/SC22/WG21 – the ISO/IEC C++ Standard Working Group
Standard C++ Foundation – a non-profit organization that promotes the use and understanding of standard C++. Bjarne Stroustrup is a director of the organization.
Algol programming language family
C++ programming language family
Class-based programming languages
Cross-platform software
High-level programming languages
Object-oriented programming languages
Programming languages created in 1983
Programming languages with an ISO standard
Statically typed programming languages

Cal

Cal or CAL may refer to:
Arts and entertainment
Cal (novel), a 1983 novel by Bernard MacLaverty
"Cal" (short story), a science fiction short story by Isaac Asimov
Cal (1984 film), an Irish drama starring John Lynch and Helen Mirren
Cal (album), the soundtrack album by Mark Knopfler
Cal (2013 film), a British drama
Judge Cal, a fictional character in the Judge Dredd comic strip in 2000 AD
Aviation
Cal Air International, an airline based in the United Kingdom
Campbeltown Airport IATA airport code
China Airlines ICAO airline code
Continental Airlines, an American airline with the New York Stock Exchange symbol of "CAL"
CAL Cargo Air Lines, a cargo airline based in Israel
Organizations and businesses
CAL Bank, a commercial bank in Ghana
Cal Yachts, originally the Jensen Marine Corporation, founded in 1957
Center for Applied Linguistics, a non-profit organization that researches language and culture
Cercle artistique de Luxembourg, an artist association in Luxembourg
Coalition of African Lesbians, a non-profit organisation based in South Africa
Colorado Association of Libraries, professional association in Colorado
Copyright Agency Ltd, an Australian copyright agency
People
Cal (given name), a list of people
Cal (surname), a list of people
Cal (footballer) (born 1996), Brazilian footballer
John Calipari (born 1959), American basketball coach often called "Coach Cal" or "Cal"
Places
a short form of the state of California
Cal Islet, in the Madeira Islands Archipelago of Portugal
Çal, a district and town in southwest Turkey
School-related
Cal or University of California, Berkeley
California Golden Bears, UC Berkeley's intercollegiate athletic program
Science and math
Calanthe, an orchid genus abbreviated cal. in horticulture
Calcium hydroxide, also called cal
Caliber, firearm barrel measurement
abbreviation for calorie, a unit of energy
Cold Atom Laboratory, an instrument to research Bose-Einstein Condensates on the International Space Station
Software
Cakewalk Application Language, a scripting language used with Cakewalk Pro Audio software
CAL Actor Language, a dataflow language
Cal (application), a former calendar app by Any.do
cal (command), a program on various operating systems that prints an ASCII calendar of the given month or year
CAL (programming language), a programming language based on JOSS
Client access license, operating systems and software license scheme
Cray Assembly Language, included with the Cray Operating System
Sports
Cape Ann League, a high school athletic conference in Massachusetts, United States
Cape-Atlantic League, a high school athletic conference in New Jersey, United States
Cyberathlete Amateur League, online electronic sports league
Other uses
Calipatria State Prison, California, United States
Capital allocation line, a graph to measure risk
Carolinian language ISO 639-3 code
FN CAL, an assault rifle made by the Belgian firm FN (Fabrique Nationale)
Cincinnati Academic League, a high school quiz bowl league
See also
CALS (disambiguation) |
779154 | https://en.wikipedia.org/wiki/BSD%20Daemon | BSD Daemon | The BSD Daemon, nicknamed Beastie, is the generic mascot of BSD operating systems. The BSD Daemon is named after software daemons, a class of long-running computer programs in Unix-like operating systems, which through a play on words takes the cartoon shape of a demon. The BSD Daemon's nickname Beastie is a slurred phonetic pronunciation of BSD. Beastie customarily carries a trident to symbolize a software daemon's forking of processes. The FreeBSD web site has noted Evi Nemeth's 1988 remarks about cultural-historical daemons in the Unix System Administration Handbook: "The ancient Greeks' concept of a 'personal daemon' was similar to the modern concept of a 'guardian angel' ...As a rule, UNIX systems seem to be infested with both daemons and demons."
Copyright
The copyright of the official BSD Daemon images is held by Marshall Kirk McKusick (a very early BSD developer who worked with Bill Joy). He has freely licensed the mascot for individual "personal use within the bounds of good taste (an example of bad taste was a picture of the BSD Daemon blowtorching a Solaris logo)." Any use requires both a copyright notice and attribution.
Reproduction of the daemon in quantity, such as on T-shirts and CDROMs, requires advance permission from McKusick, who restricts its use to implementations having to do with BSD and not as a company logo (although companies with BSD-based products such as Scotgold and Wind River Systems have gotten this kind of permission).
McKusick has said that during the early 1990s "I almost lost the daemon to a certain large company because I failed to show due diligence in protecting it. So, I've taken due diligence seriously since then."
In a request to use a license such as Creative Commons, McKusick replied:
History
The BSD Daemon was first drawn in 1976 by comic artist Phil Foglio. Developer Mike O'Brien, who was working as a bonded locksmith at the time, opened a wall safe in Foglio's Chicago apartment after a roommate had "split town" without leaving the combination. In return Foglio agreed to draw T-shirt artwork for O'Brien, who gave him some Polaroid snaps of a PDP-11 system running UNIX along with some notions about visual puns having to do with pipes, demons/daemons, forks, a "bit bucket" named /dev/null, etc. Foglio's drawing showed four happy little red daemon characters carrying tridents and climbing about on (or falling off of) water pipes in front of a caricature of a PDP-11 and was used for the first national UNIX meeting in the US (which was held in Urbana, Illinois). Bell Labs bought dozens of T-shirts featuring this drawing, which subsequently appeared on UNIX T-shirts for about a decade. Usenix purchased the reproduction rights to Foglio's artwork in 1986. His original drawing was then apparently lost, shortly after having been sent to Digital Equipment Corporation for use in an advertisement; all known copies are from photographs of surviving T-shirts.
The later, more popular versions of the BSD Daemon were drawn by animation director John Lasseter beginning with an early greyscale drawing on the cover of the Unix System Manager's Manual published in 1984 by USENIX for 4.2BSD. Its author/editor Sam Leffler (who had been a technical staff member at CSRG) and Lasseter were both employees of Lucasfilm at the time. About four years after this Lasseter drew his widely known take on the BSD Daemon for the cover of McKusick's co-authored 1988 book, The Design and Implementation of the 4.3BSD Operating System. Lasseter drew a somewhat lesser-known running BSD Daemon for the 4.4BSD version of the book in 1994.
Use in operating system logos
From 1994 to 2004, the NetBSD project used artwork by Shawn Mueller as a logo, featuring four BSD Daemons in a pose similar to the famous photo, Raising the Flag on Iwo Jima. However, this logo was seen as inappropriate for an international project, and it was superseded by a more abstract flag logo, chosen from over 400 entries in a competition.
Early versions of OpenBSD (2.3 and 2.4) used a BSD Daemon with a halo, and briefly used a daemon police officer for version 2.5. Then, however, OpenBSD switched to Puffy, a blowfish, as a mascot.
The FreeBSD project used the 1988 Lasseter drawing as both a logo and mascot for 12 years. However, questions arose as to the graphic's effectiveness as a logo. The daemon was not unique to FreeBSD, since it was historically used by other BSD variants, and members of the FreeBSD core team considered it inappropriate for corporate and marketing purposes. Lithographically, the scanned Lasseter drawing is not line art; it neither scaled easily across a wide range of sizes nor rendered appealingly in only two or three colours. A contest to create a new FreeBSD logo began in February 2005, and a scalable graphic which somewhat echoes the BSD Daemon's head was chosen the following October, although "the little red fellow" has been kept on as an official project mascot.
Walnut Creek CDROM also produced two variations of the daemon for its CDROM covers. The FreeBSD 1.0 and 1.1 CDROM covers used the 1988 Lasseter drawing. The FreeBSD 2.0 CDROM used a variant with different colored (specifically green) tennis shoes. Other distributions used this image with different colored tennis shoes over the years. Starting with FreeBSD 2.0.5, Walnut Creek CDROM covers used the daemon walking out of a CDROM. Starting with FreeBSD 4.5, the FreeBSD Mall used a mirrored image of the Walnut Creek 2.0 image. The Walnut Creek 2.0 image has also appeared on the cover of different FreeBSD Handbook editions.
Deprecated name
In the mid-1990s a marketer for Walnut Creek CDROM called the mascot Chuck, perhaps referring to a brand name for the kind of shoes worn by the character, but this name is strongly deprecated by the copyright holder, who has said the BSD Daemon "is very proud of the fact that he does not have a name, he is just the BSD Daemon. If you insist on a name, call him Beastie."
ASCII image
             ,        ,
            /(        )`
            \ \___   / |
            /- _  `-/  '
           (/\/ \ \   /\
           / /   | `    \
           O O   ) /    |
           `-^--'`<     '
          (_.)  _  )   /
           `.___/`    /
             `-----' /
<----.     __ / __   \
<----|====O)))==) \) /====|
<----'    `--' `.__,' \
             |        |
              \       /       /\
           __( (_  /  \__/
         ,'  ,-----'   |
         `--{__)
This ASCII art image of the BSD Daemon by Felix Lee appeared in the startup menu of FreeBSD version 5.x and can still be set as startup image in later versions. It is also used in the daemon_saver screensaver.
See also
List of computing mascots
:Category:Computing mascots
Tux (mascot), the mascot of Linux kernel
Konqi, the mascot of KDE
Glenda, the Plan 9 Bunny, the mascot of Plan 9 from Bell Labs
Puffy (mascot), the mascot of OpenBSD
Mozilla (mascot), the mascot of Mozilla Foundation
Kiki the Cyber Squirrel, the mascot of Krita
Wilber (mascot), the mascot of GIMP
References and notes
External links
Photograph of a T-shirt bearing Foglio's original 1976 drawing
Photograph of a BSD-UNIX/VAX manual showing Lasseter's 1984 drawing
Photograph of a book cover bearing Lasseter's iconic 1988 drawing
FreeBSD's The BSD Daemon page
The red guy's name, from the FreeBSD FAQ
What's that daemon? — info on daemon shirts and a funny story
How to make a beastie flag
BSD Daemon Gallery
Berkeley Software Distribution
Fictional demons and devils
Computing mascots |
300046 | https://en.wikipedia.org/wiki/EFILM | EFILM | EFILM Digital Laboratories, founded in 1989, is a company serving the motion picture and television industry. Their clients include film studios, independent filmmakers, advertisers, animators, visual effects companies and large format filmmakers. EFILM is part of Deluxe Entertainment Services Group, a group of facilities which includes Beast, Company 3, Method Studios, and Rushes.
Services
Cinemascan
Colorstream
eVue
Digital Intermediate
Digital Lab Services
Image Processing
Laser Film Recording
Location Services
Security & Vaulting
Tape to Film Transfers
Video Services
16MM and 35MM Scanning
History
1989-1990 - Las Palmas Productions, Inc. develops a proprietary tape to film transfer process that creates high quality film recordings.
1993 - EFILM offers its film recording services to commercial, music video and movie trailer producers.
1994 - EFILM adds high resolution scanning to its list of services.
1995 - The company creates the world's first 2K digital intermediate color timing sequences for Batman Forever. The images are scanned, color timed and film recorded on the company's custom systems. The digital color timing system worked in real time via proxy images and allowed for interactive primary and secondary color corrections. The color settings were later applied to the 2K images. 25 minutes of special effects sequences were color timed this way under the creative direction of the film's visual effects designer, John Dykstra. This custom built system predated most, if not all, other high resolution color timing systems.
EFILM adds 65 mm film recording as a service for theatrical clients, theme parks, museums and other special venues. EFILM completes several 65 mm film recording projects including T2-3D: Battle Across Time, which opened at Universal Studios Florida in the summer of 1996.
1996 - The company develops digital laboratory software that is used throughout EFILM for almost every step of the workflow including scanning, tape to film, film recording and video and digital cinema deliverables. Their Etron software performs all the image processing needs for most jobs.
1997 - EFILM introduces its EWorks digital color correction software designed with a proprietary viewing approach that allowed the timer to accurately match photographed and digital images at any resolution.
EFILM's laser recording system successfully records its first VFX scenes for the feature film Dante's Peak. EFILM also delivers 22 minutes of scanning and recording for the 1997 Academy Award Best Picture winner Titanic. Additionally, EFILM is selected to film record Contact, The Devil's Advocate, Mouse Hunt, Con Air, Spawn, Alien Resurrection, and Austin Powers: International Man of Mystery.
1998-1999 - EFILM expands its tape to film services. Refined format and color space translations provide a better workflow from standard-definition tape to film and give EFILM the opportunity to participate in longer-format productions. EFILM's first long-format tape to film project is HBO's From the Earth to the Moon.
2000 - EFILM works with Panavision to transform digitally originated HDTV 24 frame progressive scan images to film.
EFILM is first to transform digitally originated HDTV 24 frame footage to 15-perforation 65 mm. The results are displayed for the Large Format Cinema Association conference in Los Angeles at the California Science Center and at the Universal CityWalk IMAX theaters. EFILM goes on to experiment further with large format filmmakers and completes the first ever 3D stereo HD to 65 mm short film.
2001 - Panavision Inc. acquires LPPI (EFILM), as a digital laboratory that services the feature film, television, and commercial arenas.
2002 - EFILM signs an exclusive software agreement with Colorfront Software, Ltd., of Hungary, to develop proprietary software for EFILM's digital intermediate color timing process.
EFILM begins development to improve the accuracy of digital projection systems, specifically the refinement of the digital display matrix to better emulate film.
We Were Soldiers is the world's first full-length motion picture created as a true 2K digital intermediate. The film is digitally mastered from fade up to fade out with EFILM's proprietary technology, including a number of industry firsts.
The first feature film to be 100% digitally mastered at true 2K resolution.
The first feature film to be 100% digitally colored timed on a computer.
The first feature film with release prints that required NO lab timing.
The first feature film with close to 2000 first generation release prints.
The first feature film with all the video masters derived from the 2K digital files.
EFILM is the first digital intermediate facility to incorporate storage area network (SAN) technology for creating digital intermediates.
Deluxe Laboratories became a 20% owner of EFILM.
EFILM expands its facilities by 8,000 square feet and added multiple digital color timing suites. This positions EFILM to become the leader in digital intermediates with the largest dedicated digital laboratory in the industry.
2003 - EFILM and Panavision develop proprietary optics for the digital projection systems EFILM uses in its digital intermediate process. The optical modifications are put on-line in June 2003.
EFILM performs the world's first location-based digital color timing for a theatrically released film, Universal's Bruce Almighty. The work is done under the direction of Dean Semler, ASC in Austin, Texas while Semler is shooting The Alamo. The system includes EFILM's computer-based color timing system, proprietary digital projection and an EFILM colorist. The session lasts two days with Semler giving specific direction and previewing the results on location via a digital projector. EFILM then returns to its home in Hollywood with the metadata created on location to complete the film.
2004 - Universal's Van Helsing is digitally assembled at 2K and EFILM creates digital cinema preview screening versions directly from the 2K files, which is an industry first. All previous digital previews came from the HD process. The EFILM approach ensures all the work that goes into the preview screenings ends up in the final movie.
EFILM scans the live action film, digitally assembles the cut, color times the entire film and renders and film records Spider-Man 2 at 4K. Other industry firsts include nine digital negatives that generate 10,000 first generation domestic prints as well as RGB HD video masters derived from the high resolution images.
Deluxe Laboratories assumes sole ownership of EFILM after reaching agreement to purchase Panavision's interest in the industry's premier digital film laboratory. Already a 20% owner, Deluxe completes the purchase of Panavision's 80% holding to become the outright owner of EFILM.
2005 - EFILM introduces the EWorks digital color correction system in concert with Autodesk.
EFILM introduces a virtual keycode mechanism that enables unique tracking, identification, frame accurate editing and asset management of each and every frame (up to 5.45 sextillion), either digitally generated, composited with other (single or multiple) film material.
2006 - The Rank Group announces that it has agreed to sell Deluxe Film to MacAndrews & Forbes Holdings Inc. The sale includes all worldwide business units within the Deluxe Film group. EFILM is now wholly owned by Deluxe.
EFILM creates and launches Colorstream. Colorstream is a proprietary viewing and color correction tool for use on the motion-picture set that allows for on-set emulation and pre-visualization of digitally captured content.
EFILM opens a separate division dedicated solely to finishing motion picture trailers.
2011 - EFILM develops post production workflow for Extremely Loud and Incredibly Close, the first U.S. Feature film to shoot using the ARRIRAW format on the Alexa.
2021 - Kevin Cox retired
Awards
2011
Hollywood Post Alliance (HPA) - Creativity and Innovation Award – The Tree of Life – Steven J. Scott
Hollywood Post Alliance (HPA) – Outstanding Color Grading using a DI process – Feature Film – The Help –Steven J. Scott
Feature films
Space Jam: A New Legacy (2021)
Epic (2013)
Skyfall (2012)
Looper (2012)
Ice Age: Continental Drift (2012)
The Avengers (2012)
John Carter (2012)
The Vow (2012)
War Horse (2011)
The Twilight Saga: Breaking Dawn – Pt. 1 (2011)
The Three Musketeers (2011)
Killer Elite (2011)
Abduction (2011)
Seven Days in Utopia (2011)
30 Minutes or Less (2011)
Cowboys & Aliens (2011)
Friends with Benefits (2011)
Snow Flower and the Secret Fan (2011)
Horrible Bosses (2011)
Bad Teacher (2011)
The Beaver (2011)
Prom (2011)
The Lincoln Lawyer (2011)
Just Go with It (2011)
The Rite (2011)
The Adventures of Tintin (2011)
Anonymous (2011)
Moneyball (2011)
Straw Dogs (2011)
Bucky Larson: Born to Be a Star (2011)
Colombiana (2011)
Spy Kids: All the Time in the World in 4D (2011)
The Help (2011)
The Smurfs (2011)
Captain America: The First Avenger (2011)
Zookeeper (2011)
Monte Carlo (2011)
Jumping the Broom (2011)
Thor (2011)
Madea's Big Happy Family (2011)
Battle Los Angeles (2011)
The Roommate (2011)
Another Earth (2011)
Robots (2005)
References
Television and film post-production companies
Companies based in Los Angeles
Companies established in 1989 |
144913 | https://en.wikipedia.org/wiki/Bally%20Astrocade | Bally Astrocade | The Bally Astrocade (also known as Bally Arcade or initially as Bally ABA-1000) is a second-generation home video game console and simple computer system designed by a team at Midway, at that time the videogame division of Bally. It was originally announced as the "Bally Home Library Computer" in October 1977 and initially made available for mail order in December 1977. But due to production delays, the units were first released to stores in April 1978 and its branding changed to "Bally Professional Arcade". It was marketed only for a limited time before Bally decided to exit the market. The rights were later picked up by a third-party company, who re-released it and sold it until around 1984. The Astrocade is particularly notable for its very powerful graphics capabilities for the time of release, and for the difficulty in accessing those capabilities.
History
Nutting and Midway
In the late 1970s, Midway contracted Dave Nutting Associates to design a video display chip that could be used in all of their videogame systems, from standup arcade games, to a home computer system. The system Nutting delivered was used in most of Midway's classic arcade games of the era, including Gorf and Wizard of Wor. The chip set supported what was at that time relatively high resolution of 320×204 in four colours per line, although to access this mode required memory that could be accessed at a faster rate than the common 2 MHz dynamic RAM of the era.
Console use
Originally referred to as the Bally Home Library Computer, it was released in 1977 but available only through mail order. Delays in the production meant none of the units actually shipped until 1978, and by this time the machine had been renamed the Bally Professional Arcade. In this form it sold mostly at computer stores and had little retail exposure (unlike the Atari VCS). In 1979, Bally grew less interested in the arcade market and decided to sell off their Consumer Products Division, including development and production of the game console.
At about the same time, a third-party group had been unsuccessfully attempting to bring their own console design to market as the Astrovision. A corporate buyer from Montgomery Ward who was in charge of the Bally system put the two groups in contact, and a deal was eventually arranged. In 1981 they re-released the unit with the BASIC cartridge included for free, this time known as the Bally Computer System, with the name changing again, in 1982, to Astrocade. It sold under this name until the video game crash of 1983, and then disappeared around 1985.
Midway had long been planning to release an expansion system for the unit, known as the ZGRASS-100. The system was being developed by a group of computer artists at the University of Illinois at Chicago known as the 'Circle Graphics Habitat', along with programmers at Nutting. Midway felt that such a system, in an external box, would make the Astrocade more interesting to the market. However it was still not ready for release when Bally sold off the division. A small handful may have been produced as the ZGRASS-32 after the machine was re-released by Astrovision.
The system, combined into a single box, would eventually be released as the Datamax UV-1. Aimed at the home computer market while being designed, the machine was now re-targeted as a system for outputting high-quality graphics to video tape. These were offered for sale some time between 1980 and 1982, but it is unknown how many were built.
Description
The basic system was powered by a Zilog Z80 driving the display chip with a RAM buffer in between the two. The display chip had two modes, a low-resolution mode at 160 × 102, and a high-resolution mode at 320 × 204, both with 2 bits per pixel for four colors. This sort of color/resolution was normally beyond the capabilities of RAM of the era, which could not read out the data fast enough to keep up with the TV display. The system used page mode addressing, allowing it to read one "line" at a time at very high speed into a buffer inside the display chip. The line could then be read out to the screen at a more leisurely rate, while also interfering less with the CPU, which was also trying to use the same memory.
On the Astrocade the pins needed to use this "trick" were not connected. Thus the Astrocade system was left with just the lower-resolution 160 × 102 mode. In this mode the system used up 160 × 102 × 2 bits = 4080 bytes of memory to hold the screen. Since the machine had only 4 KiB (4096 bytes) of RAM, this left very little room for program functions such as keeping score and game options. The rest of the program would have to be placed in ROM.
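The frame-buffer arithmetic above can be checked with a short sketch (Python is used here purely for illustration):

```python
# Quick check of the Astrocade frame-buffer arithmetic described above.

def framebuffer_bytes(width, height, bits_per_pixel):
    """Bytes needed to hold one frame at the given bit depth."""
    return width * height * bits_per_pixel // 8

TOTAL_RAM = 4096                        # the stock console's 4 KiB

low_res = framebuffer_bytes(160, 102, 2)
print(low_res)                          # 4080 bytes for the screen
print(TOTAL_RAM - low_res)              # only 16 bytes left over

print(framebuffer_bytes(320, 204, 2))   # 16320 bytes -- far beyond 4 KiB,
                                        # so hi-res was out of reach
```

The last line also shows why the 320 × 204 mode was impractical without the extra display-chip pins: a full hi-res frame needs roughly four times the console's entire RAM.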
The Astrocade used color registers, or color indirection, so the four colors could be picked from a palette of 256 colors. Color animation was possible by changing the values of the registers, and using a horizontal blank interrupt they could be changed from line to line. An additional set of four color registers could be "swapped in" at any point along the line, allowing the creation of two screen "halves", split vertically. Originally intended to allow creation of a score area on the side of the screen, programmers also used this feature to emulate 8 color modes.
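A minimal sketch of this color indirection, with made-up palette values and boundary column (none of these numbers are hardware defaults):

```python
# Sketch of the Astrocade's color indirection: each 2-bit pixel indexes a
# bank of four color registers, and a horizontal boundary selects which of
# two banks applies -- up to 8 colors on one scan line.  Values below are
# illustrative assumptions, not real register contents.

left_bank  = [0x00, 0x07, 0x5A, 0xFF]   # 4 registers, each picking 1 of 256
right_bank = [0x00, 0x23, 0x91, 0xC4]   # alternate bank for the right half
BOUNDARY = 128                          # column where the banks swap

def resolve(x, pixel_value):
    """Map a 2-bit pixel at column x to its 8-bit palette color."""
    bank = left_bank if x < BOUNDARY else right_bank
    return bank[pixel_value & 0b11]

print(hex(resolve(10, 2)))              # 0x5a -- left bank
print(hex(resolve(150, 2)))             # 0x91 -- same pixel value, but the
                                        # right bank gives a different color
```

Changing the register contents between scan lines (as the horizontal-blank interrupt allowed) would amount to swapping these lists per line.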
Unlike the VCS, the Astrocade did not include hardware sprite support. It did, however, include a blitter-like system and software to drive it. Memory above 0x4000 was dedicated to the display, and memory below that to the ROM. If a program wrote to the ROM space (normally impossible, it is "read only" after all) the video chip would take the data, apply a function to it, and then copy the result into the corresponding location in the RAM. Which function to use was stored in a register in the display chip, and included common instructions like XOR and bit-shift. This allowed the Astrocade to support any number of sprite-like objects independent of hardware, with the downside that it was up to the software to re-draw them when they moved.
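A toy model of that write-interception scheme may make it clearer. This is an illustrative Python sketch, not Astrocade firmware; the operation set and addresses are loose assumptions based on the description above:

```python
# Toy model of the "write to ROM space, get a patched byte in RAM" blitter
# trick: the display chip intercepts the write, combines the data with the
# current screen contents per its function register, and stores the result.

XOR, OR, SHIFT = range(3)               # a few example operations

class MagicWritePort:
    def __init__(self, vram_size=4080):
        self.vram = bytearray(vram_size)
        self.function = XOR             # the display chip's function register

    def write(self, addr, data):
        """Model a CPU write into ROM space being redirected into VRAM."""
        old = self.vram[addr]
        if self.function == XOR:
            self.vram[addr] = old ^ data
        elif self.function == OR:
            self.vram[addr] = old | data
        elif self.function == SHIFT:
            self.vram[addr] = (data >> 1) & 0xFF

port = MagicWritePort()
port.write(100, 0b10101010)             # draw one row of a sprite
port.write(100, 0b10101010)             # XOR again: the row is erased
print(port.vram[100])                   # 0 -- background restored
```

The double-XOR at the end is why this scheme suited software sprites: drawing the same data twice restores whatever was underneath, at the cost of the CPU tracking every object itself.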
The Astrocade was one of the early cartridge-based systems, using cartridges known as Videocades that were designed to be as close in size and shape as possible to a cassette tape. The unit also included two games built into the ROM, Gunfight and Checkmate, along with the simple but useful Calculator and a "doodle" program called Scribbling. Most cartridges included two games, and when they were inserted the machine would reset and display a menu starting with the programs on the cartridge and then listing the four built-in programs.
The Astrocade featured a relatively complex input device incorporating several types of control mechanisms: the controller was shaped as a pistol-style grip with trigger switch on the front; a small 4-switch/8-way joystick was placed on top of the grip, and the shaft of the joystick connected to a potentiometer, meaning that the stick could be rotated to double as a paddle controller.
On the front of the unit was a 24-key "hex-pad" keyboard used for selecting games and options as well as operating the calculator. On the back were a number of ports, including connectors for power, the controllers, and an expansion port. One oddity was that the top rear of the unit was empty, and could be opened to store up to 15 cartridges. The system's ability to be upgraded from a video game console to a personal computer, along with its library of nearly 30 games by 1982, made it more versatile than its main competitors, and it was listed by Jeff Rovin as one of the seven major video game suppliers.
Astro BASIC
The Astrocade also included a BASIC programming language cartridge, written by Jamie Fenton, who expanded Li-Chen Wang's Palo Alto Tiny BASIC; it was first published as Bally BASIC in 1978.
Developing a BASIC interpreter on the system was difficult, because the display alone used up almost all the available RAM. The solution to this problem was to store the BASIC program code in the video RAM.
This was accomplished by interleaving every bit of the program along with the display itself; BASIC used all the even-numbered bits, and the display the odd-numbered bits. The interpreter would read out two bytes, drop all the odd-numbered bits, and assemble the results into a single byte of code. This was rendered invisible by setting two of the colors to be the same as the other two, such that colors 01 and 11 would be the same (white), so the presence, or lack, of a bit for BASIC had no effect on the screen. Additional memory was scavenged by using fewer lines vertically, only 88 instead of the full 102. This managed to squeeze out 1760 bytes of RAM for BASIC programs. The downside was that most of the graphics system's power was unavailable.
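The interleaving can be modelled in a few lines. The bit layout below (code in even positions, pixels in odd) follows the description above; the real interpreter of course did this in Z80 code, not Python:

```python
# Toy model of Astro BASIC hiding program code in video RAM: code bits
# occupy the even bit positions, pixel bits the odd ones, so one byte of
# BASIC spans two video-RAM bytes.

def split(code_byte):
    """Spread one code byte across the even bit positions of two video bytes."""
    pair = []
    for nibble in (code_byte >> 4, code_byte & 0x0F):
        b = 0
        for i in range(4):              # 4 code bits at positions 0, 2, 4, 6
            b |= ((nibble >> i) & 1) << (2 * i)
        pair.append(b)
    return pair

def join(hi, lo):
    """Drop the odd (pixel) bits of two video bytes and rebuild the code byte."""
    even = lambda b: sum(((b >> (2 * i)) & 1) << i for i in range(4))
    return (even(hi) << 4) | even(lo)

hi, lo = split(0xC3)
assert join(hi, lo) == 0xC3             # round-trips whatever the pixels do

video_bytes = 160 * 88 * 2 // 8         # the 88-line display = 3520 bytes
print(video_bytes // 2)                 # 1760 bytes left for BASIC code
```

Setting the odd (pixel) bits has no effect on `join`, which mirrors how the identical-color trick made the program bits invisible on screen; the final calculation reproduces the 1760-byte figure quoted above.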
Programs were entered via the calculator keypad, with a plastic overlay displaying letters, symbols, and BASIC keywords. These were selected through a set of 4 colored shift keys. For example, pressing the gold ("WORD") shift key and then the "+" key would produce GOTO.
A simple line editor was supported. After typing the line number corresponding to an existing program, each press of the PAUSE key would load the next character from memory.
An Astro BASIC program that later became commercialized is Artillery Duel. John Perkins wrote the game first and submitted it to The Arcadian fanzine, from which it was adapted for the Astro BASIC manual. Perkins subsequently developed the Astrocade cartridge of the game.
Language features
Astro BASIC supported the following keywords:
Commands: LIST, RUN, STOP, TRACE
Statements: PRINT, INPUT
Structure: GOTO, GOSUB, RETURN, IF (but no THEN and no ELSE), FOR-TO-STEP/NEXT
Graphics: BOX, CLEAR, LINE
Tape Commands: :PRINT, :INPUT, :LIST, :RUN
Functions: ABS(), CALL(), JX() (specified joystick's horizontal position), JY() (joystick vertical position), KN() (knob status), PX(X,Y) (pixel on or off), RND(), TR() (trigger status)
Built-in variables
(read only): KP (key press), RM (remainder of last division), SZ (memory size), XY (last LINE position)
(write only): SM= (scroll mode), TV= (display ASCII character)
(read/write): BC (background color), CX CY (cursor position), FC (foreground color), NT (note time),
Math: + - × ÷
Relational operators: < > = # [not equal] [the language did not support: <= => <>]
Logical operators: × [AND] + [OR]
A period . at the start of the line was equivalent to REM in other BASIC implementations. Certain commands were handled by the keypad instead of by keywords: the RESET button was equivalent to NEW in other interpreters.
The language supported 26 integer variables A to Z, and two pre-defined arrays, @() - which was stored starting after the program, ascending - and *() - which was stored from the top of memory, descending. The language lacked a DIM statement for dimensioning the arrays, the size of which was determined by available memory (SZ) not used by the program listing (2 bytes per item). Ports were accessed via the array &(), and memory was accessed via the array %(), rather than using PEEK and POKE. While the language lacked strings, KP would provide the ASCII value of a key press, which could be output to TV, meaning that characters could be read in from the keyboard, stored in an array, and then output.
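A toy model of the two predefined arrays' layout is below. The addresses are hypothetical; only the two-bytes-per-element rule and the opposing growth directions come from the description above:

```python
# Toy model of Astro BASIC's predefined arrays: @() ascends from the end
# of the program listing while *() descends from the top of memory, so the
# two share whatever RAM the listing leaves free.  Addresses are made up.

PROGRAM_END = 0x2000                    # hypothetical end of the listing
TOP_OF_MEM  = 0x2800                    # hypothetical top of free RAM
ITEM = 2                                # two bytes per array element

def at_addr(i):
    """Address of @(i), growing upward."""
    return PROGRAM_END + ITEM * i

def star_addr(i):
    """Address of *(i), growing downward."""
    return TOP_OF_MEM - ITEM * (i + 1)

print(hex(at_addr(0)), hex(star_addr(0)))   # 0x2000 0x27fe
shared = (TOP_OF_MEM - PROGRAM_END) // ITEM
print(shared)                           # 1024 elements shared between them
```

This layout explains why neither array needed a DIM statement: their combined size was simply whatever free memory (SZ) remained after the listing.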
The character display was 11 lines of 26 characters across. The resolution for the graphic commands is 88x160, with X ranging from -80 to 79 and Y ranging from -44 to 43.
Music could be produced in four ways:
The PRINT command, as a side effect, produced a unique tone for each character or keyword displayed.
The MU variable converted numbers into notes.
Ports 16 through 23 accessed a music synthesizer.
The sound-synthesizer variables MO (master oscillator), NM (Noise Mode), NV (Noise Volume), TA (Tone A), TB (Tone B), TC (Tone C), VA (Voice A volume), VB (Voice B volume), VC (Voice C volume), VF (Vibrato Frequency), VR (VibRato). (Added to Astro BASIC but not in Bally BASIC.)
Sample code
The following sample program from the manual demonstrates the joystick input and graphics functions. "Try your skill... The first player's knob moves the phaser left or right and the trigger shoots... Player two controls the target while player one shoots."
This listing illustrates how keywords, which were tokenized, were always displayed with a following space.
ZGRASS
The ZGRASS unit sat under the Astrocade and turned it into a "real" computer, adding a full keyboard, a math co-processor (FPU), 32 kB of RAM, and a new 32 kB ROM containing the GRASS programming language (sometimes referred to as GRAFIX on this machine). The unit also added I/O ports for a cassette and floppy disk, allowing it to be used with CP/M.
Reception
Danny Goodman of Creative Computing Video & Arcade Games stated in 1983 that Astrocade "has one of the best graphics and sound packages of any home video game".
Specifications
Circuit board and cartridges
CPU: Zilog Z80, 1.789 MHz
RAM: 4 kB (up to 64 kB with external modules in the expansion port)
ROM: 8 kB
Cart ROM: 8 kB
Expansion: 64 kB total
Ports: 4 controller, 1 expansion, 1 light pen
Audio
Sound chip model: 0066-117XX, also known as the Music Processor; because it also performs I/O functions, it is sometimes described as a custom I/O chip.
Channel capabilities: There are three square-wave channels, each with 8-bit pitch accuracy (256 possible frequencies). The chip also has a noise generator, which can run independently of the three tone channels or add its value to the master oscillator that drives them. The master oscillator can be set to different frequencies, changing the frequency range available to the three channels.
Volume control: Each channel has independent 4-bit volume control.
Miscellaneous features concerning sound: There are hardware registers for vibrato, with two bits for the vibrato speed and six bits for the vibrato depth, so vibrato need not be implemented entirely in software.
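The two-bit/six-bit split described above can be illustrated as a simple bit-packing exercise. The field positions chosen here (speed in the high bits) are an assumption for the sketch, not the documented register layout.

```python
def pack_vibrato(speed, depth):
    """Pack a 2-bit vibrato speed and a 6-bit vibrato depth into one
    8-bit register value. Placing speed in the high two bits is an
    illustrative assumption, not the documented hardware layout."""
    if not 0 <= speed <= 3:
        raise ValueError("speed must fit in 2 bits (0-3)")
    if not 0 <= depth <= 63:
        raise ValueError("depth must fit in 6 bits (0-63)")
    return (speed << 6) | depth

def unpack_vibrato(value):
    """Recover (speed, depth) from the packed register value."""
    return (value >> 6) & 0b11, value & 0b111111

packed = pack_vibrato(speed=2, depth=20)
print(packed)                  # 148
print(unpack_vibrato(packed))  # (2, 20)
```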
Video
Resolution: True 160×102 / Basic 160×88 / Expanded RAM 320×204
Colors: True 8* / Basic 2
The bitmap structure of the Bally actually allows only four color settings per pixel. However, through the use of two color palettes and a left/right boundary control byte, the left section of the screen (for example, the playfield) could use one set of colors while the right side (for example, lives and score) used an entirely different set, making eight colors possible in total.
Graphics type: bitmap, 2 bits per pixel.
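The 2-bit-per-pixel bitmap makes it easy to check the memory budget implied by the resolutions listed above: the "true" 160×102 mode almost exactly fills the console's 4 kB of RAM, while the 320×204 mode is only feasible with expansion RAM. A back-of-the-envelope sketch:

```python
BITS_PER_PIXEL = 2  # four color settings per pixel, as described above

def framebuffer_bytes(width, height):
    """Bytes of RAM needed for a packed 2-bpp bitmap of the given size."""
    return width * height * BITS_PER_PIXEL // 8

true_mode = framebuffer_bytes(160, 102)
basic_mode = framebuffer_bytes(160, 88)
expanded_mode = framebuffer_bytes(320, 204)

print(true_mode)      # 4080 bytes: nearly all of the 4096-byte (4 kB) RAM
print(basic_mode)     # 3520 bytes: leaves room for a BASIC program
print(expanded_mode)  # 16320 bytes: requires expansion RAM
```

This also explains why Astro BASIC's graphics commands use the smaller 160×88 area: the 4080-byte full-screen bitmap would leave almost nothing of the stock 4 kB for the program listing itself.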
Game library
There are 28 officially released video games for the system.
280 Zzzap / Dodgem (1978)
Amazing Maze / Tic Tac Toe (1978)
Artillery Duel (1982)
Astro Battle (1981) (originally titled Space Invaders)
Bally Pin (1981)
Biorhythm (1981)
Blackjack / Poker / Acey-Deucey (1978)
Blast Droids (1981)
Clowns / Brickyard (1979)
Cosmic Raiders (1982)
Dog Patch (1978)
Elementary Math and Speed Math (1978)
Football (1978)
Galactic Invasion (1981) (originally titled Galaxian)
Grand Prix / Demolition Derby (1978)
Gun Fight (1977)
The Incredible Wizard (1981)
Letter Match / Spell'n Score / Crosswords (1981)
Ms. CandyMan (1983) (very rare)
Muncher (1981)
Panzer Attack / Red Baron (1978)
Pirates Chase (1981)
Sea Devil (1983) (rare)
Seawolf / Missile (1978)
Solar Conqueror (1981)
Space Fortress (1981)
Star Battle (1978)
Tornado Baseball / Tennis / Hockey / Handball (1978)
Other cartridges
BASIC
Machine Language Manager
Prototypes
Conan the Barbarian
Mazeman
Soccer
Homebrew
Fawn Dungeon
Treasure Cove (1983) (Spectre Systems)
ICBM Attack (Spectre Systems) with the Spectre Systems handle (extremely rare)
Sneaky Snake (1983) (New Image)
War
References
External links
Bally Alley
Astrocade history at The Dot Eaters
Video Game Console Library
TheGameConsole.com
OldComputers.net
Console Database
Player's Choice Videogames
Database at GiantBomb
Bally Astrocade games playable for free in the browser at the Internet Archive Console Living Room
Home video game consoles
Second-generation video game consoles
Products introduced in 1977
1970s toys
1980s toys
Z80 |
55420615 | https://en.wikipedia.org/wiki/Dennis%20E.%20Taylor | Dennis E. Taylor | Dennis E. Taylor is a Canadian novelist and former computer programmer known for his large-scale hard science fiction stories exploring the interaction between artificial intelligence and the human condition.
Writing
While working at his day job as a computer programmer, Taylor self-published his first novel and began working with an agent to try to publish his second novel, We Are Legion. Taylor still had difficulty getting any publishing house to take on his work, and eventually published it through his agent's in-house publishing arm; an audiobook rights deal with Audible was also reached. Once recorded, We Are Legion became one of the most popular audiobooks on the service and was awarded Best Science Fiction Audiobook of the year.
Taylor has been noted as one of many popular authors who debut their work in audio form rather than print to take advantage of the explosive growth of the audio medium.
Taylor's 2018 novel The Singularity Trap as well as his 2020 novel Heaven's River debuted on the New York Times Bestseller List for Fiction Audiobooks.
In September 2020, Taylor released his sixth novel, Heaven's River, a sequel in the "Bobiverse" series. The new novel follows a loose thread from the earlier novels involving the Bob clone named "Bender", who had disappeared mysteriously many years before, prompting a galaxy-spanning search.
Major themes
Taylor's "Bobiverse" series explores how technologies such as cryonics, mind uploading, and artificial intelligence might change society and the human condition. Another major topic is global catastrophic risk, which is also featured in Outland and The Singularity Trap.
The German magazine Stern praised Taylor's distinctive style of humour, often based on nerdy in-jokes and references.
Personal life
Taylor lives outside Vancouver, Canada with his wife Blaihin and daughter Tina and enjoys snowboarding and mountain biking when he isn't writing or traveling.
Works
Novels
Short Stories
Recognition
Taylor's works have been translated into several languages, including Japanese, German, French and Polish.
The novel We Are Legion (We Are Bob) was a finalist for the 2019 Seiun Awards.
In October 2018, Taylor was added to the X-Prize Foundation Science Fiction Advisory Council as a "Visionary Storyteller". This group of accomplished science fiction authors helps advise the X-Prize team on envisioning the future.
See also
Von Neumann probe
The Singularity
Topopolis
References
External links
Canadian computer programmers
Canadian science fiction writers
Living people
Year of birth missing (living people) |
26942876 | https://en.wikipedia.org/wiki/Fortiva | Fortiva | Fortiva was a software-as-a-service (SaaS) email archiving company, founded in 2005 by Paul Chen, the former CEO and founder of FloNetwork (later acquired by DoubleClick). Fortiva's SaaS email archiving service introduced a "hybrid" method, taking advantage of storage and services "in the cloud" while leaving control over email services at the customer site. As a result, the company claimed to offer businesses the benefits of an in-house product with the advantages of a managed solution.
Fortiva was acquired in May 2008 by Sunnyvale, California-based security company Proofpoint, Inc.
History
The company was created in response to the growing legal and regulatory challenges that email presents for businesses, combined with storage challenges. It was also one of the early companies to build a product "from the ground up" as software as a service, and was recognized early on as a unique solution to the challenge of securely storing and retrieving email without maintaining all data in-house.
Fortiva launched in February 2005, after raising an initial $5 million funding round, releasing its first product, and lining up two customers as references. Fortiva was backed by the venture capital firms McLean Watson and Ventures West. The full series A funding, totalling $8 million, was announced in September 2008.
Fortiva was acquired in May 2008 by Proofpoint, Inc. Its archiving service continues to be marketed and sold under the Proofpoint name.
Products
The Fortiva Archiving & Compliance Suite was a hybrid application suite and managed service for e-mail archiving, compliance and legal discovery. Integrating with Microsoft Exchange and Active Directory, the suite consisted of five components including Fortiva Policy, Fortiva Archive, Fortiva Discovery, Fortiva Supervision, and Fortiva Reports.
DoubleBlind Encryption
Fortiva was notable for its use of a proprietary method of data security known as DoubleBlind Encryption, which ensured that customer data was encrypted at the customer site before being sent to Fortiva's secure datacenters. As a result, even Fortiva's own staff could not read the data.
References
Companies based in Toronto
Email
Computer archives
de:E-Mail-Archivierung
ja:メールアーカイブ |
19147192 | https://en.wikipedia.org/wiki/Magued%20Osman | Magued Osman | Dr. Magued Osman is the CEO and Director of the Egyptian Center for Public Opinion Research (Baseera), which ran the only transparent public opinion surveys by phone for the first Egyptian presidential elections in 2012. Baseera also implemented the first exit poll in the Middle East.
Dr. Osman is a member of Egypt National Council for Women.
Dr. Osman has been chairman of Telecom Egypt (we), the main landline service provider in Egypt, since 2016.
He is the former Minister of Communications and Information Technology in Egypt from February to July 2011 in the caretaker government. Before being appointed as a minister, Dr. Osman was the Chairman of the Egyptian Cabinet of Ministers' Information and Decision Support Center (IDSC).
Dr. Osman is one of the pioneers of statistics and national information in Egypt. He is a professor in the Department of Statistics, Faculty of Economics and Political Science, Cairo University. In 2012, he led the World Values Survey in Egypt.
He has extensive technical and consultancy experience in the fields of poverty targeting, statistical policies, and public policy for a number of prestigious local and regional institutions and international organizations, such as the Ministry of Urban and Rural Development of Saudi Arabia, the Economic Research Forum (ERF), UNICEF, the Canadian International Development Research Centre, UNDP, and the United Nations Conference on Trade and Development (UNCTAD).
Dr. Osman is the editor and lead author of the 2016 Human Development Report (HDR), which discusses social justice in Egypt.
National committees
Ministerial Group for Human Development.
Advisory Committee – Ministry of Higher Education.
Fund of the Civil Affairs System – Sector of the Civil Affairs Authority –Ministry of Interior.
Board of Directors of the National Authority for Remote Sensing.
Board of Directors of the Authority of Standardization and Production Quality Control.
Board of trustees - Egypt's International University.
National Committee for Information Development – Scientific Research Academy.
Research Council of Population and Social Sciences – Academy of Scientific Research.
Executive committee of the National Population Council.
Advisory Council of the Post Graduate Education Department – Cairo's Arab Academy of Financial and Banking Sciences
Advisory Committee for Planning and Statistical Coordination – Central Agency for Public Mobilization and Statistics (CAPMAS)
The Permanent Scientific Committee – National Council for Social and Criminal Research.
Board of Directors of the Center for Information and Computer Systems – Faculty of Economics and Political Science – Cairo University.
Information and Communication Committee of the National Committee of Education, Science and Culture (UNESCO – ELISCO - ESISCO).
Family
Osman was born in Cairo on November 4, 1951. He has two brothers, Hussein Osman and Amr Osman. He is married to Dr. Fadia Elwan, Professor of Psychology at Cairo University, and has three sons: Hesham Osman, Walid Osman and Tarek Osman.
Education
1984-1987 Ph.D., Biostatistics - Case Western Reserve University, Cleveland, Ohio, USA.
1982-1984 M.S., Biostatistics - Case Western Reserve University, Cleveland, Ohio, USA.
1977-1980 M.Sc., Applied Statistics - Cairo University, Cairo, Egypt.
1970-1974 B.Sc., Statistics - Cairo University, Cairo, Egypt.
1967-1970 High School Diploma - Collège de la Sainte Famille (French Language School), Cairo, Egypt.
Publications
I. Publications in refereed journals:
Osman, M. A note on: the human sex ratio and factors influencing family size in Japan. Journal of Heredity 1985; 76: 143.
Luria, M., Debanne, S., and Osman, M. Long-term follow-up after recovery from acute myocardial infarction: observations on survival, ventricular arrhythmias and sudden death. Archives of Internal Medicine 1985; 145: 1592-1595.
Osman, M. and Yamashita, T. A model for evaluating the effect of son or daughter preference on population size. Journal of Heredity 1987; 78: 377-382.
Osman, M. Pattern of family sex composition preference in Egypt. Studies in African and Asian Demography, Research Monograph Series 1988; 18: 87-96.
Osman, M. Differentials of sex preference in Egypt. Studies in African and Asian Demography, Research Monograph Series 1989; 19: 335-345.
Osman, M. Birth spacing and nutritional status of child, in Egypt, 1988. Studies in African and Asian Demography, Research Monograph Series 1990; 20: 87-96.
Gawad, M. and Osman, M. Effect and safety of Sertraline. The Egyptian Journal of Psychiatry 1991; 14:145-168.
Ashoor, A., Osman, M., and Parashar, S. Head and neck and oesophagus cancers in Saudi Arabia. Saudi Medical Journal 1993;14:209-212.
Osman, M., Magbool, G., and Kaul, K. Hegira adaptation of the NCHS weight and height charts. Annals of Saudi Medicine 1993; 13:170-171.
Magbool, G., Kaul, K., Corea, J., Osman, M., and Al-Arfaj, A. Weight and height of Saudi children 6–16 years from the eastern province. Annals of Saudi Medicine 1993; 13:344-349.
Sebai, Z. and Osman, M. Teaching medicine in Arabic. Journal of Family and Community Medicine 1994; 1:3-9.
Abdel Shafy, M. and Osman, M. The compound Gompertz as a lifetime distribution. The Egyptian Statistical Journal 1995; 39:89-105.
Osman, M. Teaching biostatistics using Epi Info. Journal of Family and Community Medicine 1995; 2:49-62.
Osman, M. Exploring mixture of distributions using Minitab. Computers in Biology and Medicine 1997; 27:223-232.
Al-Hussaini, E. and Osman, M. On median of finite mixture. Journal of Statistical Computation and Simulation 1997; 58: 121-142.
Mangoud, A. et al. Utilization pattern of health care facilities in a selected Thana in Bangladesh. Journal of Preventive and Social Medicine 1997; 16:87-91.
El-Bassiouni, M., Zayed, A. and Osman, M. An empirical study of the UAE job market expectations from business education. Arab Journal of Administrative Sciences 1999; 6: 295-311.
Rashad, H., Osman, M., and Roudi, N. Marriage in the Arab World. Population Reference Bureau 2005.
Osman, Magued and Shahd, Laila (2003) "Age-discrepant marriages in Egypt". In Nicholas Hopkins (eds) The New Arab Family. Cairo: The American University in Cairo Press, pp. 51–61.
Rashad, Hoda and Osman, Magued (2003) "Nuptiality in Arab countries: Changes and implications". In Nicholas Hopkins (eds) The New Arab Family. Cairo: The American University in Cairo Press, pp. 20–50.
II. Publications in conference proceedings:
Osman, M. and McClish, D. Survival analysis for heterogeneous populations. American Statistical Association proceedings 1985, Social Statistics, 1985; 235-240.
Osman, M. and McClish, D. A model for survival in the presence of heterogeneity. American Statistical Association proceedings 1986, Social Statistics, 1986; 234-236.
Osman, M. Simulation experiment of a family building model. Proceedings of the First Conference on Computer Modeling System in Human Social Sciences 1989; 83-108. Center for Information and Computer Systems, Faculty of Economics and Political Science, Cairo University.
Osman, M. Sensitivity of fertility parameters used in population projection using ESCAP/POP. Proceedings of the Second Conference on Statistics and Computer Modeling in Human and Social Sciences 1990; Center for Information and Computer Systems, Faculty of Economics and Political Science, Cairo University.
Osman, M. The mixture of distributions as a model for analyzing anthropometric data. In: IRD/Macro International, Inc. Proceedings of the Demographic and Health Surveys World Conference, Washington, D.C. 1991, Vol 2, Columbia, Maryland, USA 1991:1101-1113.
Osman, M. Modeling height-for-age data in Egypt: compound vs. finite mixture normal distributions. Proceedings of the 19th International Conference on Statistics, Computer Science and Applications. 1994.
El-Bassiouni, M. and Osman, M. Using Minitab in teaching statistical distributions. Proceedings of the International Conference for Teaching Statistics and Information Sciences. July 1994, pp. 219–232, 1994.
Osman, M. Modeling height for age data in developing countries. Paper presented in the Fifth Islamic Countries Conference on Statistical Sciences, Malang, Indonesia, August 1996.
Osman, M. Nutritional status in Egypt: Results from the 1992 Egypt Demographic and Health Survey. Paper presented in the Arab Regional Meeting of the International Union for the Scientific Study of Population, Cairo, December 1996.
Osman, M. Population policies in Egypt, Jordan and Yemen. Paper presented in the Second ESCWA (UN) Meeting of Heads of National Population Councils in The Arab Countries on Population Policies and Sustainable Development, Amman, December 1997.
Al Segeny, M., Ismaeil, S., and Osman, M. Egyptian 2000 Growth Reference Centiles for Weight, and Height Fitted by LMS Method. Proceedings of the Conference on Statistics and Computer Modeling in Human and Social Sciences 2005; Faculty of Economics and Political Science, Cairo University.
III. Books:
Sayed, H. and Osman, M. Pregnancy, Fertility and Family Planning Practice. Ministry of Health, The Health Profile of Egypt, 1987 Cairo.
Sayed, H., Osman, M., El-Zenaty, F. and Way, A. Egypt Demographic and Health Survey 1988. Columbia, Maryland: Institute for Resource Development, Macro System, Inc., 1989.
Osman M. Health surveys - error in data analysis. Case Studies in Community Medicine. Ed. Z. Sebai. pp 71–81.
Osman M. Population and Labor Force in Egypt. Cairo, Egypt: Merit Publishing, 2002.
Osman M. Demographic Profile of the United Arab Emirates. National Research Project for Manpower Development and Education Planning, United Arab Emirates University, Emirates, 1991.
El-Shamsy, M., Hegazi, M. and Osman, M. Women and Employment in Emirates. United Arab Emirates University, Emirates, 1996.
Osman, M. Stunting among Egyptian children: Differentials and risk factors. In: Perspectives on the Population and Health Situation in Egypt. Ed. Mahran, M. et al. pp. 95–112. Demographic and Health surveys, Macro International Inc. Maryland, USA.
Co-Author of Egypt Human Development Report 2004.
Co-Author of Egypt Human Development Report 2005.
IV. Working papers:
Osman M. Sex preference in Egypt. Working Paper # 18, Cairo Demographic Center, 1990.
El-Bassiouni, M. and Osman, M. Using computers in teaching survey methodology course. Occasional Paper # 9, Cairo Demographic Center, 1997.
See also
List of national leaders
Egypt
Collège de la Sainte Famille
References
External links
Profile at the Ministry of Communications and Information Technology
Profile at ITIDA
Profile at IDSC
Profile at UN
Cairo University faculty
Living people
1951 births
Case Western Reserve University alumni
Cairo University alumni |
8095 | https://en.wikipedia.org/wiki/Donald%20Knuth | Donald Knuth | Donald Ervin Knuth (born January 10, 1938) is an American computer scientist, mathematician, and professor emeritus at Stanford University. He is the 1974 recipient of the ACM Turing Award, informally considered the Nobel Prize of computer science. Knuth has been called the "father of the analysis of algorithms".
He is the author of the multi-volume work The Art of Computer Programming. He contributed to the development of the rigorous analysis of the computational complexity of algorithms and systematized formal mathematical techniques for it. In the process he also popularized the asymptotic notation. In addition to fundamental contributions in several branches of theoretical computer science, Knuth is the creator of the TeX computer typesetting system, the related METAFONT font definition language and rendering system, and the Computer Modern family of typefaces.
As a writer and scholar, Knuth created the WEB and CWEB computer programming systems designed to encourage and facilitate literate programming, and designed the MIX/MMIX instruction set architectures. Knuth strongly opposes the granting of software patents, having expressed his opinion to the United States Patent and Trademark Office and European Patent Organisation.
Biography
Early life
Knuth was born in Milwaukee, Wisconsin, to Ervin Henry Knuth and Louise Marie Bohning. He describes his heritage as "Midwestern Lutheran German". His father owned a small printing business and taught bookkeeping. Donald, a student at Milwaukee Lutheran High School, thought of ingenious ways to solve problems. For example, in eighth grade, he entered a contest to find the number of words that the letters in "Ziegler's Giant Bar" could be rearranged to create; the judges had identified 2,500 such words. With time gained away from school due to a pretend stomach ache, and working the problem the other way, Knuth used an unabridged dictionary and determined if each dictionary entry could be formed using the letters in the phrase. Using this algorithm, he identified over 4,500 words, winning the contest. As prizes, the school received a new television and enough candy bars for all of his schoolmates to eat.
Education
Knuth received a scholarship in physics to the Case Institute of Technology (now part of Case Western Reserve University) in Cleveland, Ohio, enrolling in 1956. He also joined Beta Nu Chapter of the Theta Chi fraternity. While studying physics at Case, Knuth was introduced to the IBM 650, an early commercial computer. After reading the computer's manual, Knuth decided to rewrite the assembly and compiler code for the machine used in his school, because he believed he could do it better.
In 1958, Knuth created a program to help his school's basketball team win their games. He assigned "values" to players in order to gauge their probability of getting points, a novel approach that Newsweek and CBS Evening News later reported on.
Knuth was one of the founding editors of Case Institute's Engineering and Science Review, which won a national award as best technical magazine in 1959. He then switched from physics to mathematics, and received two degrees from Case in 1960: his bachelor of science degree, and simultaneously a master of science by a special award of the faculty, who considered his work exceptionally outstanding.
In 1963, with mathematician Marshall Hall as his adviser, he earned a PhD in mathematics from the California Institute of Technology.
Early work
After receiving his PhD, Knuth joined Caltech's faculty as an assistant professor.
He accepted a commission to write a book on computer programming language compilers. While working on this project, Knuth decided that he could not adequately treat the topic without first developing a fundamental theory of computer programming, which became The Art of Computer Programming. He originally planned to publish this as a single book. As Knuth developed his outline for the book, he concluded that he required six volumes, and then seven, to thoroughly cover the subject. He published the first volume in 1968.
Just before publishing the first volume of The Art of Computer Programming, Knuth left Caltech to accept employment with the Institute for Defense Analyses' Communications Research Division, then situated on the Princeton University campus, which was performing mathematical research in cryptography to support the National Security Agency.
In 1967, Knuth attended a Society for Industrial and Applied Mathematics conference and someone asked what he did. At the time, computer science was partitioned into numerical analysis, artificial intelligence and programming languages. Based on his study and The Art of Computer Programming book, Knuth decided the next time someone asked he would say, "Analysis of algorithms."
Knuth then left his position to join the Stanford University faculty in 1969, where he is now Fletcher Jones Professor of Computer Science, Emeritus.
Writings
Knuth is a writer, as well as a computer scientist.
The Art of Computer Programming (TAOCP)
In the 1970s, Knuth described computer science as "a totally new field with no real identity. And the standard of available publications was not that high. A lot of the papers coming out were quite simply wrong. ... So one of my motivations was to put straight a story that had been very badly told."
From 1972 to 1973, Knuth spent a year at the University of Oslo among people such as Ole-Johan Dahl. Here he was to actually write the seventh volume in his book series, a volume that was to deal with programming languages. However, Knuth had only finished the first two volumes when he came to Oslo, and thus spent the year on the third volume, next to teaching. The third volume in the series came out just after Knuth returned to Stanford in 1973.
By 2011, the first three volumes and part one of volume four of his series had been published. Concrete Mathematics: A Foundation for Computer Science 2nd ed., which originated with an expansion of the mathematical preliminaries section of Volume 1 of TAoCP, has also been published. In April 2020, Knuth said he is hard at work on part B of volume 4, and he anticipates that the book will have at least parts A through F.
Other works
Knuth is also the author of Surreal Numbers, a mathematical novelette on John Conway's set theory construction of an alternate system of numbers. Instead of simply explaining the subject, the book seeks to show the development of the mathematics. Knuth wanted the book to prepare students for doing original, creative research.
In 1995, Knuth wrote the foreword to the book A=B by Marko Petkovšek, Herbert Wilf and Doron Zeilberger. Knuth is also an occasional contributor of language puzzles to Word Ways: The Journal of Recreational Linguistics.
Knuth has also delved into recreational mathematics. He contributed articles to the Journal of Recreational Mathematics beginning in the 1960s, and was acknowledged as a major contributor in Joseph Madachy's Mathematics on Vacation.
Knuth has also appeared in a number of Numberphile and Computerphile videos on YouTube where he has discussed topics from writing Surreal Numbers to why he does not use email.
Works regarding his religious beliefs
In addition to his writings on computer science, Knuth, a Lutheran, is also the author of 3:16 Bible Texts Illuminated, in which he examines the Bible by a process of systematic sampling, namely an analysis of chapter 3, verse 16 of each book. Each verse is accompanied by a rendering in calligraphic art, contributed by a group of calligraphers under the leadership of Hermann Zapf. Subsequently, he was invited to give a set of lectures at MIT on his views on religion and computer science behind his 3:16 project, resulting in another book, Things a Computer Scientist Rarely Talks About, where he published the lectures "God and Computer Science".
Opinion on software patents
Knuth is strongly opposed to the policy of granting software patents for trivial solutions that should be obvious, but has expressed more nuanced views for nontrivial solutions such as the interior-point method of linear programming. He has expressed his disagreement directly to both the United States Patent and Trademark Office and European Patent Organisation.
Computer Musings
Knuth gives informal lectures a few times a year at Stanford University, which he titled "Computer Musings". He was a visiting professor at the Oxford University Department of Computer Science in the United Kingdom until 2017 and an Honorary Fellow of Magdalen College.
Programming
Digital typesetting
In the 1970s the publishers of TAOCP abandoned Monotype in favor of phototypesetting. Knuth became so frustrated with the inability of the latter system to approach the quality of the previous volumes, which were typeset using the older system, that he took time out to work on digital typesetting and created TeX and Metafont.
Literate programming
While developing TeX, Knuth created a new methodology of programming, which he called literate programming, because he believed that programmers should think of programs as works of literature. "Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do."
Knuth embodied the idea of literate programming in the WEB system. The same WEB source is used to weave a TeX file, and to tangle a Pascal source file. These in their turn produce a readable description of the program and an executable binary respectively. A later iteration of the system, CWEB, replaces Pascal with C.
Knuth used WEB to program TeX and METAFONT, and published both programs as books: The TeXbook, first published in 1984, and The METAFONTbook, first published in 1986. Around the same time, LaTeX, the now widely adopted macro package based on TeX, was first developed by Leslie Lamport, who later published its first user manual in 1986.
Music
Knuth is an organist and a composer. In 2016 he completed a musical piece for organ titled Fantasia Apocalyptica, which he describes as "translation of the Greek text of the Revelation of Saint John the Divine into music". It was premièred in Sweden on January 10, 2018.
Personal life
Donald Knuth married Nancy Jill Carter on 24 June 1961, while he was a graduate student at the California Institute of Technology. They have two children: John Martin Knuth and Jennifer Sierra Knuth.
Chinese name
Knuth's Chinese name is Gao Dena (). In 1977, he was given this name by Frances Yao, shortly before making a 3-week trip to China. In the 1980 Chinese translation of Volume 1 of The Art of Computer Programming (), Knuth explains that he embraced his Chinese name because he wanted to be known by the growing numbers of computer programmers in China at the time. In 1989, his Chinese name was placed atop the Journal of Computer Science and Technology header, which Knuth says "makes me feel close to all Chinese people although I cannot speak your language".
Health concerns
In 2006, Knuth was diagnosed with prostate cancer. He underwent surgery in December that year and stated, "a little bit of radiation therapy ... as a precaution but the prognosis looks pretty good", as he reported in his video autobiography.
Humor
Knuth used to pay a finder's fee of $2.56 for any typographical errors or mistakes discovered in his books, because "256 pennies is one hexadecimal dollar", and $0.32 for "valuable suggestions". According to an article in the Massachusetts Institute of Technology's Technology Review, these Knuth reward checks are "among computerdom's most prized trophies". Knuth had to stop sending real checks in 2008 due to bank fraud, and instead now gives each error finder a "certificate of deposit" from a publicly listed balance in his fictitious "Bank of San Serriffe".
He once warned a correspondent, "Beware of bugs in the above code; I have only proved it correct, not tried it."
Knuth published his first "scientific" article in a school magazine in 1957 under the title "The Potrzebie System of Weights and Measures". In it, he defined the fundamental unit of length as the thickness of Mad No. 26, and named the fundamental unit of force "whatmeworry". Mad published the article in issue No. 33 (June 1957).
To demonstrate the concept of recursion, Knuth intentionally made the index entries "Circular definition" and "Definition, circular" in The Art of Computer Programming, Volume 1 refer to each other.
At the TUG 2010 Conference, Knuth announced a satirical XML-based successor to TeX, titled "iTeX" (performed with a bell ringing), which would support features such as arbitrarily scaled irrational units, 3D printing, input from seismographs and heart monitors, animation, and stereophonic sound.
Awards and honors
In 1971, Knuth was the recipient of the first ACM Grace Murray Hopper Award. He has received various other awards including the Turing Award, the National Medal of Science, the John von Neumann Medal, and the Kyoto Prize.
Knuth was elected a Distinguished Fellow of the British Computer Society (DFBCS) in 1980 in recognition of his contributions to the field of computer science.
In 1990 he was awarded the one-of-a-kind academic title of Professor of The Art of Computer Programming, which has since been revised to Professor Emeritus of The Art of Computer Programming.
Knuth was elected to the National Academy of Sciences in 1975. He was also elected a member of the National Academy of Engineering in 1981 for organizing vast subject areas of computer science so that they are accessible to all segments of the computing community. In 1992, he became an associate of the French Academy of Sciences. Also that year, he retired from regular research and teaching at Stanford University in order to finish The Art of Computer Programming. He was elected a Foreign Member of the Royal Society (ForMemRS) in 2003.
Knuth was elected as a Fellow (first class of Fellows) of the Society for Industrial and Applied Mathematics in 2009 for his outstanding contributions to mathematics. He is a member of the Norwegian Academy of Science and Letters. In 2012, he became a fellow of the American Mathematical Society and a member of the American Philosophical Society. Other awards and honors include:
First ACM Grace Murray Hopper Award, 1971
Turing Award, 1974
Lester R. Ford Award, 1975 and 1993
Josiah Willard Gibbs Lecturer, 1978
National Medal of Science, 1979
Golden Plate Award of the American Academy of Achievement, 1985
Franklin Medal, 1988
John von Neumann Medal, 1995
Harvey Prize from the Technion, 1995
Kyoto Prize, 1996
Fellow of the Computer History Museum, 1998, "for his fundamental early work in the history of computing algorithms, development of the TeX typesetting language, and for major contributions to mathematics and computer science."
Asteroid 21656 Knuth, named in his honor in May 2001
Katayanagi Prize, 2010
BBVA Foundation Frontiers of Knowledge Award in the category of Information and Communication Technologies, 2010
Turing Lecture, 2011
Stanford University School of Engineering Hero Award, 2011
Publications
A short list of his publications includes:
The Art of Computer Programming:
Computers and Typesetting (all books are hardcover unless otherwise noted):
, x+483pp.
(softcover).
, xviii+600pp.
, xii+361pp.
(softcover).
, xviii+566pp.
, xvi+588pp.
Books of collected papers:
, (paperback)
, (paperback)
Donald E. Knuth, Selected Papers on Design of Algorithms (Stanford, California: Center for the Study of Language and Information—CSLI Lecture Notes, no. 191), 2010. (cloth), (paperback)
Donald E. Knuth, Selected Papers on Fun and Games (Stanford, California: Center for the Study of Language and Information—CSLI Lecture Notes, no. 192), 2011. (cloth), (paperback)
Donald E. Knuth, Companion to the Papers of Donald Knuth (Stanford, California: Center for the Study of Language and Information—CSLI Lecture Notes, no. 202), 2011. (cloth), (paperback)
Other books:
xiv+657 pp.
Donald E. Knuth, The Stanford GraphBase: A Platform for Combinatorial Computing (New York, ACM Press) 1993. second paperback printing 2009.
Donald E. Knuth, 3:16 Bible Texts Illuminated (Madison, Wisconsin: A-R Editions), 1990.
Donald E. Knuth, Things a Computer Scientist Rarely Talks About (Center for the Study of Language and Information—CSLI Lecture Notes no 136), 2001.
Donald E. Knuth, MMIXware: A RISC Computer for the Third Millennium (Heidelberg: Springer-Verlag— Lecture Notes in Computer Science, no. 1750), 1999. viii+550pp.
Donald E. Knuth and Silvio Levy, The CWEB System of Structured Documentation (Reading, Massachusetts: Addison-Wesley), 1993. iv+227pp. Third printing 2001 with hypertext support, ii+237pp.
Donald E. Knuth, Tracy L. Larrabee, and Paul M. Roberts, Mathematical Writing (Washington, D.C.: Mathematical Association of America), 1989. ii+115pp
Daniel H. Greene and Donald E. Knuth, Mathematics for the Analysis of Algorithms (Boston: Birkhäuser), 1990. viii+132pp.
Donald E. Knuth, , 1976. 106pp.
Donald E. Knuth, Axioms and Hulls (Heidelberg: Springer-Verlag—Lecture Notes in Computer Science, no. 606), 1992. ix+109pp.
See also
Asymptotic notation
Attribute grammar
CC system
Dancing Links
-yllion
Knuth Prize
Knuth shuffle
Knuth's Algorithm X
Knuth's Simpath algorithm
Knuth's up-arrow notation
Davis–Knuth dragon
Bender–Knuth involution
Trabb Pardo–Knuth algorithm
Fisher–Yates shuffle
Man or boy test
Plactic monoid
Quater-imaginary base
TeX
Termial
The Complexity of Songs
Uniform binary search
List of pioneers in computer science
List of science and religion scholars
References
Bibliography
External links
Donald Knuth's home page at Stanford University.
Knuth discusses software patenting, structured programming, collaboration and his development of TeX.
Biography of Donald Knuth from the Institute for Operations Research and the Management Sciences
Donald Ervin Knuth – Stanford Lectures (Archive)
Interview with Donald Knuth by Lex Fridman
Siobhan Roberts, The Yoda of Silicon Valley. The New York Times, 17 December 2018.
American computer scientists
American computer programmers
Mathematics popularizers
American people of German descent
American technology writers
1938 births
Living people
Combinatorialists
Free software programmers
Programming language designers
Scientists from California
Writers from California
Turing Award laureates
Grace Murray Hopper Award laureates
National Medal of Science laureates
Fellows of the Association for Computing Machinery
Fellows of the American Mathematical Society
Fellows of the British Computer Society
Fellows of the Society for Industrial and Applied Mathematics
Kyoto laureates in Advanced Technology
Donegall Lecturers of Mathematics at Trinity College Dublin
Members of the United States National Academy of Engineering
Members of the United States National Academy of Sciences
Foreign Members of the Royal Society
Foreign Members of the Russian Academy of Sciences
Members of the French Academy of Sciences
Members of the Norwegian Academy of Science and Letters
Members of the Department of Computer Science, University of Oxford
Stanford University School of Engineering faculty
Stanford University Department of Computer Science faculty
California Institute of Technology alumni
Case Western Reserve University alumni
Scientists from Milwaukee
American Lutherans
American typographers and type designers
Writers from Palo Alto, California
20th-century American mathematicians
21st-century American mathematicians
20th-century American scientists
21st-century American scientists
Computer science educators
Mad (magazine) people
Burroughs Corporation people
American organists
American composers
University of Oslo faculty |
29504901 | https://en.wikipedia.org/wiki/Computer%20says%20no | Computer says no | "Computer says no" is a catchphrase first used in the British sketch comedy television programme Little Britain in 2004. In British culture, the phrase is used to criticise public-facing organisations and customer service staff who rely on information stored on or generated by a computer to make decisions and respond to customers' requests, often in a manner which goes against common sense. It may also refer to a deliberately unhelpful attitude towards customers and service-users commonly experienced within British society, whereby more could be done to reach a mutually satisfactory outcome, but is not.
Little Britain
In Little Britain, "Computer says no" is the catchphrase of Carol Beer (played by David Walliams), a bank worker and later holiday rep and hospital receptionist, who always responds to a customer's enquiry by typing it into her computer and responding with "Computer says no" to even the most reasonable of requests. When asked to do something aside from asking the computer, she would shrug and remain obstinate in her unhelpfulness, and ultimately cough in the customer's face. The phrase was also used in the Australian soap opera Neighbours in 2006 as a reference to Little Britain.
The catchphrase returns in Little Brexit, where Carol, still working as a holiday rep at Sunsearchers, is confronted by a woman wanting to travel to Europe. Carol uses the variant "Brexit says no" when the woman asks to go to France, Spain, and Italy.
Usage
The "Computer says no" attitude often comes from larger companies that rely on information stored electronically. When this information is not updated, it can often lead to refusals of financial products or incorrect information being sent out to customers. These situations can often be resolved by an employee updating the information; however, when this cannot be done easily, the "Computer says no" attitude can be viewed as becoming prevalent when there is unhelpfulness as a result. This attitude can also occur when an employee fails to read human emotion in the customer and reacts according to his or her professional training or relies upon a script. This attitude also crops up when larger companies rely on computer credit scores and do not meet with a customer to discuss his or her individual needs, instead basing a decision upon information stored in computers. Some organisations attempt to offset this attitude by moving away from reliance on electronic information and using a human approach towards requests.
"Computer says no" happens in a more literal sense when computer systems employ filters that prevent messages being passed along, as when these messages are perceived to include obscenities. When information is not passed through to the person operating the computer, decisions may be made without seeing the whole picture.
See also
Jobsworth
Garbage in, garbage out
References
Comedy catchphrases
Computer humor
Computers
Customer service
English phrases
Little Britain
Popular culture neologisms
Quotations from television
2004 neologisms |
13641802 | https://en.wikipedia.org/wiki/Greg%20Stein | Greg Stein | Greg Stein (born March 16, 1967 in Portland, Oregon), living in Austin, Texas, United States, is a programmer, speaker, sometime standards architect, and open-source software advocate, appearing frequently at conferences and in interviews on the topic of open-source software development and use.
He was a director of the Apache Software Foundation, and served as chairman from 21 August 2002 to 20 June 2007. He is also a member of the Python Software Foundation, was a director there from 2001 to 2002, and a maintainer of the Python programming language and libraries (active from 1999 to 2002).
Stein has been especially active in version control systems development. In the late 1990s and early 2000s, he helped develop the WebDAV HTTP versioning specification, and is the main author of mod_dav, the first open-source implementation of WebDAV. He was one of the founding developers of the Subversion project, and is primarily responsible for Subversion's WebDAV networking layer.
Stein most recently worked as an engineering manager at Google, where he helped launch Google's open-source hosting platform. Stein publicly announced his departure from Google via his blog on July 29, 2008. Prior to Google, he worked for Oracle Corporation, eShop, Microsoft, CollabNet, and as an independent developer.
Stein was a major contributor to the Lima Mudlib, a MUD server software framework. His MUD community pseudonym was "Deathblade".
References
External links
Ask Apache Software Foundation Chairman Greg Stein (Slashdot article)
Video interview at dev2dev
Interview with Googles (sic) Greg Stein and Chris DiBona (Slashdot interview about the launch of Google's open-source code hosting platform)
Apache's Greg Stein says commercial software's days are numbered (ComputerWorld / InfoWorld / MacWorld article)
Highlights of Greg Stein’s keynote (A third-party summary of Stein's keynote at EclipseCon 2006)
Homepage
Google's Greg Stein InfoTalk on Open Source
In Competitive Move, I.B.M. Puts Code in Public Domain (New York Times article on IBM's donation of WebSphere to the Apache Software Foundation)
Greg Stein Interview podcast (with Leo Laporte and Randal Schwartz).
"Trillions and Trillions Served" (hosted by Greg Stein, a feature documentary detailing ASF's history and far-reaching impact on the open-source software community)
American computer scientists
Free software programmers
Google employees
Oracle employees
Microsoft employees
MUD developers
Living people
1967 births
Python (programming language) people |
22833976 | https://en.wikipedia.org/wiki/OS4000 | OS4000 | OS4000 is a proprietary operating system introduced by GEC Computers Limited in 1977 as the successor to GEC DOS, for its range of GEC 4000 series 16-bit, and later 32-bit, minicomputers. OS4000 was developed through to late 1990s, and has been in a support-only mode since then.
History
The first operating systems for the GEC 4000 series were COS (Core Operating System) and DOS (Disk Operating System). These were basically single-user multi-tasking operating systems, designed for developing and running Process control type applications.
OS4000 was first released around 1977. It reused many of the parts of DOS, but added multi-user access, OS4000 JCL Command-line interpreter, Batch processing, OS4000 hierarchical filesystem (although on-disk format very similar to the non-hierarchical DOS filesystem). OS4000 JCL was based on the Cambridge University Phoenix command interpreter.
OS4000 Rel 3 arrived around 1980, and included Linked-OS — support for Linked OS4000 operating systems to enable multi-node systems to be constructed. The main customer for this was the central computing service of University College London (Euclid), where a multi-node system consisting of a Hub file server and multiple Rim multi-access compute server systems provided service for over 100 simultaneous users. Linked-OS was also used to construct fail-over Process control systems with higher resilience.
OS4000 Rel 4 arrived around 1983, and upped the maximum number of user modules to 150 (again, mainly for the University College London Euclid system), together with an enhanced Batch processing system. It also included support for the GEC 4090 processor, which introduced a 32-bit addressing mode.
OS4000 Rel 5 introduced a modified version of the OS4000 filesystem called CFSX, in order to allow easier use of larger disks. The initial Rel 5 only supported the CFSX filesystem, but support for the original CFS1 filesystem was reintroduced as well quite quickly.
OS4000 Rel 6 introduced support for dual processor systems (GEC 4190D).
OS4000 was developed in the UK at GEC Computers Borehamwood offices in Elstree Way, and at GEC Computers Dunstable Development Centre in Woodside Estate, Dunstable.
Architecture
The architecture of OS4000 is very heavily based around the features of the platform it runs on, the GEC 4000 series minicomputers, and these are rather unusual. They include a feature called Nucleus, which is a combination of a hardware- and firmware-based kernel, which cannot be altered under program control. This means that many of the features typically found in operating system kernels do not need to be included in OS4000, as the underlying platform performs these functions instead of the operating system. Consequently, there is no provision for running privileged mode code on the platform—all OS4000 operating system code runs as processes.
Nucleus supports up to 256 processes, and schedules these automatically using a fixed priority scheme. OS4000 lives entirely within these processes. A set of system tables are used to configure Nucleus, and access to these system tables can be granted to processes which need to alter the configuration of Nucleus, e.g. to load new programs into processes, adjust the Nucleus scheduling for time-shared processes, etc. The system tables tell Nucleus which processes are permitted to communicate with each other, and these are updated as processes are created and destroyed, e.g. when users login and logout. All I/O is performed directly from processes, and the system tables identify which processes have access to which peripherals and handle peripheral interrupts. For example, a device driver for a disk controller is a process, which is responsible for issuing commands through Nucleus to the disk controller, and handling the interrupts passed back from the disk controller via Nucleus, and the system tables will explicitly state that process has access to that disk controller. The system tables will not grant this device driver access to any other peripherals. In the event of a process stopping or crashing, Nucleus looks up its owner process in the system tables, and informs it. The owner process can then take the decision to let the system continue running without that process, or to take out the system (like a Unix panic), or to take some action such as reload and/or restart the process. Functions such as filesystems, store allocation, terminal drivers, timing services, etc. also exist as separate processes.
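As an illustration only (the process names and the lower-number-wins convention below are our assumptions, not documented GEC behaviour), fixed-priority scheduling of the kind Nucleus performs can be sketched as:

```python
def pick_next(ready):
    """Pick the runnable process with the best fixed priority.

    `ready` is a list of (priority, process_name) pairs; here we
    assume a lower priority number means a more urgent process.
    """
    return min(ready)[1] if ready else None

# A device-driver process outranks a time-shared user process:
assert pick_next([(10, "disk_driver"), (200, "user_shell")]) == "disk_driver"
assert pick_next([]) is None  # nothing runnable
```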
Nucleus implements a segmented memory system, with processes having their access to memory segments defined by the system tables, which is maintained by OS4000. OS4000 provides a memory system which handles both store-resident memory, and virtual memory backed by disk which is known as overlay, with overlaying being performed at the segment level. OS4000 also inherited grouped segments from DOS, where a group of segments were to be overlaid and retrieved as a single group, but this feature was very little used in OS4000. A process may use any mixture of resident and overlayable segments, although a process performing real-time tasks would normally be designed to only use resident segments.
OS4000 supports a fully mixed set of process scheduling within the same system, from hard real-time processes, through soft real-time, time-shared, and background. Given that OS4000 also includes full program development and test/debug facilities, this made OS4000 ideal for developing and deploying real-time applications such as process control and high speed (at the time) data communications all within one system.
Filesystem
OS4000 uses its own proprietary filesystem. The filesystem is extent based, and variable block size — different files can be created with different blocksizes, ranging from 256 bytes to 16,384 bytes in 256-byte multiples.
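The block-size rule stated above (256 to 16,384 bytes, in 256-byte multiples) is easy to express as a check; the function name is ours, purely for illustration:

```python
def valid_os4000_blocksize(n: int) -> bool:
    """True if n is a legal OS4000 file block size:
    between 256 and 16,384 bytes, in 256-byte multiples."""
    return 256 <= n <= 16384 and n % 256 == 0

assert valid_os4000_blocksize(256)       # minimum
assert valid_os4000_blocksize(16384)     # maximum
assert not valid_os4000_blocksize(300)   # not a 256-byte multiple
assert not valid_os4000_blocksize(32768) # too large
```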
The filesystem is hierarchical, with components limited to 8 characters and the "." (period) used as the component separator. OS4000 JCL limits characters in file path components to upper case letters and numbers only. Each file path starts with a context pointer which is a name which refers to a position in a filesystem, followed by zero or more catalogues (equivalent to Unix directories), and ending with a filename. Each disk on the system contains a separate and independent filesystem, and the volume name of a disk is the same as the name of its top level catalogue or master catalogue. There must be one disk mounted with a volume name of SYSTEM which contains specific files required by OS4000. In larger systems, there will usually be additional disks containing user files, data files, etc. although these can all coexist on the SYSTEM disk, space permitting. Users are each given a set of initial context pointers which each point to a catalogue on a filesystem, and users can only see the filesystem hierarchies below their initial context pointers. Systems are usually configured so that unprivileged users cannot see other users files or the system's files, except for the system executables held in SYS. By convention, an area called POOL is available for all users, and enables the transfer/sharing of files.
Files in an OS4000 filesystem are typed, which means that the filesystem can hold several different types of file, and understands how the contents are structured. Most common are logical files which contain a record structure. These are split into sequential and random files, with random files having all records the same length to enable seeking to record numbers. Finally, text and binary files are distinguished, mainly to prevent applications which expect textual data from accidentally using a binary file. This results in a set of logical file types identified by three letters, e.g. Logical Sequential Text is LST. The logical file types are LST, LSB, LRT, LRB. The converse to logical files are physical files, which are accessed block at a time, and these are known as Physical Random Binary (PRB) files. File types PST, PSB, PRT also exist in theory, but have the same capabilities as PRB and are not generally used. Additionally, there is a Logical Indexed Sequential (LIS) filetype, which is an ISAM file and always appears to be sorted on its key field, and a Byte stream (BYT) filetype, which was added in Rel 6.5 to better support the OS4000 NFS server. A filetype CAT is used to hold catalogues—it is actually the same as an LSB file, but can only be modified by the filesystem itself.
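The file type codes described above can be tabulated as follows; the table is a plain reading of the text, not an official OS4000 reference:

```python
# OS4000 file type codes, as described in the text.
FILE_TYPES = {
    "LST": "Logical Sequential Text",
    "LSB": "Logical Sequential Binary",
    "LRT": "Logical Random Text",
    "LRB": "Logical Random Binary",
    "PRB": "Physical Random Binary",
    "LIS": "Logical Indexed Sequential",  # ISAM, always sorted on its key
    "BYT": "Byte stream",                 # added in Rel 6.5 for the NFS server
    "CAT": "Catalogue",                   # same layout as LSB, filesystem-managed
}
assert FILE_TYPES["LRB"] == "Logical Random Binary"
```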
In addition to files and catalogues, there are 3 types of symbolic links. References (REF) can be created to point to another file or catalogue which the creator of the REF can see through an initial context pointer, in either the same filesystem or another filesystem. Off Disk Pointers (ODP) are similar to references but can be created to point to a file or catalogue which cannot be seen through any initial context pointers, and creating an ODP is a privileged operation only available to the system manager. Support for Unix style symlinks (arbitrary text stored in a catalogue) was added in Rel 6.5 to better support the OS4000 NFS server, but symlinks can only be created and are only visible from NFS clients.
OS4000 also provides a non-hierarchical temporary filesystem. This supports exactly the same types of file as permanent filesystems, except for CAT, REF, ODP, and symlinks. The file contents are stored in dedicated temporary filing disk regions, but the file metadata is stored in memory. Each logged in user has a private temporary filing name space which cannot be seen by any other logged in user (nor even another logged in user with the same username). A user's temporary files are deleted when the user logs out (and implicitly if the system is rebooted). Temporary filenames start with a percent "%" or ampersand "&" and are limited to 8 characters.
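A temporary filename as described above can be validated with a simple pattern. Two assumptions here are ours: that the 8-character limit includes the leading "%" or "&", and that the OS4000 JCL character set (upper-case letters and digits) applies to temporary names as well:

```python
import re

# '%' or '&' prefix, then 1-7 characters from the assumed JCL set,
# for at most 8 characters in total (an assumed reading of the limit).
TEMP_NAME = re.compile(r"[%&][A-Z0-9]{1,7}")

def is_temp_filename(name: str) -> bool:
    return TEMP_NAME.fullmatch(name) is not None

assert is_temp_filename("%SCRATCH")          # 8 characters in total, OK
assert is_temp_filename("&T1")
assert not is_temp_filename("POOL")          # no %/& prefix
assert not is_temp_filename("%TOOLONGNAME")  # over the length limit
```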
Multi-access Environment
The following shows a short Multi-access login session:
In this case, user SMAN has logged in and issued the EXAMINE command. Then the session has been left to timeout through inactivity.
When a user logs in, the OS4000 JCL command interpreter SYS.COMM is loaded into the user's COMM process and started. This reads commands from the terminal. A number of system commands are built into SYS.COMM. In the case of a command which isn't built in, executable binary files are loaded into the USER process and run, and text JCL files are opened and processed directly by SYS.COMM itself. A user normally also gets an AIDA process which is privileged and used to load only trusted debugging programs.
Main Applications
Real-time Process Control accounts for over half of all the OS4000 systems deployed. Of these systems, steel production accounts for a significant proportion. The earlier of these Real-time Process Control systems were upgraded from DOS to OS4000.
X.25 Packet Switches account for a significant proportion of systems (although earlier GEC X.25 Packet Switches ran a special operating system called NOS which was a cut down operating system halfway between DOS and OS4000).
Civil Command and Control systems, e.g. Fire Service control systems interfacing the emergency telephone operator with the Fire Stations.
Prestel (UK) and the public Videotex systems used in many other countries, and many private Viewdata systems.
Multi-User Minicomputers, used in many Education and Research establishments.
Ports
OS4000 was ported to the GEC Series 63 minicomputer, where it was known as OS6000. This required the addition of a software Nucleus emulation, as this was not a feature of the GEC Series 63 hardware. GEC Computers dropped OS6000, and the source code was given to Daresbury Laboratory, the main user, which continued to keep it in step with OS4000 releases for the lifetime of its two GEC Series 63 systems.
See also
GEC 4000 series minicomputers
Babbage (programming language)
GEC Computers Limited
References
Further reading
External links
GEC 4000 family, Which Computer?, May 1979
The Centre for Computing History
Bullet III - A Part of UK Network History
Proprietary operating systems
Real-time operating systems
Time-sharing operating systems
GEC Computers
1977 software |
11690921 | https://en.wikipedia.org/wiki/Vancouver%20Trojans | Vancouver Trojans | The Vancouver Trojans were a Canadian Junior Football team based in Vancouver, British Columbia. The Trojans play in the eight-team B.C. Football Conference, which itself is part of the Canadian Junior Football League (CJFL) and competes annually for the national title known as the Canadian Bowl. The Trojans were founded in 1974, and won the Canadian Bowl as CJFL champions in 1982.
The team was originally called the Renfrew Trojans, but changed its name in 1993. The Trojans' practice facility was at Renfrew Park in Vancouver, but the team played its games in neighbouring Burnaby at either the Burnaby Lake Sports Complex or Swangard Stadium.
In 2009, the Vancouver Trojans entered non-playing status with the B.C. Football Conference. There is currently no plan to revive the team.
Coach
Former B.C. Lions running back Cory Philpot
External links
Vancouver Trojans homepage
Canadian Junior Football League
Canadian Junior Football League teams
American football teams established in 1974
American football teams disestablished in 2009
1974 establishments in British Columbia
2009 disestablishments in British Columbia |
28485 | https://en.wikipedia.org/wiki/Steve%20Jackson%20Games | Steve Jackson Games | Steve Jackson Games (SJGames) is a game company, founded in 1980 by Steve Jackson, that creates and publishes role-playing, board, and card games, and (until 2019) the gaming magazine Pyramid.
History
Founded in 1980, six years after the creation of Dungeons & Dragons, SJ Games created several role-playing and strategy games with science-fiction themes. SJ Games' early titles were microgames, initially sold in 4×7 inch ziploc bags and later in the similarly sized Pocket Box. Games such as Ogre, Car Wars, and G.E.V. (an Ogre spin-off) were popular during SJ Games' early years. Game designers such as Loren Wiseman and Jonathan Leistiko have worked for Steve Jackson Games.
Today SJ Games publishes a variety of games, such as card games, board games, strategy games, and in different genres, such as fantasy, sci-fi, and gothic horror. They also published the book Principia Discordia, the sacred text of the Discordian religion.
Raid by the Secret Service
On March 1, 1990, the Secret Service raided the offices of Steve Jackson Games, seizing three computers, two laser printers, dozens of floppy disks, and the master copy of GURPS Cyberpunk, a genre toolkit for cyberpunk games written by Loyd Blankenship, a hacker and an employee at the time. The Secret Service believed that Blankenship had illegally accessed BellSouth systems and uploaded a document possibly affecting 9-1-1 systems onto Steve Jackson Games's public bulletin board system, and, furthermore, that GURPS Cyberpunk would help others commit computer crimes. During their investigation, the Secret Service also read (and deleted) private emails on one of the computers. Though the materials were returned in June, Steve Jackson Games filed suit in federal court, winning at trial.
The raid led to the formation of the Electronic Frontier Foundation, which was founded in July 1990.
Kickstarter project
In April–May 2012, Steve Jackson Games ran a successful Kickstarter campaign for a new "Designer's Edition" of Ogre. The final game was planned to weigh 14 pounds or more, partly because the high level of extra funding achieved in the Kickstarter enabled significant additions to the game.
Games published
Steve Jackson Games' main product line, in terms of sales, is the Munchkin card game, followed by the role-playing system GURPS.
Card games
Battle Cattle The Card Game, a card game, compatible with the Car Wars card game, based on the Battle Cattle miniatures system.
Burn In Hell, a semi-satirical game centered on collecting 'circles' of notable historical and contemporary people's (sinners') souls that share common characteristics.
Car Wars: The Card Game, a card game version of the Car Wars miniatures system.
Chez Geek, a card-game parody of geek culture, with many spinoffs and expansions.
Cowpoker, a card game partly based on poker mechanics with a central theme of old west cattle ranchers.
Dino Hunt, a card game where players travel through time to capture dinosaurs. Features over a hundred dinosaurs with color drawings and accurate scientific data on each one.
Hacker, a modern-day card game based on the mechanics of Illuminati.
Hacker II: The Dark Side
Illuminati, a game of competing conspiracies, based largely on the Illuminatus! Trilogy by Robert Anton Wilson. Originally published in microgame format followed by three numbered expansions. Later published in a full-sized box with expansions 1 and 2 as Deluxe Illuminati. Expansion 3 would later be reprinted as Illuminati: Brainwash.
Illuminati: Y2K - all-card expansion for Deluxe Illuminati
Illuminati: Bavarian Fire Drill - all-card expansion for Deluxe Illuminati
Illuminati: New World Order (INWO), the collectible card game based on concepts in Illuminati.
INWO Subgenius - expansion based on Church of the Subgenius concepts which can also be played stand-alone.
Illuminati Crime Lords, a mafia-based variation on Illuminati which combines gameplay elements of the original Illuminati and INWO.
King's Blood, a Japanese card game originally published by Kadokawa Shoten.
Lord of the Fries (card game), a game of zombies attempting to assemble orders in a fast-food restaurant. Originally designed by James Ernest and published by Cheapass Games.
Munchkin, a card-game parody of hack-and-slash roleplaying, with many spinoffs and expansions (all able to be mixed with the original).
Ninja Burger, a fast-paced ninja delivery card game based on the Ninja Burger website.
Space Pirate Amazon Ninja Catgirls (SPANC), a light-hearted competition between starship crews of cat girls in search of toys and loot.
Spooks, a Halloween-themed card game where players try to get rid of cards from their hands.
Board games
The Awful Green Things from Outer Space, designed by Tom Wham and originally published by TSR.
Car Wars, futuristic battles between automobiles.
Dork Tower, a fantasy game that takes place in the world the Dork Tower characters play their games in.
Frag, "a first-person shooter without a computer".
Globbo, a black comedy game about a murderous alien babysitter.
GreedQuest, a light, randomized romp through a simple dungeon to gain loot.
Knightmare Chess, a chess variant played with cards. Translation of the French Tempête sur l'Echiquier published by Ludodelire.
Kung Fu 2100, a simple game of hand-to-hand combat where players use martial arts to smash their way into the CloneMaster's fortress.
Munchkin Quest, a boardgame variation of the Munchkin card games.
Nanuk, a boardgame of bidding and bluffing, centered on Inuit hunters.
Necromancer, a fantasy game for two players, in which each player becomes a powerful wizard controlling the forces of the Undead.
Ogre, the classic simulation of future war involving a cybernetic armored juggernaut firing nuclear weapons. Designed by Jackson, and originally published by Metagaming Concepts.
Battlesuit, a spin-off of Ogre and G.E.V. featuring infantry using powered armor inspired by Starship Troopers.
G.E.V., a spin-off of Ogre focusing on futuristic but "conventional" infantry, artillery, and armor units.
Shockwave, an Ogre/G.E.V. expansion set with new units and a new map.
Ogre Reinforcements Pack, an Ogre/G.E.V. expansion set with new rules and replacement pieces and maps.
Battlefields, an Ogre/G.E.V. expansion set with new rules, pieces, and maps.
One Page Bulge, a simulation of the German Ardennes Offensive in 1944, with the rules printed on a single page.
Proteus, a chess variant using dice to represent normal chess pieces.
Revolution, a blind-bidding area-majority game.
Snits, two classic Tom Wham games, Snit's Revenge and Snit Smashing, both originally published by TSR.
Star Traders, a game where players race through space to deliver cargoes.
The Stars Are Right, a boardgame where players attempt to change a 5×5 tileboard through the use of cards, and gaining victory points based on certain constellations of symbols.
Strange Synergy, a game where teams of warriors battle with a different set of powers each game.
Tile Chess, a multiplayer chess variant played without a chess board.
X-Bugs, a combat game where futuristic bugs are represented by colorful tiddly winks.
Role-playing games
GURPS, the Generic Universal Role Playing System.
GURPS Traveller, GDW's Traveller based upon GURPS.
In Nomine, a game about Angels and Demons based on the popular French role-playing game, In Nomine Satanis / Magna Veritas.
Killer: The Game of Assassination, a variant of Assassin.
Munchkin RPG, a series of D20 supplements based on the Munchkin card game.
Toon, the cartoon role-playing game.
Transhuman Space, a near-future science fiction setting spanning the Solar System.
Tribes, players play cave men (and women) trying to protect and nurture their descendants. Partly designed by science fiction author David Brin.
Miniatures
Ogre and G.E.V. have also been published in miniatures wargaming format.
Cardboard Heroes, paper miniatures.
Computer games
Autoduel, an action arcade game with role-playing elements. Published by Origin Systems, Inc.
Ogre A computer version of the Ogre board game. Published by Origin Systems, Inc.
Ultracorps An online space strategy game originally developed by VR-1.
Dice games
Cthulhu Dice, a custom dice game whose die faces are Cthulhu symbols, including the Eye of Horus, the Yellow Sign, the Elder Sign, Cthulhu, and Tentacle. Players roll the dice, competing to be the last sane person left.
Zombie Dice, a custom dice game whose die faces are Brains, Shotgun Blasts, and Feet. Players push their luck, stacking up zombie kills before their opponents do.
Proteus, a custom dice game where the faces of the dice represent chess pieces. Players aim to promote their pawns into higher pieces and capture all of their opponents' pieces.
Magazines
Publication history
Gaming magazines produced by Steve Jackson Games have included:
The Space Gamer (1980-1985) – Steve Jackson took over the magazine from Metagaming Concepts with issue #27, and transferred the magazine to SJGames in 1982; the final SJGames issue was #76 in 1985, and the rights were sold to Diverse Talents Inc.
Fire & Movement (1982-1985) - a wargaming magazine purchased from Baron Publishing - sold to Diverse Talents in 1985
Autoduel Quarterly (1983-1992) - home for Car Wars material moved from The Space Gamer
Fantasy Gamer (1983-1984) - short-lived magazine split from Space Gamer
Roleplayer (1986-1993) - replaced The Space Gamer as the company's periodical for their fan base until SJGames started the new generalist magazine Pyramid
Pyramid (1993-1998) - published for 30 issues as a print magazine
Pyramid, volume 2 (1998-2008) – published as an online, weekly, subscription-based magazine
Journal of the Travellers Aid Society (starting 2000) - SJGames resurrected Game Designers' Workshop's old magazine as an online magazine
d20 Weekly (2002-2003) – an online magazine devoted to the d20 market
Pyramid, volume 3 (starting 2008) - a PDF-only version of the magazine
Mentions in third-party media
In Uplink, a 2001 computer hacking simulation game by British software company Introversion Software, there is a company named Steve Jackson Games. While this company may occasionally offer hacking contracts to the player, its main feature is a Public Access Server which, if accessed, displays the following information:
Steve Jackson Games
Public Access Server
ATTENTION
This computer system has been seized
by the United States Secret Service
in the interests of National Security.
Your IP has been logged.
This jokingly refers to the 1990 raid by the US Secret Service. As noted in the Ultimate Uplink Guide, this was "put into the game because of the Secret Service Raid on the company, for supposedly making a 'Hacking Guide'. This guide was actually a work of total fiction for a game the company was making, and contained technology that didn't even exist".
References
External links
Steve Jackson Games' official web site
"Fantasy for Fun and Profit": an article about the company from the Austin American-Statesman, April 18, 1988
Board game publishing companies
Card game publishing companies
American companies established in 1980
Privately held companies based in Texas
Role-playing game publishing companies
Companies based in Austin, Texas |
3630057 | https://en.wikipedia.org/wiki/KGDB | KGDB | KGDB is a debugger for the Linux kernel and the kernels of NetBSD and FreeBSD. It requires two machines that are connected via a serial connection. The serial connection may either be an RS-232 interface using a null modem cable, or via the UDP/IP networking protocol (KGDB over Ethernet, KGDBoE). The target machine (the one being debugged) runs the patched kernel and the other (host) machine runs gdb. The GDB remote protocol is used between the two machines.
KGDB was implemented as part of the NetBSD kernel in 1997, and FreeBSD in version 2.2. The concept and existing remote gdb protocol were later adapted as a patch to the Linux kernel. A scaled-down version of the Linux patch was integrated into the official Linux kernel in version 2.6.26.
KGDB is available for the following architectures under Linux: x86, x86-64, PowerPC, ARM, MIPS, and S390. It is available on all supported architectures of NetBSD and FreeBSD using only RS-232 connectivity.
Amit Kale maintained the Linux KGDB from 2000 to 2004. From 2004 to 2006, it was maintained by Linsyssoft Technologies, after which Jason Wessel at Wind River Systems, Inc. took over as the official maintainer. Ingo Molnar and Jason Wessel created a slimmed-down and cleaned up version of KGDB which was called "kgdb light" (without Ethernet support and many other hacks). This was the one merged into the 2.6.26 kernel. This version of kgdb supports only RS-232 connectivity, using a special driver which can split debugger inputs and console inputs such that only a single serial port is required.
FreeBSD
A program named kgdb is also used by FreeBSD. It is a gdb-based utility for debugging kernel core files. It can also be used for remote "live" kernel debugging, in much the same way as the Linux KGDB, over either a serial connection or a FireWire link.
References
External links
Debugging the NetBSD kernel with GDB HOWTO
KGDB and KDB wiki, the official home of kgdb and kdb for kernel.org
2.5 & up to 2.6.15 Linux Kernel Source Level Debugger
FreeBSD kgdb manual
kgdb at SourceForge.net
Debuggers
Third-party Linux kernel modules |
36446853 | https://en.wikipedia.org/wiki/Caltech%E2%80%93MIT%20rivalry | Caltech–MIT rivalry | The Caltech–MIT rivalry is a college rivalry between California Institute of Technology (Caltech) and Massachusetts Institute of Technology (MIT), stemming from the colleges' reputations as the top science and engineering schools in the United States. The rivalry is unusual given the geographic distance between the schools, one being in Pasadena, California, and the other in Cambridge, Massachusetts (their campuses are separated by about 3000 miles and are on opposite coasts of the United States), as well as its focus on elaborate pranks rather than sporting events.
One pranking war was instigated in April 2005, when Caltech students pulled multiple pranks during MIT's Campus Preview Weekend for prospective freshmen. MIT students responded a year later by stealing Caltech's antique Fleming cannon and transporting it across the country to MIT's campus. Subsequent pranks have included fake satirical school newspapers distributed by Caltech students at MIT and the appearance of a TARDIS device on top of Caltech's Baxter Hall.
Schools
Caltech is located in Pasadena, California, 11 miles northeast of downtown Los Angeles. It was founded in 1891 and adopted its current name in 1920. Caltech enrolled just under 1000 undergraduates and almost 1200 graduate students for the 2011–2012 academic year. Despite its small size, 31 Caltech alumni and faculty have won the Nobel Prize and 66 have won the National Medal of Science or Technology. Caltech was ranked first in the 2011–2016 Times Higher Education worldwide rankings of universities, whereas MIT was ranked first in the rival QS World University Rankings over the same period. (From 2004 to 2009, Times Higher Education and QS had collaborated to produce joint rankings.)
Caltech has a long history of off-campus pranks, which are sometimes referred to as "RFs". (RF is short for "ratfuck", referring to the shattering of a frozen dead rat in someone's room.) The most notable of these pranks include the 1961 Great Rose Bowl Hoax, where a card stunt was altered to display "Caltech" rather than the name of one of the competing teams. Caltech students also altered the scoreboard display during the 1984 Rose Bowl to show Caltech beating MIT 38–9, and in May 1987 changed the Hollywood Sign to read "CALTECH".
MIT was founded in 1861, and is located in Cambridge, Massachusetts, directly across the Charles River from central Boston. MIT enrolled 4512 undergraduates and 6807 graduate students for the 2014-2015 academic year. 85 Nobel laureates and 28 National Medal of Science or Technology recipients are currently or have previously been affiliated with the university.
MIT also has a long tradition of pranks, which are called "hacks" at that institution. Many hacks involve placing an item on MIT's Great Dome or otherwise altering it, such as moving a campus police cruiser to its roof, placing full-sized replicas of the Wright Flyer and a fire truck on top of it to mark the anniversaries of the first powered controlled flight and the September 11 attacks respectively, and converting it into R2-D2 and a large yellow ring to mark the releases of Star Wars Episode I and The Lord of the Rings respectively. A famous off-campus hack involved MIT students inflating a weather balloon labeled "MIT" at the 50-yard line of the Harvard–Yale football game in 1982.
Pranks at the two institutions are seen as a way to relax from the stress of the notoriously rigorous academics of each. Both Caltech and MIT have a set of pranking ethics, stating that pranks should be reversible and not cause permanent damage, and emphasize creativity and originality. In recent years, pranking has been officially encouraged by Tom Mannion, Caltech's Assistant Vice President for Student Affairs and Campus Life. "The grand old days of pranking have gone away at Caltech, and that's what we are trying to bring back," reported The Boston Globe, which noted that "security has orders not to intervene in a prank unless officers get Mannion's approval beforehand." However, hacks at MIT are generally more secretive and often do not involve identifying the hackers.
Pranks
2005 Campus Preview Weekend pranks
In April 2005, Caltech students instigated a series of pranks during MIT's Campus Preview Weekend:
Caltech students snuck into two fairs for the prospective freshmen and handed out 400 T-shirts that were packaged so that "MIT" was visible on the front, but the reverse design, the words "because not everybody can go to Caltech" and a drawing of a palm tree, were obscured until the package was opened.
Inflatable palm trees were placed on the Great Dome and in the Tomb of the Unknown Tool, an important location in MIT's roof and tunnel hacking culture, after the Caltech students had snuck into a "Tangerine Tour" of these locations intended for prospective freshmen.
A hundred orange balloons (orange being Caltech's official color) and a large blimp with the letters "CIT" were floated inside Lobby 7.
The inscription on the exterior of the Lobby 7 dome facing Massachusetts Avenue was changed to read "That Other Institute of Technology" instead of "Massachusetts Institute of Technology".
The letters "CALTECH" were written on the Green Building with a green laser.
The group responsible for the laser dedicated the entire weekend to completing it, working on electronics for three designs operating on different principles, which yielded one working device. The Caltech students intended to upgrade the laser to show three-dimensional rotating and animated letters by using stereo sound signals encoded on a compact disc to control mirrors to deflect the laser beam. MIT campus police and students were initially frustrated in their attempts to locate the source of the laser, but the students were eventually able to trace it and Caltech students turned it off just before the upgraded electronics could be installed. Caltech students had also produced genuine-looking MIT ID cards featuring their real names and photographs, but did not need to use them.
MIT students counterpranked the Lobby 7 dome to read "The Only Institute of Technology", and had to resort to pulling the blimp down using helium balloons covered in sticky tape. One student unsuccessfully attempted to DDoS the Caltech students' website documenting the pranks. The pranks were seen as a way to merge Caltech and MIT's independent but similar pranking cultures. Campus Preview Weekend was chosen because the Caltech students would blend in with the unfamiliar prospective freshmen, and to increase the pranks' visibility. MIT Dean of Admissions Marilee Jones said, "I think it's hilarious. I consider hacks a performance art, and I like the concept of inter-institute rivalry."
2006 Fleming cannon heist
Caltech is home to the 1.7-ton, 130-year-old Fleming cannon. The origins and exact age of the Fleming cannon are not known with certainty. It is believed to have been cast during the Franco-Prussian War era, but completed in 1878 after the war was over. It was then given by the French to the United States where it was re-bored to fit American shells and the carriage constructed, but this work was completed too late for it to see use in the Spanish–American War. The cannon soon became obsolete and was donated to Southwestern Academy in San Marino, California, where it was displayed on the front lawn starting in 1925. By 1972, the school was seeking to discard the cannon, and a group of Caltech students from Fleming House took possession of the cannon and laboriously restored it to working condition. The cannon was returned to Southwestern in 1975 at the insistence of the Caltech administration, but it was permanently restored to Caltech in 1981. The cannon is one of the few objects at Caltech that is designated as unprankable given its age, fragility, and irreplaceable nature.
On March 28, 2006, the cannon disappeared from the Caltech campus, having been taken by people posing as contractors, fooling a security guard with a phony work order. At the time, the cannon was not at its normal location outside Fleming House, where it is normally locked to the ground, due to ongoing renovations. The identity of the perpetrators was initially unknown, and there was speculation that it had been stolen by nearby Harvey Mudd College, who had been responsible for a well-known theft of the cannon almost twenty years prior.
However, it was soon revealed that the cannon had been appropriated by MIT in retaliation for the previous year's pranks, and relocated to Cambridge. The MIT team consisted of about 30 hackers, of which two flew to Pasadena and five drove cross-country. While acquiring the cannon disguised as construction contractors, the hackers had run-ins with a Caltech security guard and physical plant worker, to whom they explained that they were moving the cannon in preparation for the pouring of a concrete pedestal. Once off Caltech's campus, a local resident called in a noise complaint, and the Pasadena police arrived but did not recognize the then-disguised cannon. On the way to the shipping company, the trailer's hitch cracked, necessitating a slow trip on surface roads, and on arrival they were unable to physically remove the cannon from the trailer, causing them to spend an extra $1000 for the services of a company that specialized in moving large film props.
On April 6, the cannon appeared in front of the Green Building sporting a giant 21-pound gold-plated aluminum Brass Rat around its barrel, which was positioned to point towards Pasadena, and female MIT students mockingly posted pictures of themselves posing in bikinis with the cannon. It was revealed that preparations for the heist had been underway since December. MIT was softly criticized for not leaving a note explaining that the theft was a prank, as required by Caltech's pranking ethics, which were said to be more stringent than MIT's, but the prank was largely taken in good humor at both campuses.
Fleming House students and alumni quickly began plotting for the return of the cannon, setting up a command center in a trailer on campus and soliciting donations from alumni. Their initial plan was to use a helicopter to fly the cannon out of the MIT campus. Initial arrangements were made with a helicopter company, but Federal Aviation Administration rules ultimately made this untenable. The students instead decided to surreptitiously steal back the cannon under cover of darkness. On the morning of April 10, about two dozen Fleming students, dressed in their signature red Fleming jerseys, descended upon the cannon to reclaim it and begin its journey back to Pasadena. However, MIT students had been tipped off and were waiting for the Caltech students with a friendly barbecue prepared, and played Wagner's Ride of the Valkyries, a forbidden song at Caltech due to its association with final exams, as the Flems entered. The Fleming students left a miniature toy cannon with a note reading, "Here's something a little more your size."
Later developments
During MIT's CPW in 2007, Caltech distributed a sixteen-page fake edition of MIT's student newspaper, The Tech, containing articles such as "Math Dept. Hires Rising Star Matt Damon", referring to the 1997 film Good Will Hunting, and "Infinite Corridor Not Actually Infinite", referring to MIT's iconic main thoroughfare, and a mock advertisement for sperm donation offering more money for Caltech students than MIT students. The prank was inspired by the suggestion that a similar fake-newspaper caper had been perpetrated by the University of Southern California against the University of California, Los Angeles in the past, and the paper was prepared in just two weeks with 15,000 issues printed. The three Caltech students sent to distribute the papers at MIT initially tried to drop the papers at The Tech's normal distribution points, but these were quickly discovered and removed by MIT students. The Caltech students then turned to distributing the papers individually on the sidewalk outside of Lobby 7, a location outside the jurisdiction of the MIT Police.
In 2008, Caltech students provided a "Puzzle Zero" in the MIT Mystery Hunt that when solved, told solvers to call a specific number in the 626 area code immediately. When MIT students dialed the number, they heard, "Thank you for calling the Caltech Admissions Office. If you are another MIT student wishing to transfer to Caltech, please download our transfer application form from www.caltech.edu. If you are an MIT student not wishing to transfer to Caltech, we wish you the best of luck, and hope you find happiness someday.... "
Another series of pranks was planned for Thanksgiving weekend in 2009, involving transforming MIT into "Caltech East: School of Humanities". The pranks were planned over the course of six months. Caltech students intended to deploy two large banners that were designed to be easy to place, but removal would require a cherry picker or a rappel. However, the design of MIT's Killian Court prevented the placement of one of them, and another was intercepted by MIT security before its deployment could be completed. Another fake edition of The Tech was released, stating that students would be required to take a core of literature, history, philosophy, and economics, but science subjects would be eliminated. Although the failure of the pranks was considered to be a disappointment, Caltech and MIT students afterwards shared breakfast at a local diner.
In September 2010, MIT hackers attempted to place a TARDIS time machine on the roof of Baxter Hall at Caltech, but were foiled by Caltech Security. It was stated that this was due to MIT students' failure to tell the Caltech administration about the prank in advance. However, in January 2011, Caltech and MIT students cooperated in placing the TARDIS on the roof. The TARDIS had previously been seen on the MIT Great Dome in August 2010, and was subsequently transported to buildings at the University of California, Berkeley, and then Stanford University.
Caltech pranksters again visited MIT's Campus Preview Weekend in April 2014, this time distributing mugs that displayed the MIT logo when cold, but when filled with hot liquid, turned orange and changed to read "Caltech: The Hotter Institute of Technology". Caltech students handed out the mugs to prospective students outside MIT's formal welcoming event. MIT admissions officers tried to stop the Caltech students unless they could prove they were a registered event, but Caltech Prank Club President Julie Jester stalled them for 20 minutes by claiming they were registered through the MIT Alumni Association, pretending to have problems connecting to MIT's WiFi on her smartphone, and calling Caltech Student Activities Director Tom Mannion to get a name of an MIT Alumni Association member. MIT admissions officers reportedly resorted to "ripping the mugs out of prefrosh's hands." MIT admissions officer Chris Peterson later tweeted that the mugs were "snake oil by charlatans from other coasts." Jester later said that "It's been a couple years since we had a good MIT prank.... We wanted to rekindle that relationship," and "pranks are a big element of the Caltech culture.... We’re just a small institution, but we feel that our impact is really bigger than our size. We do cool stuff because we can."
See also
List of practical joke topics
Harvey Mudd College#Relations with Caltech
References
External links
Howe & Ser Moving Co.
The fake issue of The Tech from the 2007 prank
California Institute of Technology
Massachusetts Institute of Technology student life
Practical jokes
College sports rivalries in the United States
University folklore |
43026 | https://en.wikipedia.org/wiki/Endianness | Endianness | In computing, endianness is the order or sequence of bytes of a word of digital data in computer memory. Endianness is primarily expressed as big-endian (BE) or little-endian (LE). A big-endian system stores the most significant byte of a word at the smallest memory address and the least significant byte at the largest.
A little-endian system, in contrast, stores the least-significant byte at the smallest address. Bi-endianness is a feature supported by numerous computer architectures that feature switchable endianness in data fetches and stores or for instruction fetches.
Other orderings are generically called middle-endian or mixed-endian.
Endianness may also be used to describe the order in which the bits are transmitted over a communication channel, e.g., big-endian in a communications channel transmits the most significant bits first. Bit-endianness is seldom used in other contexts. Danny Cohen introduced the terms big-endian and little-endian into computer science for data ordering in an Internet Experiment Note published in 1980.
The adjective endian has its origin in the writings of 18th century Anglo-Irish writer Jonathan Swift. In the 1726 novel Gulliver's Travels, he portrays the conflict between sects of Lilliputians divided into those breaking the shell of a boiled egg from the big end or from the little end. He called them the Big-Endians and the Little-Endians. Cohen makes the connection to Gulliver's Travels explicit in the appendix to his 1980 note.
Overview
Computers store information in various-sized groups of binary bits. Each group is assigned a number, called its address, that the computer uses to access that data. On most modern computers, the smallest data group with an address is eight bits long and is called a byte. Larger groups comprise two or more bytes, for example, a 32-bit word contains four bytes. There are two possible ways a computer could number the individual bytes in a larger group, starting at either end. Both types of endianness are in widespread use in digital electronic engineering. The initial choice of endianness of a new design is often arbitrary, but later technology revisions and updates perpetuate the existing endianness to maintain backward compatibility.
Internally, any given computer will work equally well regardless of what endianness it uses since its hardware will consistently use the same endianness to both store and load its data. For this reason, programmers and computer users normally ignore the endianness of the computer they are working with. However, endianness can become an issue when moving data external to the computer – as when transmitting data between different computers, or a programmer investigating internal computer bytes of data from a memory dump – and the endianness used differs from expectation. In these cases, the endianness of the data must be understood and accounted for.
These two diagrams show how two computers using different endianness store a 32-bit (four byte) integer with the value of . In both cases, the integer is broken into four bytes, , , , and , and the bytes are stored in four sequential byte locations in memory, starting with the memory location with address a, then a + 1, a + 2, and a + 3. The difference between big and little endian is the order of the four bytes of the integer being stored.
The left-side diagram shows a computer using big-endian. This starts the storing of the integer with the most-significant byte, , at address a, and ends with the least-significant byte, , at address a + 3.
The right-side diagram shows a computer using little-endian. This starts the storing of the integer with the least-significant byte, , at address a, and ends with the most-significant byte, , at address a + 3.
Since each computer uses its same endianness to both store and retrieve the integer, the results will be the same for both computers. Issues may arise when memory is addressed by bytes instead of integers, or when memory contents are transmitted between computers with different endianness.
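The two layouts can be illustrated in Python. This is a sketch; the value 0x0A0B0C0D is an arbitrary example chosen here, not necessarily the value used in the diagrams:

```python
# Sketch: the same 32-bit integer serialized in both byte orders.
# The value 0x0A0B0C0D is an arbitrary example chosen here.
value = 0x0A0B0C0D

big = value.to_bytes(4, byteorder="big")        # most significant byte at the lowest address
little = value.to_bytes(4, byteorder="little")  # least significant byte at the lowest address

assert big == bytes([0x0A, 0x0B, 0x0C, 0x0D])
assert little == bytes([0x0D, 0x0C, 0x0B, 0x0A])

# Either layout round-trips, as long as store and load agree:
assert int.from_bytes(big, "big") == value
assert int.from_bytes(little, "little") == value
```

The final two assertions make the point in the paragraph above concrete: a machine that uses the same endianness for both store and load always recovers the original value.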
Big-endianness is the dominant ordering in networking protocols, such as in the internet protocol suite, where it is referred to as network order, transmitting the most significant byte first. Conversely, little-endianness is the dominant ordering for processor architectures (x86, most ARM implementations, base RISC-V implementations) and their associated memory. File formats can use either ordering; some formats use a mixture of both or contain an indicator of which ordering is used throughout the file.
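Network order can be made explicit when serializing data for the wire, for example with Python's struct module (a sketch; the value 80 is an arbitrary example):

```python
import struct

# Sketch: serializing a 32-bit value for the wire. "!" selects
# network (big-endian) order; "<" forces little-endian. The value
# 80 is an arbitrary example.
value = 80
network = struct.pack("!I", value)
little = struct.pack("<I", value)

assert network == b"\x00\x00\x00\x50"  # most significant byte transmitted first
assert little == b"\x50\x00\x00\x00"   # least significant byte first
```

Using an explicit byte-order prefix rather than the platform's native order is what keeps such data portable between big- and little-endian hosts.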
The styles of little- and big-endian may also be used more generally to characterize the ordering of any representation, e.g. the digits in a numeral system or the sections of a date. Numbers in positional notation are generally written with their digits in left-to-right big-endian order, even in right-to-left scripts. Similarly, programming languages use big-endian digit ordering for numeric literals.
Basics
Computer memory consists of a sequence of storage cells (smallest addressable units), most commonly called bytes. Each byte is identified and accessed in hardware and software by its memory address. If the total number of bytes in memory is n, then addresses are enumerated from 0 to n − 1.
Computer programs often use data structures or fields that may consist of more data than can be stored in one byte. In the context of this article, a "field" consists of a consecutive sequence of bytes and represents a "simple data value" which – at least potentially – can be manipulated by a single hardware instruction. The address of such a field is usually the address of its first byte.
Another important attribute of each byte in a field is its "significance".
These attributes play an important role in the sequence in which the bytes are accessed by the computer hardware, more precisely, by the low-level algorithms that contribute to the results of a computer instruction.
Numbers
Positional number systems (mostly base 10, base 2, or base 256 in the case of 8-bit bytes) are the predominant way of representing and particularly of manipulating integer data by computers. In pure form this is valid for moderate sized non-negative integers, e.g. of C data type unsigned. In such a number system, the value of a digit which it contributes to the whole number is determined not only by its value as a single digit, but also by the position it holds in the complete number, called its significance. These positions can be mapped to memory mainly in two ways:
decreasing numeric significance with increasing memory addresses (or increasing time), known as big-endian and
increasing numeric significance with increasing memory addresses (or increasing time), known as little-endian.
The integer data that are directly supported by the computer hardware have a fixed width of a low power of 2, e.g. 8 bits ≙ 1 byte, 16 bits ≙ 2 bytes, 32 bits ≙ 4 bytes, 64 bits ≙ 8 bytes, 128 bits ≙ 16 bytes. The low-level access sequence to the bytes of such a field depends on the operation to be performed. The least-significant byte is accessed first for addition, subtraction and multiplication. The most-significant byte is accessed first for division and comparison. See .
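The access-order point can be sketched with a toy adder that, like serial hardware, must consume the least-significant byte first so the carry can propagate (the helper add_bytes_le is invented here for illustration):

```python
# Sketch of carry propagation: adding two little-endian byte strings,
# consuming the least-significant byte first. The helper add_bytes_le
# is invented here for illustration.
def add_bytes_le(a: bytes, b: bytes) -> bytes:
    out, carry = [], 0
    for x, y in zip(a, b):    # index 0 holds the least-significant byte
        s = x + y + carry
        out.append(s & 0xFF)  # keep the low byte
        carry = s >> 8        # propagate the carry to the next byte
    return bytes(out)

a = (300).to_bytes(4, "little")
b = (700).to_bytes(4, "little")
assert int.from_bytes(add_bytes_le(a, b), "little") == 1000
```

Division and comparison go the other way: deciding which of two numbers is larger requires looking at the most-significant bytes first.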
For floating-point numbers, see .
Text
When character (text) strings are to be compared with one another, e.g. in order to support some mechanism like sorting, this is very frequently done lexicographically, where a single positional element (character) also has a positional value. Lexicographical comparison almost everywhere means that the first character ranks highest, as in a telephone book.
Integer numbers written as text are always represented most significant digit first in memory, which is similar to big-endian, independently of text direction.
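A practical consequence, sketched below: fixed-width big-endian encodings of unsigned integers sort lexicographically in the same order as the numbers themselves, while little-endian encodings do not (the values chosen are arbitrary examples):

```python
# Sketch: fixed-width big-endian encodings of unsigned integers sort
# lexicographically in the same order as the numbers; little-endian
# encodings do not. The values are arbitrary examples.
nums = [3, 256, 7, 1024]

big = sorted(n.to_bytes(4, "big") for n in nums)
little = sorted(n.to_bytes(4, "little") for n in nums)

assert [int.from_bytes(b, "big") for b in big] == [3, 7, 256, 1024]
assert [int.from_bytes(b, "little") for b in little] == [256, 1024, 3, 7]
```

This is why formats that rely on byte-wise comparison of keys, such as many sorted key–value stores, tend to serialize integers big-endian.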
Hardware
Many historical and extant processors use a big-endian memory representation, either exclusively or as a design option. Other processor types use little-endian memory representation; others use yet another scheme called middle-endian, mixed-endian or PDP-11-endian.
Some instruction sets feature a setting which allows for switchable endianness in data fetches and stores, instruction fetches, or both. This feature can improve performance or simplify the logic of networking devices and software. The word bi-endian, when said of hardware, denotes the capability of the machine to compute or pass data in either endian format.
Dealing with data of different endianness is sometimes termed the NUXI problem. This terminology alludes to the byte order conflicts encountered while adapting UNIX, which ran on the mixed-endian PDP-11, to a big-endian IBM Series/1 computer. Unix was one of the first systems to allow the same code to be compiled for platforms with different internal representations. One of the first programs converted was supposed to print out Unix, but on the Series/1 it printed nUxi instead.
The IBM System/360 uses big-endian byte order, as do its successors System/370, ESA/390, and z/Architecture. The PDP-10 uses big-endian addressing for byte-oriented instructions. The IBM Series/1 minicomputer uses big-endian byte order.
The Datapoint 2200 used simple bit-serial logic with little-endian to facilitate carry propagation. When Intel developed the 8008 microprocessor for Datapoint, they used little-endian for compatibility. However, as Intel was unable to deliver the 8008 in time, Datapoint used a medium scale integration equivalent, but the little-endianness was retained in most Intel designs, including the MCS-48 and the 8086 and its x86 successors. The DEC Alpha, Atmel AVR, VAX, the MOS Technology 6502 family (including Western Design Center 65802 and 65C816), the Zilog Z80 (including Z180 and eZ80), the Altera Nios II, and many other processors and processor families are also little-endian.
The Motorola 6800 / 6801, the 6809 and the 68000 series of processors used the big-endian format.
The Intel 8051, contrary to other Intel processors, expects 16-bit addresses for LJMP and LCALL in big-endian format; however, xCALL instructions store the return address onto the stack in little-endian format.
SPARC historically used big-endian until version 9, which is bi-endian.
Similarly early IBM POWER processors were big-endian, but the PowerPC and Power ISA descendants are now bi-endian.
The ARM architecture was little-endian before version 3 when it became bi-endian.
Newer architectures
The Intel IA-32 and x86-64 series of processors use the little-endian format. Other instruction set architectures that follow this convention, allowing only little-endian mode, include Nios II, Andes Technology NDS32, and Qualcomm Hexagon.
Solely big-endian architectures include the IBM z/Architecture and OpenRISC.
Some instruction set architectures are "bi-endian" and allow running software of either endianness; these include Power ISA, SPARC, ARM AArch64, C-Sky, and RISC-V. The IBM AIX and Oracle Solaris operating systems on bi-endian Power ISA and SPARC, respectively, run in big-endian mode; some distributions of Linux on Power have moved to little-endian mode, but SPARC has no relevant little-endian deployment, and can be considered big-endian in practice. ARM, C-Sky, and RISC-V have no relevant big-endian deployments, and can be considered little-endian in practice.
Bi-endianness
Some architectures (including ARM versions 3 and above, PowerPC, Alpha, SPARC V9, MIPS, Intel i860, PA-RISC, SuperH SH-4 and IA-64) feature a setting which allows for switchable endianness in data fetches and stores, instruction fetches, or both. This feature can improve performance or simplify the logic of networking devices and software. The word bi-endian, when said of hardware, denotes the capability of the machine to compute or pass data in either endian format.
Many of these architectures can be switched via software to default to a specific endian format (usually done when the computer starts up); however, on some systems, the default endianness is selected by hardware on the motherboard and cannot be changed via software (e.g. the Alpha, which runs only in big-endian mode on the Cray T3E).
Note that the term bi-endian refers primarily to how a processor treats data accesses. Instruction accesses (fetches of instruction words) on a given processor may still assume a fixed endianness, even if data accesses are fully bi-endian, though this is not always the case, such as on Intel's IA-64-based Itanium CPU, which allows both.
Note, too, that some nominally bi-endian CPUs require motherboard help to fully switch endianness. For instance, the 32-bit desktop-oriented PowerPC processors in little-endian mode act as little-endian from the point of view of the executing programs, but they require the motherboard to perform a 64-bit swap across all 8 byte lanes to ensure that the little-endian view of things will apply to I/O devices. In the absence of this unusual motherboard hardware, device driver software must write to different addresses to undo the incomplete transformation and also must perform a normal byte swap.
Some CPUs, such as many PowerPC processors intended for embedded use and almost all SPARC processors, allow per-page choice of endianness.
SPARC processors since the late 1990s (SPARC v9 compliant processors) allow data endianness to be chosen with each individual instruction that loads from or stores to memory.
The ARM architecture supports two big-endian modes, called BE-8 and BE-32. CPUs up to ARMv5 only support BE-32 or word-invariant mode. Here any naturally aligned 32-bit access works like in little-endian mode, but access to a byte or 16-bit word is redirected to the corresponding address and unaligned access is not allowed. ARMv6 introduces BE-8 or byte-invariant mode, where access to a single byte works as in little-endian mode, but accessing a 16-bit, 32-bit or (starting with ARMv8) 64-bit word results in a byte swap of the data. This simplifies unaligned memory access as well as memory-mapped access to registers other than 32 bit.
Many processors have instructions to convert a word in a register to the opposite endianness, that is, they swap the order of the bytes in a 16-, 32- or 64-bit word. All the individual bits are not reversed though.
Recent Intel x86 and x86-64 architecture CPUs have a MOVBE instruction (Intel Core since generation 4, after Atom), which fetches a big-endian format word from memory or writes a word into memory in big-endian format. These processors are otherwise thoroughly little-endian.
Floating point
Although many processors use little-endian storage for all types of data (integer, floating point), there are a number of hardware architectures where floating-point numbers are represented in big-endian form while integers are represented in little-endian form. There are ARM processors that have half little-endian, half big-endian floating-point representation for double-precision numbers; both 32-bit words are stored in little-endian like integer registers, but the most significant one first. VAX floating point stores little-endian 16-bit words in big-endian order. Because there have been many floating-point formats with no network standard representation for them, the XDR standard uses big-endian IEEE 754 as its representation. It may therefore appear strange that the widespread IEEE 754 floating-point standard does not specify endianness. Theoretically, this means that even standard IEEE floating-point data written by one machine might not be readable by another. However, on modern standard computers (i.e., implementing IEEE 754), one may safely assume that the endianness is the same for floating-point numbers as for integers, making the conversion straightforward regardless of data type. Small embedded systems using special floating-point formats may be another matter, however.
Variable-length data
Most instructions considered so far contain the sizes (lengths) of their operands within the operation code. Frequently available operand lengths are 1, 2, 4, 8, or 16 bytes. But there are also architectures where the length of an operand may be held in a separate field of the instruction or with the operand itself, e.g. by means of a word mark. Such an approach allows operand lengths up to 256 bytes or even full memory size. The data types of such operands are character strings or BCD.
Machines able to manipulate such data with a single instruction (e.g. compare, add) include the IBM 1401, 1410, 1620, System/3x0, ESA/390, and z/Architecture, all of them big-endian.
Optimization
The little-endian system has the property that the same value can be read from memory at different lengths without using different addresses (even when alignment restrictions are imposed). For example, a 32-bit memory location with content 4A 00 00 00 can be read at the same address as either 8-bit (value = 4A), 16-bit (004A), 24-bit (00004A), or 32-bit (0000004A), all of which retain the same numeric value. Although this little-endian property is rarely used directly by high-level programmers, it is often employed by code optimizers as well as by assembly language programmers.
In more concrete terms, such optimizations are the equivalent of the following C code returning true on most little-endian systems:
#include <stdint.h>
#include <stdio.h>
int main(void) {
    union { uint8_t u8; uint16_t u16; uint32_t u32; uint64_t u64; } u = { .u64 = 0x4A };
    puts(u.u8 == u.u16 && u.u8 == u.u32 && u.u8 == u.u64 ? "true" : "false");
}
While not allowed by C++, such type punning code is allowed as "implementation-defined" by the C11 standard and commonly used in code interacting with hardware.
On the other hand, in some situations it may be useful to obtain an approximation of a multi-byte or multi-word value by reading only its most significant portion instead of the complete representation; a big-endian processor may read such an approximation using the same base-address that would be used for the full value.
Optimizations of this kind are not portable across systems of different endianness.
Calculation order
Some operations in positional number systems have a natural or preferred order in which the elementary steps are to be executed. This order may affect their performance on small-scale byte-addressable processors and microcontrollers. However, high-performance processors usually fetch typical multi-byte operands from memory in the same amount of time they would have fetched a single byte, so the complexity of the hardware is not affected by the byte ordering.
Addition, subtraction, and multiplication start at the least significant digit position and propagate the carry to the subsequent more significant position. Addressing multi-digit data at its first (= smallest address) byte is the predominant addressing scheme. When this first byte contains the least significant digit (which is equivalent to little-endianness), the implementation of these operations is marginally simpler.
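The carry argument can be made concrete: with a little-endian byte layout, multi-byte addition is a single forward pass over increasing addresses (an illustrative sketch; the function name `add_le` is an assumption, not a standard API):

```c
#include <stdint.h>

/* Add two n-byte little-endian numbers.  Because the least significant
   byte sits at index 0 (the lowest address), one forward pass over the
   bytes propagates the carry naturally. */
void add_le(const uint8_t *a, const uint8_t *b, uint8_t *sum, int n) {
    unsigned carry = 0;
    for (int i = 0; i < n; i++) {       /* increasing addresses */
        unsigned t = a[i] + b[i] + carry;
        sum[i] = (uint8_t)t;
        carry = t >> 8;
    }
}
```

For example, adding the 4-byte values 0x0000FFFF and 0x00000001 yields 0x00010000; a big-endian layout would require iterating backwards from the highest address.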
Comparison and division start at the most significant digit and propagate a possible carry to the subsequent less significant digits. For fixed-length numerical values (typically of length 1,2,4,8,16), the implementation of these operations is marginally simpler on big-endian machines.
Many big-endian processors (e.g. the IBM System/360 and its successors) contain hardware instructions for lexicographically comparing varying length character strings.
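The comparison case has a neat consequence: unsigned big-endian integers can be compared with a plain lexicographic byte comparison, since the most significant byte comes first (a minimal sketch; `cmp_be` is an illustrative name):

```c
#include <stdint.h>
#include <string.h>

/* Compare two unsigned 32-bit values stored big-endian in memory.
   Lexicographic byte order coincides with numeric order, so memcmp
   gives the result directly: <0, 0, or >0. */
int cmp_be(const uint8_t a[4], const uint8_t b[4]) {
    return memcmp(a, b, 4);
}
```

This is why big-endian machines can reuse the same hardware path for comparing character strings and fixed-length unsigned integers.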
The normal data transport by an assignment statement is in principle independent of the endianness of the processor.
Middle-endian
Numerous other orderings, generically called middle-endian or mixed-endian, are possible.
The PDP-11 is in principle a 16-bit little-endian system. The instructions to convert between floating-point and integer values in the optional floating-point processor of the PDP-11/45, PDP-11/70, and in some later processors, stored 32-bit "double precision integer long" values with the 16-bit halves swapped from the expected little-endian order. The UNIX C compiler used the same format for 32-bit long integers. This ordering is known as PDP-endian.
A way to interpret this endianness is that it stores a 32-bit integer as two 16-bit words in big-endian order, while the words themselves are stored little-endian (e.g. "jag cog sin" would become "gaj goc nis"). The 16-bit values here refer to their numerical values, not their actual layout.
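On a little-endian host, the PDP-11 byte layout of a 32-bit value can be reproduced by swapping the 16-bit halves before storing (an illustrative sketch):

```c
#include <stdint.h>

/* Swap the two 16-bit halves of a 32-bit value.  Storing the result on a
   little-endian host yields the PDP-11 "middle-endian" byte layout:
   for 0x0A0B0C0D that layout is 0B 0A 0D 0C. */
uint32_t swap_halves(uint32_t v) {
    return (v >> 16) | (v << 16);
}
```

For 0x0A0B0C0D the swapped value is 0x0C0D0A0B, whose little-endian byte image 0B 0A 0D 0C is exactly the PDP-endian ordering.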
Segment descriptors of IA-32 and compatible processors keep a 32-bit base address of the segment stored in little-endian order, but in four nonconsecutive bytes, at relative positions 2, 3, 4 and 7 of the descriptor start.
In date and time notation in the United States, dates are middle-endian and differ from date formats worldwide.
Endian dates
Dates can be represented with different endianness by the ordering of the year, month and day. For example, September 11, 2001 can be represented as:
little-endian date (day, month, year), e.g. 11-09-2001,
middle-endian date (month, day, year), e.g. 09-11-2001,
big-endian date (year, month, day), e.g. 2001-09-11, as with ISO 8601
Byte addressing
When memory bytes are printed sequentially from left to right (e.g. in a hex dump), little-endian representation of integers has the significance increasing from left to right. In other words, it appears backwards when visualized, which can be counter-intuitive.
This behavior arises, for example, in FourCC or similar techniques that involve packing characters into an integer, so that it becomes a sequence of specific characters in memory. Let's define the notation 'John' as simply the result of writing the characters in hexadecimal ASCII and appending 0x to the front, and analogously for shorter sequences (a C multicharacter literal, in Unix/MacOS style):
' J o h n '
hex 4A 6F 68 6E
----------------
-> 0x4A6F686E
On big-endian machines, the value appears in memory left-to-right as 4A 6F 68 6E, coinciding with the correct string order for reading the result ("John").
But on a little-endian machine, one would see 6E 68 6F 4A, i.e. "nhoJ" backwards.
Middle-endian machines like the Honeywell 316 above complicate this even further: the 32-bit value is stored as two 16-bit words in little-endian, themselves with a big-endian notation (thus 68 6E 4A 6F, i.e. "hnJo").
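The packing just described can be written out explicitly: the shifts give the first character the highest significance, and the memory image then depends on the host (a minimal sketch; `fourcc` is an illustrative name):

```c
#include <stdint.h>

/* Pack four ASCII characters into a 32-bit value, first character most
   significant -- "John" becomes 0x4A6F686E.  On a big-endian host the
   stored bytes read "John"; on a little-endian host they read "nhoJ". */
uint32_t fourcc(const char s[4]) {
    return ((uint32_t)(unsigned char)s[0] << 24) |
           ((uint32_t)(unsigned char)s[1] << 16) |
           ((uint32_t)(unsigned char)s[2] <<  8) |
            (uint32_t)(unsigned char)s[3];
}
```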
Byte swapping
Byte-swapping consists of masking each byte and shifting them to the correct location. Many compilers provide built-ins that are likely to be compiled into native processor instructions (bswap/movbe), such as __builtin_bswap32. Software interfaces for swapping include:
Standard network endianness functions (htons, htonl, ntohs, ntohl; from/to BE, up to 32-bit). Windows has a 64-bit extension (htonll, ntohll) in Winsock2.
BSD and glibc endian.h functions (htobe16 through le64toh; from/to BE and LE, up to 64-bit).
macOS OSByteOrder.h macros (OSSwapHostToBigInt16 etc.; from/to BE and LE, up to 64-bit).
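The mask-and-shift approach described above can be written portably in a few lines; compilers typically recognize this pattern and emit a native swap instruction:

```c
#include <stdint.h>

/* Portable 32-bit byte swap via masking and shifting.  Each byte is
   isolated with a mask and moved to its mirrored position. */
uint32_t bswap32(uint32_t v) {
    return ((v & 0x000000FFu) << 24) |
           ((v & 0x0000FF00u) <<  8) |
           ((v & 0x00FF0000u) >>  8) |
           ((v & 0xFF000000u) >> 24);
}
```

For example, bswap32(0x0A0B0C0D) yields 0x0D0C0B0A, and applying the swap twice restores the original value.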
Files and filesystems
The recognition of endianness is important when reading a file or filesystem that was created on a computer with different endianness.
Some CPU instruction sets provide native support for endian byte swapping, such as BSWAP (x86, 486 and later) and REV (ARMv6 and later).
Some compilers have built-in facilities for byte swapping. For example, the Intel Fortran compiler supports the non-standard CONVERT specifier when opening a file, e.g. CONVERT='BIG_ENDIAN'.
Some compilers have options for generating code that globally enable the conversion for all file IO operations. This permits the reuse of code on a system with the opposite endianness without code modification.
Fortran sequential unformatted files created with one endianness usually cannot be read on a system using the other endianness because Fortran usually implements a record (defined as the data written by a single Fortran statement) as data preceded and succeeded by count fields, which are integers equal to the number of bytes in the data. An attempt to read such a file using Fortran on a system of the other endianness then results in a run-time error, because the count fields are incorrect. This problem can be avoided by writing out sequential binary files as opposed to sequential unformatted. Note however that it is relatively simple to write a program in another language (such as C or Python) that parses Fortran sequential unformatted files of "foreign" endianness and converts them to "native" endianness, by converting from the "foreign" endianness when reading the Fortran records and data.
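A sketch of the conversion approach mentioned above: reading one Fortran sequential unformatted record of "foreign" endianness from C by byte-swapping the count fields (the record layout count/payload/count follows the description in the text; the function names are illustrative):

```c
#include <stdint.h>
#include <stdio.h>

/* Swap the bytes of a 32-bit count field. */
uint32_t swap32(uint32_t v) {
    return (v >> 24) | ((v >> 8) & 0xFF00u) |
           ((v << 8) & 0xFF0000u) | (v << 24);
}

/* Read one record (count, payload, trailing count) whose count fields are
   in the opposite endianness.  Returns the payload length, or -1 on a
   malformed or truncated record. */
long read_record(FILE *f, unsigned char *buf, long bufsize) {
    uint32_t n, n2;
    if (fread(&n, 4, 1, f) != 1) return -1;
    n = swap32(n);                                /* convert foreign count */
    if ((long)n > bufsize || fread(buf, 1, n, f) != n) return -1;
    if (fread(&n2, 4, 1, f) != 1 || swap32(n2) != n) return -1;
    return (long)n;
}
```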
Unicode text can optionally start with a byte order mark (BOM) to signal the endianness of the file or stream. Its code point is U+FEFF. In UTF-32 for example, a big-endian file should start with 00 00 FE FF; a little-endian file should start with FF FE 00 00.
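BOM-based detection for UTF-32 reduces to inspecting the first four bytes (a minimal sketch; the function name and return convention are illustrative):

```c
/* Classify a UTF-32 stream by its first four bytes.  The BOM U+FEFF is
   stored as 00 00 FE FF in big-endian and FF FE 00 00 in little-endian.
   Returns 1 for big-endian, -1 for little-endian, 0 if no BOM found. */
int utf32_bom(const unsigned char b[4]) {
    if (b[0] == 0x00 && b[1] == 0x00 && b[2] == 0xFE && b[3] == 0xFF) return 1;
    if (b[0] == 0xFF && b[1] == 0xFE && b[2] == 0x00 && b[3] == 0x00) return -1;
    return 0;
}
```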
Application binary data formats, such as for example MATLAB .mat files, or the .bil data format, used in topography, are usually endianness-independent. This is achieved by storing the data always in one fixed endianness, or carrying with the data a switch to indicate the endianness.
An example of the first case is the binary XLS file format that is portable between Windows and Mac systems and always little-endian, leaving the Mac application to swap the bytes on load and save when running on a big-endian Motorola 68K or PowerPC processor.
TIFF image files are an example of the second strategy, whose header instructs the application about endianness of their internal binary integers. If a file starts with the signature MM it means that integers are represented as big-endian, while II means little-endian. Those signatures need a single 16-bit word each, and they are palindromes (that is, they read the same forwards and backwards), so they are endianness independent. II stands for Intel and MM stands for Motorola, the respective CPU providers of the IBM PC compatibles (Intel) and Apple Macintosh platforms (Motorola) in the 1980s. Intel CPUs are little-endian, while Motorola 680x0 CPUs are big-endian. This explicit signature allows a TIFF reader program to swap bytes if necessary when a given file was generated by a TIFF writer program running on a computer with a different endianness.
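The signature check a TIFF reader performs can be sketched in a few lines (illustrative function name and return convention):

```c
/* Classify a TIFF header by its two signature bytes:
   "II" (Intel) means little-endian integers, "MM" (Motorola) big-endian.
   Returns 'I', 'M', or 0 for an unrecognized signature. */
int tiff_byte_order(const unsigned char b[2]) {
    if (b[0] == 'I' && b[1] == 'I') return 'I';   /* little-endian */
    if (b[0] == 'M' && b[1] == 'M') return 'M';   /* big-endian */
    return 0;
}
```

A reader would then byte-swap every multi-byte integer it parses whenever the signature's order differs from the host's.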
As a consequence of its original implementation on the Intel 8080 platform, the operating system-independent File Allocation Table (FAT) file system is defined with little-endian byte ordering, even on platforms using another endianness natively, necessitating byte-swap operations for maintaining the FAT.
ZFS, which combines a filesystem and a logical volume manager, is known to provide adaptive endianness and to work with both big-endian and little-endian systems.
Networking
Many IETF RFCs use the term network order, meaning the order of transmission for bits and bytes over the wire in network protocols. Among others, the historic RFC 1700 (also known as Internet standard STD 2) has defined the network order for protocols in the Internet protocol suite to be big-endian, hence the use of the term "network byte order" for big-endian byte order.
However, not all protocols use big-endian byte order as the network order. The Server Message Block (SMB) protocol uses little-endian byte order. In CANopen, multi-byte parameters are always sent least significant byte first (little-endian). The same is true for Ethernet Powerlink.
The Berkeley sockets API defines a set of functions to convert 16-bit and 32-bit integers to and from network byte order: the htons (host-to-network-short) and htonl (host-to-network-long) functions convert 16-bit and 32-bit values respectively from machine (host) to network order; the ntohs and ntohl functions convert from network to host order. These functions may be a no-op on a big-endian system.
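A round trip through these POSIX functions illustrates the contract (a minimal sketch; `roundtrip` is an illustrative wrapper):

```c
#include <arpa/inet.h>
#include <stdint.h>

/* Convert a host-order value to network (big-endian) byte order and back.
   On big-endian hosts htonl/ntohl are identities; on little-endian hosts
   each swaps the bytes.  Either way the round trip restores the value. */
uint32_t roundtrip(uint32_t host) {
    uint32_t net = htonl(host);   /* wire representation */
    return ntohl(net);
}
```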
While the high-level network protocols usually consider the byte (mostly meant as octet) as their atomic unit, the lowest network protocols may deal with ordering of bits within a byte.
Bit endianness
Bit numbering is a concept similar to endianness, but on a level of bits, not bytes. Bit endianness or bit-level endianness refers to the transmission order of bits over a serial medium. The bit-level analogue of little-endian (least significant bit goes first) is used in RS-232, HDLC, Ethernet, and USB. Some protocols use the opposite ordering (e.g. Teletext, I2C, SMBus, PMBus, and SONET and SDH), and ARINC 429 uses one ordering for its label field and the other ordering for the remainder of the frame. Usually, there exists a consistent view to the bits irrespective of their order in the byte, such that the latter becomes relevant only on a very low level. One exception is caused by the feature of some cyclic redundancy checks to detect all burst errors up to a known length, which would be spoiled if the bit order is different from the byte order on serial transmission.
Apart from serialization, the terms bit endianness and bit-level endianness are seldom used, as computer architectures where each individual bit has a unique address are rare. Individual bits or bit fields are accessed via their numerical value or, in high-level programming languages, assigned names, the effects of which, however, may be machine dependent or lack software portability.
Computer memory
Data transmission
Metaphors
Software wars
Robert Anton Wilson

Robert Anton Wilson (born Robert Edward Wilson; January 18, 1932 – January 11, 2007) was an American author, futurist, and self-described agnostic mystic. Recognized within Discordianism as an Episkopos, pope and saint, Wilson helped publicize Discordianism through his writings and interviews.
Wilson described his work as an "attempt to break down conditioned associations, to look at the world in a new way, with many models recognized as models or maps, and no one model elevated to the truth". His goal was "to try to get people into a state of generalized agnosticism, not agnosticism about God alone but agnosticism about everything."
In addition to writing several science-fiction novels, Wilson also wrote non-fiction books on extrasensory perception, mental telepathy, metaphysics, paranormal experiences, conspiracy theory, sex, drugs and what Wilson himself called "quantum psychology".
Following a career in journalism and as an editor, notably for Playboy, Wilson emerged as a major countercultural figure in the mid-1970s, comparable to one of his coauthors, Timothy Leary, as well as Terence McKenna.
Early life
Born Robert Edward Wilson in Methodist Hospital, in Brooklyn, New York, he spent his first years in Flatbush, and moved with his family to lower middle class Gerritsen Beach around the age of four or five, where they stayed until relocating to the steadfastly middle-class neighborhood of Bay Ridge when Wilson was thirteen. He suffered from polio as a child, and found generally effective treatment with the Kenny Method (created by Elizabeth Kenny) which the American Medical Association repudiated at that time. Polio's effects remained with Wilson throughout his life, usually manifesting as minor muscle spasms causing him to use a cane occasionally until 2000, when he experienced a major bout with post-polio syndrome that would continue until his death.
He attended Catholic grammar schools before securing admission to the selective Brooklyn Technical High School. Removed from the Catholic influence at "Brooklyn Tech," Wilson became enamored of literary modernism (particularly Ezra Pound and James Joyce), the Western philosophical tradition, then-innovative historians such as Charles A. Beard, science fiction (including the works of Olaf Stapledon, Robert A. Heinlein and Theodore Sturgeon) and Alfred Korzybski's interdisciplinary theory of general semantics. He would later recall that the family was "living so well ... compared to the Depression" during this period "that I imagined we were lace-curtain Irish at last."
Following his graduation in 1950, Wilson was employed in a succession of jobs (including ambulance driver, engineering aide, salesman and medical orderly) and absorbed various philosophers and cultural practices (including bebop, psychoanalysis, Bertrand Russell, Carl Jung, Wilhelm Reich, Leon Trotsky and Ayn Rand, whom he later repudiated) while writing in his spare time. He studied electrical engineering and mathematics at the Brooklyn Polytechnic Institute from 1952 to 1957 and English education at New York University from 1957 to 1958 but failed to take a degree from either institution.
After having smoked marijuana for nearly a decade, Wilson first experimented with mescaline in Yellow Springs, Ohio, on December 28, 1961. Wilson began to work as a freelance journalist and advertising copywriter in the late 1950s. He adopted his maternal grandfather's name, Anton, for his writings and told himself that he would save the "Edward" for when he wrote the Great American Novel. He later found that "Robert Anton Wilson" had become an established identity.
He assumed co-editorship of the School for Living's Brookville, Ohio-based Balanced Living magazine in 1962 and briefly returned to New York as associate editor of Ralph Ginzburg's quarterly, fact:, before leaving for Playboy, where he served as an associate editor from 1965 to 1971. According to Wilson, Playboy "paid me a higher salary than any other magazine at which I had worked and never expected me to become a conformist or sell my soul in return. I enjoyed my years in the Bunny Empire. I only resigned when I reached 40 and felt I could not live with myself if I didn't make an effort to write full-time at last." Along with frequent collaborator Robert Shea, Wilson edited the magazine's Playboy Forum, a letters section consisting of responses to the Playboy Philosophy editorial column. During this period, he covered Timothy Leary and Richard Alpert's Millbrook, New York-based Castalia Foundation at the instigation of Alan Watts in The Realist, cultivated important friendships with William S. Burroughs and Allen Ginsberg, and lectured at the Free University of New York on 'Anarchist and Synergetic Politics' in 1965.
He received a B.A., M.A. (1978) and Ph.D. (1981) in psychology from Paideia University, an unaccredited institution that has since closed. Wilson reworked his dissertation, and it found publication in 1983 as Prometheus Rising.
Wilson married freelance writer and poet Arlen Riley in 1958. They had four children, including Christina Wilson Pearson and Patricia Luna Wilson. Luna was beaten to death in an apparent robbery in the store where she worked in 1976 at the age of 15, and became the first person to have her brain preserved by the Bay Area Cryonics Society. Arlen Riley Wilson died on May 22, 1999, following a series of strokes.
The Illuminatus! Trilogy
Among Wilson's 35 books, and many other works, perhaps his best-known volumes remain the cult classic series The Illuminatus! Trilogy (1975), co-authored with Shea. Advertised as "a fairy tale for paranoids," the three books—The Eye in the Pyramid, The Golden Apple, and Leviathan, soon offered as a single volume—philosophically and humorously examined, among many other themes, occult and magical symbolism and history, the counterculture of the 1960s, secret societies, data concerning author H. P. Lovecraft and author and occultist Aleister Crowley, and American paranoia about conspiracies and conspiracy theories. The book was intended to poke fun at the conspiratorial frame of mind.
Wilson and Shea derived much of the odder material from letters sent to Playboy magazine while they worked as the editors of its Forum. The books mixed true information with imaginative fiction to engage the reader in what Wilson called "guerrilla ontology", which he apparently referred to as "Operation Mindfuck" in Illuminatus! The trilogy also outlined a set of libertarian and anarchist axioms known as Celine's Laws (named after Hagbard Celine, a character in Illuminatus!), concepts Wilson revisited several times in other writings.
Among the many subplots of Illuminatus! one addresses biological warfare and the overriding of the United States Bill of Rights, another gives a detailed account of the John F. Kennedy assassination (in which no fewer than five snipers, all working for different causes, prepare to shoot Kennedy), and the book's climax occurs at a rock concert where the audience collectively face the danger of becoming a mass human sacrifice.
Illuminatus! popularized Discordianism and the use of the term "fnord". It incorporates experimental prose styles influenced by writers such as William S. Burroughs, James Joyce, and Ezra Pound. Although Shea and Wilson never co-operated on such a scale again, Wilson continued to expand upon the themes of the Illuminatus! books throughout his writing career. Most of his later fiction contains cross-over characters from "The Sex Magicians" (Wilson's first novel, written before the release of Illuminatus!, which includes many of his same characters) and The Illuminatus! Trilogy.
Illuminatus! won the Prometheus Hall of Fame award for Best Classic Fiction, voted by the Libertarian Futurist Society for science fiction in 1986, has many international editions, and found adaptation for the stage when Ken Campbell produced it as a ten-hour drama. It also appeared as two card-based games from Steve Jackson Games, one a trading-card game (Illuminati: New World Order). Eye N Apple Productions and Rip Off Press produced a comic book version of the trilogy.
Schrödinger's Cat Trilogy, The Historical Illuminatus Chronicles, and Masks of the Illuminati
Wilson wrote two more popular fiction series. The first, a trilogy later published as a single volume, was Schrödinger's Cat. The second, The Historical Illuminatus Chronicles, appeared as three books. In between publishing the two trilogies Wilson released a stand-alone novel, Masks of the Illuminati (1981), which fits into The Historical Illuminatus Chronicles timeline owing to the main character's ancestry and, while published earlier, could serve as the fourth volume in that series.

Schrödinger's Cat consists of three volumes: The Universe Next Door, The Trick Top Hat, and The Homing Pigeons. Wilson set the three books in differing alternative universes; most of the characters remain almost the same but may have different names, careers and background stories. The books cover the fields of quantum mechanics and the varied philosophies and explanations that exist within the science. The single volume describes itself as a magical textbook and a type of initiation. The single-volume edition omits many entire pages and has many other omissions when compared with the original separate books.

The Historical Illuminatus Chronicles, composed of The Earth Will Shake (1982), The Widow's Son (1985), and Nature's God (1991), follows the timelines of several characters through different generations, time periods, and countries. The books cover, among many other topics, the history, legacy, and rituals of the Illuminati and related groups.

Masks of the Illuminati, featuring historical characters in a fictionalized setting, contains a great deal of occult data. Intermixing Albert Einstein, James Joyce, Aleister Crowley, Sigmund Freud, Carl Jung, Vladimir Ilyich Lenin, and others, the book focuses on Pan and other occult icons, ideas, and practices. The book includes homages, parodies and pastiches from both the lives and works of Crowley and Joyce.
Plays and screenplays
Wilson's play, Wilhelm Reich in Hell, was published as a book in 1987 and first performed at the Edmund Burke Theatre in Dublin, in San Francisco, and in Los Angeles. It features many factual and fictional characters, including Marilyn Monroe, Uncle Sam, and Wilhelm Reich himself. Wilson also wrote and published as books two screenplays, not yet produced: Reality Is What You Can Get Away With: an Illustrated Screenplay (1992) and The Walls Came Tumbling Down (1997).
Wilson's book Cosmic Trigger I: The Final Secret of the Illuminati has been adapted as a theatrical stage play by Daisy Eris Campbell, daughter of Ken Campbell the British theatre maverick who staged Illuminatus! at the Royal National Theatre in 1977. The play opened on November 23, 2014, in Liverpool before transferring to London and Brighton. Some of the costs were met through crowdfunding. Wilson's book is itself dedicated to "Ken Campbell and the Science-Fiction Theatre Of Liverpool, England."
The Cosmic Trigger series and other books
In his nonfiction and partly autobiographical Cosmic Trigger I: The Final Secret of the Illuminati (1977) and its two sequels, as well as in many other works, Wilson examined Freemasons, Discordianism, Sufism, the Illuminati, Futurology, Zen Buddhism, Dennis and Terence McKenna, Jack Parsons, the occult practices of Aleister Crowley and G.I. Gurdjieff, Yoga, and many other esoteric or counterculture philosophies, personalities, and occurrences.
Wilson advocated Timothy Leary's 8-Circuit Model of Consciousness and neurosomatic/linguistic engineering, which he wrote about in many books including Prometheus Rising (1983, revised 1997) and Quantum Psychology (1990), which contain practical techniques intended to help readers break free of their reality tunnels. With Leary, he helped promote the futurist ideas of space migration, intelligence increase, and life extension, which they combined to form the word symbol SMI²LE.
Wilson's 1986 book, The New Inquisition, argues that whatever reality consists of, it would seem much weirder than we commonly imagine. It cites, among other sources, Bell's theorem and Alain Aspect's experimental confirmation of it to suggest that mainstream science has a strong materialist bias, and that modern physics may have already disproved materialist metaphysics.
Wilson also supported the work and utopian theories of Buckminster Fuller and examined the theories of Charles Fort. He and Loren Coleman became friends, as he did with media theorist Marshall McLuhan and Neuro Linguistic Programming co-founder Richard Bandler, with whom he taught workshops. He also admired James Joyce, and wrote extensive commentaries on the author and on two of Joyce's novels, Finnegans Wake and Ulysses, in his 1988 book Coincidance: A Head Test.
Although Wilson often lampooned and criticized some New Age beliefs, bookstores specializing in New Age material often sell his books. A well-known author in occult and Neo-Pagan circles, Wilson used Aleister Crowley as a main character in his 1981 novel Masks of the Illuminati, included elements of H. P. Lovecraft's work in his novels, and at times claimed to have perceived encounters with magical "entities". When asked whether these entities seemed "real", he answered that they seemed "real enough", although "not as real as the IRS" but "easier to get rid of", and he later decided that his experiences may have emerged from "just my right brain hemisphere talking to my left". He warned beginners against occult practice, since rushing into such practices and the "energies" they unleash could lead people to "go totally nuts".
Wilson also criticized scientific types with overly rigid belief systems, equating them with religious fundamentalists in their fanaticism. In a 1988 interview, when asked about his newly published book The New Inquisition: Irrational Rationalism and the Citadel of Science, Wilson commented:
I coined the term irrational rationalism because those people claim to be rationalists, but they're governed by such a heavy body of taboos. They're so fearful, and so hostile, and so narrow, and frightened, and uptight and dogmatic ... I wrote this book because I got tired satirizing fundamentalist Christianity ... I decided to satirize fundamentalist materialism for a change, because the two are equally comical ... The materialist fundamentalists are funnier than the Christian fundamentalists, because they think they're rational! ... They're never skeptical about anything except the things they have a prejudice against. None of them ever says anything skeptical about the AMA, or about anything in establishment science or any entrenched dogma. They're only skeptical about new ideas that frighten them. They're actually dogmatically committed to what they were taught when they were in college. ...
Probability reliance
In a 2003 interview with High Times magazine, Wilson described himself as "model-agnostic" which he said
consists of never regarding any model or map of the universe with total 100% belief or total 100% denial. Following Korzybski, I put things in probabilities, not absolutes ... My only originality lies in applying this zetetic attitude outside the hardest of the hard sciences, physics, to softer sciences and then to non-sciences like politics, ideology, jury verdicts and, of course, conspiracy theory.
Wilson claimed in Cosmic Trigger: Volume 1 "not to believe anything", since "belief is the death of intelligence". He described this approach as "Maybe Logic."
Wilson wrote about this and other topics in articles for the cyberpunk magazine Mondo 2000.
Economic thought
Wilson favored a form of basic income guarantee, synthesizing several ideas under the acronym RICH. His ideas are set forth in the essay "The RICH Economy," found in The Illuminati Papers. In an article critical of capitalism, Wilson self-identified as a "libertarian socialist", saying that "I ask only one thing of skeptics: don't bring up Soviet Russia, please. That horrible example of State Capitalism has nothing to do with what I, and other libertarian socialists, would offer as an alternative to the present system." By the 1980s he was less enthusiastic about the socialist label, writing in Prometheus Rising that he "does not like" the spread of socialism. In his book Right Where You Are Sitting Now, he praises the georgist economist Silvio Gesell. In the essay Left and Right: A Non-Euclidean Perspective, Wilson speaks favorably of several "excluded middles" that "transcend the hackneyed debate between monopoly Capitalism and totalitarian Socialism"; he says his favorite is the mutualist anarchism of Benjamin Tucker and Pierre-Joseph Proudhon, but he also offers kind words for the ideas of Gesell, Henry George, C. H. Douglas, and Buckminster Fuller. Wilson also identified as an anarchist and described his belief system as "a blend of Tucker, Spooner, Fuller, Pound, Henry George, Rothbard, Douglas, Korzybski, Proudhon and Marx." Wilson spoke several times at conventions of the American Libertarian Party. He included Benjamin Tucker's Instead of a Book, Henry George's Progress and Poverty, and Gesell's The Natural Economic Order in a list of 20 book recommendations, "the bare minimum of what everybody really needs to chew and digest before they can converse intelligently about the 21st Century."
Other activities
Robert Anton Wilson and his wife Arlen Riley Wilson founded the Institute for the Study of the Human Future in 1975.
From 1982 until his death, Wilson had a business relationship with the Association for Consciousness Exploration, which hosted his first on-stage dialogue with his long-time friend Timothy Leary, entitled The Inner Frontier ("Two 60s Cult Heroes, on the Eve of the 80s" by James Neff, Cleveland Plain Dealer, October 30, 1979). Wilson dedicated his book The New Inquisition to A.C.E.'s co-directors, Jeff Rosenbaum and Joseph Rothenberg.
Wilson also joined the Church of the SubGenius, who referred to him as Pope Bob. He contributed to their literature, including the book Three-Fisted Tales of "Bob", and shared a stage with their founder, Rev. Ivan Stang, on several occasions. Wilson also founded the Guns and Dope Party.
As a member of the Board of Advisors of the Fully Informed Jury Association, Wilson worked to inform the public about jury nullification, the right of jurors to nullify a law they deem unjust.
Wilson advocated for and wrote about E-Prime, a form of English lacking all forms of the verb "to be" (such as "is", "are", "was", "were" etc.).
A decades-long researcher into drugs and a strong opponent of what he called "the war on some drugs", Wilson participated as a Special Guest in the week-long 1999 Annual Cannabis Cup in Amsterdam, and used and often promoted the use of medical marijuana. He participated in a protest organized by the Wo/Men's Alliance for Medical Marijuana in Santa Cruz in 2002.
Death
On June 22, 2006, Huffington Post blogger Paul Krassner reported that Wilson was under hospice care at home with friends and family. On October 2, Douglas Rushkoff reported that Wilson was in severe financial trouble. Slashdot, Boing Boing, and the Church of the SubGenius also picked up on the story, linking to Rushkoff's appeal ("Robert Anton Wilson needs our Help", Boing Boing, October 2, 2006). As his webpage reported on October 10, these efforts succeeded beyond expectation and raised a sum which would have supported him for at least six months. Obviously touched by the great outpouring of support, on October 5, 2006, Wilson left the following comment on his personal website, expressing his gratitude:
On January 6, 2007, Wilson wrote on his blog that according to several medical authorities, he would likely only have between two days and two months left to live. He closed this message with "I look forward without dogmatic optimism but without dread. I love you all and I deeply implore you to keep the lasagna flying. Please pardon my levity, I don't see how to take death seriously. It seems absurd."
Wilson died peacefully five days later, on January 11 at 4:50 a.m. Pacific time, just a week short of his 75th birthday. After his cremation on January 18 (what would have been his 75th birthday), his family held a memorial service on February 18 and then scattered most of his ashes at the same spot as his wife's—off the Santa Cruz Beach Boardwalk in Santa Cruz, California.
A tribute show to Wilson, organized by Coldcut and Mixmaster Morris and performed in London as a part of the "Ether 07 Festival" held at the Queen Elizabeth Hall on March 18, 2007, also included Ken Campbell, Bill Drummond and Alan Moore.
Cultural references
Wilson appears as a fictional version of himself in Timothy Leary's 1979 book, The Intelligence Agents. It features a full facsimile reproduction of an article ostensibly authored by Wilson, titled Marilyn's Inout System, from Peeple Magazine of March 1986.
Bibliography
Novels
The Sex Magicians (1973)
The Illuminatus! Trilogy (1975) (with Robert Shea)
The Eye in the Pyramid The Golden Apple Leviathan Schrödinger's Cat Trilogy (1979–1981)
The Universe Next Door The Trick Top Hat The Homing Pigeons Masks of the Illuminati (1981)
The Historical Illuminatus Chronicles The Earth Will Shake (1982)
The Widow's Son (1985)
Nature's God (1988)
Autobiographical / philosophical
Cosmic Trigger Trilogy.
Cosmic Trigger I: The Final Secret of the Illuminati (1977)
Cosmic Trigger II: Down to Earth (1991)
Cosmic Trigger III: My Life After Death (1995)
Non-fiction
Playboy's Book of Forbidden Words (1972)
Sex and Drugs: A Journey Beyond Limits (1973)
The Book of the Breast (1974)
Revised as Ishtar Rising (1989)
Neuropolitics (1978) (with Timothy Leary and George Koopman)
Revised as Neuropolitique (1988)
The Game of Life (1979) (with Timothy Leary)
Prometheus Rising (1983)
The New Inquisition (1986)
Natural Law, or Don't Put a Rubber on Your Willy (1987)
Sex, Drugs and Magick: A Journey Beyond Limits (1988) revision, with new introduction, of Sex and Drugs: A Journey Beyond Limits
Quantum Psychology (1990)
Everything Is Under Control: Conspiracies, Cults and Cover-ups, with Miriam Joan Hill. New York: HarperCollins (1998)
TSOG: The Thing That Ate the Constitution (2002)
Articles
"Three Authors in Search of Sadism." The Realist, no. 67 (May 1966), p. 1. . .
Plays and screenplays
Wilhelm Reich in Hell (1987)
Reality Is What You Can Get Away With (1992; revised edition—new introduction added—1996)
The Walls Came Tumbling Down (1997)
Essay collections
The Illuminati Papers (1980) collection of essays and new material
Right Where You Are Sitting Now (1983) collection of essays and new material
Coincidance: A Head Test (1988) collection of essays and new material
Email to the universe and other alterations of consciousness (2005) collection of essays and new material
More Chaos and Beyond (2019) posthumous anthology of previously uncollected material
As editor
Semiotext(e) SF (1989) (anthology, editor, with Rudy Rucker and Peter Lamborn Wilson)
Chaos and Beyond (1994) (editor and primary author)
Discography
A Meeting with Robert Anton Wilson (ACE) cassette
Religion for the Hell of It (ACE) cassette
H.O.M.E.s on LaGrange (ACE) cassette
The New Inquisition (ACE) cassette
The H.E.A.D. Revolution (ACE) cassette and CD
Prometheus Rising (ACE) cassette
The Inner Frontier (with Timothy Leary) (ACE) cassette
The Magickal Movement: Present & Future (with Margot Adler, Isaac Bonewits & Selena Fox) (ACE) Panel Discussion – cassette
Magick Changing the World, the World Changing Magick (ACE) Panel Discussion – cassette
The Self in Transformation (ACE) Panel Discussion – cassette
The Once & Future Legend (with Ivan Stang, Robert Shea and others) (ACE) Panel Discussion – cassette
What IS the Conspiracy, Anyway? (ACE) Panel Discussion – cassette
The Chocolate-Biscuit Conspiracy album with The Golden Horde (1984)
Twelve Eggs in a Basket CD
Robert Anton Wilson On Finnegans Wake and Joseph Campbell (interview by Faustin Bray and Brian Wallace) (1988) 2-CD Set Sound Photosynthesis
Acceleration of Knowledge (1991) cassette
Secrets of Power comedy cassette
Robert Anton Wilson Explains Everything: or Old Bob Exposes His Ignorance (2001) Sounds True
Filmography
Actor
Robert Anton Wilson appeared in the 1998 German film 23 – Nichts ist so wie es scheint. He appears for approximately two minutes as himself, with the main actor, portraying hacker Karl Koch, meeting Wilson at the annual German Computer Hackers Convention in 1985. The film is a biographical piece about Germany's infamous computer hackers, and the 1985 meeting in Germany between Wilson and Koch is authentic. Wilson spoke at the 1985 convention, warning of a future in which governments would have total digital control over the citizen, and he signed one of his books for Koch. These events are depicted in the film.
Writer
Wilhelm Reich in Hell (2005) (Video) Deepleaf Productions
Himself
Children of the Revolution: Tune Back In (2005) Revolutionary Child Productions
The Gospel According to Philip K. Dick (2001) TKO Productions
23 (1998) (23 – Nichts ist so wie es scheint) Claussen & Wöbke Filmproduktion GmbH (Germany)
Arise! The SubGenius Video (1992) (Recruitment Film #16) The SubGenius Foundation (USA)
Borders (1989) Co-Directions Inc. (TV documentary)
Fear In The Night: Demons, Incest and UFOs (1993) Video – Trajectories
Twelve Eggs in a Box: Myth, Ritual and the Jury System (1994) Video – Trajectories
Consciousness, Conspiracy and Coincidence (1995). Interview with Robert Anton Wilson. New Thinking Allowed, with Jeffrey Mishlove.
Everything Is Under Control: Robert Anton Wilson in Interview (1998) Video – Trajectories
Documentary
Maybe Logic: The Lives and Ideas of Robert Anton Wilson, a documentary featuring selections from over 25 years of Wilson footage, released on DVD in North America on May 30, 2006.
See also
23 Enigma
Chaos magic
General semantics
List of Discordian works
List of occult writers
Max Stirner
The Sekhmet Hypothesis
Smart drugs (Nootropics)
Trajectories
References
External links
, now maintained by his family
RAW Data 2.0, Wilson's blog, now maintained by his daughter, Christina
RAW Data, Wilson's first blog
Guns and Dope Party, Political party created by Wilson and Olga Struthio
Right Where You Are Sitting Now Podcast Extensive two-hour Robert Anton Wilson tribute podcast, featuring audio clips, and interviews with friends of Wilson
A collection of RAW audio/video from his publisher
Robert Anton Wilson Fans
Cosmic Trigger Play website
1932 births
2007 deaths
20th-century American male writers
20th-century American memoirists
20th-century American novelists
20th-century essayists
American agnostics
American anarchists
American anti-capitalists
American expatriates in Ireland
American futurologists
American logicians
American male dramatists and playwrights
American male essayists
American male novelists
American male poets
American Modern Pagans
American occult writers
American political party founders
American science fiction writers
American SubGenii
Anarchist writers
Brooklyn Technical High School alumni
Consciousness researchers and theorists
Counterculture of the 1960s
Critics of Objectivism (Ayn Rand)
Critics of religions
Critics of the Catholic Church
Discordians
Epistemologists
Former Roman Catholics
Futurologists
General semantics
Jury nullification
Libertarian socialists
Logicians
Metaphysicians
Mutualists
Mystics
Modern Pagan novelists
Novelists from New York (state)
Ontologists
People from Bay Ridge, Brooklyn
People from Capitola, California
People from Flatbush, Brooklyn
Philosophers from California
Philosophers from New York (state)
Philosophers of culture
Philosophers of language
Philosophers of law
Philosophers of logic
Philosophers of mind
Philosophers of religion
Philosophers of social science
Playboy people
Polytechnic Institute of New York University alumni
Psychedelic drug advocates
Wilhelm Reich
Writers about activism and social change
Writers from Brooklyn
People with polio
Flaming (Internet)

Flaming or roasting is the act of posting insults, often including profanity or other offensive language, on the internet. The term should not be confused with trolling, which is the act of someone going online, or in person, and causing discord. Flaming is fostered by the anonymity that Internet forums provide, which gives users cover to act more aggressively. Anonymity can lead to disinhibition, which results in the swearing, offensive, and hostile language characteristic of flaming. A lack of social cues, reduced accountability compared with face-to-face communication, textual mediation, and deindividualization are also likely factors. Deliberate flaming is carried out by individuals known as flamers, who are specifically motivated to incite flaming. These users specialize in flaming and target specific aspects of a controversial conversation.
While these behaviors may be typical or expected in certain types of forums, they can have dramatic, adverse effects in others. Flame wars can have a lasting impact on some internet communities: even once a flame war has concluded, the community may divide or even dissolve.
Pleasant commentary within a chat room or message board can be cut short by a "war of words" or "flaming" intended to provoke a negative reaction from the reader. Humphreys defines flaming as "the use of hostile language online, including swearing, insults and otherwise offensive language". Flaming within an online community is usually delivered through text and rarely through face-to-face or video communication. Because flamers base their conversations on text and do not take full accountability, they have reduced awareness of others' feelings, emotions, and reactions to the comments they post in the virtual community. The reader, in turn, perceives the flamer as difficult, rude, and possibly a bully. The flamer may lack the social cues and emotional intelligence needed to adapt to others' reactions, as well as awareness of how they are being perceived. Their personal social norms may be considered disrespectful by readers with different norms, education, and experience of what is and is not appropriate within virtual communities.
Individuals who create an environment of flaming and hostility lead readers to disengage with the offender and potentially to leave the message board or chat room. Continual flaming within an online community can create a disruptive and negative experience for those involved and can lead to limited involvement and engagement in the original chat room and program.
Purpose
Social researchers have investigated flaming, coming up with several different theories about the phenomenon. These include deindividuation and reduced awareness of other people's feelings (online disinhibition effect), conformance to perceived norms, miscommunication caused by the lack of social cues available in face-to-face communication, and anti-normative behavior.
Jacob Borders, in discussing participants' internal modeling of a discussion, says:

Mental models are fuzzy, incomplete, and imprecisely stated. Furthermore, within a single individual, mental models change with time, even during the flow of a single conversation. The human mind assembles a few relationships to fit the context of a discussion. As debate shifts, so do the mental models. Even when only a single topic is being discussed, each participant in a conversation employs a different mental model to interpret the subject. Fundamental assumptions differ but are never brought into the open. Goals are different but left unstated. It is little wonder that compromise takes so long. And even when consensus is reached, the underlying assumptions may be fallacies that lead to laws and programs that fail. The human mind is not adapted to understanding correctly the consequences implied by a mental model. A mental model may be correct in structure and assumptions but, even so, the human mind—either individually or as a group consensus—is apt to draw the wrong implications for the future.

Thus, online conversations often involve a variety of assumptions and motives unique to each individual user. Without social context, users are often helpless to know the intentions of their counterparts. In addition to the problems of conflicting mental models often present in online discussions, the inherent lack of face-to-face communication online can encourage hostility. Professor Norman Johnson, commenting on the propensity of Internet posters to flame one another, states:

The literature suggests that, compared to face-to-face, the increased incidence of flaming when using computer-mediated communication is due to reductions in the transfer of social cues, which decrease individuals' concern for social evaluation and fear of social sanctions or reprisals. When social identity and ingroup status are salient, computer mediation can decrease flaming because individuals focus their attention on the social context (and associated norms) rather than themselves.

A lack of social context creates an element of anonymity, which allows users to feel insulated from the forms of punishment they might receive in a more conventional setting. Johnson identifies several precursors to flaming between users, whom he refers to as "negotiation partners," since Internet communication typically involves back-and-forth interactions similar to a negotiation. Flaming incidents usually arise in response to a perception of one or more negotiation partners being unfair. Perceived unfairness can include a lack of consideration for an individual's vested interests, unfavorable treatment (especially when the flamer has been considerate of other users), and misunderstandings aggravated by the inability to convey subtle indicators like non-verbal cues and facial expressions.
Factors
Multiple factors play into why people get involved in flaming. One is anonymity: people can use various means to hide their identity, and behind a new persona they may act in ways they normally would not when their identity is known. Another factor is proactive aggression, "which is initiated without perceived threat or provocation"; recipients of flaming may counter with flaming of their own, employing reactive aggression. Communication variables also play a role: offline communication networks can affect the way people act online and can lead them to engage in flaming. Finally, people who engage in verbal aggression offline tend to use the same tactics when flaming online.
Flaming behavior can range from subtle to extremely aggressive, encompassing derogatory images, certain emojis used in combination, and even the use of capital letters; these can form a pattern of behavior used to convey certain emotions online. Victims should do their best to avoid fighting back, to prevent a war of words. Flaming also extends past social media interactions: it can take place through email, and whether an email counts as a "flame" depends on whether the recipient considers it hostile, aggressive, insulting, or offensive. What matters is how the person receives the interaction; so much is lost when communicating online rather than in person that it is hard to discern someone's intent.
History
Evidence of debates which resulted in insults being exchanged quickly back and forth between two parties can be found throughout history. Arguments over the ratification of the United States Constitution were often socially and emotionally heated and intense, with many attacking one another through local newspapers. Such interactions have always been part of literary criticism. For example, Ralph Waldo Emerson's contempt for Jane Austen's works often extended to the author herself, with Emerson describing her as "without genius, wit, or knowledge of the world". In turn, Thomas Carlyle called Emerson a "hoary-headed toothless baboon".
In the modern era, "flaming" was used at East Coast engineering schools in the United States as a present participle in a crude expression to describe an irascible individual and by extension to such individuals on the earliest Internet chat rooms and message boards. Internet flaming was mostly observed in Usenet newsgroups although it was known to occur in the WWIVnet and FidoNet computer networks as well. It was subsequently used in other parts of speech with much the same meaning.
The term "flaming" was seen on Usenet newsgroups in the eighties, where the start of a flame was sometimes indicated by typing "FLAME ON", then "FLAME OFF" when the flame section of the post was complete. This is a reference to both The Human Torch of the Fantastic Four, who used those words when activating his flame abilities, and to the way text processing programs of the time worked, by placing commands before and after text to indicate how it should appear when printed.
The term "flaming" is documented in The Hacker's Dictionary, which in 1983 defined it as "to speak rabidly or incessantly on an uninteresting topic or with a patently ridiculous attitude". The meaning of the word has diverged from this definition since then.
Jerry Pournelle in 1986 explained why he wanted a kill file for BIX:
He added, "I noticed something: most of the irritation came from a handful of people, sometimes only one or two. If I could only ignore them, the computer conferences were still valuable. Alas, it's not always easy to do".
Computer-mediated communication (CMC) research has spent a significant amount of time and effort describing and predicting engagement in uncivil, aggressive online communication. Specifically, the literature has described aggressive, insulting behavior as "flaming", which has been defined as hostile verbal behaviors, the uninhibited expression of hostility, insults, and ridicule, and hostile comments directed towards a person or organization within the context of CMC.
Types
Flame trolling
Flame trolling is the posting of a provocative or offensive message, known as flamebait, to a public Internet discussion group, such as a forum, newsgroup or mailing list, with the intent of provoking an angry response (a "flame") or argument.
Flamebait can provide the poster with a controlled trigger-and-response setting in which to anonymously engage in conflicts and indulge in aggressive behavior without facing the consequences that such behavior might bring in a face-to-face encounter. In other instances, flamebait may be used to reduce a forum's use by angering the forum users. In 2012, it was announced that the US State Department would start flame trolling jihadists as part of Operation Viral Peace.
Among the characteristics of inflammatory behavior, the use of entirely capitalized messages, or the multiple repetition of exclamation marks, along with profanity have been identified as typical.
Flame war
A flame war results when multiple users engage in provocative responses to an original post, which is sometimes flamebait. Flame wars often draw in many users, including those trying to defuse the flame war, and can quickly turn into a mass flame war that overshadows regular forum discussion.
Resolving a flame war can be difficult, as it is often hard to determine who is really responsible for the degradation of a reasonable discussion into a flame war. Someone who posts a contrary opinion in a strongly focused discussion forum may be easily labeled a "baiter", "flamer", or "troll".
Flame wars can become intense and can include "death threats, ad hominem invective, and textual amplifiers", but to some sociologists flame wars can actually bring people together. What is said in a flame war should not be taken too seriously, since the harsh words are a part of flaming.
An approach to resolving a flame war or responding to flaming is to communicate openly with the offending users. Acknowledging mistakes, offering to help resolve the disagreement, making clear, reasoned arguments, and even self-deprecation have all been noted as worthwhile strategies to end such disputes. However, others prefer to simply ignore flaming, noting that, in many cases, if the flamebait receives no attention, it will quickly be forgotten as forum discussions carry on. Unfortunately, this can motivate trolls to intensify their activities, creating additional distractions.
"Taking the bait" or "feeding the troll" refers to someone who responds to the original message regardless of whether they are aware the original message was intended to provoke a response. Often when someone takes the bait, others will point this out to them with the acronym "YHBT", which is short for "You have been trolled", or reply with "don't feed the trolls". Forum users will usually not give the troll acknowledgement; that just "feeds the troll".
In sociology, history, or any kind of online ethnographic academic study, flame wars as a corpus, in an STS approach to controversies, may be used to understand what is at stake in a community. The idea is that the flame war drives the actors to abandon a polite stance and forces them to engage in debate and to unveil otherwise concealed arguments. In this respect, the most interesting parts of an online corpus are the flame wars as "outbursts of heated, short and dense debates, in an ocean of evenly distributed polite messages".
Mass flamewar
A mass flamewar is a flame war that quickly grows out of a single post or comment into many other comments or posts in the same area as the original post. A mass flamewar usually lasts for weeks or months after the original post before dying out.
Political flaming
Political flaming typically occurs when people have their views challenged and seek to make their anger known. By concealing their identity, people may be more likely to engage in political flaming. In a 2015 study, Hutchens, Cicchirillo, and Hmielowski found that "those who were more experienced with political discussions—either online or offline—were more likely to indicate they would respond with a flame", and that verbal aggression also played a role in whether a person engaged in political flaming.
Corporate flaming
Corporate flaming is when a large number of critical comments, usually aggressive or insulting, are directed at a company's employees, products, or brands. Common causes include inappropriate behavior by company employees, negative customer experiences, inadequate care of customers and influencers, violation of ethical principles, and apparent injustices or inappropriate reactions. Flame wars can result in reputational damage, decreased consumer confidence, drops in stock prices and company assets, increased liabilities, increased lawsuits, and a loss of customers, influencers, and sponsors. Depending on the damage, companies can take years to recover from a flame war that detracts from their core purpose. Kayser notes that companies should prepare for possible flame wars by creating alerts for a predefined "blacklist" of words and by monitoring fast-growing topics about the company. Alternatively, Kayser points out that a flame war can lead to a positive experience for the company: depending on the content, it could be shared across multiple platforms and increase company recognition, social media fans and followers, brand presence, purchases, and brand loyalty. The type of marketing that results from a flame war can therefore lead to higher profits and brand recognition on a broader scale. Nevertheless, when a company uses social media it should be aware that its content could be drawn into a flame war, which should be treated as an emergency.
Examples
Any subject of a polarizing nature can feasibly cause flaming. As one would expect in the medium of the Internet, technology is a common topic. The perennial debates between users of competing operating systems, such as Windows, Classic Mac OS and macOS, or the Linux operating system and iOS or Android operating system, users of Intel and AMD processors, and users of the Nintendo Switch, Wii U, PlayStation 4 and Xbox One video game systems, often escalate into seemingly unending "flame wars", also called software wars. As each successive technology is released, it develops its own outspoken fan base, allowing arguments to begin anew.
Popular culture continues to generate large amounts of flaming and countless flame wars across the Internet, such as the constant debates between fans of Star Trek and Star Wars. Ongoing discussion of current celebrities and television personalities within popular culture also frequently sparks debate.
In 2005, author Anne Rice became involved in a flame war of sorts on the review boards of online retailer Amazon.com after several reviewers posted scathing comments about her latest novel. Rice responded to the comments with her own lengthy response, which was quickly met with more feedback from users.
In 2007, tech expert Kathy Sierra was a victim of flaming when an image depicting her as a mutilated body was spread around online forums. In addition to the doctored photo spreading virally, her Social Security number and home address were made public. Sierra effectively gave up her technology career in response to the ensuing harassment and threats.
In November 2007, the popular audio-visual discussion site AVS Forum temporarily closed its HD DVD and Blu-ray discussion forums because of, as the site reported, "physical threats that have involved police and possible legal action" between advocates of the rival formats.
The 2016 United States presidential election saw a flame war between Republican candidate Donald Trump and Democratic candidate Hillary Clinton. The barbs exchanged between the two were highly publicized and are an example of political flaming and a flame war.
Legal implications
Flaming varies in severity, and so does the reaction of states in imposing sanctions. Laws vary from country to country, but in most cases constant flaming can be considered cyber harassment, which can result in Internet service provider action to prevent access to the site being flamed. As social networks become more closely connected to people's real lives, harsh words may be considered defamation. For instance, a South Korean identity-verification law was created to help control flaming and to stop "malicious use of the internet", but opponents argue that the law infringes on the right to free speech.
See also
Cyberbullying
Dogpiling
Eristic
Fisking
Forumwarz
Godwin's law
"It's okay to be white"
Internet troll
Meow Wars
Smack talk
Social software
Spiral of silence on the Internet
References
Further reading
External links
An Interactional Reconceptualization of "Flaming" and Other Problematic Messages, by Patrick B. O'Sullivan and Andrew J. Flanagin
FlameWarriors.net
Older flamebait reference on USENET, 1985 (via Google Groups)
Flame War Management Handling Crisis in the Social Media Age
Cyberbullying
Internet culture
Internet forum terminology
Internet trolling |
3835008 | https://en.wikipedia.org/wiki/James%20A.%20Baker%20%28government%20attorney%29 | James A. Baker (government attorney) | James Andrew Baker is a former American government official at the Department of Justice who served as general counsel for the Federal Bureau of Investigation (FBI).
A graduate of the University of Notre Dame and the University of Michigan Law School, he joined the Department of Justice in 1990.
In December 2017 he was replaced as general counsel and reassigned to a different position within the FBI. It was revealed on April 19, 2018 that he was a recipient of at least one Comey memo. On May 4, 2018, Baker resigned from the FBI and joined the Brookings Institution as a fellow, writing for the justice-focused blog Lawfare. In January 2019, Baker left Brookings to become the director of national security and cybersecurity at the R Street Institute, a conservative think tank in Washington, D.C. He also teaches at Harvard Law School.
Education
Baker is a graduate of the University of Notre Dame and received a J.D. and an M.A. from the University of Michigan in 1988. He has taught national security law at Harvard Law School since 2009.
Government service
Baker joined the Criminal Division of the Department of Justice through the Attorney General's Honors Program in 1990 and went on to work as a federal prosecutor with the division's fraud section. In 1996 he joined the Office of Intelligence Policy and Review (OIPR). This government agency handles all Justice Department requests for surveillance authorizations under the terms of the 1978 Foreign Intelligence Surveillance Act, advises the Attorney General and all major intelligence-gathering agencies on legal issues relating to national security and surveillance, and "coordinates" the views of the intelligence community regarding intelligence legislation. Baker often testified before Congress on behalf of Clinton and Bush administration intelligence policies, including defending the Patriot Act before the House Judiciary Committee. Regarding Baker's 2007 appearance on the PBS Frontline episode "Spying on the Home Front", the show's producer, in a Washington Post online chat, referred to Baker as "Mr. FISA himself".
In 1998, Baker was promoted to deputy counsel for intelligence operations. From May 2001 he served as acting counsel, and in January 2002 was appointed counsel. In January 2014, he was appointed general counsel of the FBI. As of December 2017, newly appointed director Christopher A. Wray was reassigning him from this role with his new duties unclear.
Private sector
Baker's government service was interrupted twice by stints in the private sector. Baker was assistant general counsel for national security at Verizon Business from 2008 to 2009. He was associate general counsel with Bridgewater Associates, LP from 2012 to 2014.
Controversy
In 2004, according to The Washington Post, Baker was responsible for the discovery that "the government's failure to share information" regarding the NSA electronic surveillance program had "rendered useless a federal screening system" insisted upon by the United States Foreign Intelligence Surveillance Court to prevent "tainted information"—in U.S. case law, "fruit of the poisonous tree"—from being used before the court. Baker was reported to have informed presiding federal judge Colleen Kollar-Kotelly of the FISC, whose complaints to the Justice Department led to the temporary suspension of the NSA program.
In 2007, according to The Washington Post, Baker revealed that he had informed Attorney General Alberto Gonzales "about mistakes the FBI has made or problems or violations or compliance incidents" prior to Gonzales' April 2005 testimony before the Senate Judiciary Committee that "[t]here has not been one verified case of civil liberties abuse" after 2001.
In 2017, Sinclair-owned Circa reported that Baker was under a Department of Justice criminal investigation for allegedly leaking classified national security information concerning the Trump administration to the media. The probe, described as "a strange interagency dispute that ... attracted the attention of senior lawmakers", reportedly "ended with a decision not to charge anyone," per The Washington Post.
2016 presidential election investigation
On May 10, 2019, Baker was interviewed for a taped Lawfare podcast, a justice-focused blog, during which he discussed his role in the FBI investigation of events during the 2016 presidential election that would be taken over by Robert S. Mueller III. Previously Baker had refrained from making public comment. He stated that he felt compelled to speak publicly now that the report is public and being characterized adversely by Trump and some members of his administration.
In September 2021, Special Counsel John Durham indicted Michael Sussmann, a partner at the law firm Perkins Coie, alleging he falsely told Baker during a September 2016 meeting that he was not representing a client for their discussion. Durham alleged Sussmann was actually representing "a U.S. Technology Industry Executive, a U.S. Internet Company and the Hillary Clinton Presidential Campaign." Sussmann focuses on privacy and cybersecurity law and had approached Baker to discuss what then appeared to be suspicious communications between computer servers at the Russian Alfa-Bank and the Trump Organization. Sussmann had represented the Democratic National Committee regarding the Russian hacking of its computer network. Sussmann's attorneys denied he was representing the Clinton campaign, and he pleaded not guilty to the charge.
Views on encryption
Baker long supported legislation requiring encryption systems to include a means of access for law enforcement with a proper warrant. In a published essay and a press interview, however, he argued that the cybersecurity threat has become so severe that law enforcement should embrace strong encryption and adapt to the lack of easy access to plaintext messages.
See also
Timeline of investigations into Donald Trump and Russia
References
External links
House Judiciary Committee testimony - short biography entered into the record
USDOJ report - minor bio details in notes
Spying on the Home Front - 2007 PBS Frontline interview with Baker regarding Foreign Intelligence Surveillance Act (photograph)
R St Institute - bio
Harvard Law School profile
Year of birth missing (living people)
Living people
United States Department of Justice lawyers
University of Notre Dame alumni
University of Michigan Law School alumni |
13041516 | https://en.wikipedia.org/wiki/Hashemite%20University | Hashemite University | The Hashemite University (الجامعة الهاشمية), often abbreviated HU, is a public university in Jordan. It was established in 1995. The university is located in the vicinity of the city of Zarqa. As regards to the study systems, it applies the credit hour system. Each college has its own number of credit hours. It is the first university in Jordan to apply the Two-Summer-Semester system.
The Hashemite University offers a variety of master's programs. It also offers an international admission program that allows non-Jordanian students to enroll at the university.
Geographical location
The Hashemite University is located in the city of Zarqa on a site bordered by two international highways. The west gate of the university, which is the main gate, opens onto the international highway that links Amman with Mafraq and Irbid and, from there, Syria. The south gate opens onto the highway that leads to Zarqa and, from there, Iraq and Saudi Arabia.
History
The royal decree establishing the Hashemite University was issued on 19 June 1991, and teaching started on 16 September 1995. The total area of the university's campus is 8,519 acres. The university received the Order of Independence, First Class, for its achievements in renewable energy and higher education.
Academics
The university comprises 19 colleges (faculties) and institutes. It offers 52 specialties at the undergraduate level and 35 specialties at the postgraduate level (doctorate, master's, and higher diploma, in addition to a number of professional diploma programs).
Faculty of Medicine
The decree establishing the Faculty of Medicine at the Hashemite University was issued in the academic year 2005/2006, and the faculty admitted its first intake of students in 2006/2007. It signed a collaboration protocol with the Ministry of Health and the Royal Medical Services; the agreement includes terms that support medical education and the exchange of scientific and practical experience.
The Permanent Council of the faculty was established to implement the standards and guidelines for granting the bachelor's degree in medicine, and it completed the study plan and course descriptions for the degree. The faculty grants the degree of Doctor of Medicine after the completion of 257 credit hours. The plan was modified at the beginning of 2011 to place more focus on practical elements.
The faculty was officially opened in a ceremony attended by King Abdullah II in 2010. It is listed in the World Directory of Medical Schools.
Faculty of Engineering
The Faculty of Engineering was established in August 1998. The faculty offers undergraduate and graduate degrees in eight programs. The bachelor's degrees are in Architecture, Civil, Electrical, Industrial, Biomedical, Mechanical, Computer, and Mechatronics Engineering. The master's programs are in Mechanical and Civil Engineering, Energy Systems, and Maintenance Management and Testing Technology. On 28 August 2018, the faculty fulfilled the criteria of the Accreditation Board for Engineering and Technology (ABET).
Faculty of Sciences
The Faculty of Science was initially established as part of a combined Faculty of Science and Arts in 1995/1996. In 1998/1999, the Department of Geology was separated from the faculty and became part of the Faculty of Natural Resources and Environment. The faculty comprises four departments: Physics, Mathematics, Chemistry, and Biotechnology.
Faculty of Arts
The Faculty of Arts has been active since the university's establishment in 1995. The faculty offers courses in the following majors:
Arabic Language and Literature
English Language and Literature
Literature and Cultural Studies
Humanities and Social Sciences, and Allied Humanities
International Relations and Strategic Studies
The faculty provides MA degrees in Arabic literature, Arabic linguistics, English language and literature, and peace studies and conflict management. A PhD program in Arabic language and literature was recently established.
Faculty of Economics and Administrative Sciences
Since its establishment in September 1995, the Faculty of Economics and Administrative Sciences has offered various specialized programs aimed at achieving its educational goals. The faculty offers the following specialties: Banking and Financial Sciences, Accounting, Business Administration, Economics, Financial Economics, Administrative Information Systems, Insurance and Risk Management, Hotel Management, and Accounting and Commercial Law.
The faculty also provides the following master's programs: Accounting and Funding, Business Administration, Production and Operations, Funding and Investment, and Business Administration (a joint program with the University of Texas at Arlington).
The expansion of the BA courses has been reflected in enrollment numbers: the faculty recorded 3,880 enrollments in 2007/2008, compared with 123 in 1995/1996. Currently, there are 174 students at the master's level and 3,706 at the undergraduate level.
Faculty of Allied Health Sciences
The Faculty of Allied Health Sciences (FAHS) was established in 1998 by decree of the board of trustees (2 January 1998). Students first enrolled in 2000, after a comprehensive study of the importance of these medical fields. The faculty provides bachelor's degrees in the following programs:
Clinical Nutrition and dietetics
Physical and Occupational Therapy
Medical Imaging
Medical Laboratory Sciences
It also provides MA degree in Medical Laboratory Science.
Faculty of Nursing
The faculty was established in 1999 to keep pace with progress in Jordan's health sector. It provides one BA program, in Nursing, and two MA programs: Cancer Nursing and Adult Health Nursing.
Faculty of Physical Education and Sport Sciences
The Faculty of Physical Education and Sport Sciences was established in 1998 to meet the increasing needs of the local community, and it received its first class in 1999/2000. The faculty provides two BA degree programs:
Physical education and Sport Sciences
Coaching and Sport Management
Faculty of Educational Sciences
The Faculty of Educational Sciences offers undergraduate and graduate students an education in the educational and pedagogical sciences and counseling psychology, with opportunities to major in many areas of study. The faculty was established in 1995/1996 and offers the following programs at both the BA and MA levels:
Educational Psychology/ Scholastic Psychology
Art Class teaching
Educational Foundations and Administrations
Teaching and Curriculum
The Faculty of Prince Hussein Bin Abdulla II of Information Technology (IT)
The Faculty of Prince Hussein Bin Abdulla II of Information Technology at the Hashemite University was established in 2001/2002 in response to continuous development in information technology, rapid changes in the sector, and increasing regional and international demand for highly qualified IT specialists.
The faculty provides a bachelor's degree in the following programs:
Computer Science and Applications
Computer Information Systems
Software Engineering
Business Management Technology
It also provides MA degrees in Information and Creativity Systems and in Software Engineering.
Queen Rania Faculty for Childhood (QRFC)
The Faculty was established in 2002, as the first faculty dedicated to early childhood education and services in Jordan. The main goal of the Faculty is to equip and provide highly qualified graduates who are professionally and ethically ready to work with children and their families locally and regionally.
The Faculty offers undergraduate degree in Early Childhood Education, Early Childhood Care, and in Special Education.
It provides two BA Programs:
Child Education
Special Education
Queen Rania Faculty of Tourism and Heritage (QRITH)
The Queen Rania Faculty of Tourism and Heritage was established in 1999/2000 to teach in the fields of tourism, conservation of antiquities, and the management of cultural resources.
It provides BA degree programs in:
Conservation sciences
Sustainable tourism
Cultural Resources Management and Museology.
Faculty of Natural Resources and Environment
The Faculty of Natural Resources and Environment is the successor to the Institute of Land, Water and Environment, which was established at the beginning of 1999 to strengthen the university's involvement in the field. Its departments are the Department of Land Management and Environment, the Department of Earth Sciences and Environment, and the Department of Water Management and Environment.
Faculty of Pharmacy and Pharmaceutical sciences
The university grants a bachelor's degree in pharmacy in two specialty tracks: administrative pharmacy and industrial pharmacy. The faculty accepted its first group of students in the fall semester of 2013. In 2015 it established a mock pharmacy to train students and enhance the practical aspect of the specialty.
Deanship of Students’ Affairs
The deanship was established in 1995–1996 and is mainly concerned with students and the local community. It aims to develop students physically, mentally, socially, and psychologically by fostering their personal characteristics so that they can become future leaders capable of bearing responsibility. The deanship also aims to enhance a sense of belonging and loyalty to Jordan, to spread a sense of teamwork, unity, and sharing, and to strengthen national unity. It is divided into seven departments:
Department of Students' Care and Services, which has two sections: one for students' services and the other for students' health care.
Department of Athletic Activities.
Department of Cultural and Artistic Activities.
Department of Students Committees, which includes many student clubs, such as the Arabic language club, debate club, economics club, and digital Arabic content club (which works to enrich Arabic content on Wikipedia).
International Students Office.
Alumni Students Affairs Office.
King Abdullah II Fund for Development / Career Counseling.
The deanship operates several facilities:
Al-karamah Theater.
Petra Hall: symposiums and lectures.
Three student lounges.
Exhibition lounge.
Music room.
Drawing studio.
Student service facilities (a supermarket, a restaurant, a post office, and a photocopying center).
An indoor multipurpose gymnasium and outdoor pitches. In addition, the deanship offers student loans and counseling.
Deanship of Scientific Research
In collaboration with the Ministry of Higher Education and Scientific Research, The Hashemite University Deanship of Scientific Research issues three international peer-reviewed scientific journals:
Jordan Journal of Biological Sciences (JJBS): as of 2010, two volumes have been issued since its establishment.
Jordan Journal of Mechanical and Industrial Engineering (JJMIE): an international peer-reviewed scientific journal.
Jordan Journal of Earth and Environmental Sciences (JJEES): an international peer-reviewed research journal issued by the Higher Scientific Research Committee of the Ministry of Higher Education and Scientific Research (Jordan) and the Deanship of Academic Research and Graduate Studies at the Hashemite University (LCCN 2008413429).
Deanship of academic development and international outreach
Faculty of Graduate Studies
The Hashemite University offers 35 master's, doctoral, and higher diploma programs, as well as several professional diploma programs. The Faculty of Graduate Studies accepts outstanding students into various graduate programs based on up-to-date study plans that meet international standards and market needs.
El-Hassan Bin-Talal faculty of arid land studies
Originally named the Arid Lands Academy, the El-Hassan Bin-Talal Faculty of Arid Land Studies was launched as a tripartite collaboration between Jordan's Higher Council of Science and Technology, the Hashemite University, and the International Centre for Agricultural Research in the Dry Areas (ICARDA). The faculty began as an initiative of HRH Prince El-Hassan Bin-Talal.
The faculty was created to enhance agricultural investment in the arid lands of Jordan, which constitute about 80% of the country's area, so that Jordan can meet the rapidly growing demand for food driven by population growth. The faculty also intends to transfer its experience in arid-land investment to countries facing conditions similar to Jordan's.
The faculty's mission is to provide solutions to the unbalanced use of natural resources and the environment by developing interactive educational and research programs. The results are intended to help mitigate the effects of climate change, natural resource degradation, and depleted water resources, and to increase the productivity of Jordan's arid lands. The faculty also aims to become an international resource for the sustainable development of arid lands by collaborating with international institutions and attracting scientists and students from across the world.
The faculty's vision is to establish an education curriculum granting bachelor's degrees in arid land studies, including range-livestock science and management, natural resources and the environment, economic development, policy analysis, and rural sociology. Its objectives are to share knowledge from scientists in the Arab countries and the wider world on the sustainable development of arid lands, and to foster an entrepreneurial approach to solving the problems of overuse and misuse of natural resources by involving local communities in decision-making and opening new opportunities for employment.
Library
The library contains nearly 250,000 print materials and a computerized database of scientific documents covering various fields of knowledge. It also offers a comprehensive computerized circulation system that uses radio-frequency identification (RFID).
Main library
The library has undergone an automation project, offering computerized library information covering acquisition, cataloging, and circulation services using the HORIZON database management system.
The library is a member of the Consortium of Jordanian Public University Libraries (JoPULs), the Jordan Library and Information Association, the Arab Federation of University Libraries, the International Federation of Library Associations and Institutions (IFLA), and the Center of Excellence (COE).
Special-purpose libraries
The Medical Library
It is located in the medical college section of the Ibn Sina medical complex and houses many medical books for students.
The Childhood Library
The Childhood Library houses a special collection of about 7,000 books and periodicals on childhood, children, family, and related subjects, serving the information needs of staff and students at the Queen Rania Faculty for Childhood.
Social Work Library
Social Work Library (SWL) was launched within the context of a cooperation agreement between the Social Work Center at Hashemite University and Abdel-Hameed Shoman Foundation.
American Studies and Resources Corner
It holds resources on American literature, heritage, foreign and domestic policy, economy, society and cities, music and arts, and famous American figures. It also organizes digital video conferences with affiliated universities and institutions in the United States.
British Studies and Resources Corner
It holds resources on English literature, the English language, and related subjects.
Units and Centers
The academic centers and the specialized and services offices
The Clinical Skills Education and Testing Center (CSETC), for training medical students, was established on 17 July 2012 and is considered the first of its kind in Jordan. The center aims to enhance students' self-confidence and reduce the psychological barrier in doctor–patient interaction, as students learn clinical skills from their first year of study. The center has two laboratories:
Doll Lab: this laboratory has a control room and a video room where educational videos are shown to students, in addition to a room from which a student can observe a colleague or professor performing medical skills and interacting with the patient. The lab also includes an observer's room and a manikin that acts like a real patient, performing vital physiological processes such as pulse, breathing, defecation, and sensing pressure and light. The manikin is connected to a computer from which a pathological case can be selected from a computerized list; the software includes 59 pathological scenarios, which can be extended with additional cases according to training and educational needs. The manikin simulates bleeding wounds, with blood flowing out as from real wounds; it reacts to anesthesia and can excrete other body fluids such as urine and mucus, being regularly supplied with liquids and gases from the main supply room. It can also respond to 250 kinds of medicine, showing their effects and side effects; medicine is administered through a digital, barcoded injection containing a liquid of the same color as the drug. The manikin can undergo cardiopulmonary resuscitation, as it is connected to the respiratory system and a defibrillator. Its electrocardiogram is displayed on a connected screen, and two cameras record the students handling the case and the manikin's responses, showing the images on the main screen in the control room. The manikin can also produce sounds, such as expressions of pain, and can ask for medicine, food, or a doctor.
Sub-laboratory clinics: the lab consists of 23 medical training clinics, each containing an office, a medical bed, a computer connected to the Internet with an electronic medical library, and a manikin that differs from one clinic to another. Each clinic teaches students a particular medical skill, such as blood pressure measurement, lumbar puncture, ear examination, heart auscultation, identifying intravenous access points, venipuncture, administering intravenous solutions, and giving medicine by intramuscular, intraosseous, or intravenous injection. Students also learn wound suturing, medical sutures, and all kinds of surgical thread, in addition to caring for newborns and performing rectal examinations. The lab contains two rooms for monitoring and following up on students during the learning process, in which a faculty member assesses their practice, proficiency, and behavior during diagnosis and examination. The lab also includes two communication-skills rooms equipped with a display screen for watching educational videos or recordings of students during training, along with a smart board. It also includes a trauma manikin through which students learn surgical intervention in emergency cases, such as treating ascites, installing a chest tube, and surgical airway access through the larynx.
The Clinical Skills Education and Testing Center won first prize in the Al Hassan Bin Talal Award for Scientific Excellence, which is managed by the Higher Council for Science and Technology.
Center for Academic Quality Assurance (CAQA): The center is responsible for approving the academic quality of the study programs in accordance with national and international standards, as well as the analysis and development of strategic plans for the different college programs.
Social Work Center: the Social Work Center was established in 2006 and provides services such as training units, consultations, and studies and research. It also spreads awareness related to social work.
Center for Environmental Studies (CES)
Center of Studies, Consultation and Continuing Education (CSCCS): the center consists of three departments, including 1) the Training Department, which designs and establishes training programs, and 2) the Studies Department, which provides consultation and conducts studies.
Linguistics Center, which is located in the Faculty of Arts and Sciences.
Center for Information and Communication Technology (The E-learning Information Technology): this center was established in late 1997 to computerize the university's systems. It consists of three departments: the Programming Department and its applications; the Networking, Communications and Technical Services Department; and the E-learning Department. The center provides students with Internet access using a username and password obtained from laboratory administrators, along with printing services, electronic exams, electronic course registration, and access to grades and study schedules. The center has computerized many academic systems and applications: the Admission and Registration System, Faculty Members System, Students Affairs System, King Abdullah Fund System, Students Fund System, Scientific Research and Graduate Studies System, Electronic Exams System, Students Accounts System, and hosted departments' systems (the Royal Grant, Military Service, and Cultural Counselor), as well as administrative systems such as the Human Resources and Salaries System, Debts and Electronic Archiving System, Health Insurance System, and University Administration System. It has also computerized many Internet applications, including the university's website, the Student Portal, Student's Guardian Portal, Staff Portal, Business Collaborators Portal, Hospitals Portal, Electronic Publishing System, and Admission Portal (enrollment applications). The center also contributed to the development of systems serving the local community, such as the financial system, Subscribers System, Certificate of Origin System, and Guarantees System in the Zarqa and Mafraq Chambers of Commerce.
Students
Off-campus Housing
The University offers housing for female students only, in the city of Abdullah bin Abdulaziz (Madinat Al-Sharq), located about 10 minutes east of Zarqa.
Double rooms: These rooms accommodate two students, with two beds and a shared bathroom.
Single rooms: These rooms include two beds but with separate bathrooms.
Private rooms: These rooms include one bed with a separate bathroom.
The presidency
The succession of presidents at the university is as follows:
Mohammed Hamdan, 1992–1998.
Anwar Batikhi, 1998–2002.
Rueda Maaytah, 31 July 2002 – 26 September 2002, and 20 September 2010 – 24 October 2011.
Hakam Al-Hadidi, 18 December 2002 – 5 December 2004.
Omar Alshdefat, 2004–2007.
Abdul Rahim Hunaiti, February 2008 – 4 March 2009.
Suleiman Arabiyat, 4 March 2009 – 19 September 2010.
Kamal Bani Hani.
The current President of the Hashemite University is Prof. Fawwaz M. Al-Abed Al-Haq.
University's projects
The Hashemite Sun project: The University launched a solar power project under the name "Sun of the Hashemite", which produces an electrical capacity of 5 MW, covering twice the university's electricity needs. The project's opening ceremony was attended by Prince El Hassan bin Talal. In October 2017, the university was recognized for the project and won the gold prize of the 3rd Emirates Energy Award in the category of "Large Energy Project" (more than 500 kilowatts).
North Classrooms' Compound project: Work began on 12 October 2014 and the foundation stone was laid on 4 January 2016. The compound was named the King Hareth the Fourth Complex, after the builder of Al-Khazneh at Petra. It was fully funded from the university's budget at a cost of approximately 10 million Jordanian dinars, with a total area of 16 thousand square meters. It houses the sections and deanships of the Faculty of Queen Rania for Childhood, the Queen Rania Institute of Tourism and Heritage, and the Faculty of Educational Sciences. The complex consists of 35 classrooms that can accommodate 100 students each, an amphitheatre seating 650 students, two more amphitheatres seating 250 students each, and 100 offices for faculty members. The total capacity of the complex is 4,560 students per hour. The facades were made of natural white stone and Karaki stone along with travertine.
South Classrooms' Compound project: The project started on 10 December 2014 and the foundation stone was laid on 6 May 2015. It was funded by the Abu Dhabi Fund for Development in the United Arab Emirates as part of the Gulf grant, with nearly 11 million Jordanian dinars for a total area of 18,500 square meters and a capacity of four thousand students per hour. The building consists of three floors, a partial fourth floor, and a basement. There are 35 classrooms equipped with educational technologies, each fitting 70 to 100 students. Beside the classrooms there is a main amphitheatre that can accommodate 650 students and two others for 250 students each, in addition to 16 engineering studios, a number of laboratories, and 100 offices for academics and administrators.
The Faculty of Pharmaceutical Sciences Building: In November 2016, the University signed an agreement to construct a building with a total area of 21,860 square meters at a cost of approximately 15 million Jordanian dinars. The building consists of four floors including a number of interactive classrooms, advanced laboratories, a pharmaceutical library, an interactive theater, a cafeteria, and offices for faculty and administrators.
Student's council elections
The university has applied the open percentage list system since the 14th council, which was elected under that system on 4 December 2014. The Hashemite University is considered the first Jordanian institution to apply such a system. The number of seats reached 66 in the fifteenth council, with a ratio of one seat per 600 students. The university adopted two lists:
University list: in which the university forms an independent election unit with 14 seats.
College list: in which each college forms an independent unit, with 52 seats distributed among the colleges according to their size and number of students. The Faculty of Engineering got 8 seats as the largest college with the largest number of students, followed by the Faculty of Economics and Administration with 7 seats, then the Faculties of Arts and Science with 4 seats each; the remaining faculties got 3 seats each, except the Faculty of Pharmaceutical Sciences, which got only one seat because it was newly established.
Financial Status
The Hashemite University is free from fiscal deficit and debt. It has not received any financial support from the government during the current administration under the supervision of President Dr. Kamal Bani Hani. It has saved about JD7.1 million yearly compared to previous administrations, accomplished by reducing expenses that do not negatively affect the teaching process.
The reductions included, for example: controlling overtime hours for academic and administrative staff, reducing the expenses of university celebrations, auditing sick leaves, and developing information technology programs through the university's technology center instead of procuring them from outside the university. The university has also suspended all administrative hiring for about 4 years.
The ratio of academic to administrative employees is 1 to 1.5. The university has increased its fiscal revenue by financing some projects from outside resources, such as the university's mosque, built at the expense of Sheikh Salim Al-Mazroie from the United Arab Emirates, in addition to establishing a virtual pharmacy to train pharmacy students at the expense of the Dwakom Company.
The value of outstanding state contributions owed to the university is JD26 million. This figure dropped in 2015 because of the absence of a deficit in the university's budget.
Affiliations and International Agreements and Classifications
International Association of Universities
Federation of the Universities of the Islamic World
Mediterranean Universities Union
Association of Arab Universities
The Hashemite University was ranked tenth among the best universities in the Middle East by scientific research impact, and was ranked among the best 300 universities in the world, placing in the 251–300 band of the BRICS & Emerging Economies Rankings 2017.
Councils
Deans Council
This Council at the Hashemite University consists of the President, three vice-presidents, and 18 deans. The proportion of females in the Deans Council is 40 percent, an indication of the presence of females in academic, administrative, and research areas.
The Board of Trustees
The Council consists of a chairman and 12 members who hold at least a first university degree. The Board meets at least once a month, and whenever needed. It handles several tasks, including drawing up the university's general policy, approving the annual plan, appointing deans, vice-presidents, and heads of branches at the university, establishing colleges, institutes, departments, and scientific centers, and creating, merging, or canceling disciplines and academic programs. The Board also determines tuition fees in the various disciplines upon the recommendation of the University Council, and sets the annual budget of the university after the approval of the University Council. The Board includes two committees: one for academic affairs, composed of five members, and one for administrative and financial affairs, composed of three members.
University Council
The Council consists of a chairman and 43 members: 18 deans, three vice-presidents, and 15 representatives from the university colleges chosen by election, as well as the managers of the library, the Financial Unit, and the Centre for Information Technology, two representatives from the local community, and three student representatives, two of whom are current students and one a graduate.
The councils secretariat unit
The unit began its work in 1992 as the secretariat of the Royal Committee of the University, then took over the secretariat of the university's three boards: the Council of Deans, the University Council, and the Board of Trustees, as well as the secretariat of the Appointment and Promotion Committee and the Scientific Research Support Committee. The unit became autonomous in 1998.
References
Notes
Gallery
External links
The University's website
Hashemite University official facebook page
Hashemite University official Twitter account
MOU between Wikimedia foundation and the Hashemite University
Educational institutions established in 1995
1995 establishments in Jordan |
9133071 | https://en.wikipedia.org/wiki/CANARIE | CANARIE | CANARIE (formerly the Canadian Network for the Advancement of Research, Industry and Education) is the not-for-profit organisation which operates the national backbone network of Canada's national research and education network (NREN). The organisation receives the majority of its funding from the Government of Canada. It supports the development of research software tools; provides cloud resources for startups and small businesses; provides access and identity management services; and supports the development of policies, infrastructure and tools for research data management.
History
The Canadian Network for the Advancement of Research, Industry and Education was created in 1993. It initially focused on the development of the CANARIE network, which provides interprovincial and international connectivity for Canada's National Research and Education Network (NREN). Provincial and territorial partners in the NREN provide connectivity to institutions in their jurisdictions, and connect to CANARIE to collaborate and share data and tools across Canada and around the world. The NREN connects universities, colleges, research hospitals and government research labs. CANARIE links Canada's NREN to over 100 NRENs around the world. The CANARIE network was originally called CA*net or CAnet. The original CA*net was created in 1990 with support from the National Research Council. By 1993, CANARIE had upgraded its links to 56 kbit/s, then to 10 Mbit/s in 1995, and later to 20 Mbit/s. It had 100 Mbit/s aggregate capacity in 1996, and the same year the National Test Network (NTN) project introduced ATM.
In 1997, Bell Advanced Communications Inc. (later Bell Nexxia, now part of Bell Canada) was given operating control over the network operations. The replacement network, CA*net II, was launched based on NTN links and capacities, with OC-3 (155 Mbit/s) at the core. At the same time, Sympatico "DSL" service started, using the same technology. In 1998, CANARIE deployed CA*net 3, the world's first national optical research and education network, with a planned capacity of 2.5Gbit/s. In 2002, the Government of Canada committed $110 million to CANARIE to build and operate CA*net 4. CA*net 4 yielded a total network capacity of 40Gbit/s, 16 times its predecessor. CA*net 4 was based on OC-192 optical circuits, with a capability of offering users optical Lightpath services, a legacy dedicated point-to-point connection between research facilities.
CANARIE has funded the development of research software tools since 2007. In 2011, it took on the operations and support for the Canadian Access Federation, which provides participants with secure access to eduroam, an international federation of campus WiFi networks. The Canadian Access Federation also provides the trust framework to enable participants to access remote web-based datasets and tools in a secure and privacy-protecting environment. In 2011, it also launched the Digital Accelerator for Innovation and Research (DAIR) program, which provides cloud computing resources for small businesses and entrepreneurs. Since 2014, it has provided financial support for Research Data Canada (RDC), an initiative focussed on developing the standards, policies and infrastructure to support reuse and preservation of research data. In 2014, CANARIE became a partner in the Centre of Excellence in Next Generation Networks (CENGN), which supports the development and commercialization of next generation networking technologies. At the SuperComputing conference in Seattle, WA, in November 2011, CANARIE participated in the transfer of 1 petabyte of data between the California Institute of Technology and the University of Victoria at a combined rate of 186 Gbit/s, setting a world record.
The CANARIE portion of the NREN consists of 23,000 km of fibre optic cable currently transferring data at speeds as high as 100 Gbit/s.
Status
In the year ending March 31, 2016, CANARIE transferred 172,000 Terabytes of data over the CANARIE network. Data traffic on the CANARIE network is growing at an average annual rate of ~50%.
As of 2016, institutions connected to the NREN include: 85 universities, 85 colleges, and 30 CEGEPs; 85 federal government research labs; 46 teaching and research hospitals; 10 business incubators/accelerators; almost 5,500 K-12 schools; 12 provincial and territorial Regional Advanced Networks (RANs); and more than 100 national research and education networks around the world.
Programs and services
CANARIE Network Program
Research and Education (R&E) Internet service is the largest program. The core network provides full and equal support for IPv4 and IPv6 unicast and multicast routing, with external network segments that extend to international R&E exchanges in North America: Pacific Wave in Seattle, StarLight in Chicago, and Manhattan Landing (MANLAN) in New York. With anticipated traffic growth in the coming years, CANARIE upgraded most of the core links to 100 Gbit/s in 2015. The CANARIE Network Program offers two additional services to R&E institutions:
Content Delivery Service (CDS)
The Content Delivery Service provides Canadian R&E institutions with high-speed access to content providers like Amazon, Microsoft, Google, Yahoo, Facebook and Box.net. It has become an important service within CANARIE's Network Program.
NREN Connection Service
The CANARIE Connection Service is a dedicated connection for researchers who need a direct, secure, private link with peers, within Canada or globally. The service can provide researchers up to a 100 Gbit/s point-to-point Ethernet connection to high-performance computing centres or research facilities across Canada or around the world by being directly installed into their infrastructure.
NREN Program
CANARIE provides funding to provincial and territorial network partners through the NREN Program. This funding ensures that the national backbone and the provincial and territorial networks continue to support Canadian innovation and leadership by increasing capacity, reliability, and upgrades to existing equipment and infrastructure, enabling network management (tools and training); and extending the reach of the provincial and territorial networks to more institutions.
Research Software Program
CANARIE's Research Software Program funds development of software tools for research. Software created under the program is designed to be modular and re-usable. Reusable research software tools may be found at science.canarie.ca and are available for use by any researcher.
Identity and Access Management – Canadian Access Federation (CAF)
CAF is a trusted access management environment that provides users Wi-Fi connectivity and content access whether at home or abroad, all using the log-in credentials of their home institution. CANARIE supports guest access to campus WiFi networks through eduroam, an international WiFi roaming federation for education, and remote access to resources through a federated framework.
Digital Accelerator for Innovation and Research – DAIR
DAIR provides Canadian entrepreneurs and small businesses with free cloud-based compute and storage resources that help speed time to market by enabling rapid and scalable product design, prototyping, validation and demonstration.
Research Data Canada
CANARIE provides funding and operational support to Research Data Canada, which has a goal of building a national Research Data Management (RDM) framework. RDC researches and recommends best practices in data stewardship; links researchers with RDM organizations and services; represents Canada's RDM community, efforts, services and resources internationally; and provides funding for RDM efforts to build RDM capacity in Canada.
Regional partners
CANARIE works with 12 provincial and territorial partner networks to provide ultra-high-speed connectivity across the country. These National Research and Education Network partners are referred to as RANs, Regional Advanced Networks, and include the following:
Yukon: Yukon College
Northwest Territories: Aurora College
Nunavut: Nunavut Arctic College
British Columbia: BCNET
Alberta: Cybera
Saskatchewan: Saskatchewan Research Network (SRnet)
Manitoba: MRnet
Ontario: Ontario Research and Innovation Optical Network (ORION)
Quebec: Réseau d'informations scientifiques du Québec (RISQ)
New Brunswick & Prince Edward Island: NB/PEI Educational Computer Network (University of New Brunswick and University of Prince Edward Island)
Nova Scotia: Atlantic Canada Organization of Research Networks – Nova Scotia (ACORN-NS)
Newfoundland and Labrador: Atlantic Canada Organization of Research Networks (ACORN-NL)
Funding model
The Government of Canada is providing $105 million over five years, starting in 2015–2016, as part of Economic Action Plan 2015.
Funding history
2002–2007: $110M
2007–2012: $120M
2012–2015: $62M
2015–2020: $105M
References
External links
CANARIE
List of connected institutions
History of the Internet
National research and education networks
Scientific organizations based in Canada |
4294204 | https://en.wikipedia.org/wiki/MedCalc | MedCalc |
MedCalc is a statistical software package designed for the biomedical sciences. It has an integrated spreadsheet for data input and can import files in several formats (Excel, SPSS, CSV, ...).
MedCalc includes basic parametric and non-parametric statistical procedures and graphs such as descriptive statistics, ANOVA, Mann–Whitney test, Wilcoxon test, χ2 test, correlation, linear as well as non-linear regression, logistic regression, and multivariate statistics.
Survival analysis includes Cox regression (Proportional hazards model) and Kaplan–Meier survival analysis.
Procedures for method evaluation and method comparison include ROC curve analysis, Bland–Altman plot, as well as Deming and Passing–Bablok regression.
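The Bland–Altman analysis listed above rests on a simple computation: each pair of measurements contributes a difference, and the limits of agreement are the mean difference plus or minus 1.96 standard deviations of the differences. A minimal sketch of that arithmetic (this is not MedCalc's own code, and the paired readings below are invented for illustration):

```python
# Bland-Altman limits of agreement: mean difference +/- 1.96 SD.
# Illustrative only; MedCalc performs this analysis internally,
# and the sample data below are made up.

def bland_altman(method_a, method_b):
    """Return (bias, lower limit, upper limit) for two paired methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    # Sample standard deviation of the differences (n - 1 denominator).
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

a = [102, 98, 110, 105, 95]   # hypothetical readings, method A
b = [100, 97, 108, 106, 93]   # hypothetical readings, method B
bias, lower, upper = bland_altman(a, b)
print(round(bias, 2))  # -> 1.2 (mean difference between the methods)
```

MedCalc additionally plots the differences against the means and draws the bias and limit lines; the numbers above are only the underlying statistics.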
The software also includes reference interval estimation, meta-analysis and sample size calculations.
The first DOS version of MedCalc was released in April 1993 and the first version for Windows was available in November 1996.
Version 15.2 introduced a user-interface in English, Chinese (simplified and traditional), French, German, Italian, Japanese, Korean, Polish, Portuguese (Brazilian), Russian and Spanish.
Reviews
Stephan C, Wesseling S, Schink T, Jung K. “Comparison of eight computer programs for receiver-operating characteristic analysis.” Clinical Chemistry 2003;49:433-439.
Lukic IK. “MedCalc Version 7.0.0.2. Software Review.” Croatian Medical Journal 2003;44:120-121.
Garber C. “MedCalc Software for Statistics in Medicine. Software review.” Clinical Chemistry 1998;44:1370.
Petrovecki M. “MedCalc for Windows. Software Review.” Croatian Medical Journal 1997;38:178.
See also
List of statistical packages
Comparison of statistical packages
References
External links
MedCalc Statistical Software Homepage
Statistical software
Windows-only software
Biostatistics |
66272691 | https://en.wikipedia.org/wiki/SR%20University | SR University | SR UNIVERSITY Sri Rajeshwara Educational Society (SR UNIVERSITY) is one of the first private universities in Telangana state in 2020. SR University is located in Warangal, Telangana.
History
SREC was established in 2002 and is sponsored by the SR Educational Society. The institution has been ranked 134th in the NIRF ranking and first among self-financed private institutions in India in the All India ranking for institutional innovation by MHRD. SR Engineering College was granted the status of a private university by the State of Telangana and renamed SR University. The institution, which started with 100 employees, now has more than 1,000 teaching and non-teaching staff.
Campus
The SR University campus is located in Ananthasagar village of Hasanparthy Mandal in Warangal, Telangana, India. It covers 150 acres, with separate hostel facilities for boys and girls. There is a large central library, along with India's largest Technology Business Incubator (TBI) among tier-2 cities.
Schools
SR University comprises six schools and ten centers, offering Bachelor's, Master's, and Doctoral programs in the following specializations with the approval of the Government of India and the Government of Telangana.
School of Computer Science and Artificial Intelligence
B.Tech. Computer Science and Engineering
B.Tech. (Computer Science and Engineering) - Artificial Intelligence & Machine Learning
B.Tech. (Computer Science and Engineering) - Cyber Security
B.Tech. (Computer Science and Engineering) - Business Systems
B.Tech. (Computer Science and Engineering) - Data Science
M.Tech. (Computer Science and Engineering)
M.Tech. (Artificial Intelligence & Machine Learning)
Ph.D. (Computer Science and Engineering)
School of Engineering
Electronics & Communication Engineering
B.Tech. (Electronics & Communication Engineering)
B.Tech. (Electronics & Communication Engineering) - Artificial Intelligence & Machine Learning
B.Tech. (Electronics & Communication Engineering) - Internet of Things
M.Tech. (VLSI)
M.Tech. (Internet of Things)
Ph.D. (Electronics and Communication Engineering)
Electrical Engineering
B.Tech. (Electrical and Electronics Engineering)
M.Tech. (Power Electronics)
Ph.D. (Electrical and Electronics Engineering)
Civil Engineering
B.Tech. (Civil Engineering)
M.Tech. (Construction Technology and Management)
Ph.D. (Civil Engineering)
Mechanical Engineering
B.Tech. (Mechanical Engineering)
M.Tech. (Advanced Manufacturing Systems)
Ph.D. (Mechanical Engineering)
School of Business
BBA (Finance & Accounting | Marketing | Business Analytics) MBA (Integrated) - [3+2]
MBA (Master of Business Administration)
MBA (Innovation, Entrepreneurship & Venture Development)
Ph.D. (Management)
School of Agriculture
B.Sc. (Hons) Agriculture
School of Sciences
Ph.D. (Mathematics)
Ph.D. (Physics)
Ph.D. (Chemistry)
Other Institutes of SR Group
S.R. International Institute of Technology
Sparkrill International School
Sumathi Reddy Institute of Technology for Women
S.R. Degree and P.G College
S.R. Residential Junior College for Boys (M.P.C. block)
S.R. Residential Junior College for Boys (Bi.P.C. block)
S.R. Junior College for Girls (Day and Residential)
S.R. Nava Vignana Bharathi Junior College for Boys (Day Scholars)
S.R. Junior College for Girls
S.R. Junior College for Boys
K.N.R Junior College for Boys
S.R. Junior College for Girls
Gems Junior College for Boys, Karimnagar
S.R. IIT Coaching Center
S.R. EAMCET Coaching Center
S.R. Residential High School for Boys (10th class only)
S.R. High School for Boys (Day Scholars) (10th class only)
S.R. High School for Girls (Day and Residential) (10th class only)
S.R. National High School
S.R. Junior college (DAY)
Admissions
Students are admitted into the Six Schools under the following Eligibility Criteria.
A Pass in 10+2 or equivalent examination with 50% aggregate marks.
Candidates have to be successful in SRSAT (SR Scholastic Assessment Test)/ JEE-Main/ State Level Engineering Entrance Exams across India including EAMCET/ Merit in Sports/ Cultural Activities.
Scholarship:
Scholarships will be given on the basis of merit in Intermediate / 10+2 CBSE marks, JEE Mains percentile, EAMCET (TS & AP), or any other equivalent qualifying examination.
Eligibility Criteria for BBA/BBA-MBA
A Pass in 10+2 or equivalent examination with 50% and above in aggregate.
Eligibility Criteria for B.Sc. (Hons.) Agriculture
A Pass in 10+2 or equivalent examination with 50% aggregate marks. Students with Physics, Chemistry, Mathematics/ Biology (PCB) are eligible.
Two years Diploma in Agriculture / Seed Technology after 10th class or equivalent with a first-class.
Rankings
The National Institutional Ranking Framework (NIRF) ranked it 134 among engineering colleges in 2021.
Technology Business Incubator
SR Group launched SRiX (SR Innovation Exchange), a Technology Business Incubator (TBI) in Warangal. This 100,000-square-foot state-of-the-art TBI is supported by the Department of Science & Technology (DST), Government of India, to accelerate the startup ecosystem. The TBI was started by Kalvakuntla Taraka Rama Rao.
Development Centers
Nest for Entrepreneurship in Science & Technology
Center for AI & Deep Learning (CAIDL)
The Industry-Institute Partnership Cell
Engineering Projects in Community Service (EPICS)
Internal Quality Assurance Cell (IQAC)
IBM Center of Excellence
Microsoft I-Spark Center
SR – CISCO Local Academy offers CCNA Certification course
References
Universities in Telangana
Private universities in India
2020 establishments in Telangana
Educational institutions established in 2020 |
88162 | https://en.wikipedia.org/wiki/Andy%20M%C3%BCller-Maguhn | Andy Müller-Maguhn | Andy Müller-Maguhn (born 3 October 1971) is a member of the German hacker association Chaos Computer Club (CCC). Having been a member since 1986, he was appointed as a spokesman for the club in 1990, and later served on its board until 2012.
In an election in Autumn 2000, he was voted in as an at-large director of ICANN, which made him jointly responsible with 18 other directors for the worldwide development of guidelines and the decision of structural questions for the Internet structure. His term lasted two years, and from June 2002 to June 2004, he operated as an honorary board member of the European Digital Rights Institution (EDRi), an umbrella organization for European NGOs which campaigns for human rights in the digital age.
In 1995, Müller-Maguhn founded the "Datenreisebüro" ('Data Travel Agency'), since 2002 based in a Berlin office. Besides organising the Chaos Computer Club and hosting an electronic archive, the Datenreisebüro organises workshops which train system administrators in data protection and data security. Workshops are also held in order to create policies and structures which make data protection easier to achieve. Müller-Maguhn has also helped at several of Hackers on Planet Earth conferences.
In 2005 and 2006, Müller-Maguhn was involved on the side of the parents of the deceased hacker Boris Floricic, better known as Tron, in the case where they sought to prevent German Wikipedia from disclosing his true name, although the name had appeared in many press accounts by that point in time.
In 2011, Müller-Maguhn was criticized for his role in the CCC board's controversial decision to expel former WikiLeaks spokesman Daniel Domscheit-Berg, which was often attributed to Müller-Maguhn's close relation to Wikileaks founder Julian Assange and an ongoing conflict between Assange and Domscheit-Berg. At an extraordinary general meeting in February 2012, this decision was reverted, while Müller-Maguhn was not reelected to the board.
He appeared with Julian Assange on Episodes 8 and 9 of The World Tomorrow, "Cypherpunks: 1/2".
He is a contributor to Julian Assange's 2012 book Cypherpunks: Freedom and the Future of the Internet along with Jacob Appelbaum and Jérémie Zimmermann.
References
External links
Müller-Maguhn's homepage
1971 births
Living people
Hackers
Members of Chaos Computer Club |
17933 | https://en.wikipedia.org/wiki/Latency%20%28engineering%29 | Latency (engineering) | Latency, from a general point of view, is a time delay between the cause and the effect of some physical change in the system being observed. Lag, as it is known in gaming circles, refers to the latency between the input to a simulation and the visual or auditory response, often occurring because of network delay in online games.
Latency is physically a consequence of the limited velocity at which any physical interaction can propagate. The magnitude of this velocity is always less than or equal to the speed of light. Therefore, every physical system with any physical separation (distance) between cause and effect will experience some sort of latency, regardless of the nature of the stimulation to which it has been exposed.
The precise definition of latency depends on the system being observed or the nature of the simulation. In communications, the lower limit of latency is determined by the medium being used to transfer information. In reliable two-way communication systems, latency limits the maximum rate that information can be transmitted, as there is often a limit on the amount of information that is "in-flight" at any given moment. Perceptible latency has a strong effect on user satisfaction and usability in the field of human–machine interaction.
Communications
Online games are sensitive to latency (or "lag"), since fast response times to new events occurring during a game session are rewarded while slow response times may carry penalties. Due to a delay in transmission of game events, a player with a high latency internet connection may show slow responses in spite of appropriate reaction time. This gives players with low latency connections a technical advantage.
Capital markets
Minimizing latency is of interest in the capital markets, particularly where algorithmic trading is used to process market updates and turn around orders within milliseconds. Low-latency trading occurs on the networks used by financial institutions to connect to stock exchanges and electronic communication networks (ECNs) to execute financial transactions. Joel Hasbrouck and Gideon Saar (2011) measure latency based on three components: the time it takes for information to reach the trader, execution of the trader's algorithms to analyze the information and decide a course of action, and the generated action to reach the exchange and get implemented. Hasbrouck and Saar contrast this with the way in which latencies are measured by many trading venues who use much more narrow definitions, such as, the processing delay measured from the entry of the order (at the vendor's computer) to the transmission of an acknowledgement (from the vendor's computer). Electronic trading now makes up 60% to 70% of the daily volume on the New York Stock Exchange and algorithmic trading close to 35%. Trading using computers has developed to the point where millisecond improvements in network speeds offer a competitive advantage for financial institutions.
Packet-switched networks
Network latency in a packet-switched network is measured as either one-way (the time from the source sending a packet to the destination receiving it), or round-trip delay time (the one-way latency from source to destination plus the one-way latency from the destination back to the source). Round-trip latency is more often quoted, because it can be measured from a single point. Note that round trip latency excludes the amount of time that a destination system spends processing the packet. Many software platforms provide a service called ping that can be used to measure round-trip latency. Ping uses the Internet Control Message Protocol (ICMP) echo request which causes the recipient to send the received packet as an immediate response, thus it provides a rough way of measuring round-trip delay time. Ping cannot perform accurate measurements, principally because ICMP is intended only for diagnostic or control purposes, and differs from real communication protocols such as TCP. Furthermore, routers and internet service providers might apply different traffic shaping policies to different protocols. For more accurate measurements it is better to use specific software, for example: hping, Netperf or Iperf.
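As noted above, ICMP-based ping is only a rough diagnostic, and raw ICMP sockets usually require elevated privileges. One privilege-free way to approximate round-trip delay is to time a TCP handshake, which completes in one round trip plus some connection-setup overhead. A sketch (the host and port in the example are placeholders, and the result is an upper bound on the network round-trip time, not a substitute for dedicated tools such as hping, Netperf or Iperf):

```python
# Rough round-trip estimate by timing a TCP three-way handshake.
# This is not ICMP ping (which needs raw-socket privileges); a TCP
# connect includes one full round trip plus some kernel and socket
# setup overhead, so treat the result as an upper bound.
import socket
import time

def tcp_rtt(host, port, timeout=2.0):
    """Return the TCP connect time to (host, port) in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established: one round trip completed
    return (time.perf_counter() - start) * 1000.0

# Example (requires network access; host/port are placeholders):
# print(f"{tcp_rtt('example.org', 80):.1f} ms")
```

Averaging several samples and discarding the first (which may include a DNS lookup) gives a steadier estimate.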
However, in a non-trivial network, a typical packet will be forwarded over multiple links and gateways, each of which will not begin to forward the packet until it has been completely received. In such a network, the minimal latency is the sum of the transmission delay of each link, plus the forwarding latency of each gateway. In practice, minimal latency also includes queuing and processing delays. Queuing delay occurs when a gateway receives multiple packets from different sources heading towards the same destination. Since typically only one packet can be transmitted at a time, some of the packets must queue for transmission, incurring additional delay. Processing delays are incurred while a gateway determines what to do with a newly received packet. Bufferbloat can also cause increased latency that is an order of magnitude or more. The combination of propagation, serialization, queuing, and processing delays often produces a complex and variable network latency profile.
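The store-and-forward minimum described above can be sketched numerically. The packet size, link speeds, and per-gateway delays below are illustrative assumptions, not measurements:

```python
def transmission_delay(packet_bits, link_bps):
    # Time to serialize the whole packet onto one link.
    return packet_bits / link_bps

def minimal_latency(packet_bits, links_bps, gateway_delays_s):
    # Store-and-forward: each hop must receive the packet completely
    # before forwarding it, so serialization delays add up link by link,
    # plus each gateway's processing delay. Queuing is ignored here.
    return (sum(transmission_delay(packet_bits, bps) for bps in links_bps)
            + sum(gateway_delays_s))

# A 1500-byte packet over three 100 Mbit/s links with two 50 µs gateways.
packet_bits = 1500 * 8
latency = minimal_latency(packet_bits, [100e6] * 3, [50e-6] * 2)
print(f"{latency * 1e6:.0f} µs")  # 3 × 120 µs + 2 × 50 µs = 460 µs
```

Queuing and bufferbloat would only add to this floor, which is why the figure is a minimum rather than an expected value.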
Latency limits total throughput in reliable two-way communication systems as described by the bandwidth-delay product.
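A brief sketch of the bandwidth-delay product: it gives the amount of unacknowledged data a sender must keep in flight to fill the pipe, which is why latency caps throughput when the receive window is too small. The 100 Mbit/s and 40 ms figures below are arbitrary examples:

```python
def bandwidth_delay_product(bandwidth_bps, rtt_s):
    # Bits that fit "in the pipe": data sent but not yet acknowledged
    # when the link is kept fully utilized.
    return bandwidth_bps * rtt_s

# A 100 Mbit/s link with a 40 ms round-trip time.
bdp_bits = bandwidth_delay_product(100e6, 0.040)
print(f"{bdp_bits / 8 / 1024:.0f} KiB window needed")  # 4e6 bits ≈ 488 KiB
```

If the sender's window is smaller than this product, the link sits idle waiting for acknowledgements and throughput drops below the link rate.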
Fiber optics
Latency in optical fiber is largely a function of the speed of light, which is 299,792,458 meters/second in vacuum. This would equate to a latency of 3.33 µs for every kilometer of path length. The index of refraction of most fiber optic cables is about 1.5, meaning that light travels about 1.5 times as fast in a vacuum as it does in the cable. This works out to about 5.0 µs of latency for every kilometer. In shorter metro networks, higher latency can be experienced due to extra distance in building risers and cross-connects. To calculate the latency of a connection, one has to know the distance traveled by the fiber, which is rarely a straight line, since it has to traverse geographic contours and obstacles, such as roads and railway tracks, as well as other rights-of-way.
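The per-kilometer figure can be reproduced from the constants in this section; the refractive index of 1.5 is the approximation used above, and real cables (and their non-straight routes) vary:

```python
C_VACUUM_M_PER_S = 299_792_458  # speed of light in vacuum

def fiber_latency_us_per_km(refractive_index=1.5):
    # Light in glass travels slower than in vacuum by the
    # index of refraction.
    speed_in_fiber = C_VACUUM_M_PER_S / refractive_index
    return 1000 / speed_in_fiber * 1e6  # µs of delay per km of fiber

print(f"{fiber_latency_us_per_km():.2f} µs/km")  # ≈ 5.00 µs/km

# One-way over 1000 km of fiber, doubled for the round trip.
round_trip_ms = 2 * 1000 * fiber_latency_us_per_km() / 1000
print(f"{round_trip_ms:.1f} ms round trip over 1000 km of fiber")
```

Actual route length is usually well above the great-circle distance, so measured latencies exceed this idealized figure.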
Due to imperfections in the fiber, light degrades as it is transmitted through it. For distances of greater than 100 kilometers, amplifiers or regenerators are deployed. Latency introduced by these components needs to be taken into account.
Satellite transmission
Satellites in geostationary orbits are far enough away from Earth that communication latency becomes significant – about a quarter of a second for a trip from one ground-based transmitter to the satellite and back to another ground-based transmitter; close to half a second for two-way communication from one Earth station to another and then back to the first. Low Earth orbit is sometimes used to cut this delay, at the expense of more complicated satellite tracking on the ground and requiring more satellites in the satellite constellation to ensure continuous coverage.
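These delays follow directly from the geostationary altitude and the speed of light. The sketch below assumes the best case, with both ground stations directly beneath the satellite, and ignores processing delays on board and on the ground:

```python
C = 299_792_458            # m/s, speed of light in vacuum
GEO_ALTITUDE_M = 35_786e3  # geostationary altitude above the equator

# One ground→satellite→ground hop (up and back down).
one_hop_s = 2 * GEO_ALTITUDE_M / C
print(f"one hop: {one_hop_s * 1000:.0f} ms")          # ≈ 239 ms
# Two-way conversation: question up and down, answer up and down.
print(f"two-way: {2 * one_hop_s * 1000:.0f} ms")      # ≈ 477 ms
```

Stations away from the sub-satellite point see a longer slant path, pushing the one-hop figure toward the quarter second quoted above.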
Audio
Audio latency is the delay between when an audio signal enters and when it emerges from a system. Potential contributors to latency in an audio system include analog-to-digital conversion, buffering, digital signal processing, transmission time, digital-to-analog conversion and the speed of sound in air.
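Two of these contributors are easy to estimate: propagation through air and buffering. The speaker distance, buffer size, and sample rate below are illustrative assumptions:

```python
SPEED_OF_SOUND_M_PER_S = 343  # dry air at roughly 20 °C

def acoustic_delay_ms(distance_m):
    # Time for sound to travel from speaker to listener.
    return distance_m / SPEED_OF_SOUND_M_PER_S * 1000

def buffer_latency_ms(buffer_frames, sample_rate_hz):
    # One buffer of samples must be filled before it can be played out.
    return buffer_frames / sample_rate_hz * 1000

print(f"{acoustic_delay_ms(3.4):.1f} ms from a speaker 3.4 m away")   # ≈ 9.9 ms
print(f"{buffer_latency_ms(256, 48_000):.2f} ms per 256-frame buffer at 48 kHz")
```

Conversion and DSP stages add their own delays on top, so total system latency is the sum of all stages in the chain.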
Video
Video latency refers to the degree of delay between the time a transfer of a video stream is requested and the actual time that transfer begins. Networks that exhibit relatively small delays are known as low-latency networks, while their counterparts are known as high-latency networks.
Workflow
Any individual workflow within a system of workflows can be subject to some type of operational latency. It may even be the case that an individual system may have more than one type of latency, depending on the type of participant or goal-seeking behavior. This is best illustrated by the following two examples involving air travel.
From the point of view of a passenger, latency can be described as follows. Suppose John Doe flies from London to New York. The latency of his trip is the time it takes him to go from his house in England to the hotel he is staying at in New York. This is independent of the throughput of the London-New York air link – whether there were 100 passengers a day making the trip or 10000, the latency of the trip would remain the same.
From the point of view of flight operations personnel, latency can be entirely different. Consider the staff at the London and New York airports. Only a limited number of planes are able to make the transatlantic journey, so when one lands they must prepare it for the return trip as quickly as possible. It might take, for example:
35 minutes to clean a plane
15 minutes to refuel a plane
10 minutes to load the passengers
30 minutes to load the cargo
Assuming the above are done consecutively, minimum plane turnaround time is:
35 + 15 + 10 + 30 = 90 minutes
However, cleaning, refueling and loading the cargo can be done at the same time. Passengers can only be loaded after cleaning is complete. The reduced latency, then, is:
cleaning, then loading the passengers: 35 + 10 = 45
refueling: 15
loading the cargo: 30
Minimum latency = 45 minutes
The people involved in the turnaround are interested only in the time it takes for their individual tasks. When all of the tasks are done at the same time, however, it is possible to reduce the latency to the length of the longest task. If some steps have prerequisites, it becomes more difficult to perform all steps in parallel. In the example above, the requirement to clean the plane before loading passengers results in a minimum latency longer than any single task.
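The turnaround example above is a small critical-path computation: with tasks running in parallel, latency is the longest chain of dependent tasks. This can be sketched generically; the task names and durations mirror the example, and only the dependency structure changes the answer:

```python
def critical_path(durations, deps):
    # Earliest finish of a task = its own duration plus the latest
    # finish among its prerequisites; overall latency is the maximum
    # finish time across all tasks.
    finish = {}
    def resolve(task):
        if task not in finish:
            finish[task] = durations[task] + max(
                (resolve(d) for d in deps.get(task, ())), default=0)
        return finish[task]
    return max(resolve(t) for t in durations)

durations = {"clean": 35, "refuel": 15, "passengers": 10, "cargo": 30}
deps = {"passengers": ["clean"]}  # passengers board only after cleaning

print(critical_path(durations, deps), "minutes")  # 45 minutes
```

With no dependencies at all, the answer would be the single longest task (35 minutes); the clean-before-passengers prerequisite is what stretches the minimum to 45.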
Mechanics
Any mechanical process encounters limitations modeled by Newtonian physics. The behavior of disk drives provides an example of mechanical latency: first the seek time for the actuator arm to be positioned above the appropriate track, and then the rotational latency for the data encoded on a platter to rotate from its current position to a position under the disk read-and-write head.
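A common back-of-the-envelope model adds the drive's average seek time to its average rotational latency, which is the time for half a revolution. The 7200 RPM spindle speed and 9 ms average seek below are hypothetical figures for illustration:

```python
def avg_rotational_latency_ms(rpm):
    # On average, the desired sector is half a revolution away
    # from the read-and-write head.
    ms_per_revolution = 60_000 / rpm
    return ms_per_revolution / 2

def avg_access_time_ms(avg_seek_ms, rpm):
    # Mechanical access time = seek the track, then wait for rotation.
    return avg_seek_ms + avg_rotational_latency_ms(rpm)

# Hypothetical 7200 RPM drive with a 9 ms average seek.
print(f"{avg_access_time_ms(9.0, 7200):.2f} ms average access")  # ≈ 13.17 ms
```

These millisecond-scale mechanical delays are why solid-state storage, with no moving parts, reduces access latency by orders of magnitude.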
Computer hardware and operating systems
Computers run instructions in the context of a process. In the context of computer multitasking, the execution of the process can be postponed if other processes are also executing. In addition, the operating system can schedule when to perform the action that the process is commanding. For example, suppose a process commands that a computer card's voltage output be set high-low-high-low and so on at a rate of 1000 Hz. The operating system schedules the process for each transition (high-low or low-high) based on a hardware clock such as the High Precision Event Timer. The latency is the delay between the events generated by the hardware clock and the actual transitions of voltage from high to low or low to high.
Many desktop operating systems have performance limitations which create additional latency. The problem may be mitigated with real-time extensions and patches such as PREEMPT_RT.
On embedded systems, the real-time execution of instructions is often supported by a real-time operating system.
Simulations
In simulation applications, latency refers to the time delay, often measured in milliseconds, between initial input and output clearly discernible to the simulator trainee or simulator subject. Latency is sometimes also called transport delay. Some authorities distinguish between latency and transport delay by using the term latency in the sense of the extra time delay of a system over and above the reaction time of the vehicle being simulated, but this requires detailed knowledge of the vehicle dynamics and can be controversial.
In simulators with both visual and motion systems, it is particularly important that the latency of the motion system not be greater than that of the visual system, or symptoms of simulator sickness may result. This is because, in the real world, motion cues are those of acceleration and are quickly transmitted to the brain, typically in less than 50 milliseconds; this is followed some milliseconds later by a perception of change in the visual scene. The visual scene change is essentially one of change of perspective or displacement of objects such as the horizon, which takes some time to build up to discernible amounts after the initial acceleration which caused the displacement. A simulator should, therefore, reflect the real-world situation by ensuring that the motion latency is equal to or less than that of the visual system and not the other way round.
See also
Age of Information
Feedback
Interrupt latency
Jitter
Lagometer
Lead time
Memory latency
Performance engineering
Response time (technology)
Responsiveness
References
Further reading
External links
Simulating network link latency under Linux
Engineering concepts
Afara Websystems
Afara Websystems Inc. was a Sunnyvale, California, USA server company whose goal was to build servers surrounding a custom high-throughput CPU architecture, "developing IP traffic management systems that will bring quality-of-service to the next generation of IP access infrastructure." The word "Afara" means "bridge" in the West African Yoruba language.
History
The company was founded by Kunle Olukotun, a Stanford University professor. The first employee was hired by Raza Foundries board member Atul Kapadia. Neil Sadaranganey, hired away from RealNetworks, was the sole business person at Afara Websystems. Subsequently, Les Kohn (employee #2), a microprocessor designer whose credits include Sun Microsystems' UltraSPARC, Intel's i860 and i960, and National Semiconductor's Swordfish, took the basic idea and developed a product plan.
Olukotun was talking with people running data centers in 2000 and understood the problem of those centers running out of power and space. Olukotun believed that multiple processors on a chip, in conjunction with multi-threading, could resolve those problems. He searched for venture capital support on the basis that a new architecture could lead to a 10x performance increase in server processing capabilities. Pierre Lamond, a partner at Sequoia Capital, introduced Olukotun to microprocessor architect Les Kohn, who had designed microprocessors for Sun, Intel and National Semiconductor (where Kohn had worked for Lamond). Kohn introduced Fermi Wang, a journeyman and one of his colleagues at C-Cube Microsystems, to serve as acting CEO and lead the company. It was a classic Silicon Valley startup: the headcount grew to 100, with 95 engineers focused on engineering development and one marketing director.
Two meetings with venture capitalists were scheduled on September 11, 2001. The meetings in New York City were interrupted by the terrorist attack on the World Trade Center, but one of them resumed 2 days later. Available capital for funding the server company had vanished, as the economy started to dip into a new recession in 2001.
Rick Hetherington left Sun to create a start-up company. Venture capitalists Sequoia Capital introduced Hetherington to Olukotun. When Hetherington's startup failed, he returned to Sun. Hetherington wrote memos to Mike Splain, CTO of the Processor group at Sun, encouraging technology acquisition of Afara Websystems. Hetherington became Chief Architect for Horizontal Systems at Sun, which develops and sells servers for data centers and Web systems.
Although SPARC-based computers systems are almost exclusively deployed with Solaris, Afara instead planned to use Linux on the SPARC-based processor they developed.
The search for venture capital continued, since creating a server company requires substantial resources, but there was little available during the recession following 9/11. Afara began negotiations with Sun Microsystems, and the acquisition was consummated in July 2002. The new acquisition fell under the umbrella of Fred DeSantis, the vice president of engineering for horizontal systems at Sun. During the due-diligence process, Brian Sutphin sensed from the executives he was interacting with (Fermi Wang, the acting CEO, had mentioned that there were no term sheets on the table) that Afara did not have any alternate sources of funding, and reduced the offer from the high hundreds of millions of dollars to under $500 million.
Contributions and impact
The project included many technology contributions among Linux, Solaris and SPARC. The Afara CPU used a SPARC port of Debian GNU/Linux initially. Debian GNU/Linux contributions to Afara Websystem's former CPU architecture continued to grow, including commercial support for Ubuntu, a Debian GNU/Linux-based operating system. Afara Websystems' former platform direction seemed further validated when Sun hired Ian Murdock, founder of the Debian distribution, to head operating system platform strategy, and cross-pollinate Solaris with a new OS packaging technology similar to that of Debian GNU/Linux.
The new CPU architecture of Afara Websystems, which became known as "Niagara", had enough merit to cause a competing internal Sun project under DeSantis' organization, called "Honeybee", to be canceled.
Pressure was placed on the computing industry to add cores and threads. While competing microprocessor vendors were designing dual-core chips with two dual-threads per core, the original "Niagara" architecture was a more radical design: an eight core processor with four threads per core.
The new family of SPARC microprocessors, trademarked by Sun as "CoolThreads", was released with model names of UltraSPARC T1 (2005), UltraSPARC T2 (2007), UltraSPARC T2 Plus (2008) and the further derivative UltraSPARC T3 (2010). While SPARC is an open instruction set architecture, where vendors build their own processors to an open specification defined by SPARC International, this new family of microprocessors was not only created to the open specification, but its implementation was now free as well: people could download the source code and manufacture the processors independently.
For web serving loads, Sun's processor had catapulted to become the uncontested fastest single processor on the planet in December 2005, performing 7x faster than the closest Intel server, and it remained consistently the highest-throughput web server, with the closest competition being 2x-3x slower (socket-to-socket comparison) as of mid-2009.
Oracle Corporation announced its intention to acquire Sun in April 2009, a deal which closed in January 2010. By the end of 2010, market competitors started to release similar products with multiple cores, a less radical approach to threading, but with similar performance characteristics. Oracle continued the radical approach of the original Afara SPARC architecture (large numbers of threads per large number of simple cores) with the release of the SPARC T3 processor in September 2010 - the first 16 core commodity central processing unit, yielding another top performance benchmark, but only by a slim margin.
Olukotun returned to Stanford University to head its "Pervasive Parallelism Lab" in 2008, to help shape the future of software, as he did with hardware.
Fermi Wang and Les Kohn founded Ambarella with a focus on high definition video capture and delivery markets.
References
Defunct computer companies of the United States
Companies based in Sunnyvale, California
Sun Microsystems hardware
Sun Microsystems acquisitions
Mixed In Key
Mixed In Key (also known as MIK) is Windows and Macintosh software that simplifies a DJ technique called harmonic mixing. Mixed In Key analyzes MP3 and WAV files and determines the musical key of every file. Knowing the key, DJs can use music theory (such as the Circle of Fifths) to play songs in a harmonically-pleasing order. The software helps to eliminate dissonant tones while mixing songs together using a technique such as beatmatching.
History
Mixed In Key software was developed to provide a Windows interface for the tONaRT key detection algorithm created by zplane.development. The original tONaRT algorithm created by zplane.development had a simple Windows-based demo which could not process multiple audio files at once. Yakov Vorobyev created a simple C# .NET Windows application that could batch-process multiple files. The first version was released on March 25, 2006. Mac OS X development started shortly thereafter, and the first Mac OS X version was released on June 4, 2006.
Since May 2007, Mixed In Key LLC has improved the key detection algorithm by combining tONaRT with a custom in-house algorithm. Mixed In Key was granted a patent on this algorithm. The new algorithm was released in Version 3.0. The latest version (in December 2017) is 8.1.
Ali 'Dubfire' Shirazinia from Deep Dish was a big influence on the development of the Mac OS X version by providing feedback to the development team. After the Mac OS X version was released, Ali used Mixed In Key to help sequence songs for his Global Underground 31 Taipei album.
Products
All three suites of Mixed In Key's software, Mixed In Key, Platinum Notes, and Mashup, are used by world-renowned artists. All three software suites are available for both PCs and Macs.
Mixed In Key is the original software from Mixed In Key LLC; it analyzes the harmonies and melodies of the selected music. For every track it shows the musical key and helps choose tracks that are harmonically compatible with each other. Mixed In Key works with Traktor, Serato, Pioneer CDJs, Ableton Live and all other DJ apps. The software is used by the likes of David Guetta and Kaskade. Other artists include Paul van Dyk, Armin van Buuren, Sebastian Ingrosso, Sasha, Grammy-winning producer Ali 'Dubfire' from Deep Dish, Pete Tong from BBC Radio 1, trance producers Blank & Jones, Above & Beyond, High Contrast, Nick Warren, and BT.
Platinum Notes was the second software suite released; it allows the user to drop in music files, which it then processes with studio filters. The software corrects pitch, improves volume, and makes each file ready to play anywhere.
Mashup is Mixed In Key's most recent addition to its offerings. The software helps beatmatch tracks and saves the results to new MP3 files, which can then be made into a podcast.
Controversy
Mixed In Key has faced criticism over its decision to require an internet connection in versions following Version 2.5. While numerous users have identified this as a deterring and unfair form of copy-protection, Mixed In Key employees have responded claiming that the new requirement has nothing to do with piracy, stating, "an internet connection is needed to analyze new files because Mixed In Key uses very expensive technology that is not available in 'offline' mode."
Allen & Heath Partnership
On December 22, 2006, Mixed In Key LLC announced a partnership with Allen & Heath to provide co-branded versions of Mixed In Key known as XONE Mixed In Key. Mixed In Key continued to sell "Original" versions of the software. The color scheme is the only difference between the two versions.
Award Nominations
DJ Magazine awarded Mixed In Key "Best DJ Tool of 2008" award, and previously nominated Mixed In Key for the "Most Innovative Product" in 2007. I-DJ Magazine has reviewed the product in Summer 2007 and gave it the "I-DJ Approved Innovation" award.
In 2009 Mixed In Key was nominated for Best New Product of the Year for the 24th Annual International Dance Music Awards, losing only to the iPhone 3G.
See also
Harmonic mixing
Music Theory
DJing
Music software
Mashup
External links
Mixed In Key's Official Website
Harmonic-Mixing.com
zplane.development
References
Audio mixing software
MacOS multimedia software
Windows multimedia software
DJ software
Ocrad
Ocrad is an optical character recognition program and part of the GNU Project. It is free software licensed under the GNU GPL.
Based on a feature extraction method, it reads images in portable pixmap formats known as Portable anymap and produces text in byte (8-bit) or UTF-8 formats. Also included is a layout analyser, able to separate the columns or blocks of text normally found on printed pages.
User interface
Ocrad can be used as a stand-alone command-line application or as a back-end to other programs.
Kooka, which was the KDE environment's default scanning application until KDE 4, can use Ocrad as its OCR engine. Since conversion to newer Qt versions, current versions of KDE no longer contain Kooka; development continues in the KDE git repository. Ocrad can be also used as an OCR engine in OCRFeeder.
History
Ocrad has been developed by Antonio Diaz Diaz since 2003. Version 0.7 was released in February 2004, 0.14 in February 2006 and 0.18 in May 2009. It is written in C++.
Archives of the bug-ocrad mailing list go back to October 2003.
Notes
References
External links
Ocrad GNU Project Homepage
Peter Selinger's Review of Linux OCR software (2007)
Andreas Gohr Linux OCR Software Comparison (2010)
Online OCR server powered by Ocrad
Tesseract & Ocrad comparison, Linux Journal (2007)
Free software programmed in C++
GNU Project software
Optical character recognition
2003 software
Cursor (user interface)
In computer user interfaces, a cursor is an indicator used to show the current position for user interaction on a computer monitor or other display device that will respond to input from a text input or pointing device. The mouse cursor is also called a pointer, owing to its resemblance in usage to a pointing stick.
Etymology
Cursor is Latin for 'runner'. A cursor is a name given to the transparent slide engraved with a hairline used to mark a point on a slide rule. The term was then transferred to computers through analogy.
On 14 November 1963, while attending a conference on computer graphics in Reno, Nevada, Douglas Engelbart of Augmentation Research Center (ARC) first expressed his thoughts on pursuing his objective of developing both hardware and software computer technology to "augment" human intelligence, by pondering how to adapt the underlying principles of the planimeter to inputting X- and Y-coordinate data. He envisioned something like the cursor of a mouse, which he initially called a "bug" and which, in a "3-point" form, could have a "drop point and 2 orthogonal wheels". He wrote that the "bug" would be "easier" and "more natural" to use, and unlike a stylus, it would stay still when let go, which meant it would be "much better for coordination with the keyboard."
According to Roger Bates, a young hardware designer at ARC under Bill English, the cursor on the screen was for some unknown reason also referred to as "CAT" at the time, which led to calling the new pointing device a "mouse" as well.
Text cursor
In most command-line interfaces or text editors, the text cursor, also known as a caret, is an underscore, a solid rectangle, or a vertical line, which may be flashing or steady, indicating where text will be placed when entered (the insertion point). In text mode displays, it was not possible to show a vertical bar between characters to show where the new text would be inserted, so an underscore or block cursor was used instead. In situations where a block was used, the block was usually created by inverting the pixels of the character using the boolean math exclusive or function. On text editors and word processors of modern design on bitmapped displays, the vertical bar is typically used instead.
In a typical text editing application, the cursor can be moved by pressing various keys. These include the four arrow keys, the Page Up and Page Down keys, the Home key, the End key, and various key combinations involving a modifier key such as the Control key. The position of the cursor also may be changed by moving the mouse pointer to a different location in the document and clicking.
The blinking of the text cursor is usually temporarily suspended when it is being moved; otherwise, the cursor may change position when it is not visible, making its location difficult to follow.
The concept of a blinking cursor can be attributed to Charles Kiesling Sr. via US Patent 3531796, filed in August 1967.
Some interfaces use an underscore or thin vertical bar to indicate that the user is in insert mode, a mode where text will be inserted in the middle of the existing text, and a larger block to indicate that the user is in overtype mode, where inserted text will overwrite existing text. In this way, a block cursor may be seen as a piece of selected text one character wide, since typing will replace the text "in" the cursor with the new text.
Bi-directional text
A vertical line text cursor with a small left-pointing or right-pointing appendage is for indicating the direction of text flow on systems that support bi-directional text, and is thus usually known among programmers as a 'bidi cursor'. In some cases, the cursor may split into two parts, each indicating where left-to-right and right-to-left text would be inserted.
Pointer
In computing, a pointer or mouse cursor (as part of a personal computer WIMP style of interaction) is a symbol or graphical image on the computer monitor or other display device that echoes movements of the pointing device, commonly a mouse, touchpad, or stylus pen. It signals the point where actions of the user take place. It can be used in text-based or graphical user interfaces to select and move other elements. It is distinct from the cursor, which responds to keyboard input. The cursor may also be repositioned using the pointer.
The pointer commonly appears as an angled arrow (angled because historically that improved appearance on low-resolution screens), but it can vary within different programs or operating systems. The use of a pointer is employed when the input method, or pointing device, is a device that can move fluidly across a screen and select or highlight objects on the screen. In GUIs where the input method relies on hard keys, such as the five-way key on many mobile phones, there is no pointer employed, and instead, the GUI relies on a clear focus state.
The pointer or mouse cursor echoes movements of the pointing device, commonly a mouse, touchpad or trackball.
This kind of cursor is used to manipulate elements of graphical user interfaces such as menus, buttons, scrollbars or any other widget. It may be called a "mouse pointer" because the mouse is the dominant type of pointing device used with desktop computers.
Appearance
The pointer hotspot is the active pixel of the pointer, used to target a click or drag. The hotspot is normally along the pointer edges or in its center, though it may reside at any location in the pointer.
In many GUIs, moving the pointer around the screen may reveal other screen hotspots as the pointer changes shape depending on the circumstances. For example:
In text that the user can select or edit, the pointer changes to a vertical bar with little cross-bars (or curved serif-like extensions) at the top and bottom — sometimes called an "I-beam" since it resembles the cross-section of the construction detail of the same name.
When displaying a document, the pointer can appear as a hand with all fingers extended allowing scrolling by "pushing" the displayed page around.
Graphics-editing pointers such as brushes, pencils, or paint buckets may display when the user edits an image.
On an edge or corner of a window the pointer usually changes into a double arrow (horizontal, vertical, or diagonal) indicating that the user can drag the edge/corner in an indicated direction to adjust the size and shape of the window.
The corners and edges of the whole screen may also act as hotspots. According to Fitts's law, which predicts the time it takes to reach a target area, moving mouse and stylus pointers to those spots is easy and fast. As the pointer usually stops when reaching a screen edge, the size of those spots can be considered of virtual infinite size, so the hot corners and edges can be reached quickly by throwing the pointer toward the edges.
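Fitts's law can be sketched with the common Shannon formulation, MT = a + b·log2(D/W + 1), where D is the distance to the target and W its width along the axis of motion. The device constants a and b below are hypothetical; the point of the sketch is that a screen edge, acting as a target of effectively huge width, is much faster to hit than a small on-screen button:

```python
import math

def fitts_movement_time(a_ms, b_ms_per_bit, distance, width):
    # Shannon formulation of Fitts's law: predicted movement time
    # grows with the index of difficulty log2(D/W + 1).
    return a_ms + b_ms_per_bit * math.log2(distance / width + 1)

# Hypothetical device constants: a = 50 ms, b = 150 ms/bit.
small_target = fitts_movement_time(50, 150, distance=800, width=20)
edge_target = fitts_movement_time(50, 150, distance=800, width=2000)

print(f"small button: {small_target:.0f} ms predicted")
print(f"screen edge:  {edge_target:.0f} ms predicted")
```

Because the pointer stops at the edge, overshooting is impossible, which is what justifies treating the edge's effective width as very large in the model.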
While a computer process is performing tasks and cannot accept user input, a wait pointer (an hourglass in Windows before Vista and many other systems, spinning ring in Windows Vista and later, watch in classic Mac OS, or spinning pinwheel in macOS) is displayed when the mouse pointer is in the corresponding window.
When the pointer hovers over a hyperlink, a mouseover event changes the pointer into a hand with an outstretched index finger. Often some informative text about the link may pop up in a tooltip, which disappears when the user moves the pointer away. The tooltip revealed in the box depends on the implementation of the web browser; many web browsers will display the "title" of the element, the "alt" attribute, or the non-standard "tooltips" attribute. This pointer shape was first used for hyperlinks in Apple Computer's HyperCard.
In Windows 7, when Windows Touch was introduced in the mainstream to make Windows more touch-friendly, a touch pointer is displayed instead of the mouse pointer. The touch pointer can be switched off in Control Panel and resembles a small diamond shape. When the screen is touched a blue ripple appears around the touch pointer to provide visual touch feedback. When swiping to scroll etc., the touch pointer would follow the finger as it moves. If touch and hold to right-click is enabled, touching and holding will show a thick white ring around the touch pointer. When this ring appears, releasing one's finger would perform a right-click.
If a pen is used the left-click ripple is colorless instead of blue and the right-click ring is a thinner ring that appears closer to the pen tip making contact with the screen. A click (either left or right) will not show the touch pointer, but swiping would still show the pointer which would follow the pen tip.
Also, the touch pointer would only appear on the desktop once a user has signed in to Windows 7. On the sign-in screen, the mouse cursor would simply jump to the point touched and a left click would be sent on a tap, similar to when a touch input is used on operating systems before Windows 7.
In Windows 8 and above, visual touch feedback displays a translucent circle where the finger makes contact with the screen, and a square when attempting to touch and hold to right-click. A swipe is shown by a translucent line of varying thickness. Feedback can be switched on and off in Pen and Touch settings of the Control Panel in Windows 8 and Windows 8.1 or in the Settings app on Windows 10, and feedback can also be made darker and larger where it needs to be emphasized, such as when presenting. However, the touch pointer is normally less commonly visible in touchscreen environments of Windows operating systems later than Windows 7.
The mouse-over or hover gesture can also show a tooltip, which presents information about what the pointer is hovering over; the information is a description of what selecting an active element is for or what it will do. The tooltip appears only when stationary over the content. A common use of viewing the information is when browsing the internet to know the destination of a link before selecting it, if the URL of the text is not recognizable.
When using touch or a pen with Windows, hovering (when supported) or performing a set gesture or flick may show the tooltip.
I-beam pointer
The I-beam pointer (also called the I-cursor) is a cursor shaped like a serifed capital letter "I". The purpose of this cursor is to indicate that the text beneath the cursor can be highlighted and sometimes inserted or changed.
Pointer trails and animation
Pointer trails are a feature of GUI operating systems used to enhance the visibility of the pointer during movement. Although disabled by default, pointer trails have been an option in every version of Microsoft Windows since Windows 3.1x.
When pointer trails are active and the mouse or stylus is moved, the system waits a moment before removing the pointer image from the old location on the screen. A copy of the pointer persists at every point that the pointer has visited at that moment, resulting in a snake-like trail of pointer icons that follow the actual pointer. When the user stops moving the mouse or removes the stylus from the screen, the trails disappear and the pointer returns to normal.
Pointer trails have been provided as a feature mainly for users with poor vision and for screens where low visibility may become an issue, such as LCD screens in bright sunlight.
In Windows, pointer trails may be enabled in the Control Panel, usually under the Mouse applet.
Introduced with Windows NT, an animated pointer is a small looping animation played at the location of the pointer. This is used, for example, to provide a visual cue that the computer is busy with a task. After their introduction, many animated pointers became available for download from third-party suppliers. Animated pointers are not without problems: in addition to imposing a small additional load on the CPU, the animated pointer routines introduced a security vulnerability. A client-side exploit known as the Windows Animated Cursor Remote Code Execution Vulnerability used a buffer overflow to load malicious code via the animated cursor loading routine of Windows.
Editor
A pointer editor is software for creating and editing static or animated mouse pointers. Pointer editors usually support both static and animated mouse cursors, but there are exceptions. An animated cursor is a sequence of static cursors representing individual frames of an animation. A pointer editor should be able to:
Modify pixels of a static cursor or each frame in an animated cursor.
Set the hot spot of a static cursor or a frame of an animated cursor. The hot spot is a designated pixel that defines the clicking point.
Add or remove frames in an animated cursor and set their animation speed.
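The role of the hot spot can be expressed as one line of arithmetic: the cursor image is drawn so that its hot-spot pixel lands exactly on the logical pointer position, so the image's top-left blit origin is the pointer position minus the hot-spot offset. The function below is a hypothetical sketch, not any particular editor's API.

```python
def blit_origin(pointer_pos, hot_spot):
    """Top-left corner at which to draw the cursor image so that the
    hot-spot pixel coincides with the logical pointer position."""
    px, py = pointer_pos
    hx, hy = hot_spot
    return (px - hx, py - hy)

# An arrow cursor typically has its hot spot at the tip (0, 0),
# while a 32x32 crosshair's hot spot sits at the image centre (16, 16).
print(blit_origin((100, 200), (0, 0)))    # arrow: image drawn at (100, 200)
print(blit_origin((100, 200), (16, 16)))  # crosshair: image drawn at (84, 184)
```

This is why clicking with a crosshair feels centred while clicking with an arrow feels tip-aligned: only the hot-spot offset differs.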
Pointer editors are occasionally combined with icon editors because computer icons and cursors share similar properties. Both contain small raster images and the file format used to store icons and static cursors in Microsoft Windows is similar.
Despite the similarities, pointer editors differ from icon editors in several ways. While icons contain multiple images with different sizes and color depths, static cursors (for Windows) only contain a single image. Pointer editors must provide the means to set the hot spot. Animated pointer editors additionally must be able to handle animations.
3D cursor
The idea of a cursor being used as a marker or insertion point for new data or transformations, such as rotation, can be extended to a 3D modeling environment. Blender, for instance, uses a 3D cursor to determine where future operations are to take place.
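A minimal sketch of this idea in plain Python (illustrative only — Blender's real interface is its `bpy` API, which exposes the cursor as a scene property): the 3D cursor is simply a stored point that later operations use as their origin.

```python
class Scene:
    """Toy 3D scene whose cursor marks where new objects are placed."""
    def __init__(self):
        self.cursor = (0.0, 0.0, 0.0)  # the 3D cursor starts at the origin
        self.objects = []

    def set_cursor(self, x, y, z):
        """Move the 3D cursor; future operations will use this point."""
        self.cursor = (x, y, z)

    def add_object(self, name):
        """New objects appear at the 3D cursor, as in Blender."""
        self.objects.append({"name": name, "location": self.cursor})

scene = Scene()
scene.set_cursor(1.0, 2.0, 3.0)
scene.add_object("cube")
print(scene.objects[0]["location"])  # the cube lands at the cursor position
```

The same stored point can equally serve as a pivot for rotations or the target of a snap operation.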
See also
Susan Kare, designer of several of the common cursor shapes
Microangelo Toolset
Mouse Sonar
Screen hotspot
Throbber
Tooltip
Cursorial
References
External links
Creating and controlling browser cursors
Cross-browser CSS custom cursors
Installing A Cursor On Your Computer
Windows Desktop Application Design Guidelines: Common Pointer Shapes
Apple Human Interface Guidelines: Pointers
Graphical user interface elements
User interfaces
User interface techniques
Virtual reality
Human communication
Human–machine interaction |
41984254 | https://en.wikipedia.org/wiki/ANGLE%20%28software%29 | ANGLE (software) | ANGLE (Almost Native Graphics Layer Engine) is an open source, cross-platform graphics engine abstraction layer developed by Google. ANGLE translates OpenGL ES 2/3 calls to DirectX 9, DirectX 11, or OpenGL API calls. It is effectively a portable version of OpenGL, limited to the feature set of the OpenGL ES standard.
The API is mainly designed to bring high-performance OpenGL ES compatibility to Windows and to web browsers such as Chromium by translating OpenGL calls to Direct3D, which has much better driver support. On Windows systems there are two backend renderers for ANGLE: the older one uses Direct3D 9.0c, while the newer one uses Direct3D 11.
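Conceptually, ANGLE presents OpenGL ES-shaped entry points to the application and forwards each call to whichever backend was selected. The toy dispatch below illustrates that layering with invented names; it is in no way ANGLE's real code.

```python
class D3D11Backend:
    """Stand-in for a Direct3D 11 renderer."""
    def clear(self, r, g, b, a):
        return f"ID3D11DeviceContext::ClearRenderTargetView(({r}, {g}, {b}, {a}))"

class D3D9Backend:
    """Stand-in for the older Direct3D 9.0c renderer."""
    def clear(self, r, g, b, a):
        return f"IDirect3DDevice9::Clear(D3DCLEAR_TARGET, color=({r}, {g}, {b}, {a}))"

class GLESFrontend:
    """OpenGL ES-shaped API that forwards calls to the chosen backend."""
    def __init__(self, backend):
        self.backend = backend
        self.clear_color = (0.0, 0.0, 0.0, 1.0)

    def glClearColor(self, r, g, b, a):
        # GL ES is stateful: the clear colour is stored, not applied yet.
        self.clear_color = (r, g, b, a)

    def glClear(self):
        # The stored state is translated into the backend's call.
        return self.backend.clear(*self.clear_color)

gl = GLESFrontend(D3D11Backend())
gl.glClearColor(0.0, 0.5, 1.0, 1.0)
print(gl.glClear())  # the GL ES call surfaces as a Direct3D 11 call
```

Swapping the backend object is all it takes to retarget the same GL ES-facing calls, which is roughly why ANGLE can offer D3D9, D3D11, OpenGL, and later Metal renderers behind one API.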
ANGLE is currently used by Google Chrome (it is embedded into the Blink browser engine), Firefox, Edge, WebKit, and the Qt framework. The engine is also used by Windows 10 for compatibility with apps ported from Android. Throughout 2019, Apple contributed a Metal API backend to ANGLE so that Apple devices could run it on their native graphics API.
ANGLE is distributed under the BSD license.
History
The project started as a way for Google to bring full hardware acceleration for WebGL to Windows without relying on OpenGL graphics drivers. Google initially released the program under the BSD license.
The current production version (2.1.x) implements OpenGL ES 2.0, 3.0 and (for some platforms) 3.1 and EGL 1.4, claiming to pass the conformance tests for both. Work on the then-future OpenGL ES 3.0 support was started for the newer Direct3D 11 backend.
The capability to use ANGLE in a Windows Store app was added in 2014. Microsoft contributed support for lower feature levels to the project. Supporting CoreWindow and SwapChainPanel in ANGLE's EGL allows applications to run on Windows 8.1, Windows Phone 8.1, and later.
Level of OpenGL ES support via backing renderers
Software utilizing ANGLE
ANGLE is currently used in a number of programs and software.
Chromium and Google Chrome. Chrome uses ANGLE not only for WebGL, but also for its implementation of the 2D HTML5 canvas and for the graphics layer of the Google Native Client (which is OpenGL ES 2.0 compatible).
Safari web browser uses ANGLE as basis for its WebGL implementation.
Firefox uses ANGLE as the default WebGL backend on Windows.
Qt 5 uses ANGLE as the default renderer for its OpenGL ES 2.0 API wrapper and other Qt elements which use it on Windows.
Candy Crush Saga uses ANGLE as the default renderer in its Windows Store version of the application.
Cocos2d uses ANGLE as its rendering engine for applications published to the Windows Store.
ANGLE for Windows Store provides Windows developers precompiled ANGLE binaries via a NuGet package.
Stellarium provides two versions for Windows: the default version uses OpenGL, the alternative version uses ANGLE as the renderer.
Shovel Knight uses ANGLE as rendering engine, as seen in final credits.
RuneScape NXT client uses ANGLE to provide a DirectX 9 compatibility mode for older graphics cards.
Krita started using ANGLE as the rendering engine on Windows starting on version 3.3.0.
Microsoft Edge has ANGLE as a rendering option in the "Standards Preview" page in Windows Insider build 17025.
GTA V includes ANGLE in its installation, normally under the system drive.
OpenRA uses ANGLE for rendering on Windows.
References
External links
Free 3D graphics software
Free software programmed in C++
C++ libraries
Application programming interfaces
Cross-platform software
Graphics libraries
Software using the BSD license |
13889493 | https://en.wikipedia.org/wiki/University%20of%20Versailles%20Saint-Quentin-en-Yvelines | University of Versailles Saint-Quentin-en-Yvelines | Versailles Saint-Quentin-en-Yvelines University (, UVSQ) is a French public university created in 1991, located in the department of Yvelines and, since 2002, in Hauts-de-Seine.
Consisting of eight separate campuses, it is mainly located in the cities of Versailles, Saint-Quentin-en-Yvelines, Mantes-en-Yvelines and Vélizy-Villacoublay / Rambouillet. It is one of the five universities of the Academy of Versailles.
Versailles Saint-Quentin-en-Yvelines University is a constituent university of the federal Paris-Saclay University.
It is one of the four universités nouvelles (new universities) inaugurated in the Île-de-France region after the 2000 University project (). It has a population of 19,000 students, a staff of 752 people, and 1,389 teachers and researchers, as well as an additional 285 external teachers.
The university teaches courses in the fields of natural science, social science, political science, engineering, technology, and medicine. It also provides interdisciplinary courses covering the relationships across economics, ethics, natural environment and sustainable development.
History
Origin
A branch of the Pierre and Marie Curie University was created in Versailles in 1987 to increase the university's capacity for enrollment. It focused on the study of science, and merged with the law annex of Paris West University Nanterre La Défense established two years earlier at Saint-Quentin-en-Yvelines. In the 1990s, the Ministry of Higher Education and Research developed the French higher education modernization plan called Université 2000, and created eight unaffiliated universities, known as new universities. The Versailles Saint-Quentin-en-Yvelines University was officially created on 22 July 1991 from the relocation of the two centres of Pierre and Marie Curie University and Paris West University Nanterre La Défense.
A new status
In 1996, the status of the university was changed by law. The law, legislated in 1984, required that the university have elected officials. Michel Garnier was its first president, and created committees that included a board of directors, a scientific council, and a student life council. Dominique Gentile was the second president, and created new annexes such as the Ph.D. School, the University Institute of Technology, and Professionalized University Institute of art, science, culture, and multimedia. During this period, the College of Medicine of Paris-Descartes University was relocated to Saint-Quentin-en-Yvelines and renamed "Paris-Île-de-France-Ouest" (PIFO). It is part of the UVSQ today. Increasing tertiary enrollment in France from 1997 to 2002 increased enrollment at UVSQ.
2002 to present
In 2002, Sylvie Faucheux became the president of the university. The university applied the réforme LMD to its courses in 2004.
The CFA d'Alembert was created at Guyancourt in 2006. In 2007, the Unité de formation et de recherche médicale Paris Île-de-France Ouest (PIFO) moved to Saint-Quentin-en-Yvelines, and UniverSud Paris was established, with Paris-Sud 11 University, École Normale Supérieure de Cachan, and UVSQ among its members. In 2010, the humanities and social sciences college of UVSQ was split into four institutes: the institut des études culturelles (cultural studies institute), the institut supérieur de management (management institute), the institut de langues et études internationales (languages and international studies institute), and the social sciences research institute. The university expanded to help develop the plateau de Saclay (sometimes described as a European Silicon Valley), an area with world-class universities and research centers. As a founding member of UniverSud Paris, UVSQ supported scientific cooperation and the Paris-Saclay research-intensive cluster. A partnership with Cergy-Pontoise University was considered in 2011, and the Université du grand ouest parisien was created in February 2012. A joint Institut d'études politiques for the two universities is under study.
Students' profile and demographic changes
This table shows the changes in student population, from 1993 to 2013. The number of students enrolled has more than doubled over the past ten years.
In 2013-2014, the UVSQ welcomed approximately 9,600 undergraduate, 6,600 graduate and 700 post-graduate students.
The most popular degree courses are law, economics, and management, with almost 6,200 students representing 37% of total enrollment in 2013, followed by science and engineering school (31%).
Overall, more than half of the undergraduate students enrolled every year are female (60%), except for first-year students at Mantes and Vélizy IUT (University Institute of Technology), two-thirds of whom are male.
The social backgrounds of students are, on average, more advantaged than those of students matriculated in French public universities generally. Approximately 50% of first-year students come from business and professional families, and overall about four out of five students belong to wealthy and cultivated social classes. The percentage of students coming from low-income families increased from 14% in 1992 to 20–23% in 2013, but remains a minority. This rate, which is common to most selective universities, reflects the population of the region where the UVSQ is located. Two thirds of first-year students come from the Yvelines department (66%), with a significant concentration around Versailles and Saint-Quentin.
The university welcomes an increasing number of international students; in 2010 there were 2,400 enrolled from 72 different countries, making up 13.3% of UVSQ students. Most were enrolled in Ph.D. programs; 45% of all Ph.D. students were from other countries.
Buildings and sites
Campus
The university is primarily located on four campuses in Versailles, Saint-Quentin-en-Yvelines, Mantes-en-Yvelines and Vélizy-Villacoublay / Rambouillet, but has eight campuses in total across two departments and seven communes; together these cover 160,000 m². The Versailles campus hosts the sciences college and ISTY, the computer science college. The Saint-Quentin-en-Yvelines campus hosts the disciplines of law and political sciences, social sciences, medical research and the sciences of the universe observatory. Another campus of the sciences college is located at Le Chesnay. Vélizy-Villacoublay, which includes the IUT of Vélizy and Rambouillet, is one of its annexes. The campus of Mantes-la-Jolie houses the IUT of Mantes-en-Yvelines, and Mantes-la-Ville houses the mechatronics college of ISTY.
Other graduate school and institutes
The Unité de formation et de recherche médicale Paris Île-de-France Ouest (in English Paris Île-de-France West Medical Research and Training Department; also called PIFO or Paris Ouest) is a faculty department of UVSQ. PIFO left Paris Descartes University to join UVSQ between 2001 and 2002, and is located in Guyancourt. Associated hospitals include the Raymond Poincaré University Hospital, Ambroise-Paré Hospital, Foch Hospital, André Mignot Hospital, Poissy St-Germain Hospital, Sainte-Périne Hospital and René Huguenin Hospital. Two schools of midwifery are located in the PIFO department.
Observatory
The observatory, also called OVSQ, supports sustainable development, an important goal for UVSQ. It is located on Saint-Quentin-en-Yvelines campus, and emphasizes observing, teaching, and supporting the environment. OVSQ researches environmental changes, including its sanitary and socioeconomic impacts, and is involved in spatial programs supported by the CNES (French National Center of Spatial Studies) and the ESA (European Spatial Agency). OVSQ supports international projects that monitor the atmosphere, and develops instruments to observe and analyze natural and social phenomena. To prepare future generations in the field of sustainable development, the OVSQ provides an interdisciplinary culture to the students comprising economics, humanities, and environmental studies. In France, OVSQ partners with Fondaterra (European foundation for sustainable territories) and the international industrial professorship, Econoving, to develop sustainable environments and eco-innovations. The OVSQ partners with the Pierre-Simon Laplace Institute, located in Guyancourt. The observatory contributes to climate study, and supports the Intergovernmental Panel on Climate Change (IPCC). The OVSQ allows the university to be an important part of the Climate-Environment-Energy center of the Paris-Saclay Campus. The observatory also contributes to the European community of Knowledge and Innovation dedicated to the Climate (KIC).
PhD graduate schools
The university has PhD graduate schools that oversee PhD students and habilitation. There are five schools, and 102 doctorates were awarded in 2009.
The Cultures, Regulations, Institutions and Territories PhD graduate school, which studies various topics specific to social sciences and humanities as well as legal and political sciences.
The Genome Organizations PhD graduate school (co-accreditation with University of Évry Val d'Essonne), which is interested in biology and genomics in particular, and interfaces with mathematics, computer science, physics and chemistry.
The Environmental Science in Ile de France PhD graduate school (co-accreditation with Pierre and Marie Curie University and École normale supérieure Paris-Saclay) covers the multidisciplinary fields related to the understanding of the physical, chemical and biological equilibria of the Earth's environment.
The Science and Technology of Versailles PhD graduate school, which studies chemistry, physics, mathematics, and engineering sciences.
The public health PhD graduate school (in partnership with Paris-Sud 11 University and Paris Descartes University), which has three laboratories: research centre for epidemiology and population health, health-environment-aging, pharmaco-epidemiology and infectious diseases.
Libraries
Its main library, inaugurated in June 2002, has an area of 8,400 m² and holds over 100,000 books. In 2005, the university inaugurated the library of Saint-Quentin-en-Yvelines, with an area of 7,500 m² over three levels and 1,100 reading places. September 2011 saw the commissioning of the new library on the sciences campus in Versailles. In total, the university has six academic libraries, located on the campuses already mentioned as well as those of Vélizy, Boulogne-Billancourt, Rambouillet and Mantes, and holding about 200,000 books, 5,000 digital books and 26,000 magazines and newspapers.
Administration and organization
Governance
Like all the établissement public à caractère scientifique, culturel et professionnel, the university is managed by a president elected by a board of directors, who is a member of the three councils of the university. Staff representatives (including academics) and external representatives on the boards of the university have a term of four years, and student members are elected for two years.
The board of directors, which has 30 members, determines the policy of the institution, and is responsible for its budget, jobs repartition, and the approval of agreements and conventions.
The academic and university life council, which has 40 members, is responsible for initial training and continuing education, and helps the board of directors by asking for the accreditation of new courses.
The scientific council, which has 40 members, is responsible for research activities and development, and gives its opinion on changing Ph.D. courses.
Presidents
Since its foundation in 1991, there have been six presidents at Versailles Saint-Quentin-en-Yvelines University.
Michel Garnier was the first President of UVSQ, from 1991 to 1997. An alumnus of the École Normale Supérieure, he is a professor of geophysics, electronics and signal processing. As President of Pierre and Marie Curie University, he created a branch in Versailles in 1987 called the Faculty of Science, and he then served as president of UVSQ from 1991 to 1997.
Dominique Gentile served as President of UVSQ from 1997 to 2002. A university professor and qualified teacher of physical sciences, he began his career working successively in the laboratory of fluid mechanics and then in the laboratory of physical mechanics. First a professor and then vice-president, he became president of UVSQ in 1997. In 2003 he became director of the Institut National des Sciences et Techniques Nucléaires, and in 2008 director of the Institut des sciences et techniques des Yvelines (ISTY), a public engineering school at UVSQ.
Sylvie Faucheux is a French professor who served as the third president of Versailles Saint-Quentin-en-Yvelines University, from 2002 until 2012. She is an academic, a French politician, and a specialist in environmental economics and sustainable development. In December 2002, she was elected President of UVSQ. She was reelected in 2008 for a four-year term until April 12, 2012. She was director of the Academy of Dijon from October 2012 to February 2014.
Jean-Luc Vayssière, Professor of Biology, was the fourth president of Versailles Saint-Quentin-en-Yvelines University, from 2012 until April 2016. Before being elected president, he was chief of staff at UVSQ between 2004 and 2008. From 2008, he was vice-president of the board of directors.
Didier Guillemot, a doctor, was the fifth president of UVSQ, from May 2016 to September 2017. Before being elected President, he was head of the Biostatistics, Biomathematics, Pharmacoepidemiology and Infectious Diseases Laboratory between 2007 and 2016.
Alain Bui, a doctor, is the sixth and current president of UVSQ, since October 2017. Before being elected President, he was a teacher at the sciences college between 2008 and 2016, and vice president from 2016 to 2017.
Finances
Budget
The university had a budget of 166 million euros in 2011, which was a 16% increase from 2010. The university had a budget of 143 million euros in 2010, which was 13.1% higher than that of 2009. The budget for normal operations grew by 29.2% during the period 2007–2010. The amount of investment over the same period was about 3.3 million euros in total.
Employment structure except for teachers of hospital department
In 2009-2010, according to the official report published in June 2010 by the human resources department of the Ministry of Higher Education and Research, Versailles Saint-Quentin-en-Yvelines University employed 229 non-permanent teachers, corresponding to 119 full-time equivalents, or 20.4% of the university's teachers, placing it well above the national average of 15.5%. The total number of permanent teachers is 464, including 121 university professors, 250 docents or assistants, and 93 secondary school teachers.
Versailles University foundation
The UVSQ Foundation is intended to help UVSQ in the Yvelines. Society, scientific higher education, and scientific research evolve in a very competitive environment. In the Yvelines, UVSQ plays a vital role in high-quality education and participates actively in the construction of Paris-Saclay University, in close collaboration with Grandes Écoles (HEC, Polytechnique, Supélec, etc.), Université Paris-Sud and research organizations including CNRS, CEA, INSERM, and INRA. The mission of the UVSQ Foundation is to facilitate this evolution. The UVSQ Foundation offers training programs, research, and support to its sponsors with projects relating to social responsibility.
The UVSQ foundation was created in May 2010 by nine founding members: the university, technical center for mechanical industries, Graduate school of engineering in electrical engineering, Graduate school of aeronautical technology and automotive, IFP Energies Nouvelles, the National research institute for transport and safety, and PSA Peugeot Citroën companies, Renault, Valeo and Safran group. Its projects include a young talent UVSQ scholarship, development of the university library holdings, Equal Opportunities Program, business researcher club, and thesis prize.
Academic profile
Rankings
Components
Versailles Saint-Quentin-en-Yvelines University is divided into ten components. There are six faculties, two University Institutes of Technology, one School of Engineering and one Observatory.
Versailles Saint-Quentin-en-Yvelines University is also managing three schools in partnership with other institutions.
Courses
In 2012, Versailles Saint-Quentin-en-Yvelines University offered 50 Bachelor's degrees, 53 diplômes universitaires (university degrees), 10 e-learning courses, 95 Master's degrees, 13 diplômes universitaires de technologie (university diplomas in technology), 2 Diplômes d'Ingénieur, 1 Doctor of Medicine degree, 1 midwifery degree and 2 diplômes d'accès aux études universitaires (access-to-university degrees).
In 2012, the Bachelor's degree is issued in four areas: arts-humanities-languages, law-economy-management, humanities and social sciences, and science-technology-health. In 2012, the Master's degree is issued in five areas: arts-humanities-languages, law-economy-management, humanities and social sciences, environmental science-territory and economics, and science-technology-health.
Research
Research activities at the university are done in 35 laboratories. Twelve of them are affiliated with the French National Centre for Scientific Research. It has six departments and a total of nearly 950 researchers and 715 PhD students.
The Chemistry, Physics, Materials, Renewable Energy Department, center for the study of materials and solids hosted in 2013 the Institut de la fiabilité des matériaux pour la mécatronique et les systèmes complexes (Institute of materials reliability for complex systems and mechatronics).
The Environment and Sustainable Development department covers the sciences, humanities, economics, and medicine.
The Mathematics, Computer Science, Engineering Sciences department examines two themes: mathematics and computer and systems engineering.
The Biology and Health department includes about 300 researchers and 1,200 medical students. Research includes biology, medicine, epidemiology and population health. It is attached to several hospitals, including Ambroise-Paré, Raymond Poincaré, Sainte-Périne, and Foch.
The Cultures, Humanities and Sciences department has three main areas of research and training: languages and civilizations, culture and business, and social sciences.
The Institutions and Organizations department focuses on management science, law and political science.
The Institut Pierre-Simon Laplace (Pierre-Simon Laplace Institut), a research institute in global environmental sciences, located at Guyancourt, has six laboratories, three of which are under partial guardianship of UVSQ: terrestrial and planetary study center, aeronomy department, and laboratoire des Sciences du Climat et de l'Environnement (Climate science and environment laboratory).
The aeronomy department and a part of the terrestrial and planetary study center merged on 1 January 2009 into the laboratoire atmosphères, milieux et observations spatiales (LATMOS, atmospheres, environments and space observations laboratory), and was placed under the supervision of UVSQ.
Teachers and former teachers
Several sociologists teach or have taught at the university, including Roland Guillon, known for his work on the problem of employment and capital; Laurent Mucchielli, specialist in criminology including issues of crime and violence of immigrant populations; Philippe Robert, specialized in the study of delinquency and deviance; and Étienne Anheim, Didier Demazière and Claude Dubar.
UVSQ also counts among its current or former teachers historians like Bernard Cottret, honorary member of the Institut Universitaire de France; Christian Delporte, specialist in political and cultural history of the 20th-century of France; Bruno Laurioux, French Middle Ages historian; Jean-Yves Mollier and Loïc Vadelorge, both specialized in contemporary history.
In science, people like the creator of the ext2 file system Rémy Card, the docent in practice and theory of photography Fabien Danesi, the deputy director of the École Normale Supérieure Jean-Charles Darmon or the chemist and member of the French Academy of Sciences Gérard Férey teach or have taught at the university.
Valérie-Laure Bénabou, a teacher of private law, is also on the faculty, and Pierre-Hugues Barré is among the tutors.
Doctors honoris causa
During the honorary degree ceremony on 18 October 2011 at the Versailles Orangerie, Sylvie Faucheux, then President of the university, awarded degrees, in the presence of Alain Boissinot, to six personalities: Andrew Abbott (sociology teacher at the University of Chicago), George Bermann (international law teacher at Columbia University), Amos Gitai (Israeli director and filmmaker), Robin Hartshorne (mathematics teacher at the University of California), Günther Lottes (history teacher at the University of Potsdam) and Seiji Miyashita (physics teacher at the University of Tokyo).
External image
The university has a logo showing a white sunrise over a green earth, referring to sustainable development, a major theme of the university. Its newsletter, named T'DACtu, is published every two months for partners of the university as well as students and staff. A video newsletter called UVSQ & Vous is also released each month. The university initiated the European project Europolytec, whose purpose is to build a website dedicated to careers in computer science and mechatronics.
Student life
Student services
Direction de la Vie Étudiante
The Direction de la Vie Étudiante (DVE) advises and supports new foreign students, helps students find housing, and hosts student initiatives through its culture, sports, and associations services. The DVE is also responsible for managing spaces where students eat lunch, take breaks, and work. These spaces are located on the first floor of the Buffon building on the Versailles campus and in the Vauban building on the Saint-Quentin-en-Yvelines campus. The DVE provides students with discounts for cultural events; there is a cultural program in the Yvelines each semester, with cinemas, theatres, and concerts. On the Guyancourt, Versailles and Vélizy campuses, students have access to 20 sports activities.
CROUS
The CROUS (Centres Régionaux des Oeuvres Universitaires et Scolaires) is a service to improve student life. Every student can access its services. CROUS can help students find accommodation closer to their universities. In 2011, more than 8,500 students were housed in 25 residences managed by the CROUS. The CROUS awards scholarships to students, and has caseworkers who help students. On campus, there are restaurants managed by the CROUS, where students have meals at reduced prices. The CROUS provides free job or internship advertisements for students.
Financial help
Students can apply between 15 January and 30 April to receive financial aid for the next school year. Criteria include household income, number of dependent children, and distance from campus. Aid is also available for qualifying applicants preparing competitive examinations for public service careers. Students can apply for bank loans guaranteed by the French state, with no guarantor needed.
Student associations
Approximately 30 associations offer students activities in science, social sciences and humanities, law, medicine, humanitarian-social-environmental work, disability, international affairs, communication, reflection, student involvement and mechatronics. It is easy to join or to create an association. Being in an association gives students the opportunity to imagine and develop specific non-profit projects. Associations can present these projects and receive a grant from the FSDIE commission (Solidarity and Development Fund for Student Initiatives). This funding comes partly from the registration fees paid by students each year. The commission gathers three to four times a year (September, November, March and June) and votes to award grants to projects. Other financial partners of the university can also help the associations finance projects. In order to give students a place to gather, develop projects, or relax, a student house opened in 2013 on the Versailles campus. Designed by architect Fabienne Bulle, it has an area of 1,730 m² and accommodates local trade and cultural activities, a multipurpose room, an art room, service areas, a cafeteria, and association offices.
International relations
From the beginning, the university has developed international networks such as Erasmus, the Conférence des recteurs et des principaux des universités du Québec (CREPUQ), and other partnerships with universities abroad. The number of partners was about 230 in 2011. The university welcomed 330 international students in 2010. The proportion of Erasmus students from 2003 to 2008 was between 0.41% and 0.67% of the student body, ranking the university 61st out of 75 French universities for this programme.
The university welcomes foreign students who want to obtain a French degree. In 2010, there were about 2,400 foreign students, which was 13.3% of the student body. There is a greater percentage in the PhD courses, where almost half of the students (45%) are foreign.
Sociology
Of the 14,226 students in 2004, 77.1% held a Baccalauréat général (general Baccalauréat), 12.3% a Baccalauréat technologique (technological Baccalauréat) and 0.7% a Baccalauréat professionnel (vocational Baccalauréat). Furthermore, 12.5% of students received scholarships. Concerning social origins, 49.2% of the students came from a favored social background, 20.2% from disadvantaged social backgrounds, and 30.6% from an average social background.
Notes and references
Notes
References
Bibliography
Agence d'évaluation de la recherche et de l'enseignement supérieur, Rapport d'évaluation de l'université de Versailles-Saint-Quentin-en-Yvelines, January 2010
Comité national d'évaluation des établissements publics à caractère scientifique, culturel et professionnel, L'université de Versailles-Saint-Quentin-en-Yvelines, Rapport d'évaluation, December 2006
External links
Open archives of the University
Universities and colleges in Versailles
1991 establishments in France
Educational institutions established in 1991
Universities and colleges in Saint-Quentin-en-Yvelines |
166842 | https://en.wikipedia.org/wiki/Safari%20%28web%20browser%29 | Safari (web browser) | Safari is a graphical web browser developed by Apple. It is primarily based on open-source software, and mainly WebKit. It succeeded Netscape Navigator, Cyberdog and Internet Explorer for Mac as the default web browser for Macintosh computers. It is supported on macOS, iOS, and iPadOS; a Windows version was offered from 2007 to 2010.
Safari was introduced in January 2003 and, as of 2021, has progressed through fifteen major versions. The third version (January 2007) brought compatibility to the iPhone via iPhone OS 1, while the Macintosh edition was claimed to deliver the fastest browser performance at the time. The fifth version (June 2010) introduced a less distracting page reader, extensions, and developer tools; it was also the final version for Windows. The eleventh version (September 2017) added support for Intelligent Tracking Prevention. The thirteenth version included various privacy and application updates, such as FIDO2 USB security key authentication and web Apple Pay support. The fourteenth version, released in November 2020, was claimed to be 50% faster than Google Chrome and to consume less battery power than other mainstream competitors. The fifteenth version (July 2021) is the current revision, featuring a redesigned interface.
Apple used a remotely updated plug-in blacklist to prevent potentially dangerous or vulnerable plugins from running in Safari. In the Pwn2Own contest at the 2008 CanSecWest security conference, Safari caused Mac OS X to be the first OS to fall in a hacking competition. Safari has received criticism for its approach to software distribution and its past limitations on ad blockers. The Safari Developer Program, which granted members the ability to develop extensions for the browser, was available for US$99 per year. , it was ranked as the second most-used web browser after Google Chrome, with approximately 18.43% market share worldwide and 38.88% in the US.
History and development
Prior to 1997, Apple's Macintosh computers shipped with the browsers Netscape Navigator and Cyberdog. These were later replaced by Microsoft's Internet Explorer for Mac in Mac OS 8.1, under a five-year agreement between Apple and Microsoft. During this period, Microsoft released three major revisions of Internet Explorer for Mac, which shipped with Mac OS 8 and Mac OS 9, though Apple continued to support Netscape Navigator as an alternative. In May 2000, Microsoft released a Mac OS X edition of Internet Explorer for Mac, which was bundled as the default browser in all Mac OS X releases from Mac OS X DP4 to Mac OS X v10.2.
Before the name Safari was chosen, several others were drafted, including 'Freedom' and 'iBrowse'. For over a year during development, the browser was privately referred to by the code name 'Alexander'.
Safari 1
On January 7, 2003, at Macworld San Francisco, Apple CEO Steve Jobs announced Safari, based on WebKit, the company's internal fork of the KHTML rendering engine. Apple released the first beta version, exclusively for Mac OS X, the same day. Several official and unofficial beta versions followed until version 1.0 was released on June 23, 2003. On Mac OS X v10.3, Safari was pre-installed as the system's default browser, rather than requiring a manual download as with previous Mac OS X versions. Safari's predecessor, Internet Explorer for Mac, was included in 10.3 as an alternative.
Safari 2
In April 2005, engineer Dave Hyatt fixed several bugs in Safari. His experimental beta passed the Acid2 rendering test on April 27, 2005, making Safari the first browser to do so. Safari 2.0, released on April 29, 2005, was the sole browser offered by default with Mac OS X 10.4. Apple touted this version as delivering a 1.8x speed boost over version 1.2.4, but it did not yet include the Acid2 bug fixes. Those changes were initially unavailable to end-users unless they compiled the WebKit source code themselves or ran one of the nightly automated builds available at OpenDarwin. Version 2.0.2, released on October 31, 2005, finally included the Acid2 fixes.
In June 2005, in response to criticism from KHTML developers over a lack of access to change logs, Apple moved the development source code and bug tracking of WebCore and JavaScriptCore to OpenDarwin, and open-sourced WebKit as a whole. The source code for non-renderer aspects of the browser, such as its GUI elements, remained proprietary. Safari 2.0.4, the final stable version of Safari 2 and the last version released exclusively for Mac OS X, was released on January 10, 2006. It was only available within Mac OS X Update 10.4.4, and it delivered fixes to layout and CPU usage issues, among other improvements.
Safari 3
On January 9, 2007, at Macworld San Francisco, Jobs announced that Safari had been ported to the newly introduced iPhone within iPhone OS (later called iOS). The mobile version was capable of displaying full, desktop-class websites. At WWDC 2007, Jobs announced Safari 3 for Mac OS X 10.5, Windows XP, and Windows Vista. He ran a benchmark based on the iBench browser test suite comparing Safari to the most popular Windows browsers, and claimed that Safari had the fastest performance. The claim was later examined by a third-party site called Web Performance, which measured HTTP load times. It verified that Safari 3 was indeed the fastest browser on the Windows platform in terms of initial data loading over the Internet, though it was only negligibly faster than Internet Explorer 7 and Mozilla Firefox when serving static content from the local cache.
The initial Safari 3 beta version for Windows, released on the same day as its announcement at WWDC 2007, contained several bugs and a zero-day exploit that allowed remote code execution. The issues were fixed by Apple three days later, on June 14, 2007, in version 3.0.1. On June 22, 2007, Apple released Safari 3.0.2 to address some bugs, performance problems, and other security issues. Safari 3.0.2 for Windows handled some fonts that were missing in the browser but already installed on Windows computers, such as Tahoma, Trebuchet MS, and others. The iPhone was released on June 29, 2007, with a version of Safari based on the same WebKit rendering engine as the desktop version, but with a modified feature set better suited to a mobile device. The version number of Safari reported in its user agent string was 3.0, in line with the contemporary desktop editions.
The first stable, non-beta version of Safari for Windows, Safari 3.1, was offered as a free download on March 18, 2008. In June 2008, Apple released version 3.1.2, which addressed a security vulnerability in the Windows version where visiting a malicious web site could force a download of executable files and execute them on the user's desktop. Safari 3.2, released on November 13, 2008, introduced anti-phishing features using Google Safe Browsing and Extended Validation Certificate support. The final version of Safari 3 was version 3.2.3, which was released on May 12, 2009, with security improvements.
Safari 4
A developer preview of Safari 4 was released on June 11, 2008, followed by a public beta on February 24, 2009. It was the first version of Safari to completely pass the Acid3 rendering test. It incorporated the WebKit JavaScript engine SquirrelFish, which significantly enhanced the browser's script interpretation performance, by a claimed factor of 29.9. SquirrelFish later evolved into SquirrelFish Extreme, also marketed as Nitro, with a claimed 63.6x performance improvement.
Safari 4 used Cover Flow to display History and Bookmarks, and it featured speculative loading, which automatically pre-loaded the documents required to visit a particular website. The Top Sites view displayed up to 24 thumbnails of frequently visited sites at startup. The desktop version of Safari 4 adopted a redesign similar to that of the iPhone version. The update also brought many developer tool improvements, including the Web Inspector, CSS element viewing, a JavaScript debugger and profiler, offline tables, database management, SQL support, and resource graphs, in addition to CSS retouching effects, CSS canvas, and HTML5 content support. On Windows, it replaced the initial Mac OS X-like interface with native Windows themes and native font rendering.
Safari 4.0.1 was released for Mac on June 17, 2009, and fixed Faces bugs in iPhoto '09. Safari 4 in Mac OS X v10.6 "Snow Leopard" had built-in 64-bit support, which made JavaScript load up to 50% faster. It also had native crash resistance, which kept the browser intact if a plugin such as Flash Player crashed, so that other tabs or windows were unaffected. Safari 4.0.4, the final version, was released on November 11, 2009, for both Mac and Windows, further improving JavaScript performance.
Safari 5
Safari 5 was released on June 7, 2010; the final Windows release was version 5.1.7. It featured a less distracting page reader and 30% faster JavaScript performance. It incorporated numerous developer tool improvements, including HTML5 interoperability and access to secure extensions. The progress bar was re-added in this version as well. Safari 5.0.1 enabled the Extensions preference pane by default, rather than requiring users to manually enable it in the Debug menu.
Apple released Safari 4.1 concurrently with Safari 5, exclusively for Mac OS X Tiger. It included many of the features found in Safari 5, though it excluded Safari Reader and Safari Extensions. Apple released Safari 5.1 for both Windows and Mac on July 20, 2011, alongside Mac OS X 10.7 Lion; it offered faster performance and added the 'Reading List' feature. The company also released Safari 5.0.6 for Mac OS X 10.5 Leopard, though the new functions were excluded for Leopard users.
Several HTML5 features became available in Safari 5. It added support for full-screen video, closed captions, geolocation, EventSource, and a now-obsolete early variant of the WebSocket protocol. This version also added full-text history search and a new search engine option, Bing. Safari 5 introduced Reader, which displays web pages in a continuous view without advertisements, and a smarter address field with Domain Name System (DNS) prefetching, which automatically found links and looked up addresses so that new web pages loaded faster. The Windows version received graphics acceleration as well. The blue inline progress bar returned to the address bar, in addition to the spinning bezel and loading indicator introduced in Safari 4. The Top Sites view gained a button to switch to full history search. Other changes included an improved Web Inspector and an Extension Builder for developers of Safari Extensions. Safari 5 supports Extensions, add-ons that customize the web browsing experience, built using web standards such as HTML5, CSS3, and JavaScript.
Safari 6
Safari 6.0 was previously referred to as Safari 5.2, until Apple renamed it at WWDC 2012. The stable release of Safari 6 coincided with the release of OS X Mountain Lion on July 25, 2012, and was integrated into the operating system. As a result, it was no longer available for download from Apple's website or any other source. Apple released Safari 6 via Software Update for users of OS X Lion. It was not released for OS X versions before Lion, or for Windows; the company later quietly removed references and links to the Windows version of Safari 5, and Microsoft removed Safari from its browser-choice page.
On June 11, 2012, Apple released a developer preview of Safari 6.0 with a feature called iCloud Tabs, which synced open tabs across any iOS or OS X device running the latest software. It added new privacy features, including an "Ask websites not to track me" preference and the ability for websites to send notifications to OS X 10.8 Mountain Lion users, though it removed RSS support. Safari 6 gained the Share Sheets capability in OS X Mountain Lion, with the options Add to Reading List, Add Bookmark, Email this Page, Message, Twitter, and Facebook. Tabs with full-page previews were added, too, along with minor performance improvements and additional CSS support. Various features were removed, including the Activity Window, the separate Download Window, direct support for RSS feeds in the URL field, and bookmarks. The separate search field and address bar were also no longer available as a toolbar configuration option; they were replaced by the smart search field, a combination of the two.
Safari 7
Safari 7 was announced at WWDC 2013, and it brought a number of JavaScript performance improvements. It introduced a revamped Top Sites view, a Sidebar with Shared Links, and a power-saving feature that paused unused plugins. Safari 7 for OS X Mavericks and Safari 6.1 for Lion and Mountain Lion were released along with OS X Mavericks at a special event on October 22, 2013.
Safari 8
Safari 8 was announced at WWDC 2014 and was released within OS X Yosemite. It brought stronger privacy management, improved iCloud integration, and a redesigned interface. It was also faster and more efficient, with additional developer features including the 2D and 3D interactive graphics JavaScript API WebGL, JavaScript Promises, CSS Shapes and compositing markup, IndexedDB, Encrypted Media Extensions, and the SPDY protocol.
Safari 9
Safari 9 was announced at WWDC 2015 and was released within OS X El Capitan. New features included audio muting, more options for Safari Reader, and improved autofill. It was not fully available on the previous OS X Yosemite, as Apple required an upgrade to El Capitan.
Safari 10
Safari 10 was released for OS X Yosemite and OS X El Capitan on September 20, 2016. It had redesigned Bookmarks and History views, and double-clicking focused on a particular folder. The update allowed Safari extensions to save content directly to Pocket and Dic Go. Software improvements included better AutoFill from the Contacts card, a Web Inspector Timelines tab, and in-line sub-headlines, bylines, and publish dates. It tracked and re-applied the zoom level for websites, and legacy plug-ins were disabled by default in favor of HTML5 versions of websites. Recently closed tabs could be reopened via the History menu, by holding the "+" button in the tab bar, or by using Shift-Command-T. When a link opened in a new tab, it became possible to hit the back button or swipe to close it and return to the original tab. Debugging was now supported in the Web Inspector. Safari 10 also included several security updates, including fixes for six WebKit vulnerabilities and issues related to Reader and Tabs. The first version of Safari 10 was released on September 20, 2016, and the last version (10.1.2) was released on July 19, 2017.
Safari 11
Safari 11 was released within macOS High Sierra on September 19, 2017, and was also compatible with OS X El Capitan and macOS Sierra. Safari 11 included several new features, such as Intelligent Tracking Prevention, which aims to prevent cross-site tracking by placing limitations on cookies and other website data. Intelligent Tracking Prevention allowed first-party cookies to continue tracking browser history, though with time limits: for example, first-party cookies from ad-tech companies such as Google/Alphabet Inc. were set to expire 24 hours after the visit.
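The 24-hour window described above can be illustrated with a short model. This is an illustrative simplification only, not Apple's actual implementation; the function name, the tracker flag, and the hard 24-hour cutoff are assumptions made for the example.

```python
from datetime import datetime, timedelta

# Illustrative window for cookies from domains classified as trackers.
TRACKING_COOKIE_LIFETIME = timedelta(hours=24)

def is_cookie_usable(set_at: datetime, now: datetime, domain_is_tracker: bool) -> bool:
    """Model of the described policy: cookies from domains classified as
    trackers stop being usable 24 hours after the visit; other cookies
    follow their normal lifetime (not modeled here)."""
    if not domain_is_tracker:
        return True
    return now - set_at <= TRACKING_COOKIE_LIFETIME

# A tracker's first-party cookie still works one hour after the visit...
assert is_cookie_usable(datetime(2017, 9, 19, 12), datetime(2017, 9, 19, 13), True)
# ...but not 25 hours later.
assert not is_cookie_usable(datetime(2017, 9, 19, 12), datetime(2017, 9, 20, 13), True)
```

The real feature uses on-device classification to decide which domains count as trackers; the boolean flag here stands in for that decision.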
Safari 12
Safari 12 was released within macOS Mojave on September 17, 2018, and was made available for macOS Sierra and macOS High Sierra the same day. Safari 12 included several new features, such as icons in tabs, Automatic Strong Passwords, and Intelligent Tracking Prevention 2.0. Safari 12.0.1 was released on October 30, 2018, within macOS Mojave 10.14.1, and Safari 12.0.2 was released on December 5, 2018, with macOS 10.14.2. Support for developer-signed classic Safari Extensions was dropped, and this version was the last to support the official Extensions Gallery. Apple encouraged extension authors to switch to Safari App Extensions, which triggered negative feedback from the community.
Safari 13
Safari 13 was announced within macOS Catalina at WWDC 2019 on June 3, 2019. Safari 13 included several new features, such as prompting users to change weak passwords, FIDO2 USB security key authentication support, Sign in with Apple support, Apple Pay on the Web support, and increased speed and security. Safari 13 was released on September 20, 2019, for macOS Mojave and macOS High Sierra.
Safari 14
In June 2020, it was announced that macOS Big Sur would include Safari 14. Safari 14 introduced new privacy features, including Privacy Report, which shows blocked content and privacy information on web pages; users also receive a monthly report on the trackers Safari has blocked. Extensions can be enabled or disabled on a site-by-site basis. Safari 14 introduced partial support for the WebExtensions API used in Google Chrome, Microsoft Edge, Firefox, and Opera, making it easier for developers to port their extensions from those web browsers to Safari. Support for Adobe Flash Player was dropped, three months ahead of Flash's end-of-life. A built-in translation service allows translation of a page into another language. Safari 14 was released as a standalone update for macOS Catalina and Mojave users on September 16, 2020. It added Ecosia as a supported search engine.
Safari 15
Safari 15 was released within macOS Monterey, and was also made available for macOS Big Sur and macOS Catalina on September 20, 2021. It featured a redesigned interface, with tab groups and tabs that blended better into the page background. There was also a new home page, and extension support arrived in the iOS and iPadOS editions.
Safari Technology Preview
Safari Technology Preview was first released alongside OS X El Capitan 10.11.4. Safari Technology Preview releases include the latest version of WebKit, with web technologies planned for future stable releases of Safari, so that developers and users can install the Technology Preview on a Mac, test those features, and provide feedback.
Safari Developer Program
The Safari Developer Program was a program dedicated for in-browser extension and HTML developers.
It allowed members to write and distribute extensions for the browser through the Safari Extensions Gallery. It was initially free, until it was folded into the Apple Developer Program at WWDC 2015, which cost $99 a year; the charge prompted frustration from developers. Within OS X El Capitan, Apple implemented Secure Extension Distribution to further improve security, automatically updating all extensions in the Safari Extensions Gallery.
Other features and system requirements
On macOS, Safari is a Cocoa application. It uses Apple's WebKit for rendering web pages and running JavaScript. WebKit consists of WebCore (based on Konqueror's KHTML engine) and JavaScriptCore (originally based on KDE's JavaScript engine, KJS). Like KHTML and KJS, WebCore and JavaScriptCore are free software, released under the terms of the GNU Lesser General Public License. Some Apple improvements to the KHTML code have been merged back into the Konqueror project, and Apple has released additional code under an open-source 2-clause BSD-like license. The version of Safari included in Mac OS X v10.6 (and later versions) is compiled for 64-bit architecture; Apple claimed that running Safari in 64-bit mode would increase rendering speeds by up to 50%.
Until Safari 6.0, it included a built-in web feed aggregator that supported the RSS and Atom standards. Features include Private Browsing (a mode in which the browser retains no record of the user's web activity), the ability to archive web content in WebArchive format, the ability to email complete web pages directly from a browser menu, the ability to search bookmarks, and the ability to share tabs between all Mac and iOS devices running appropriate versions of software via an iCloud account. WebKit2 is a multiprocess API for WebKit in which web content is handled by a separate process from the application using WebKit. Apple announced WebKit2 in April 2010; Safari for OS X switched to the new API with version 5.1, and Safari for iOS switched to WebKit2 with iOS 8.
Security
Plugins
Apple used a remotely updated plug-in blacklist to prevent potentially dangerous or vulnerable plugins from running in Safari. Initially, Flash and Java content was blocked in some early versions of Safari. Since Safari 12, support for NPAPI plugins (except Flash) has been completely dropped, and starting with Safari 14, support for Adobe Flash Player was dropped altogether.
License
The license has common terms against reverse engineering, copying, and sub-licensing (except for the open-source parts), and disclaims warranties and liability. Permission to opt out of tracking was limited to specific devices; for example, Windows users could not opt out of tracking, since their license omits the relevant clause. All users were allowed to opt out of location tracking by not using location services. Optionally, users can choose to enable a withdrawable diagnostic and usage collection program, which permits Apple and its associates to collect, use, and manage their data under the condition that users are not publicly identified.
Apple's definition of "personal" does not cover "unique device identifiers" such as a serial number, cookie number, or IP address, so the use of these was permitted. In September 2017, Apple announced that it would use artificial intelligence (AI) to reduce the ability of advertisers to track Safari users as they browse the web. Cookies used for tracking would be allowed for 24 hours, then disabled, unless the AI judged that the user wanted the cookie. Major advertising groups objected, saying the change would reduce the free services supported by advertising, while other experts praised it.
Browser exploits
In the Pwn2Own contest at the 2008 CanSecWest security conference in Vancouver, British Columbia, Safari caused Mac OS X to be the first OS to fall in a hacking competition. Participants competed to find a way to read the contents of a file located on the user's desktop in one of three operating systems: Mac OS X Leopard, Windows Vista SP1, and Ubuntu 7.10. On the second day of the contest, when users were allowed to physically interact with the computers (the prior day permitted only network attacks), Charlie Miller compromised Mac OS X through an unpatched vulnerability of the PCRE library used by Safari. Miller was aware of the flaw before the conference and worked to exploit it unannounced, as is the common approach in these contests. The exploited vulnerability and other flaws were patched in Safari 3.1.1.
In the 2009 Pwn2Own contest, Charlie Miller performed another exploit of Safari to hack into a Mac. Miller again acknowledged that he knew about the security flaw before the competition and had done considerable research and preparation work on the exploit. Apple released a patch for this exploit and others on May 12, 2009, with Safari 3.2.3.
In January 2022, the browser fingerprinting and fraud detection service FingerprintJS found a vulnerability in the IndexedDB API implementation used by Safari 15 on macOS, iOS, and iPadOS. The vulnerability allowed a malicious site to access the browsing history and activity, as well as private session data, of other websites, in violation of the same-origin policy. The vulnerability was assigned CVE-2022-22594 and patched by Apple; the fix was released alongside iOS 15.3 and macOS 12.2 on January 26, 2022.
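The same-origin policy violated by this bug keys on the scheme, host, and port of a URL: data stored by one origin must not be readable by another. The comparison can be sketched as follows (an illustrative sketch only, not WebKit's code, and it ignores default-port normalization):

```python
from urllib.parse import urlsplit

def same_origin(url_a: str, url_b: str) -> bool:
    """Two URLs share an origin iff their scheme, host, and port all match."""
    a, b = urlsplit(url_a), urlsplit(url_b)
    return (a.scheme, a.hostname, a.port) == (b.scheme, b.hostname, b.port)

# Same origin: only the path differs.
assert same_origin("https://example.com/a", "https://example.com/b")
# Different hosts are different origins, so one site's IndexedDB data
# must stay invisible to the other -- the guarantee this bug broke.
assert not same_origin("https://example.com/", "https://evil.example.net/")
```

The Safari 15 flaw leaked IndexedDB database names across exactly this boundary, letting a page infer what other origins the user had visited.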
Criticism
Distribution through Apple Software Update
An earlier version of Apple Software Update (bundled with Safari, QuickTime, and iTunes for Microsoft Windows) selected Safari for installation from a list of Apple programs to download by default, even when it did not detect an existing installation of Safari on a user's machine. John Lilly, former CEO of Mozilla, stated that Apple's use of its updating software to promote its other products was "a bad practice and should stop." He argued that the practice "borders on malware distribution practices" and "undermines the trust that we're all trying to build with users." Apple spokesman Bill Evans sidestepped Lilly's statement, saying that Apple was only "using Software Update to make it easy and convenient for both Mac and Windows users to get the latest Safari update from Apple." Apple also released a new version of Apple Software Update that put new software in its own section, though it was still selected for installation by default. By late 2008, Apple Software Update no longer selected new installation items in the new software section by default.
Security updates for Snow Leopard and Windows platforms
Software security firm Sophos detailed how Snow Leopard and Windows users were not supported by the Safari 6 release at the time, while there were over 121 vulnerabilities left unpatched on those platforms. Since then, Snow Leopard has had only three minor version releases (the most recent in September 2013), and Windows has had none. While no official word has been released by Apple, the indication is that these are the final versions available for these operating systems, and both retain significant security issues.
Failure to adopt modern standards
While Safari pioneered several now standard HTML5 features (such as the Canvas API) in its early years, it has come under attack for failing to keep pace with some modern web technologies. Since 2015, iOS has allowed third party web browsers to be installed, including Chrome, Firefox, Opera and Edge; however, they are all forced to use the underlying WebKit browser engine, and inherit its limitations.
Intentionally limiting ad blockers and tracking protection
Beginning in 2018, Apple made technical changes to Safari's content blocking functionality which prompted backlash from users and developers of ad blocking extensions, who said the changes made it impossible to offer a level of user protection similar to that found in other browsers. Internally, the update limited the number of blocking rules which could be applied by third-party extensions, preventing the full implementation of community-developed blocklists. In response, several developers of popular ad and tracking blockers announced their products were being discontinued, as they were now incompatible with Safari's newly limited content blocking features. As a matter of policy, Apple requires the use of WebKit, Safari's underlying rendering engine, in all browsers developed for its iOS platform, preventing users from installing any competing product which offers full ad blocking functionality. Beginning with Safari 13, popular extensions such as uBlock Origin no longer worked.
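Safari's content blocking is declarative: an extension supplies a JSON list of trigger/action rules rather than arbitrary filtering code, and it is the number of such rules that Apple's changes capped. A minimal rule list might look like the following (the filter patterns are made-up examples):

```json
[
  {
    "trigger": { "url-filter": "ads\\.example\\.com" },
    "action": { "type": "block" }
  },
  {
    "trigger": { "url-filter": ".*", "resource-type": ["image"] },
    "action": { "type": "block-cookies" }
  }
]
```

Each rule pairs a trigger (a regular-expression URL filter, optionally narrowed by resource type) with an action such as blocking the load or stripping cookies. Community blocklists compile to many thousands of such rules, which is why a cap on the rule count limited what ad blockers could implement.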
Market share
In 2009, Safari had a market share of 3.85%. Its share grew steadily over the following years: 5.56% (2010), 7.41% (2011), 10.07% (2012), and 11.77% (2013). In 2014, it caught up with Firefox with a market share of 14.20%. In 2015, Safari became the second most-used web browser worldwide after Google Chrome, with a market share of 13.01%. From 2015 to 2020, it held market shares of 14.02%, 14.86%, 14.69%, 17.68%, and 19.25%, respectively. , Google Chrome continued to be the most popular browser, with Safari (19.22%) following in second place.
See also
List of web browsers
History of web browsers
United States v. Google Inc. in which the FTC alleged that Google misrepresented privacy assurances to Safari users
References
External links
Safari security vulnerabilities at CVE Details
2003 software
Apple Inc. software
Companies' terms of service
Computer-related introductions in 2003
IOS
IOS web browsers
IOS-based software made by Apple Inc.
MacOS web browsers
Software based on WebKit
Web browsers |
50995048 | https://en.wikipedia.org/wiki/Martin%20Wirsing | Martin Wirsing | Martin Wirsing (born 24 December 1948 in Bayreuth) is a German computer scientist, and Professor at the Ludwig-Maximilians-Universität München, Germany.
Biography
Wirsing studied mathematics at Ludwig-Maximilians-Universität München (LMU) and at Université Paris 7, obtaining the Diplom in Mathematics from LMU and the Maîtrise ès Sciences Mathématiques from Université Paris 7. Supervised by Kurt Schütte, he received his PhD from LMU in 1976, with a thesis on a topic in mathematical logic (Das Entscheidungsproblem der Prädikatenlogik mit Identität und Funktionszeichen). From 1975 to 1983 he was a research assistant at the chair of F. L. Bauer at the Technical University of Munich, where in 1984 he completed his Habilitation in Informatics. In 1985 Wirsing became full professor and Chair of Informatics at the University of Passau, and in 1992 he returned to LMU as the Chair of Programming and Software Engineering. For several years he served as Dean, Head of Department, and Vice President of the Senate of LMU; since 2010 he has been Vice President for Teaching and Studies of LMU. In July 2016, he was awarded the degree of Doctor of Science (Honoris Causa) by Royal Holloway, University of London.
His research interests comprise software engineering and its formal foundations, autonomous self-aware systems, and the digitisation of universities. From 2006 to 2015 he coordinated the European IP projects SENSORIA (2006-2010), on software engineering for service-oriented systems, and ASCENS (2010-2015), on engineering collective autonomic systems. From 2007 to 2010 Martin Wirsing was chairman of the Scientific Board of INRIA, and from 2014 to 2017 a member of the scientific committee of Institut Mines-Télécom. Currently, he is a member of the board of trustees of the Max Planck Institute of Psychiatry and of the scientific committees of the University of Bordeaux and the IMDEA Software Institute. He is a member of the editorial boards of several scientific journals and book series, including Theoretical Computer Science (journal), the International Journal of Software and Informatics, and Electronic Proceedings in Theoretical Computer Science.
Selected papers and books
Martin Wirsing: Algebraic Specification. In: J. van Leeuwen (ed.): Handbook of Theoretical Computer Science, Amsterdam: North-Holland, 1990, pp. 675–788
Pietro Cenciarelli, Alexander Knapp, Bernhard Reus, and Martin Wirsing: An Event-Based Structural Operational Semantics of Multi-Threaded Java. In: Jim Alves-Foss (ed.): Formal Syntax and Semantics of Java, Lect. Notes Comp. Sci. 1523, Berlin: Springer, 1999, pp. 157–200
Iman Poernomo, John Crossley, Martin Wirsing: Adapting Proofs-as-Programs: The Curry-Howard Protocol. Springer Monographs in Computer Science, 2005, 420 pages
Martin Wirsing, Jean-Pierre Banatre, Matthias Hölzl, Axel Rauschmayer (eds.): Software-Intensive Systems and New Computing Paradigms. Lecture Notes in Computer Science 5380, Springer-Verlag, 2008, 265 pages
Martin Wirsing, Matthias Hölzl (eds.): Rigorous Software Engineering for Service-Oriented Systems - Results of the SENSORIA Project on Software Engineering for Service-Oriented Computing. Lecture Notes in Computer Science 6582, Springer, 2011, 737 pages
Jonas Eckhardt, Tobias Mühlbauer, Musab AlTurki, José Meseguer, Martin Wirsing: Stable Availability under Denial of Service Attacks through Formal Patterns. In: Juan de Lara, Andrea Zisman (eds.): Fundamental Approaches to Software Engineering - 15th International Conference, FASE 2012. Lecture Notes in Computer Science 7212, Springer, 2012, pp. 78–93
Martin Wirsing, Matthias Hölzl, Nora Koch, Philip Mayer (eds.): Software Engineering for Collective Autonomic Systems: Results of the ASCENS Project. Lecture Notes in Computer Science 8998, Springer, 2015, 533 pages
Lenz Belzner, Rolf Hennicker, Martin Wirsing: OnPlan: A Framework for Simulation-Based Online Planning. In: Christiano Braga, Peter Csaba Ölveczky (eds.): Formal Aspects of Component Software - 12th International Conference, FACS 2015, Niterói, Brazil, October 14–16, 2015, Revised Selected Papers. Lecture Notes in Computer Science 9539, Springer, 2016, pp. 1–30
External links
Home page
Home page at LMU
Rocco De Nicola, Rolf Hennicker (eds.): Software, Services, and Systems - Essays Dedicated to Martin Wirsing on the Occasion of His Retirement from the Chair of Programming and Software Engineering. Lecture Notes in Computer Science 8950, Springer 2015
Publications of Martin Wirsing indexed by the DBLP Bibliography Server at the University of Trier
References
German computer scientists
1948 births
University of Passau faculty
Ludwig Maximilian University of Munich faculty
Ludwig Maximilian University of Munich alumni
Living people |
20509939 | https://en.wikipedia.org/wiki/J.%20Random%20Hacker | J. Random Hacker | In computer slang, J. Random Hacker is an arbitrary programmer (hacker).
"J. Random Hacker" is a popular placeholder name in a number of books and articles on programming. J. Random Hacker even authored a book about the ease of malicious hacking, Adventures of a Wi-Fi Pirate. Also, J. Random Hacker was credited as a main developer of the I2P software.
Over time, J. Random X has become a popular cliché, a snowclone, in computer lore, with more "random" (meaning "arbitrary") categories of people, such as "J. Random Newbie", "J. Random User", or "J. Random Luser".
See also
Alice and Bob, placeholder names often used when discussing computer security
Acme Corporation, placeholder name often used to describe a company
References
Internet slang
Computer humor
Placeholder names |
7416002 | https://en.wikipedia.org/wiki/Zoho%20Office%20Suite | Zoho Office Suite | Zoho Office Suite is an Indian web-based online office suite containing word processing, spreadsheets, presentations, databases, note-taking, wikis, web conferencing, customer relationship management (CRM), project management, invoicing and other applications. It is developed by Zoho Corporation.
History
Zoho Office Suite was launched in 2005 with a web-based word processor. Additional products, such as spreadsheets and presentations, were incorporated into Zoho later. Zoho applications are distributed as software as a service (SaaS).
Zoho uses an open application programming interface for its Writer, Sheet, Show, Creator, Meeting, and Planner products. It also has plugins for Microsoft Word and Excel, an OpenOffice.org plugin, and a plugin for Firefox.
Zoho Sites is an online, drag and drop website builder. It provides web hosting, unlimited storage, bandwidth and web pages. Features also include an array of website templates and mobile websites.
Zoho CRM is a customer relationship management application with features like procurement, inventory, and some accounting functions from the realm of ERP. The free version is limited to 10 users.
In October 2009, Zoho integrated some of their applications with the Google Apps online suite. This enabled users to sign into both suites under one login. Zoho and Google still remain separate, competing companies.
In 2020, Zoho Workplace ranked first in the Office category of the Indian government's Atmanirbhar Bharat App Innovation Challenge, while Zoho Invoice, Books, and Expense ranked first in the Business category.
References
Remote desktop
Remote administration software
Project management software
Web applications
Online office suites
Zoho
Web hosting
Web development software
Human resource management software
Customer relationship management software
Accounting software
Collaborative software
Bug and issue tracking software
Help desk software
Business intelligence
Reporting software
Free reporting software
Business intelligence companies
Data analysis software
Web conferencing
Communication software
Remote desktop software for Linux
Remote control |
25254660 | https://en.wikipedia.org/wiki/Silicon%20Labs | Silicon Labs | Silicon Laboratories, Inc. (Silicon Labs) is a fabless global technology company that designs and manufactures semiconductors, other silicon devices and software, which it sells to electronics design engineers and manufacturers in Internet of Things (IoT) infrastructure worldwide.
It is headquartered in Austin, Texas, United States. The company focuses on microcontrollers (MCUs) and wireless system on chips (SoCs) and modules. The company also produces software stacks including firmware libraries and protocol-based software, and a free software development platform called Simplicity Studio.
Silicon Labs was founded in 1996 and released its first product, an updated DAA design that enabled manufacturers to reduce the size and cost of a modem, two years later. During its first three years, the company focused on RF and CMOS integration, and developed the world's first CMOS RF synthesizer for mobile phones, released in 1999. Following the appointment of Tyson Tuttle as CEO in 2012, Silicon Labs has increasingly focused on developing technologies for the IoT market, which accounted for more than 50 percent of the company's revenue in 2019 and about 58 percent in 2020.
In August 2019, Silicon Labs had more than 1,770 patents worldwide issued or pending.
Company history
Silicon Labs was founded by Crystal Semiconductor (now owned by Cirrus Logic Inc.) alumni Nav Sooch, Dave Welland and Jeff Scott in 1996. It became a publicly traded company in 2000. The first product, an updated DAA design, was released in the market in 1998. It cost significantly less than traditional DAAs and used less space compared to established products, which made it an instant success, taking the company's sales from $5.6 million in 1998 to nearly $47 million in 1999.
During its early years, the company focused on developing an improved RF synthesizer for mobile phones that would cost less and take up less space. It introduced its first RF Chip in late 1999.
Since 2012, Silicon Labs has been increasingly focused on developing technologies for the evolving IoT market. On April 22, 2021, Silicon Labs announced the sale of its infrastructure and automotive business to Skyworks Solutions Inc for $2.75 billion. The deal was closed on July 26, 2021.
In July 2021, it was announced that Tyson Tuttle would step down as CEO. In January 2022, former president Matt Johnson completed the transition into the CEO position.
Key product launches
In 1998, released updated DAA design.
In 1999, launched RF Chip.
In 2001, released first products in its timing portfolio, a family of clock generators designed for high-speed communication systems.
In 2003, entered the mixed-signal MCU market with analog-intensive high-speed 8-bit MCUs.
In 2004, released its first crystal oscillator family featuring patented digital phase locked loop (DSPLL) technology.
In 2005, introduced a single-chip FM receiver, which enabled FM radio to be installed in a new range of applications.
In 2006, entered the automotive electronics market with the launch of an integrated MCU family.
In 2007, launched industry's first single-port PoE interface with integrated DC-DC controller.
In 2008, released industry's smallest fully integrated automotive AM/FM radio receiver IC.
In 2009, entered the human interface market with a portfolio of fast-response touch, proximity and ambient light sensor devices.
In 2010, introduced industry's first single-chip multimedia digital TV demodulator.
In 2011, released industry's first single-chip hybrid TV receiver.
In 2012, entered the ARM-based 32-bit MCU market with a line of mixed-signal MCUs with USB and non-USB options.
In 2013, introduced the world's first single-chip digital radio receivers for consumer electronics.
In 2014, released the world's first digital ultraviolet index sensors.
In 2015, launched Thread networking technology for connecting devices including wireless sensor networks, thermostats, connected lighting devices and control panels.
In 2016, released Gecko family of multiprotocol wireless SoC devices.
In 2017, launched industry's first wireless clocks that support 4G/LTE and Ethernet.
In 2018, launched Z-Wave 700 hardware/software IoT platform.
In 2019, launched updated version of wireless Gecko web development platform.
In 2021, launched Wi-SUN technology.
In 2021, announced that Silicon Labs wireless devices support Matter end products.
Leadership
Matt Johnson, Chief Executive Officer
John Hollister, Chief Financial Officer
Daniel Cooley, Chief Technology Officer
Karuna Annavajjala, Chief Information Officer
Serena Townsend, Senior Vice President and Chief People Officer
Megan Lueders, Chief Marketing Officer
Brandon Tolany, Senior Vice President of Worldwide Sales and Marketing
Sandeep Kumar, Senior Vice President of Worldwide Operations
Sharon Hagi, Chief Security Officer
Néstor Ho, Chief Legal Officer, Vice President and Corporate Secretary
Products
Silicon Labs provides semiconductor products for use in a variety of connected devices. The company also provides development kits and software including Simplicity Studio, an integrated development environment for IoT connected device applications.
Silicon Labs' portfolio is built around the Internet of Things (IoT) focus area, primarily focused on home and life and industrial and commercial wireless applications.
Internet of Things
Wireless:
System-on-Chip
Mesh Networking Modules
Protocols supported include:
Bluetooth
Proprietary wireless protocols for Sub-GHz and 2.4 GHz frequencies
Zigbee
Z-Wave for smart home applications
Thread networking solutions
Wi-Fi transceivers, transceiver modules, Xpress modules, stand-alone modules
Wi-SUN
MCUs
EFM8 8-bit MCUs
EFM32 32-bit MCUs
Sensors
Security technologies
Silicon Labs’ product portfolio is protected by a range of security measures:
Anti-rollback prevention
Protects device by preventing the execution of previous versions of authenticated firmware that might carry security flaws
Cryptographic accelerator
Differential Power Analysis (DPA) countermeasures
Protected secret key storage
Public Key Infrastructure
IoT Device Certificate Authority enabling device-to-device or device-to-server identity authentication
Secure boot
Secure Boot with Root of Trust and Secure Loader (RTSL) provides additional security for loading initial code to the system microcontroller
Secure debug with lock/unlock
Access to debug port controlled by a unique lock token generated by signing a revocable unique identifier with a customer generated private key
Secure link
Encrypting the link between a host processor and radio transceiver or network co-processor (NCP)
Secure programming at manufacturing
Secure Vault
Integrated hardware and software security technology. Features include:
Secure device identity
Secure key management and storage
Advanced tamper detection
True Random Number Generator
Protocols
Silicon Labs technologies support seven wireless protocols.
Bluetooth
Bluetooth software enables developers to utilize Bluetooth LE, Bluetooth 5, Bluetooth 5.1, Bluetooth 5.2, and Bluetooth mesh. Bluetooth SDK can be used to create standalone Bluetooth applications for Wireless Gecko SoCs or modules, or network co-processor (NCP) applications. Products include:
Bluetooth SoCs
Certified Bluetooth modules
Software
Proprietary wireless protocols
Devices cover sub-GHz and 2.4 GHz frequencies, delivering ultra-low power, long range, up to 20 dBm output power, and different modulation schemes for major frequency bands. Products include:
Transceivers
Multi-band wireless SoCs for IoT applications
Wireless MCPs
RF synthesizers
Dynamic Multi-protocol (DMP) for smartphone connectivity in long-range solutions
SDKs for accelerating proprietary protocol development
Thread
Technologies enabling IP connectivity through self-healing mesh features, native IPv6-based connectivity, and different security options. Products include:
Software stacks
Development tools
Modules
SoCs
Reference designs
Zigbee
Software stacks and development tools for Zigbee applications, including Mesh Networking SoCs and modules.
Z-Wave
Modules and SoCs for applications in sectors including smart home, hospitality and MDUs, where sensors and battery-operated devices require long range and low power.
Wi-Fi
Wi-Fi SoCs and modules designed for applications requiring low power and good RF performance, such as IoT. Products include:
Wi-Fi transceivers
Transceiver modules
Xpress modules
Stand-alone modules
Wi-SUN
Wi-SUN (Wireless Smart Ubiquitous Network) is a field area network (FAN) technology that enables long-distance connectivity (https://www.allaboutcircuits.com/news/wisun-new-wireless-standard-rivaling-lorawan-nb-iot-smart-cities/). Wi-SUN aims to simplify LPWAN deployment and enable secure wireless connectivity in applications including advanced metering infrastructure (AMI), street lighting networks, asset management, and parking, air quality, and waste management sensors.
Matter
Matter is a global IoT connectivity standard that builds on top of existing IP-connectivity protocols to enable cross-platform IoT communication, encompassing end products, mobile applications, and cloud services. Silicon Labs wireless devices are available for the development of Matter end products that support Thread, Wi-Fi, and Bluetooth protocols.
Industry associations
Silicon Labs is a founding member of both the ZigBee Alliance and the Thread Group, and is on the Board of Directors at the Wi-SUN Alliance.
The company is also a member of the Bluetooth Special Interest Group, Wi-Fi Alliance, Z-Wave Alliance and a Gold member of the Open Connectivity Foundation and the RISC-V Foundation.
Acquisitions
Krypton Isolation Inc. (2000)
Cygnal Integrated Products (2003)
Silicon Magike (2005)
Silembia (2006)
Integration Associates (2008)
Silicon Clocks and ChipSensors (2010)
SpectraLinear (2011)
Ember Corporation (2012)
Energy Micro (2013)
Touchstone Semiconductor (2014)
Bluegiga and Telegesis (2015)
Micrium (2016)
Zentri (2017)
Z-Wave, acquired from Sigma Designs (2018)
IEEE 1588 precision time protocol (PTP) software and module assets from Qulsar (2019)
Redpine Signals’ connectivity business (2020)
Finances
For the fiscal year 2020, Silicon Labs reported GAAP earnings of $12.5 million with an annual revenue of $886 million. Its market capitalization was valued at $6.02 billion in February 2021.
Locations
Silicon Labs is headquartered in Austin, Texas, with regional offices in Boston, Massachusetts and San Jose, California. The company has also corporate offices in Quebec, Canada; Copenhagen, Denmark; Espoo, Finland; Budapest, Hungary; Oslo, Norway and Singapore.
It has 15 sales offices across the world. These include Boston and San Jose in the US; Beijing, Shanghai, Shenzhen and Wuhan in China; Espoo, Finland; Montigny-le-Bretonneux, France; Munich, Germany; Milan, Italy; Tokyo, Japan; Seoul, South Korea; Singapore; Taipei, Taiwan; and Camberley, the UK.
Silicon Labs has a wireless development center in Hyderabad, India.
References
Fabless semiconductor companies
Semiconductor companies of the United States
Electronics companies established in 1996
American companies established in 1996
Manufacturing companies based in Austin, Texas
Companies listed on the Nasdaq |
46242051 | https://en.wikipedia.org/wiki/Victor%20V.%20Solovyev | Victor V. Solovyev | Victor V. Solovyev is Chief Scientific Officer of Softberry Inc. He has previously served as a Professor of Computer Science in the Computer, Electrical and Mathematical Sciences and Engineering Division at King Abdullah University of Science and Technology (KAUST) (2013-2015) and in the Department of Computer Science, Royal Holloway, University of London (2003-2012). He served on the editorial board of Mathematical Biosciences and was a founder of Softberry Inc.
Research
Victor Solovyev works on developing statistical approaches, machine learning algorithms, computational platforms, and bioinformatics tools for high-throughput analysis of biological big data. He is interested in structural and functional genome annotation and in applying it to the rational design of biological systems.
Education
Victor Solovyev received his Ph.D. in Genetics from the Russian Academy of Sciences in 1985 and his M.S. in Physics from Novosibirsk State University in 1978.
Career
Victor Solovyev joined KAUST in 2013 as Professor in the Computer, Electrical and Mathematical Sciences and Engineering Division. He previously served as a Professor of Computer Science in the Department of Computer Science, Royal Holloway, University of London (2003-2012). He was the genome annotation group leader at the Joint Genome Institute, Lawrence Berkeley National Lab (2003), and Director of Bioinformatics at EOS Biotechnology (1999-2002). He formerly served as leader of the Computational Genomics Group at the Sanger Centre, Cambridge, UK (1997-1999). He also held positions as Assistant Professor at Baylor College of Medicine, computational scientist at Amgen Inc., visiting scientist at the Supercomputer Center, Florida State University, Visiting Professor at ITBA (Milan, Italy), and group leader at the Institute of Cytology and Genetics, Novosibirsk, Russia.
Software developing
About 100 software applications, implemented as standalone programs or combined into pipelines or packages, have been developed under his guidance. Many of these programs are available to the academic community to run online or to download, and are actively used. For example, the Fgenesh eukaryotic gene identification program has been used or cited in more than 3,000 scientific publications, according to Google Scholar data; the Fgenesb bacterial genome annotation pipeline, based on Markov chain models, was significantly superior to other approaches in gene finding in bacterial community sequences; and MolQuest is a comprehensive, easy-to-use desktop application for sequence analysis and molecular biology data management.
Other interests
Besides bioinformatics, his interests include cryptography and information security. FendoffF, an application that encrypts passwords, files, or images using several original encryption methods, has been developed for iOS and Android as well as for desktop computers. He also developed the Wild West Chess computer game.
References
External links
Russian bioinformaticians
Living people
Year of birth missing (living people)
King Abdullah University of Science and Technology faculty |
371658 | https://en.wikipedia.org/wiki/Loadable%20kernel%20module | Loadable kernel module | In computing, a loadable kernel module (LKM) is an object file that contains code to extend the running kernel, or so-called base kernel, of an operating system. LKMs are typically used to add support for new hardware (as device drivers) and/or filesystems, or for adding system calls. When the functionality provided by an LKM is no longer required, it can be unloaded in order to free memory and other resources.
Most current Unix-like systems and Microsoft Windows support loadable kernel modules under different names, such as kernel loadable module (kld) in FreeBSD, kernel extension (kext) in macOS (now deprecated), kernel extension module in AIX, kernel-mode driver in Windows NT and downloadable kernel module (DKM) in VxWorks. They are also known as kernel loadable modules (or KLM), and simply as kernel modules (KMOD).
Advantages
Without loadable kernel modules, an operating system would have to include all possible anticipated functionality compiled directly into the base kernel. Much of that functionality would reside in memory without being used, wasting memory, and would require that users rebuild and reboot the base kernel every time they require new functionality.
Disadvantages
One minor criticism of preferring a modular kernel over a static kernel is the so-called fragmentation penalty. The base kernel is always unpacked into real contiguous memory by its setup routines; thus, the base kernel code is never fragmented. Once the system is in a state in which modules may be inserted, for example once the filesystems have been mounted that contain the modules, it is likely that any new kernel code insertion will cause the kernel to become fragmented, thereby introducing a minor performance penalty by using more TLB entries, causing more TLB misses.
Implementations in different operating systems
Linux
Loadable kernel modules in Linux are loaded (and unloaded) by the modprobe command. They are located in /lib/modules or /usr/lib/modules and have had the extension .ko ("kernel object") since version 2.6 (previous versions used the .o extension). The lsmod command lists the loaded kernel modules. In emergency cases, when the system fails to boot due to e.g. broken modules, specific modules can be enabled or disabled by modifying the kernel boot parameters list (for example, if using GRUB, by pressing 'e' in the GRUB start menu, then editing the kernel parameter line).
License issues
In the opinion of the Linux maintainers, LKMs are derived works of the kernel. The Linux maintainers tolerate the distribution of proprietary modules, but allow symbols to be marked as available only to GNU General Public License (GPL) modules.
Loading a proprietary or non-GPL-compatible module will set a 'taint' flag in the running kernel—meaning that any problems or bugs experienced will be less likely to be investigated by the maintainers. LKMs effectively become part of the running kernel, so can corrupt kernel data structures and produce bugs that may not be able to be investigated if the module is indeed proprietary.
Linuxant controversy
In 2004, Linuxant, a consulting company that releases proprietary device drivers as loadable kernel modules, attempted to abuse a null terminator in their MODULE_LICENSE, as visible in the following code excerpt:
MODULE_LICENSE("GPL\0for files in the \"GPL\" directory; for others, only LICENSE file applies");
The string comparison code that the kernel used at the time to determine whether the module was GPL-licensed stopped when it reached a null character (\0), so it was fooled into thinking that the module declared its license to be simply "GPL".
FreeBSD
Kernel modules for FreeBSD are stored within /boot/kernel/ for modules distributed with the operating system, or usually /boot/modules/ for modules installed from FreeBSD ports or FreeBSD packages, or for proprietary or otherwise binary-only modules. FreeBSD kernel modules usually have the extension .ko. Once the machine has booted, they may be loaded with the kldload command, unloaded with kldunload, and listed with kldstat. Modules can also be loaded from the loader before the kernel starts, either automatically (through /boot/loader.conf) or by hand.
macOS
Some loadable kernel modules in macOS can be loaded automatically. Loadable kernel modules can also be loaded by the kextload command. They can be listed by the kextstat command. Loadable kernel modules are located in bundles with the extension .kext. Modules supplied with the operating system are stored in the /System/Library/Extensions directory; modules supplied by third parties are in various other directories.
NetWare
A NetWare kernel module is referred to as a NetWare Loadable Module (NLM). NLMs are inserted into the NetWare kernel by means of the LOAD command, and removed by means of the UNLOAD command; the modules command lists currently loaded kernel modules. NLMs may reside in any valid search path assigned on the NetWare server, and they have .NLM as the file name extension.
VxWorks
A downloadable kernel module (DKM) type project can be created to generate a ".out" file which can then be loaded to kernel space using "ld" command. This downloadable kernel module can be unloaded using "unld" command.
Solaris
Solaris has a configurable kernel module load path; it defaults to /platform/platform-name/kernel /kernel /usr/kernel. Most kernel modules live in subdirectories under /kernel; those not considered necessary to boot the system to the point that init can start are often (but not always) found in /usr/kernel. When running a DEBUG kernel build, the system actively attempts to unload modules.
Binary compatibility
Linux does not provide a stable API or ABI for kernel modules. This means that there are differences in internal structure and function between different kernel versions, which can cause compatibility problems. In an attempt to combat those problems, symbol versioning data is placed within the .modinfo section of loadable ELF modules. This versioning information can be compared with that of the running kernel before loading a module; if the versions are incompatible, the module will not be loaded.
Other operating systems, such as Solaris, FreeBSD, macOS, and Windows keep the kernel API and ABI relatively stable, thus avoiding this problem. For example, FreeBSD kernel modules compiled against kernel version 6.0 will work without recompilation on any other FreeBSD 6.x version, e.g. 6.4. However, they are not compatible with other major versions and must be recompiled for use with FreeBSD 7.x, as API and ABI compatibility is maintained only within a branch.
Security
While loadable kernel modules are a convenient method of modifying the running kernel, this can be abused by attackers on a compromised system to prevent detection of their processes or files, allowing them to maintain control over the system. Many rootkits make use of LKMs in this way. Note that on most operating systems modules do not help privilege elevation in any way, as elevated privilege is required to load a LKM; they merely make it easier for the attacker to hide the break-in.
Linux
Linux allows disabling module loading via sysctl option /proc/sys/kernel/modules_disabled. An initramfs system may load specific modules needed for a machine at boot and then disable module loading. This makes the security very similar to a monolithic kernel. If an attacker can change the initramfs, they can change the kernel binary.
macOS
In OS X Yosemite and later releases, a kernel extension has to be code-signed with a developer certificate that holds a particular "entitlement" for this. Such a developer certificate is only provided by Apple on request and not automatically given to Apple Developer members. This feature, called "kext signing", is enabled by default and it instructs the kernel to stop booting if unsigned kernel extensions are present. In El Capitan and later releases, it is part of System Integrity Protection.
In older versions of macOS, or if kext signing is disabled, a loadable kernel module in a kernel extension bundle can be loaded by non-root users if the OSBundleAllowUserLoad property is set to True in the bundle's property list. However, if any of the files in the bundle, including the executable code file, are not owned by root and group wheel, or are writable by the group or "other", the attempt to load the kernel loadable module will fail.
Solaris
Kernel modules can optionally have a cryptographic signature ELF section which is verified on load depending on the Verified Boot policy settings. The kernel can enforce that modules are cryptographically signed by a set of trusted certificates; the list of trusted certificates is held outside of the OS in the ILOM on some SPARC based platforms. Userspace initiated kernel module loading is only possible from the Trusted Path when the system is running with the Immutable Global Zone feature enabled.
See also
NetWare Loadable Module
References
FreeBSD
Linux kernel
Operating system kernels |
58829502 | https://en.wikipedia.org/wiki/Yes%20or%20Yes | Yes or Yes | Yes or Yes (stylized as YES or YES) is the sixth extended play (EP) by the South Korean girl group Twice. It was released on November 5, 2018, by JYP Entertainment and distributed by Iriver. It contains seven tracks, including the lead single of the same name and the Korean version of "BDZ". Twice members Jeongyeon, Chaeyoung and Jihyo took part in writing lyrics for three songs on the EP.
The album became a commercial success for the group, topping the Gaon Album Chart and becoming Twice's first Korean album to top Japan's Oricon Album Chart. It recorded over 300,000 copies sold, and with its release, Twice reached an accumulated number of over 3 million albums sold in South Korea. A reissue, titled The Year of "Yes", was released on December 12, 2018.
Background and release
In early October 2018, advertisements with the phrase "Do you like Twice? Yes or Yes" () were put up on subway billboards, drawing attention online. On October 11, JYP Entertainment confirmed that Twice planned to release a third Korean album that year on November 5. Yes or Yes was revealed as the album's title on October 20 and a special video commemorating Twice's third anniversary contained a short clip of the album's lead single of the same name.
Twice released their first group teaser photo regarding their comeback on October 23. On October 24, individual teaser posters featuring Nayeon, Jeongyeon, and Momo were uploaded. A track list image for the album's eponymous title track was also posted, revealing that it was written by Sim Eun-jee, who previously worked with Twice as a songwriter for "Knock Knock". On October 25, individual teaser photos featuring Sana, Jihyo, and Mina were posted by the group. On the same day, a second track list image for the album was posted, revealing the titles of three songs written by Twice members: "LaLaLa" penned by Jeongyeon, "Young & Wild" co-written by Chaeyoung, and "Sunset" written by Jihyo. On October 26, individual teaser photos featuring Dahyun, Chaeyoung, and Tzuyu were uploaded. A third track list image unveiling additional details about the album was also posted, revealing seven songs in total.
On October 27, a second group teaser photo was released by Twice. On October 28, a second set of individual teaser photos featuring each member was uploaded. Twice then revealed their first music video teaser for "Yes or Yes" on October 29. On October 30, Twice unveiled their third group teaser poster. The following day, the group released the second music video teaser for the album's title track, revealing their opening choreography. A full preview of the album's contents was revealed by the group on November 1. On November 2, Twice uploaded their third music video teaser, revealing more of their choreography and opening verse. More parts of the lead track's opening verse were revealed by the group on November 3. A highlight medley featuring snippets from all of the album's tracks was uploaded on November 4.
Yes or Yes alongside its eponymous lead single was officially released on November 5, with Twice holding their live showcase at the KBS Arena Hall in Hwagok-dong, Gangseo-gu, Seoul.
Composition
Yes or Yes is an EP consisting of seven tracks. The title track "Yes or Yes" was composed by David Amber and Andy Love, with Korean lyrics by Sim Eun-jee. Amber previously co-composed "Heart Shaker" and Sim Eun-jee co-wrote lyrics for "Knock Knock". "Yes or Yes" was described as a bright and lively "color pop" song in the synth-pop genre with influences from Motown, reggae and arena pop. Lyrically, it is about only being able to reply "yes" to a confession of love.
"Say You Love Me" is an upbeat song which lyrically describes the feeling of one who is admitting to their romantic interest and waiting for their reply. "LaLaLa" is written by Jeongyeon, and is described as a "quintessential love song". "Young & Wild" is penned by Chaeyoung and lyrically talks about self-confidence. "Sunset", written by Jihyo, features a mono-speaker sound effect with its lyrics comparing one's romantic interest to a sunset. "After Moon" is classified as a ballad track. The album's final track is the Korean version of "BDZ" from their Japanese album BDZ.
Promotion
Two days before the album's release, Twice appeared on the television show Knowing Bros and performed part of "Yes or Yes" for the first time. The group held a showcase for the album on November 5, 2018 at the KBS Arena Hall in Gangseo-gu, Seoul. The first televised performance of "Yes or Yes" was at the 2018 MBC Plus X Genie Music Awards on November 6. Twice also appeared on Idol Room as part of the promotion for the album.
The group promoted their mini-album on several Korean music show programs, first performing the title track and "BDZ" on M Countdown on November 8. They also performed on KBS2's Music Bank on November 9 and 23, SBS' Inkigayo on November 11, MBC M's Show Champion on November 14, and MBC's Show! Music Core on November 17. The title track "Yes or Yes" garnered a total of four music show wins, first getting a win on Show Champion on November 14. It received a music show win on Music Bank and Inkigayo, and achieved its fourth win on Show Champion for the second week.
Twice also performed "Yes or Yes" at the 39th Blue Dragon Film Awards held on November 23.
Commercial performance
Following the release of Yes or Yes, the lead single achieved an 'all-kill' by topping the real-time rankings on Melon, Mnet, Naver, Genie, Olle, Soribada, and Bugs. The EP also reached the top of 17 iTunes Album charts. Additionally, all seven tracks from the mini-album charted in the top 7 of Japan's Line Music charts. In South Korea, the album topped the Gaon Album Chart and the title track topped the Gaon Digital Chart after the first week of its release. Yes or Yes was Twice's first Korean album to rank number 1 on Japan's Oricon Albums Chart and Digital Albums Chart. On November 11, Yes or Yes received a Platinum certification from Gaon for reaching sales of over 250,000 copies. The album then ranked at number three on the Monthly Gaon Album Chart for the month of November, recording 322,803 copies sold.
With the release of Yes or Yes, Twice reached an accumulated number of over 3 million albums sold in South Korea, achieving the feat within three years of their career.
Track listing
Content production
Credits adapted from album liner notes.
Locations
Recording
The Vibe Studio, Seoul, South Korea ("Yes or Yes")
821 Sound, Seoul, South Korea ("Yes or Yes")
Ingrid Studio, Seoul, South Korea ("Yes or Yes", "Young & Wild")
U Productions, Seoul, South Korea ("Say You Love Me", "LaLaLa", "Young & Wild", "Sunset", "After Moon")
Feeline Studio, Seoul, South Korea ("Say You Love Me")
MonoTree Studio, Seoul, South Korea ("LaLaLa")
Iconic Studio, Seoul, South Korea ("Sunset")
JYPE Studios, Seoul, South Korea ("BDZ")
Mixing
Rcave Sound, Seoul, South Korea ("Yes or Yes", "Say You Love Me", "LaLaLa", "Young & Wild", "Sunset", "After Moon")
Mirrorball Studios, North Hollywood, California ("BDZ")
Mastering
Sterling Sound, New York City, New York ("Yes or Yes", "BDZ")
821 Sound Mastering, Seoul, South Korea ("Say You Love Me", "LaLaLa", "Young & Wild", "Sunset", "After Moon")
Photography
Miss Yoon in Wonderland, Seoul, South Korea
Personnel
J. Y. Park "The Asiansoul" – producer, all instruments (on "BDZ")
Lee Ji-young – direction and coordination (A&R)
Jang Ha-na – music (A&R)
Kim Yeo-joo (Jane Kim) – music (A&R)
Kim Ji-hyeong – production (A&R)
Cha Ji-yoon – production (A&R)
Kang Geon – production (A&R)
Hwang Hyun-joon – production (A&R)
Kim Bo-hyeon – design (A&R), album art direction and design, web design
Kim Tae-eun – design (A&R), album art direction and design
Seo Yeon-ah – design (A&R), web design
Lee So-yeon – design (A&R), album art direction and design
Lee Ga-young – design (A&R), album art direction and design, web design
Choi Hye-jin – recording engineer (on "Yes or Yes", "Say You Love Me" and "Sunset")
Eom Se-hee – recording engineer (on "Yes or Yes" and "BDZ")
Jang Han-soo – recording engineer (on "After Moon")
Lee Sang-yeop – recording engineer (on "LaLaLa" and "Young & Wild")
Woo Min-jeong – recording engineer (on "Yes or Yes" and "Young & Wild")
Sophia Pae – recording engineer, vocal director and background vocals (on "Say You Love Me" and "Sunset")
Choo Dae-kwon (MonoTree) – recording engineer, vocal director (on "LaLaLa")
Kim Jeong – recording engineer (on "Sunset")
Lee Tae-seop – mixing engineer (on "Yes or Yes", "Say You Love Me", "LaLaLa", "Sunset" and "After Moon")
Lim Hong-jin – mixing engineer (on "Young & Wild")
Tony Maserati – mixing engineer (on "BDZ")
Kwon Nam-woo – mastering engineer (on "Say You Love Me", "LaLaLa", "Young & Wild", "Sunset" and "After Moon")
Chris Gehringer – mastering engineer (on "Yes or Yes" and "BDZ")
Naive Production – video director
Kim Young-jo – video executive producer
Yoo Seung-woo – video executive producer
Choi Pyeong-gang – video co-producer
Kwak Gi-gon at TEO Agency – photographer
Ahn Yeon-hoo – photographer
Son Eun-hee at Lulu – hair director
Jung Nan-young at Lulu – hair director
Choi Ji-young at Lulu – hair director
Im Jin-hee at Lulu – hair director
Jo Sang-ki at Lulu – make-up director
Jeon Dal-lae at Lulu – make-up director
Zia at Lulu – make-up director
Won Jung-yo at Bit&Boot – make-up director
Choi Su-ji at Bit&Boot – make-up director
Oh Yu-ra – style director
Shin Hyun-kuk – management and marketing director
Daseul Kim – choreographer
Today Art – printing
David Amber – programming, keyboards, guitars (on "Yes or Yes")
Sim Eun-jee – vocal director (on "Yes or Yes")
Twice – background vocals (on "Yes or Yes" and "BDZ")
Kwon Seon-young – background vocals (on "Yes or Yes" and "Young & Wild")
Jeong Yu-ra at Anemone Studio – digital editor (on "Yes or Yes" and "Young & Wild")
Secret Weapon – all instruments, computer programming (on "Say You Love Me" and "Sunset")
Jiyoung Shin NYC – additional editor (on "Say You Love Me", "LaLaLa", "Sunset" and "BDZ")
Albi Albertsson – all instruments, computer programming (on "LaLaLa")
Yoo Young-jin – background vocals (on "LaLaLa")
Doko – vocal director, background vocals (on "Young & Wild")
Kim Woong – drum, bass guitar, synthesizer, piano (on "After Moon")
Kim So-ri – background vocals (on "After Moon")
Moon Soo-jeong – digital editor (on "After Moon")
Lee Hae-sol – all instruments, computer programming (on "BDZ")
Jung Jae-pil – guitars (on "BDZ")
Dr. Jo – vocal director (on "BDZ")
Charts
Weekly charts
Year-end charts
Certifications
Accolades
Release history
References
2018 EPs
Twice (group) EPs
JYP Entertainment EPs
Korean-language EPs
IRIVER EPs
Republic Records EPs |
1734894 | https://en.wikipedia.org/wiki/TJX%20Companies | TJX Companies | The TJX Companies, Inc. (abbreviated TJX) is an American multinational off-price department store corporation, headquartered in Framingham, Massachusetts. It was formed as a subsidiary of Zayre Corp. in 1987, and became the legal successor to Zayre Corp. following a company reorganization in 1989.
TJX operates its flagship brand, TJ Maxx (in the United States) and TK Maxx (in Europe), as well as Marshalls, HomeGoods, HomeSense, and Sierra in the United States, and HomeSense, Marshalls, and Winners in Canada. There are over 4,557 discount stores in the TJX portfolio located in nine countries. TJX ranked No. 97 in the 2021 Fortune 500 list of the largest United States corporations by total revenue.
History
Zayre
The roots of The TJX Companies date back to 1977 when the first TJ Maxx store opened in Auburn, Massachusetts as part of the discount department store chain Zayre. In June 1987, Zayre established The TJX Companies as a subsidiary. In the first half of 1988, Zayre stores had operating losses of $69 million on sales of $1.4 billion. Observers blamed technological inferiority, poor maintenance, inappropriate pricing, and inventory pileups, and Zayre appeared ripe for takeover. Throughout all this, however, The TJX Companies subsidiary continued to yield a profit. In October 1988, Zayre Corp. decided to focus its energies on TJX. It sold the entire chain of nearly 400 Zayre stores to Ames Department Stores Inc. In exchange, the company received $431.4 million in cash, a receivable note, and what was then valued at $140 million of Ames cumulative senior convertible preferred stock.
The company continued to focus on its core business, selling unrelated operations including BJ's Wholesale Club and Home Club, leaving it with just one brand, T.J. Maxx. In June 1989, Zayre Corp. acquired the outstanding minority interest in TJX and merged with the subsidiary, changing its name from Zayre Corp. to The TJX Companies, Inc. in the process. The newly named company began trading on the New York Stock Exchange.
Expansion
In 1990, TJX expanded into an additional store brand division, and at the same time it first went international, as it entered the Canadian market by acquiring the five-store Winners chain. Two years later, it launched its third brand, HomeGoods, in the United States. TJX's expansion beyond North America came in 1994, when the fourth brand division, T.K. Maxx, was founded in the United Kingdom, and then expanded into Ireland. In 1995, TJX doubled in size when it acquired Marshalls, its fifth brand. T.J. Maxx and Marshalls later became consolidated as two brands under a single division, The Marmaxx Group. The following year, TJX Companies Inc. was added to the Standard & Poor's S&P 500 Composite Index, which consists of 500 of the largest companies in the United States. Also in 1995, TJX sold Hit or Miss, a discount mall-based clothing store, through an employee leveraged buyout.
TJX launched a sixth brand, A.J. Wright, in 1998 in the eastern U.S. The brand went national in 2004 when it opened its first stores in California on the west coast. The company's seventh brand division, HomeSense, formed in 2001, was a Canadian brand modeled after the existing US brand, HomeGoods.
In 2002, TJX revenue reached almost $12 billion. In mid-2003, TJX acquired an eighth brand division, Bob's Stores, concentrated in New England. In Canada, TJX began to configure some Winners and HomeSense stores side by side as superstores. The superstores feature open passageways between them, with dual branding. TJX's revenue in 2003 reached over $13 billion. TJX began to test the side-by-side superstore model in the United States in 2004, combining some of each of the two Marmaxx brand stores with HomeGoods. The company reached 141st position in the 2004 Fortune 500 rankings, with almost $15 billion in revenue. That year was also marked by the death of retired Zayre founder Stanley Feldberg.
In April 2008, TJX launched the HomeSense brand in the UK, with six stores opening throughout May. The brand is more upmarket than its Canadian namesake. Later that year, in August, TJX sold Bob's Stores to Versa Capital Management and Crystal Capital.
In December 2010, TJX announced that the A.J. Wright stores would be closed, cutting about 4,400 jobs, and that more than half of them would reopen under other company brands.
In July 2015, TJX acquired the Trade Secret and Home Secret off-price retail businesses from Australian company Gazal Corporation Limited. The deal was completed in December. In October, Ernie Herrman was named CEO of the company, replacing Carol Meyrowitz. He took over in January 2016.
In November 2019, TJX purchased a 25% stake in Russian retailer Familia.
COVID-19 impact
On August 19, 2020, TJX Companies reported that it was continuing to deal with the COVID-19 pandemic's effect on its business. The company announced that revenues dropped 31% over May, June, and July, primarily due to extensive store closures for around one-third of the period. TJX Companies reported a second-quarter loss of $214 million.
Incidents
Computer systems intrusion
On January 17, 2007, TJX announced that it was the victim of an unauthorized computer systems intrusion. It discovered in mid-December 2006 that its computer systems were compromised and customer data was stolen. The hackers accessed a system that stores data on credit card, debit card, check, and merchandise return transactions. The intrusion was kept confidential as requested by law enforcement. TJX said that it was working with General Dynamics, IBM and Deloitte to upgrade computer security.
By the end of March 2007, the number of affected customers had reached 45.7 million, and prompted credit bureaus to seek legislation requiring retailers to be responsible for compromised customer information saved in their systems. In addition to credit card numbers, personal information such as social security numbers and driver's license numbers from 451,000 customers were downloaded by the intruders. The breach was possible due to a non-secure wireless network in one of the stores. Eleven men were charged in the theft, and one (Damon Patrick Toey) pleaded guilty to numerous charges related to the breach. Another, Jonathan James, professed his innocence and later committed suicide, apparently out of the belief that he was going to be indicted. The alleged ringleader, Albert Gonzalez, was later indicted in August 2009 for attacking Heartland Payment Systems, where 130 million records were compromised.
List of brands
Current brands
Divisions
Marmaxx – TJ Maxx and Marshalls (US)
HomeGoods – HomeGoods and HomeSense (US)
TJX Canada – Winners, HomeSense (Canada), and Marshalls (Canada)
TJX International – TK Maxx and HomeSense (UK and Ireland)
Former brands
References
Sources
External links
1989 establishments in Massachusetts
Companies based in Framingham, Massachusetts
Companies listed on the New York Stock Exchange
Retail companies established in 1989
Retail companies of the United Kingdom |
22992399 | https://en.wikipedia.org/wiki/Construction%20and%20Analysis%20of%20Distributed%20Processes | Construction and Analysis of Distributed Processes | CADP (Construction and Analysis of Distributed Processes) is a toolbox for the design of communication protocols and distributed systems. CADP is developed by the CONVECS team (formerly by the VASY team) at INRIA Rhone-Alpes and connected to various complementary tools. CADP is maintained, regularly improved, and used in many industrial projects.
The purpose of the CADP toolkit is to facilitate the design of reliable systems by use of formal description techniques together with software tools for simulation, rapid application development, verification, and test generation.
CADP can be applied to any system that comprises asynchronous concurrency, i.e., any system whose behavior can be modeled as a set of parallel processes governed by interleaving semantics. Therefore, CADP can be used to design hardware architecture, distributed algorithms, telecommunications protocols, etc.
The enumerative verification (also known as explicit state verification) techniques implemented in CADP, though less general than theorem proving, enable an automatic, cost-efficient detection of design errors in complex systems.
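The idea behind enumerative verification can be illustrated with a small sketch (not CADP code — the model, state encoding, and names are hypothetical): every reachable state of a toy two-process model is enumerated breadth-first, and a design error such as a deadlock falls out of the exploration automatically.

```python
from collections import deque

def explore(initial, successors, is_final=lambda s: False):
    """Enumerate all reachable states breadth-first; report states
    that have no outgoing transition and are not intended final
    states (i.e., deadlocks)."""
    visited = {initial}
    deadlocks = []
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        succ = successors(state)
        if not succ and not is_final(state):
            deadlocks.append(state)
        for nxt in succ:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(nxt)
    return visited, deadlocks

# Hypothetical model: two processes acquiring locks A and B in
# opposite order.  State = (phase1, phase2, a_free, b_free).
def successors(state):
    p1, p2, a, b = state
    out = []
    if p1 == 0 and a: out.append((1, p2, False, b))    # P1 takes A
    if p1 == 1 and b: out.append((2, p2, True, True))  # P1 takes B, done, releases both
    if p2 == 0 and b: out.append((p1, 1, a, False))    # P2 takes B
    if p2 == 1 and a: out.append((p1, 2, True, True))  # P2 takes A, done, releases both
    return out

states, deadlocks = explore((0, 0, True, True), successors,
                            is_final=lambda s: s[0] == 2 and s[1] == 2)
# The classic deadlock (each process holding one lock) is found:
# deadlocks == [(1, 1, False, False)]
```

The exploration visits only nine states here; real CADP models can have many millions, which is why the toolbox also offers the on-the-fly and compositional techniques described below.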
CADP includes tools to support use of two approaches in formal methods, both of which are needed for reliable systems design:
Models provide mathematical representations for parallel programs and related verification problems. Examples of models are automata, networks of communicating automata, Petri nets, binary decision diagrams, boolean equation systems, etc. From a theoretical point of view, research on models seeks for general results, independent of any particular description language.
In practice, models are often too elementary to describe complex systems directly (this would be tedious and error-prone). A higher level formalism known as process algebra or process calculus is needed for this task, as well as compilers that translate high-level descriptions into models suitable for verification algorithms.
History
Work began on CADP in 1986, when the development of the first two tools, CAESAR and ALDEBARAN, was undertaken. In 1989, the CADP acronym was coined, which stood for CAESAR/ALDEBARAN Distribution Package. Over time, several tools were added, including programming interfaces that enabled tools to be contributed: the CADP acronym then became the CAESAR/ALDEBARAN Development Package. Currently CADP contains over 50 tools. While keeping the same acronym, the name of the toolbox has been changed to better indicate its purpose:
Construction and Analysis of Distributed Processes.
Major releases
The releases of CADP have been successively named with alphabetic letters (from "A" to "Z"), then with the names of cities hosting academic research groups actively working on the LOTOS language and, more generally, the names of cities in which major contributions to concurrency theory have been made.
Between major releases, minor releases are often available, providing early access to new features and improvements. For more information, see the change list page on the CADP website.
CADP features
CADP offers a wide set of functionalities, ranging from step-by-step simulation to massively parallel model checking. It includes:
Compilers for several input formalisms:
High-level protocol descriptions written in the ISO language LOTOS. The toolbox contains two compilers (CAESAR and CAESAR.ADT) that translate LOTOS descriptions into C code to be used for simulation, verification, and testing purposes.
Low-level protocol descriptions specified as finite state machines.
Networks of communicating automata, i.e., finite state machines running in parallel and synchronized (either using process algebra operators or synchronization vectors).
Several equivalence checking tools (minimization and comparisons modulo bisimulation relations), such as BCG_MIN and BISIMULATOR.
Several model-checkers for various temporal logic and mu-calculus, such as EVALUATOR and XTL.
Several verification algorithms combined: enumerative verification, on-the-fly verification, symbolic verification using binary decision diagrams, compositional minimization, partial orders, distributed model checking, etc.
Plus other tools with advanced functionalities such as visual checking, performance evaluation, etc.
CADP is designed in a modular way and puts the emphasis on intermediate formats and programming interfaces (such as the BCG and OPEN/CAESAR software environments), which allow the CADP tools to be combined with other tools and adapted to various specification languages.
Models and verification techniques
Verification consists of comparing a complex system against a set of properties characterizing the intended functioning of the system (for instance, deadlock freedom, mutual exclusion, fairness, etc.).
Most of the verification algorithms in CADP are based on the labeled transition systems (or, simply, automata or graphs) model, which consists of a set of states, an initial state, and a transition relation between states. This model is often generated automatically from high level descriptions of the system under study, then compared against the system properties using various decision procedures. Depending on the formalism used to express the properties, two approaches are possible:
Behavioral properties express the intended functioning of the system in the form of automata (or higher level descriptions, which are then translated into automata). In such a case, the natural approach to verification is equivalence checking, which consists in comparing the system model and its properties (both represented as automata) modulo some equivalence or preorder relation. CADP contains equivalence checking tools that compare and minimize automata modulo various equivalence and preorder relations; some of these tools also apply to stochastic and probabilistic models (such as Markov chains). CADP also contains visual checking tools that can be used to verify a graphical representation of the system.
Logical properties express the intended functioning of the system in the form of temporal logic formulas. In such a case, the natural approach to verification is model checking, which consists of deciding whether or not the system model satisfies the logical properties. CADP contains model checking tools for a powerful form of temporal logic, the modal mu-calculus, which is extended with typed variables and expressions so as to express predicates over the data contained in the model. This extension provides for properties that could not be expressed in the standard mu-calculus (for instance, the fact that the value of a given variable is always increasing along any execution path).
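As a rough illustration of what an equivalence checker decides (this is not CADP's algorithm — tools such as BCG_MIN use far more efficient partition refinement), strong bisimilarity on a small explicit LTS can be computed by iteratively refining a coloring of the states:

```python
def bisimulation_classes(states, transitions):
    """Partition states into strong-bisimulation classes: repeatedly
    split classes until two states share a class iff they can perform
    the same actions into the same classes."""
    color = {s: 0 for s in states}
    while True:
        sig = {s: (color[s],
                   frozenset((a, color[t])
                             for (src, a, t) in transitions if src == s))
               for s in states}
        palette, new_color = {}, {}
        for s in sorted(states):
            new_color[s] = palette.setdefault(sig[s], len(palette))
        if len(palette) == len(set(color.values())):
            return new_color   # no class was split: fixed point reached
        color = new_color

# a.(b + c) versus a.b + a.c: equal as sets of traces, yet the checker
# separates them, because after 'a' the right-hand system has already
# committed to either 'b' or 'c'.
trans = [('p0', 'a', 'p1'), ('p1', 'b', 'p2'), ('p1', 'c', 'p3'),
         ('q0', 'a', 'q1'), ('q0', 'a', 'q2'),
         ('q1', 'b', 'q3'), ('q2', 'c', 'q4')]
states = {s for (src, _, tgt) in trans for s in (src, tgt)}
classes = bisimulation_classes(states, trans)
# classes['p0'] != classes['q0']  -- the two systems are not bisimilar
```

The example shows why bisimulation is finer than trace equivalence, which is exactly the kind of distinction equivalence checking tools are built to detect.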
Although these techniques are efficient and automated, their main limitation is the state explosion problem, which occurs when models are too large to fit in computer memory. CADP provides software technologies for handling models in two complementary ways:
Small models can be represented explicitly, by storing in memory all their states and transitions (exhaustive verification);
Larger models are represented implicitly, by exploring only the model states and transitions needed for the verification (on the fly verification).
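The difference between the two representations can be sketched as follows (an illustrative example, not CADP code): in on-the-fly verification the model is given only as an initial state plus a successor function, and exploration stops as soon as the property is settled, so the full state space need never be constructed.

```python
def on_the_fly_search(initial, successors, violates):
    """Explore an implicit model depth-first, stopping as soon as a
    state violating the property is found; only the visited states
    are stored, never the full transition relation."""
    stack, visited = [initial], {initial}
    explored = 0
    while stack:
        state = stack.pop()
        explored += 1
        if violates(state):
            return state, explored
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                stack.append(nxt)
    return None, explored

# Implicit model: a counter from 0 to one million, never materialized.
# The search settles the property after six states, not a million.
bad, n = on_the_fly_search(
    0,
    lambda s: [s + 1] if s < 10**6 else [],
    lambda s: s == 5)
# bad == 5, n == 6
```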
Languages and compilation techniques
Accurate specification of reliable, complex systems requires a language that is executable (for enumerative verification) and has formal semantics (to avoid language ambiguities that could lead to interpretation divergences between designers and implementors). Formal semantics are also required when it is necessary to establish the correctness of an infinite system; this cannot be done using enumerative techniques because they deal only with finite abstractions, so must be done using theorem proving techniques, which only apply to languages with a formal semantics.
CADP acts on a LOTOS description of the system. LOTOS is an international standard for protocol description (ISO/IEC standard 8807:1989), which combines the concepts of process algebras (in particular CCS and CSP) and algebraic abstract data types. Thus, LOTOS can describe both asynchronous concurrent processes and complex data structures.
LOTOS was heavily revised in 2001, leading to the publication of E-LOTOS (Enhanced-Lotos, ISO/IEC standard 15437:2001), which tries to provide a greater expressiveness (for instance, by introducing quantitative time to describe systems with real-time constraints) together with a better user friendliness.
Several tools exist to convert descriptions in other process calculi or intermediate format into LOTOS, so that the CADP tools can then be used for verification.
Licensing and installation
CADP is distributed free of charge to universities and public research centers. Users in industry can obtain an evaluation license for non-commercial use during a limited period of time, after which a full license is required. To request a copy of CADP, complete the registration form on the CADP website. After the license agreement has been signed, the requester receives details of how to download and install CADP.
Tools summary
The toolbox contains several tools:
CAESAR.ADT is a compiler that translates LOTOS abstract data types into C types and C functions. The translation involves pattern-matching compiling techniques and automatic recognition of usual types (integers, enumerations, tuples, etc.), which are implemented optimally.
CAESAR is a compiler that translates LOTOS processes into either C code (for rapid prototyping and testing purposes) or finite graphs (for verification). The translation is done using several intermediate steps, among which the construction of a Petri net extended with typed variables, data handling features, and atomic transitions.
OPEN/CAESAR is a generic software environment for developing tools that explore graphs on the fly (for instance, simulation, verification, and test generation tools). Such tools can be developed independently of any particular high level language. In this respect, OPEN/CAESAR plays a central role in CADP by connecting language-oriented tools with model-oriented tools. OPEN/CAESAR consists of a set of 16 code libraries with their programming interfaces, such as:
Caesar_Hash, which contains several hash functions
Caesar_Solve, which resolves boolean equation systems on the fly
Caesar_Stack, which implements stacks for depth-first search exploration
Caesar_Table, which handles tables of states, transitions, labels, etc.
A number of tools have been developed within the OPEN/CAESAR environment:
BISIMULATOR, which checks bisimulation equivalences and preorders
CUNCTATOR, which performs on-the-fly steady state simulation
DETERMINATOR, which eliminates stochastic nondeterminism in normal, probabilistic, or stochastic systems
DISTRIBUTOR, which generates the graph of reachable states using several machines
EVALUATOR, which evaluates regular alternation-free mu-calculus formulas
EXECUTOR, which performs random execution of code
EXHIBITOR, which searches for execution sequences matching a given regular expression
GENERATOR, which constructs the graph of reachable states
PREDICTOR, which predicts the feasibility of reachability analysis
PROJECTOR, which computes abstractions of communicating systems
REDUCTOR, which constructs and minimizes the graph of reachable states modulo various equivalence relations
SIMULATOR, X-SIMULATOR and OCIS, which allow interactive simulation
TERMINATOR, which searches for deadlock states
BCG (Binary Coded Graphs) is both a file format for storing very large graphs on disk (using efficient compression techniques) and a software environment for handling this format, including partitioning graphs for distributed processing. BCG also plays a key role in CADP as many tools rely on this format for their inputs/outputs. The BCG environment consists of various libraries with their programming interfaces, and of several tools, including:
BCG_DRAW, which builds a two-dimensional view of a graph,
BCG_EDIT, which allows interactive modification of the graph layout produced by BCG_DRAW
BCG_GRAPH, which generates various forms of practically useful graphs
BCG_INFO, which displays various statistical information about a graph
BCG_IO, which performs conversions between BCG and many other graph formats
BCG_LABELS, which hides and/or renames (using regular expressions) the transition labels of a graph
BCG_MERGE, which gathers graph fragments obtained from distributed graph construction
BCG_MIN, which minimizes a graph modulo strong or branching equivalences (and can also deal with probabilistic and stochastic systems)
BCG_STEADY, which performs steady-state numerical analysis of (extended) continuous-time Markov chains
BCG_TRANSIENT, which performs transient numerical analysis of (extended) continuous-time Markov chains
PBG_CP, which copies a partitioned BCG graph
PBG_INFO, which displays information about a partitioned BCG graph
PBG_MV, which moves a partitioned BCG graph
PBG_RM, which removes a partitioned BCG graph
XTL (eXecutable Temporal Language), which is a high level, functional language for programming exploration algorithms on BCG graphs. XTL provides primitives to handle states, transitions, labels, successor and predecessor functions, etc. For instance, one can define recursive functions on sets of states, which make it possible to specify in XTL the fixed point algorithms for evaluation and diagnostic generation used in usual temporal logics (such as HML, CTL, ACTL, etc.).
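The fixed point style of evaluation behind such tools can be mimicked in a few lines of ordinary code (a hypothetical illustration, not XTL syntax): the CTL operator EF p — "a state satisfying p is reachable" — is the least fixed point obtained by repeatedly adding predecessors of satisfying states.

```python
def ef(graph, satisfying):
    """Least fixed point computation of EF p over an explicit graph
    (mapping each state to its successor list): start from the states
    satisfying p and keep adding predecessors until nothing changes."""
    result = set(satisfying)
    changed = True
    while changed:
        changed = False
        for src, targets in graph.items():
            if src not in result and any(t in result for t in targets):
                result.add(src)
                changed = True
    return result

# Four states; p holds only in state 3, which is reachable from 0 and 1
# but not from 2.
graph = {0: [1, 2], 1: [3], 2: [], 3: []}
reachable_p = ef(graph, {3})
# reachable_p == {0, 1, 3}
```

Other temporal operators follow the same pattern: greatest fixed points start from the full state set and remove states, which is why a language with sets, recursion, and successor/predecessor primitives is a natural fit for this job.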
The connection between explicit models (such as BCG graphs) and implicit models (explored on the fly) is ensured by OPEN/CAESAR-compliant compilers including:
CAESAR.OPEN, for models expressed as LOTOS descriptions
BCG.OPEN, for models represented as BCG graphs
EXP.OPEN, for models expressed as communicating automata
FSP.OPEN, for models expressed as FSP descriptions
LNT.OPEN, for models expressed as LNT descriptions
SEQ.OPEN, for models represented as sets of execution traces
The CADP toolbox also includes additional tools, such as ALDEBARAN and TGV (Test Generation based on Verification) developed by the Verimag laboratory (Grenoble) and the Vertecs project-team of INRIA Rennes.
The CADP tools are well-integrated and can be accessed easily using either the EUCALYPTUS graphical interface or the SVL scripting language. Both EUCALYPTUS and SVL provide users with an easy, uniform access to the CADP tools by performing file format conversions automatically whenever needed and by supplying appropriate command-line options as the tools are invoked.
Awards
In 2002, Radu Mateescu, who designed and developed the EVALUATOR model checker of CADP, received the Information Technology Award attributed during the 10th edition of the yearly symposium organized by the Foundation Rhône-Alpes Futur.
In 2011, Hubert Garavel, software architect and developer of CADP, received the Gay-Lussac Humboldt Prize.
In 2019, Frédéric Lang and Franco Mazzanti won all the gold medals for the parallel problems of the RERS challenge by using CADP to successfully and correctly evaluate 360 computational tree logic (CTL) and linear temporal logic (LTL) formulas on various sets of communicating state machines.
In 2020, Frédéric Lang, Franco Mazzanti, and Wendelin Serwe won three gold medals at the RERS'2020 challenge by correctly solving 88% of the "Parallel CTL" problems, only giving "don't know" answers for 11 formulas out of 90.
In 2021, Hubert Garavel, Frédéric Lang, Radu Mateescu, and Wendelin Serwe jointly won the Innovation Prize of Inria – Académie des sciences – Dassault Systèmes for their scientific work that led to the development of the CADP toolbox.
See also
SYNTAX compiler generator (used to build many CADP compilers and translators)
References
External links
http://cadp.inria.fr/
http://vasy.inria.fr/
http://convecs.inria.fr/
Model checkers
Process calculi
Formal methods
Formal specification languages
Concurrency (computer science)
Concurrency control
Synchronization |
1595676 | https://en.wikipedia.org/wiki/DOS%20extender | DOS extender | A DOS extender is a computer software program running under DOS that enables software to run in a protected mode environment even though the host operating system is only capable of operating in real mode.
DOS extenders were initially developed in the 1980s following the introduction of the Intel 80286 processor (and later expanded upon with the Intel 80386), to cope with the memory limitations of DOS.
DOS extender operation
A DOS extender is a program that "extends" DOS so that programs running in protected mode can transparently interface with the underlying DOS API. This was necessary because many of the functions provided by DOS require 16-bit segment and offset addresses pointing to memory locations within the first 640 kilobytes of memory. Protected mode, however, uses an incompatible addressing method where the segment registers (now called selectors) are used to point to an entry in the Global Descriptor Table which describes the characteristics of the segment. The two methods of addressing are mutually exclusive, with the processor having to make costly switches to real (or V86) mode to service non-protected mode requests.
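For reference, real-mode addressing can be sketched numerically (standard x86 behavior, not extender-specific code): a 16-bit segment value is shifted left four bits and added to a 16-bit offset, yielding a roughly 20-bit linear address, which is why real-mode pointers can only reach the bottom megabyte of the address space.

```python
def real_mode_linear(segment, offset):
    """Real-mode address translation: linear = segment * 16 + offset.
    The maximum reachable address is 0xFFFF0 + 0xFFFF = 0x10FFEF,
    just under 1 MB + 64 KB."""
    assert 0 <= segment <= 0xFFFF and 0 <= offset <= 0xFFFF
    return (segment << 4) + offset

CONVENTIONAL_LIMIT = 640 * 1024       # the 640 KB DOS barrier (0xA0000)
# The classic color text-mode video buffer at B800:0000:
video = real_mode_linear(0xB800, 0x0000)
# video == 0xB8000, which lies above conventional memory
```

Protected-mode selectors carry no such arithmetic meaning, which is exactly the incompatibility the extender has to bridge.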
In addition to setting up the environment and loading the actual program to be executed, the DOS extender also provides (amongst other things) a translation layer that maintains buffers allocated below the 1 MB real mode memory barrier. These buffers are used to transfer data between the underlying real mode operating system and the protected mode program. Since switching between real/V86 mode and protected mode is a relatively time consuming operation, the extender attempts to minimize the number of switches by duplicating the functionality of many real mode operations within its own protected mode environment. As DOS uses interrupts extensively for communication between the operating system and user level software, DOS extenders intercept many of the common hardware (e.g. the real-time clock and keyboard controller) and software (e.g. DOS itself and the mouse API) interrupts. Some extenders also handle other common interrupt functions, such as video BIOS routines.
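The buffer translation described above can be modeled as a toy sketch (all names, addresses, and the stand-in "DOS" call below are illustrative assumptions, not real extender code):

```python
# Toy model: memory is a flat bytearray; "DOS" can only see the first 1 MB,
# so the "extender" bounces high data through a buffer it keeps below 1 MB.
MB = 1 << 20
memory = bytearray(4 * MB)          # 4 MB of pretend physical memory
LOW_BUFFER = 0x80000                # a buffer the extender allocated below 1 MB

def dos_write_file(buf_addr: int, length: int) -> bytes:
    """Stand-in for a real-mode DOS service: only accepts addresses under 1 MB."""
    assert buf_addr + length <= MB, "real-mode DOS cannot see this address"
    return bytes(memory[buf_addr:buf_addr + length])

def extended_write_file(buf_addr: int, length: int) -> bytes:
    """The translation layer: copy high memory down, then invoke 'DOS'."""
    if buf_addr + length <= MB:
        return dos_write_file(buf_addr, length)      # already reachable as-is
    memory[LOW_BUFFER:LOW_BUFFER + length] = memory[buf_addr:buf_addr + length]
    return dos_write_file(LOW_BUFFER, length)

# A protected-mode program holds its data at 2 MB, invisible to real-mode DOS.
memory[2 * MB:2 * MB + 5] = b"hello"
print(extended_write_file(2 * MB, 5))  # b'hello'
```

Each such bounce implies a costly mode switch, which is why real extenders try to satisfy as many calls as possible without leaving protected mode.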
Essentially, a DOS extender is like a miniature operating system, handling much of the functionality of the underlying operating system itself.
Development history
The DOS extender was arguably invented by Phar Lap, but it was Tenberry Software's (formerly Rational Systems) 386 extender DOS/4GW that brought protected mode DOS programs to a mass market. Included with Watcom's C, C++, and Fortran compilers for 386 class processors, it soon became a ubiquitous mainstay of PC applications and games such as id Software's successful Doom.
While initially it was the memory-hungry business applications that drove the development of DOS extenders, it would be PC games that truly brought them into the spotlight. As a result of the development of DOS extenders, two new software interfaces were created to take care of the many potential conflicts that could arise from the varied methods of memory management that already existed, as well as provide a uniform interface for client programs.
The first of these interfaces was the Virtual Control Program Interface (VCPI), but this was rapidly overshadowed by the DOS Protected Mode Interface (DPMI) specification, which grew from the Windows 3.0 development. They provided an API through which an extended program could interface with real mode software, allocate memory, and handle interrupt services. They also provided an easy method for the extender to set up the switch to protected mode, and allowed multiple protected mode programs to coexist peacefully.
DOS extenders
DOS/4G and DOS/4GW and DOS/16M by Tenberry Software, Inc.
286|DOS Extender and 386|DOS Extender by Phar Lap. Later superseded by the TNT DOS Extender.
PROT by Al Williams, a 32-bit DOS extender published in Dr. Dobb's Journal and in two books. This extender had the virtue of running DOS and BIOS calls in emulated mode instead of switching back to real mode.
PMODE and PMODE/W by Thomas Pytel and Charles Sheffold. The latter was for Watcom C as an alternative to DOS/4GW, and was quite popular with demoscene programmers.
CauseWay was a formerly proprietary extender that competed with DOS/4G. It was released as open source in 2000. A few games, such as Daggerfall, use it.
DOS/32 as an alternative to DOS/4G by Narech K.
Ergo (formerly Eclipse, formerly A. I. Architects) OS/286 and OS/386 extenders, and DPM16 and DPM32 servers
386Power 32-bit DOS Extender is an extender for 32-bit assembly apps. It includes source code.
All Microsoft Windows versions since 1990, except the NT branch, include both a DPMI server and a DOS extender.
HX DOS Extender provides limited Win32 support to allow Windows console and some Win32 GUI applications to run under DOS. It contains both 16-bit and 32-bit DPMI servers (HDPMI16/HDPMI32) for use with protected mode DOS programs.
DosWin32 provides limited Win32 support
WDosX was an early implementation of limited Win32 support, used by the TMT Pascal compiler.
Borland Power Pack was an extender included with some of their development suites that could access a limited portion of the Win32 API.
TASM, again from Borland, included 32RTM with DPMI32VM and RTM with DPMI16BI, two DPMI hosts.
CWSDPMI by Charles W. Sandmann, a DPMI server for use with 32-bit protected mode DOS DJGPP programs.
QDPMI by Quarterdeck Office Systems, was a DPMI host included with QEMM.
GO32, used in older (pre-v2) versions of DJGPP, and Free Pascal
D3X is a DPMI server written entirely in assembly. It remained in an alpha state and was discontinued before completion.
DPMIONE is another DPMI server, originally developed for 32-bit programs generated by Borland C++ and Delphi.
DBOS by Salford Software, a 32-bit protected mode DOS extender used primarily by their FTN77 Fortran Compiler
X32 and X32VM by FlashTek and supported as a target by Digital Mars compilers
BLINKER by Blink Inc. Version 3 and above provided a 286 DOS extender for several 16-bit DOS compilers including CA-Clipper, Microsoft C/C++, PASCAL, FORTRAN and Borland C/C++. It supported unique 'Dual Mode' executables capable of running in either real or protected mode depending on the run-time environment.
EMX
Notable DOS extended applications
AT&T Graphics Software Labs' RIO (Resolution Independent Objects) graphics software.
Adobe Acrobat Reader 1.0 (uses an early version of DOS/4GW professional)
AutoCAD 11 (PharLap 386)
Lotus 1-2-3 Release 3 (Rational Systems DOS/16M)
Oracle Professional
IBM Interleaf
Major BBS, a 1980s BBS software package that utilized the Phar Lap DOS extender.
Quarterdeck DESQview and DESQview/X multitasking software
Watcom's C, C++ and Fortran compilers for the x86
Countless DOS games from the early to mid 1990s, mostly using DOS/4GW, including:
id Software's DOOM and its sequels, as well as Quake (built with DJGPP)
Looking Glass Studios' System Shock
Parallax Software's Descent
Crack dot com's Abuse
Blizzard Entertainment's Warcraft: Orcs & Humans and Warcraft II: Tides of Darkness
3D Realms' Duke Nukem 3D
Midway's Mortal Kombat
Westwood Studios' Command & Conquer and Command & Conquer: Red Alert
DMA Design (now Rockstar North)'s Grand Theft Auto. Later versions of the game were ported to Windows in order to make it more compatible with modern computers.
Comanche: Maximum Overkill by NovaLogic used a custom Unreal mode memory manager which required an 80386 processor and was incompatible with memory managers and virtual DOS boxes, requiring a complicated DOS boot menu configuration in CONFIG.SYS. Later revisions included a DOS extender which solved the problem.
Ultima VII and Ultima VII Part Two: Serpent Isle by Origin Systems also used a custom Unreal mode memory manager called the Voodoo Memory Manager, which was incompatible with EMS memory and memory managers such as EMM386.
References
External links
HX-DOS
The Free Country's list of DOS extenders |
16989288 | https://en.wikipedia.org/wiki/Rajeev%20Motwani | Rajeev Motwani | Rajeev Motwani (Hindi: राजीव मोटवानी, March 24, 1962 – June 5, 2009) was an Indian American professor of Computer Science at Stanford University whose research focused on theoretical computer science. He was an early advisor and supporter of companies including Google and PayPal, and a special advisor to Sequoia Capital. He was a winner of the Gödel Prize in 2001.
Education
Rajeev Motwani was born in Jammu, Jammu and Kashmir, India on March 24, 1962, into a Sindhi Hindu family and grew up in New Delhi. His father was in the Indian Army. He had two brothers. As a child, inspired by luminaries like Gauss, he wanted to become a mathematician.
Motwani went to St Columba's School, New Delhi. He completed his B.Tech. in Computer Science from the Indian Institute of Technology Kanpur in Kanpur, Uttar Pradesh in 1983 and got his Ph.D. in Computer Science from the University of California, Berkeley in Berkeley, California, United States in 1988, under the supervision of Richard M. Karp.
Career
Motwani joined Stanford soon after finishing his Ph.D. at U.C. Berkeley.
He founded the Mining Data at Stanford project (MIDAS), an umbrella organization for several groups looking into new and innovative data management concepts. His research included data privacy, web search, robotics, and computational drug design. He is also one of the originators of the Locality-sensitive hashing algorithm.
Motwani was one of the co-authors (with Larry Page, Sergey Brin, and Terry Winograd) of an influential early paper on the PageRank algorithm. He also co-authored another seminal search paper, What Can You Do With A Web In Your Pocket, with those same authors.
PageRank was the basis for search techniques of Google (founded by Page and Brin), and Motwani advised or taught many of Google's developers and researchers, including the first employee, Craig Silverstein.
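The core idea of PageRank, which the cited paper develops far more rigorously, can be sketched with a short power iteration (the example graph, damping factor, and iteration count below are invented for illustration):

```python
def pagerank(links, damping=0.85, iters=50):
    """Power-iteration PageRank over a dict: page -> list of outbound links."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with a uniform distribution
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}   # random-jump component
        for p, outs in links.items():
            targets = outs if outs else pages   # dangling pages spread evenly
            share = rank[p] / len(targets)
            for q in targets:
                new[q] += damping * share       # pass rank along each out-link
        rank = new
    return rank

# A tiny illustrative web: pages B and C both link to A, so A ranks highest.
ranks = pagerank({"A": ["B"], "B": ["A"], "C": ["A"]})
assert max(ranks, key=ranks.get) == "A"
```

The real algorithm operates on a web-scale sparse matrix, but the fixed point being computed is the same stationary distribution.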
He was an author of two widely used theoretical computer science textbooks: Randomized Algorithms with Prabhakar Raghavan and Introduction to Automata Theory, Languages, and Computation with John Hopcroft and Jeffrey Ullman.
He was an avid angel investor and helped fund a number of startups to emerge from Stanford. He sat on boards including Google, Kaboodle, Mimosa Systems (acquired by Iron Mountain Incorporated), Adchemy, Baynote, Vuclip, NeoPath Networks (acquired by Cisco Systems in 2007), Tapulous and Stanford Student Enterprises. He was active in the Business Association of Stanford Entrepreneurial Students (BASES).
He was a winner of the Gödel Prize in 2001 for his work on the PCP theorem and its applications to hardness of approximation.
He served on the editorial boards of SIAM Journal on Computing, Journal of Computer and System Sciences, ACM Transactions on Knowledge Discovery from Data, and IEEE Transactions on Knowledge and Data Engineering.
Death
Motwani was found dead in his pool in the backyard of his Atherton, San Mateo County, California home on June 5, 2009. The San Mateo County coroner, Robert Foucrault, ruled the death an accidental drowning. Toxicology tests showed that Motwani's blood alcohol content was 0.26 percent.
He could not swim, but was planning on taking lessons, according to his friends.
Personal life
Motwani and his wife, Asha Jadeja Motwani, had two daughters named Naitri and Anya.
After his death, his family donated US$1.5 million in 2011, and a building was named in his honor at IIT Kanpur.
Awards
Gödel Prize in 2001
Okawa Foundation Research Award
Alfred P. Sloan Research Fellowship
National Young Investigator Award from the National Science Foundation
Distinguished Alumnus Award from IIT Kanpur in 2006
Bergmann Memorial Award from the US-Israel Bi-National Science Foundation
IBM Faculty Award
References
External links
Mathematician at heart
Professor Rajeev Motwani at The Telegraph
Indian emigrants to the United States
Stanford University School of Engineering faculty
Theoretical computer scientists
American computer scientists
Gödel Prize laureates
IIT Kanpur alumni
University of California, Berkeley alumni
Google people
1962 births
2009 deaths
American people of Sindhi descent
St. Columba's School, Delhi alumni
Sindhi people
Sindhi computer scientists
Scientists from Jammu and Kashmir
People from Jammu (city)
20th-century Indian mathematicians
People from Atherton, California
Indian computer scientists |
38542908 | https://en.wikipedia.org/wiki/Silenced%20%28album%29 | Silenced (album) | Silenced is the sixth full-length studio album by The Black Dog, released in 2005 on CD. It is the first album Ken Downie recorded and produced together with Martin and Richard Dust, owners of the label Dust Science Recordings.
It harks back to Black Dog's debut Bytes, a record that remains a landmark album in electronic music's development. Martin Dust explained: "We never set off to make it like Bytes. My idea was to create something that you could come home to after you'd just been to a club or gig, that would start at the right pace and then just wind down into a great album and just chill out."
Martin Dust had been "friends with Ken for probably nine or ten years. The main connection is that we both had an interest in internet bulletin board systems and punk. I used to run a bulletin board with an old style modem and communication and information file exchange and hacking and stuff, so right from the beginning I've always been talking to Ken, about that, about music, about everything really. We just struck a friendship up like that and swapped music and ideas and continue from there."
Track listing
"Trojan Horus (Part 1)" - 5:41
"Trojan Horus (Part 2)" - 2:29
"Lam Vril" - 4:31
"Truth Benders D.I.E" - 3:22
"Bolt 23 Blue Screen ov Death" - 0:37
"Alt/Return/Dash/Kill" - 3:54
"Bolt 777 Ordinary Boy" - 0:41
"Drexian City R.I.D.E" - 3:38
"Remote Viewing" - 4:34
"Gummi Void" - 4:55
"Machine Machina" - 1:35
"The Stele of Revealing" - 2:56
"Songs for Other People" - 2:07
"Break Down on Lake Shore Drive" - 1:10
"Bolt 33 Glitch and Chin" - 1:03
"Sudden Intake" - 5:11
"4 3s 555 (Part 1)" - 2:57
"4 3s 555 (Part 2)" - 8:13
Composed & produced by Ken Downie, Martin Dust & Richard Dust
Bass on "Remote Viewing" by Webby
Bite Thee Back EP
"4 3s 777" - 6:10
"Bite Thee Back" - 6:53
"Invoke" - 4:10
"Evoke" - 4:24
Trojan Horus EP
"Trojan Horus (Parts 1 & 3)" - 8:29
"D.O.G. Style" - 4:29
"Evoke (Carl Taylor Deep in Detroit Mix)" - 5:52
Remote Viewing EP
"Remote Viewing" - 5:04
"Because They Said So" - 3:48
"Mr Burroughs to the Curiosity Phone Please" - 6:17
The Remixes EP
"4 3s 555 (Vince Watson Remix)" - 10:42
"The Stele Of Revealing (Carl Taylor Remix)" - 7:19
"D.O.G. Style (The Black Dog's Late Night Porn Mix)" - 3:55
References
External links
Silenced at discogs.com
2005 albums
The Black Dog (band) albums |
4905858 | https://en.wikipedia.org/wiki/Geocast | Geocast | Geocast refers to the delivery of information to a subset of destinations in a wireless peer-to-peer network identified by their geographical locations. It is used by some mobile ad hoc network routing protocols, but not applicable to Internet routing.
Geographic addressing
A geographic destination address is expressed in three ways: point, circle (with center point and radius), and polygon (a list of points, e.g., P(1), P(2), …, P(n–1), P(n)). A geographic router (Geo Router) calculates its service area (geographic area it serves) as the union of the geographic areas covered by the networks attached to it. This service area is approximated by a single closed polygon. Geo Routers exchange service area polygons to build routing tables. The routers are organized in a hierarchy.
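The routing decision implied above (does a destination point fall inside a router's service-area polygon?) can be sketched with the standard ray-casting test; the coordinates below are invented for illustration:

```python
def point_in_polygon(x, y, polygon):
    """Ray casting: count crossings of a rightward ray from (x, y) with edges."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                           # edge straddles the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:                                # crossing is to the right
                inside = not inside                        # odd count => inside
    return inside

service_area = [(0, 0), (4, 0), (4, 4), (0, 4)]   # a router's square service polygon
assert point_in_polygon(2, 2, service_area)        # forward: destination is in area
assert not point_in_polygon(5, 5, service_area)    # drop: outside this service area
```

A production Geo Router would apply such a test (or an approximation of it) against each neighbor's advertised service-area polygon to decide where to forward a geocast packet.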
Applications
Geographic addressing and routing has many potential applications in geographic messaging, geographic advertising, delivery of geographically restricted services, and presence discovery of a service or mobile network participant in a limited geographic area (see Navas, Imieliński, 'GeoCast - Geographic Addressing and Routing'.)
See also
Abiding Geocast / Stored Geocast
References
External links
RFC 2009 GPS-Based Addressing and Routing
A Survey of Geocast Routing Protocols
Efficient Point to Multipoint Transfers Across Datacenters
Ad hoc routing protocols |
48893231 | https://en.wikipedia.org/wiki/The%20Emoji%20Movie | The Emoji Movie | The Emoji Movie is a 2017 American computer-animated science fiction comedy film directed by Tony Leondis, who wrote the script with Eric Siegel and Mike White. It stars the voices of T.J. Miller, James Corden, Anna Faris, Maya Rudolph, Steven Wright, Jennifer Coolidge, Christina Aguilera, Sofía Vergara, Sean Hayes, and Patrick Stewart. Based on emojis, the film centers on Gene (Miller), a multi-expressional emoji who lives in the smartphone of a teenager (Jake T. Austin) and embarks on a journey to become a normal "meh" emoji like his parents (Wright and Coolidge).
The Emoji Movie premiered on July 23, 2017 at the Regency Village Theatre and was theatrically released in the United States five days later. The movie was a commercial success, grossing $217 million worldwide, but was universally panned by critics, who criticized its script, humor, use of product placement, tone, voice performances, lack of originality, and plot, drawing unfavorable comparisons to other animated films, especially Wreck-It Ralph (2012), The Lego Movie (2014) and Inside Out (2015). The Emoji Movie won four awards at the 38th Golden Raspberry Awards, including Worst Picture, becoming the first animated film to do so.
Plot
Gene is an emoji that lives in Textopolis, a digital city inside the phone of his user, a teenager named Alex. He is the son of two meh emojis named Mel and Mary and is able to make multiple expressions despite his parents' upbringing. His parents are hesitant about him going to work, but Gene insists so that he can feel useful. Upon receiving a text from his love interest Addie McCallister, Alex decides to send her an emoji. When Gene is selected, he panics, makes a panicked expression, and wrecks the text center. Gene is called in by Smiler, a smiley emoji and leader of the text center, who concludes that Gene is a "malfunction" and therefore must be deleted. Gene is chased by bots but is rescued by Hi-5, a once-popular emoji who has since lost his fame due to lack of use. He tells Gene that he can be fixed if they find a hacker, and Hi-5 accompanies him so that he can reclaim his fame.
Smiler sends more bots to look for Gene when she finds out that he has left Textopolis, as his actions have caused Alex to think that his phone needs to be fixed. Gene and Hi-5 come to a piracy app where they meet a hacker emoji named Jailbreak, who wants to reach Dropbox so that she can live in the cloud. The trio is attacked by Smiler's bots, but manage to escape into the game Candy Crush. Jailbreak reveals that Gene can be fixed in the cloud, and the group goes off into the Just Dance app. While there, Jailbreak is revealed to be a princess emoji who fled home after tiring of being stereotyped. They are once again attacked by bots, and their actions cause Alex to delete the Just Dance app. Gene and Jailbreak escape, but Hi-5 is taken along with the app and ends up in the trash.
Mel and Mary go searching for Gene and have a very lethargic argument. They make up in the Instagram app when Mel reveals that he, too, is a malfunction, explaining Gene's behavior. While traveling through Spotify, Jailbreak admits that she likes Gene just the way he is and that he should not be ashamed of his malfunction. The two start to fall in love and Gene silently debates his choice to change himself. They make it to the trash and rescue Hi-5, but are soon attacked by a bot upgraded with illegal malware. They evade it by entangling its arms and enter Dropbox, where they encounter a firewall. After many tries, the gang gets past it with a password being Addie's name and make it to the cloud, where Jailbreak prepares to reprogram Gene. Gene admits his feelings for Jailbreak, but she wishes to stick to her plan of venturing into the cloud, unintentionally causing Gene to revert to his apathetic programming out of heartbreak. Suddenly, the upgraded bot sneaks into the cloud and captures Gene, prompting Hi-5 and Jailbreak to go after him with a Twitter bird summoned by Jailbreak in her princess form.
As Smiler prepares to delete Gene, Mel and Mary arrive. Mel reveals to everyone that he is also a malfunction, prompting Smiler to threaten to delete him as well. Jailbreak and Hi-5 arrive and disable the bot, which falls on top of Smiler. Alex has since taken his phone to a store in hopes that a factory reset performed by technical support would restore his phone's functionality, which would entail total destruction of Gene's world should such operation complete. Out of desperation, Gene prepares to have himself texted to Addie, making numerous faces to express himself. Realizing that Addie received a text from him, Alex cancels the factory reset just as it nearly finishes, saving the emoji and finally getting to speak with Addie, who likes the emoji Alex sent. Gene accepts himself for who he is and is celebrated by all of the emojis.
In a mid-credits scene, Smiler has been relegated to the "loser lounge" with the other unused and forgotten emojis for her crimes, wearing numerous braces due to her teeth being chipped by the bot, and playing and losing a game of Go Fish.
Voice cast
T.J. Miller as Gene Meh, an outsider "meh" emoji who can show multiple expressions
James Corden as Hi-5, a hand emoji representing a high five signal
Anna Faris as Jailbreak, a hacker emoji who is later revealed to be a princess emoji named Linda.
Maya Rudolph as Smiler, a smiley emoji. As the original emoji, she is the systems supervisor of the text center.
Steven Wright as Mel Meh, Gene's emoji father who is later revealed to have the same multi-expressionist condition as his son
Jennifer Coolidge as Mary Meh, Gene's emoji mother
Patrick Stewart as Poop, a well-mannered poop emoji
Christina Aguilera as Akiko Glitter, a "super cool" dancer that lives inside the Just Dance app
Sofía Vergara as Flamenca, a flamenco dancer emoji
Sean Hayes as Steven, a devil emoji
Rachael Ray as Spam, a spam message
Jeff Ross as an Internet troll
Jake T. Austin as Alex, a human teenager who owns the phone where Gene and his emoji friends live
Tati Gabrielle as Addie McCallister, Alex's love interest
Rob Riggle (uncredited) as an ice cream emoji
Conrad Vernon as a Trojan Horse
Tony Leondis as Laughter, Broom, and Pizza
Liam Aiken as Ronnie Ramtech, one of the two programmers that select which Emoji to display on a phone.
Production
Development
The film was inspired by director Tony Leondis' love of Toy Story (1995). Wanting to make a new take on the concept, he began asking himself, "What is the new toy out there that hasn't been explored?" At the same time, Leondis received a text message with an emoji, which helped him realize that this was the world he wanted to explore. In fleshing out the story, Leondis considered having the emojis visit the real world. However, his producer felt that the world inside a phone was much more interesting, which inspired Leondis to create the story of where and how the emojis lived. As Leondis is gay, he connected to Gene's plight of "being different in a world that expects you to be one thing," and in eventually realizing that the feeling held true for most people, Leondis has said the film "was very personal".
In July 2015, it was announced that Sony Pictures Animation had won a bidding war against Warner Bros. Pictures and Paramount Pictures over production rights to make the film, with the official announcement occurring at the 2016 CinemaCon. The film was fast tracked into production by the studio after the bidding war. Unlike most other animated films, the film had a production time of two years, as there were concerns that the movie would become outdated due to the evolution of phone technology.
Casting
On World Emoji Day on July 17, 2016, Miller was announced as the lead. Leondis created the part with Miller in mind, although the actor was initially hesitant to play the role, only accepting after Leondis briefed him on the story. Leondis chose Miller because "when you think of irrepressible, you think of TJ. But he also has this surprising ability to break your heart". In addition, Miller contributed some rewrites. In October 2016, it was announced that Ilana Glazer and Corden would join the cast as well. Glazer was later replaced by Anna Faris. According to Jordan Peele, he was initially offered the role of "Poop", which he later said led to his decision to retire from acting. The part would ultimately go to Patrick Stewart.
Music
The film's score was composed by Patrick Doyle, who previously composed the score for Leondis' Igor (2008). Singer Ricky Reed recorded an original song, "Good Vibrations", for the film. While also voicing a character in the film, Christina Aguilera's song "Feel This Moment" was also used during the film and the end credits.
Marketing
On December 20, 2016, a teaser trailer for the film was released, which received overwhelming criticism from social media users, collecting almost 22,000 "dislikes" against 4,000 "likes" within the first 24 hours of its release. A second trailer was released on May 16, 2017, which also received an extremely negative reception. Sony promoted the release of the latter trailer by hosting a press conference in Cannes, the day before the 2017 Cannes Film Festival, which featured T. J. Miller parasailing in. Variety called the event "slightly awkward", and The Hollywood Reporter described it as "promotional ridiculousness".
Sony Pictures was later criticized after the film's official Twitter account posted a promotional picture of a parody of The Handmaid's Tale, featuring Smiler. The parody was considered to be "tasteless" due to the overall themes of the work, and the image was deleted afterward.
On July 17, 2017, the Empire State Building was lit "emoji yellow". That same day, director Tony Leondis and producer Michelle Raimo Kouyate joined Jeremy Burge and Jake T. Austin to ring the closing bell of the New York Stock Exchange and Saks Fifth Avenue hosted a promotional emoji red carpet event at its flagship store to promote branded Emoji Movie merchandise.
On July 20, 2017, Sony Pictures invited YouTuber Jacksfilms (whom they considered "the [No. 1] fan of the Emoji Movie") to the world premiere and sent him a package containing various Emoji Movie memorabilia including fidget spinners, face masks, and a plushie of the poop emoji. Jacksfilms had praised the movie four months prior, although his praise was sarcastic and he had actually been making fun of the movie.
Release
The Emoji Movie premiered on July 23, 2017, at the Regency Village Theatre in Los Angeles. The film was originally scheduled for general release on August 11, then August 4, but was ultimately moved up to July 28. In theaters, The Emoji Movie was accompanied by the short film Puppy! (2017).
The Emoji Movie was released on 4K Ultra HD Blu-ray, Blu-ray, and DVD on October 24, 2017, by Sony Pictures Home Entertainment. According to The Numbers, the domestic DVD sales are $8,616,759 and the Blu-ray sales are $6,995,654.
Reception
Box office
The Emoji Movie grossed $86.1 million in the United States and Canada and $131.7 million in other territories, for a worldwide total of $217.8 million, against a production budget of $50 million.
The film was released with Atomic Blonde on July 28, 2017. The Emoji Movie grossed $10.1 million on its first day, including $900,000 from Thursday night previews. The film debuted in second place, grossing $25.7 million from 4,075 theaters. Its second-weekend earnings dropped by 50 percent to $12.4 million, followed by another $6.5 million in the third weekend. The Emoji Movie completed its theatrical run in the United States and Canada on November 30, 2017.
Review embargoes for the film were lifted midday July 27, only a few hours before the film premiered to the general public, in a move considered among several tactics studios are using to try to curb bad Rotten Tomatoes ratings. Speaking of the effect embargoing reviews until last minute had on the film's debut, Josh Greenstein, Sony Pictures president of worldwide marketing and distribution, said, "The Emoji Movie was built for people under 18 ... so we wanted to give the movie its best chance. What other wide release with a score under 8 percent has opened north of $20 million? I don't think there is one."
Critical response
The Emoji Movie has an approval rating of based on professional reviews on the review aggregator website Rotten Tomatoes, with an average rating of . Its critical consensus displays a no symbol emoji ("🚫") in place of text. Metacritic (which uses a weighted average) assigned The Emoji Movie a score of 12 out of 100 based on 26 critics, indicating "overwhelming dislike". Audiences polled by CinemaScore gave the film an average grade of "B" on an A+ to F scale.
David Ehrlich of IndieWire gave the film a D, writing: "Make no mistake, The Emoji Movie is very, very, very bad (we're talking about a hyperactive piece of corporate propaganda in which Spotify saves the world and Sir Patrick Stewart voices a living turd), but real life is just too hard to compete with right now." Alonso Duralde of TheWrap was also critical of the film, calling it "a soul-crushing disaster because it lacks humor, wit, ideas, visual style, compelling performances, a point of view or any other distinguishing characteristic that would make it anything but a complete waste of your time".
Glenn Kenny of The New York Times described the film as "nakedly idiotic", stating that the film plays off a Hollywood idea that the "panderingly, trendily idiotic can be made to seem less so". Owen Gleiberman of Variety dismissed the film as "hectic situational overkill" and "lazy", writing: "There have been worse ideas, but in this case the execution isn't good enough to bring the notion of an emoji movie to funky, surprising life." Writing in The Guardian, Charles Bramesco called the film "insidious evil" and wrote that it was little more than an exercise in advertising smartphone downloads to children. Reviewers at The Washington Post, The Guardian, the Associated Press, The New Republic, and the Hindustan Times also noted the film's unfavorable comparisons and similarities to Inside Out (2015), Toy Story (1995), The Lego Movie (2014), Wreck-It Ralph (2012), and Bee Movie (2007), among others.
Nigel Andrews of the Financial Times, however, gave the film 3/5 stars, writing: "Occasionally it's as if The Lego Movie is reaching out a long, friendly arm to Inside Out and falling into the chasm between. But the film is inventive too", while Jake Wilson of The Sydney Morning Herald gave the film 4/5 stars, calling it "a rare attempt by Hollywood to come to grips with the online world".
Accolades
The Emoji Movie led the 38th Golden Raspberry Awards season with five nominations (including The Razzie Nominee So Rotten You Loved It). It received four Razzies (Worst Picture, Worst Director, Worst Screen Combo, and Worst Screenplay), making it the first animated film ever to win these awards.
Notes
References
External links
2017 films
2017 computer-animated films
2010s American animated films
2010s science fiction comedy films
American buddy films
American children's animated adventure films
American children's animated comic science fiction films
American children's animated science fantasy films
American computer-animated films
American films
Animation controversies in film
Advertising and marketing controversies in film
Animated buddy films
Columbia Pictures films
Columbia Pictures animated films
Emoji
English-language films
Sony Pictures Animation films
Works set in computers
Films directed by Tony Leondis
Films scored by Patrick Doyle
Films with screenplays by Tony Leondis
Films with screenplays by Mike White
2017 comedy films
Golden Raspberry Award winning films |
2240648 | https://en.wikipedia.org/wiki/Field%20ration | Field ration | A field ration (also known as combat ration, ration pack, or food packet) is a canned or pre-packaged meal, easily prepared and consumed by military troops. They are distinguished from regular military garrison rations by virtue of being designed for minimal preparation in the field, using canned, vacuum-sealed, pre-cooked or freeze-dried foods, powdered beverage mixes and concentrated food bars, as well as for long shelf life. Field rations typically contain meat as one of their main courses. The iron ration is a soldier's dry emergency ration.
Such meals also prove invaluable for disaster relief operations, where large stocks of these can be ferried and distributed easily, and provide basic nutritional support to victims before kitchens can be set up to produce fresh food. Rations intended for emergency or disaster relief are often referred to as survival rations.
Most armed forces in the world today now field some form of pre-packaged combat ration, often suitably tailored to meet national, regional or ethnic tastes.
Traditionally hexamine has been the preferred solid fuel for cooking rations. An alternative is gelatinised ethanol.
Americas
Argentina
The Ración de Combate (Individual) was introduced in 2003, consisting of a gray plastic-foil laminate pouch containing a mix of canned and dehydrated foods, plus minimal supplements, for 1 soldier for 1 day. All products in the RC are domestically produced, commercially available items. Each ration contains: canned meat, small can of meat spread, crackers, instant soup, cereal bar with fruit, a chocolate bar with nuts or caramels, instant coffee, orange juice powder, sugar, salt, a heating kit with disposable stove and alcohol-based fuel tablets, disposable butane lighter, resealable plastic bag, cooked rice and a pack of paper tissues.
Menu #1 contains: corned beef, meat pâté, crisp water crackers, and instant soup with fideo pasta.
Menu #2 contains: roasted beef in gravy, meat pâté, whole wheat crackers, and quick-cooking polenta in cheese sauce.
Brazil
The Ração Operacional de Combate – R2 is the current field & combat ration for the Brazilian Army. It is based on the earlier, but similar, RAC (Ração Alternativa de Combate, 24 horas) developed by the Brazilian Navy for use by Naval Infantry units. It contains the food and supplemental items needed by 1 soldier for 24 hours. It is to be used in situations where no other type of ration is available. All foods are packed inside 4-ply plastic & aluminum polylaminate retort pouches and are ready to eat without further preparation.
The ration is packed inside a heavy-duty (.25 mm thick) matte green or olive drab polyethylene bag measuring 300 mm wide by 400 mm long. It is printed with the logo of the Brazilian Army, "Ração de Combate R2 (24 Horas)" and Menu information. Inside are 5 thinner (.10 mm) semi-transparent plastic bags, one for each meal and one for the accessories. Each bag is printed with meal information and contents.
Bag #1: Desjejum (Breakfast, 130 mm x 200 mm)*
Bag #2: Almoço (Lunch, 240 mm x 300 mm)
Bag #3: Jantar (Dinner, 240 mm x 300 mm)
Bag #4: Ceia (Supper, 130 mm x 200 mm)
Bag #5: Acessórios (Accessories, 160 mm x 260 mm)
*also called "Café da Manhã" (literally "morning coffee")
Each breakfast consists of: 40 g instant coffee w/milk & sugar; 25 g cereal bar, 2 slices of toast (15 g total), and 15 g tub of jelly.
Each lunch consists of: 250 g retort pouched main meal, 150 g pouch of precooked rice, 40 g pouch of cassava pudding, 10 g instant coffee, 2 x 6 g packets of refined sugar, a 25 g bar of pressed raw brown sugar or banana or fruit-flavored sugar, and 45 g of fruit juice powder.
Each dinner consists of: 250 g retort pouched main meal, 100 g can or pouch of precooked sausages, 10 g instant coffee, 2 x 6 g packets refined sugar, 50 g jelly beans or hard candy, 45 g of fruit drink powder.
Each supper consists of: 40 g of chocolate milk powder, 25 g cereal bar, 2 slices of toast (15 g total), and 15 g tub of jelly.
The accessory packet contains: disposable ration heater (round, with air holes in side), 120 g can of alcohol-based fuel (or plastic can with about 6 solid alcohol fuel tablets), 1 box or folder of moisture-resistant matches (20 total), a strip of 5 water purification tablets, 55 g envelope of electrolyte replacement beverage powder, and 6 sheets of multipurpose paper.
The 250 g main meal pouches differ by menu, two provided for each menu (lunch & dinner) as follows:
Menu #1: shredded beef in gravy, spaghetti w/meat sauce
Menu #2: chicken w/vegetables, rice w/black beans & beef
Menu #3: black bean stew, ground beef w/potatoes
Menu #4: dried beef w/pumpkin, chicken w/mixed vegetables and pasta
Menu #5: white beans w/sausage, risotto w/meat & vegetables
Brazil also fields the Ração Operacional de Emergência – R3. This is a 12-hour ration to be used in situations where cooked foods cannot be provided for all meals.
The ration is similar to the R2 and uses the same components, but contains less food. The bag is printed with the emblem of the Brazilian Army, "Ração de Emergência R3 (12 Horas)" and Menu information. Inside are 3 thin plastic bags: 2 meal bags and 1 accessory packet.
Information from Mreinfo.com.
Canada
Canada provides each soldier with a complete pre-cooked meal known as the IMP (Individual Meal Pack), packaged inside a heavy-duty folding paper bag. There are 5 breakfast menus, 6 lunch menus, and 6 supper menus. Canadian rations provide generous portions and contain a large number of commercially available items. Like the US ration, the main meal is precooked and ready-to-eat, packed in heavy-duty plastic-foil retort pouches boxed with cardboard. Typically, the ration contains a meal item (beans and wiener sausages, scalloped potatoes with ham, smoked salmon fillet, macaroni and cheese, cheese omelette with mushrooms, shepherd's pie, etc.), wet-packed (sliced or mashed) fruit in a boxed retort pouch, and depending on the meal a combination of instant soup or cereal, fruit drink crystals, jam or cheese spread, peanut butter, honey, crackers, bread (bun) compressed into a retort pouch, coffee and tea, sugar, commercially available chocolate bars and hard candy, a long plastic spoon, paper towels and wet wipes. Canada also makes limited use of a Light Meal Pack containing dried meat or cheese, dried fruit, a granola bar, a breakfast cereal square, a chocolate bar, hard candy, hot cocoa mix, tea, and two pouches of instant fruit drink. Canadian ration packs also contain a book of cardboard matches.
Colombia
Colombia issues the Ración de Campaña, a very dark olive green (almost black) plastic bag weighing between 1092 and 1205 grams. Inside are the MRE-like retort pouch main courses and supplements needed by 1 soldier for 1 day. The individual meals, which cater to South American tastes, consist of a breakfast, a lunch, and a main meal (Tamal, envueltos, lentils with chorizo, arvejas con carne, garbanzo beans a la madrileña, arroz atollado, ajiaco con pollo, and sudado con papas y carne). The ration also includes bread products, beverage mixes, candy and accessories. All items except the beverage mixes require no further preparation and can be eaten either hot or cold. The beverage powders must be mixed with hot or cold water before consumption. Each ration also contains raw sugar, a can of condensed milk, sandwich cookies, sweetened and thickened cream spread, hard candy or caramels, peanuts or trail mix or 25 g of roasted almonds, instant coffee, salt, paper towels, a plastic spoon, 2 water purification tablets, and a multivitamin tablet.
Mexico
The Mexican defense department (SEDENA) issues the "Ración Diaria Individual de Combate" ("individual soldier's daily combat ration") box. It is packaged in an olive green and black plastic box with the contents printed on the front; the box contains three individual meal packs meant to sustain a soldier for one day. Each individual meal package contains two main retort pouches which are meant to be eaten together. The first retort pouch usually contains a meat product (such as beef, pork, sausage, fish, ham, seafood, chicken, tuna, bacon or other meats, usually mixed with a flavoring sauce and vegetables); the second retort pouch contains a staple food (rice, hominy, noodles, beans, pasta, eggs or more vegetables). Each meal package also contains salt, spices, condensed milk, cream, butter, chorizo spread, dried fruit or preserves, bread, crackers, sugar, custard, cookies, canned fish, cocoa mix, nuts, chocolate or other candies, vitamins, a large pouch of drinking water, a pouch of Jumex fruit juice or Coca-Cola, biodegradable napkins and utensils, and water purification tablets. Some meal packages do not contain the two main retort pouches and instead contain a single larger pouch with a finished meal such as tamales or steak and eggs, but these are usually only available when close to a base or when the military is operating in an urban area. When the Mexican military handed these out during its Hurricane Katrina relief assistance, many Americans who received them praised their taste and variety.
United States
The United States' Meal, Ready-to-Eat (MRE) is packaged similarly to the Canadian ration. Each sealed plastic bag contains one entire precooked meal, with a number of supplements and accessories. The original 12 menus have been expanded to 24 and now contain a variety of ethnic and special request items as well. Kosher/Halal and Vegetarian menus are also provided. Each meal bag contains a main course (packaged in a four-layer plastic and foil laminate retort pouch), 8 hardtack crackers, some form of spread (cheese, peanut butter, or jelly), a fruit-based beverage powder, some form of dessert (cake, candy, cookies, or fruit), and an accessory packet containing coffee or tea, creamer, sugar, salt, matches, a plastic spoon, and toilet paper. A chemical heater is packed with every meal.
The First Strike Ration (FSR) is a compact, eat-on-the-move ration to be used for no more than three days during initial periods of highly intense, highly mobile combat assaults. A single FSR (24 hours of food) is about 50% of the size and weight of three MREs. Each FSR provides a full day's intake (15% protein, 53% carbohydrates, 34% fat) and has a two-year shelf life. An FSR is packed in a single trilaminate bag and contains filled pocket sandwiches, a pouch of tuna or chicken, two packets of ERGO high-energy drink mix, two high-energy cereal bars (First Strike Bars), a dairy-based calcium-enriched dessert bar, two packets of beef jerky (BBQ or teriyaki flavored), fortified applesauce, nut and fruit mix, caffeinated gum, and an accessory pack containing a beverage mix, salt, matches, tissues, a plastic spoon, and cleansing moist towelettes. The FSR comes in three menus.
Europe
Czech Republic
After joining NATO, the Czechs developed a combat ration known as the Bojová Dávka Potravin (BDP). The BDP comes in two versions, type I and II, each holding two ready-to-eat main courses packed in large foil "cans" (beef roast with rice, pork goulash with potato, spicy risotto, pork with carrots and vegetables, etc.), a small plastic cup of lunch meat spread, cheese spread, hard bread, cookies, jam, instant coffee, tea bags, fruit-flavored multivitamin drink tablets, vitamin C enriched fruit drink powder, a chocolate bar, sugar, salt, chewing gum, wet napkins, paper towels, a plastic bag, and a menu and instruction sheet. A modified version of the BDP known as the KDP (Konzervovaná Dávka Potravin) is also used. This contains the same items as the BDP, but adds an aluminium cup, plastic utensils, a folding stove with fuel tablets and matches, and soap.
Denmark
The Danish military developed a modern field ration inspired by Norwegian and American rations. It consists of Drytech freeze-dried main meals and several additional items such as dried fruits and nuts, energy bars, hard biscuits, meat pâté, etc.
Finland
When (during peacetime) conscript soldiers are not provided with meals cooked either in garrisons or attached field kitchens, they are provided with rations (colloquially known as sissi rations) packed in a clear plastic bag. Several different menus exist; however, all include foil-packed crispbread, coffee and tea, sugar, chocolate, small tins of beef or pork, chewing gum, dry porridge, energy drink powder, etc. Soups and porridges that are meant to be mixed with water and cooked are usually prepared in Trangia-type portable stoves that are shared by the pair in a fire and maneuver team, or in individual mess kits.
France
The French 24-hour combat ration, the RCIR (ration de combat individuelle réchauffable) comes in 14 menus packed in a small cardboard box. Inside are two pre-cooked, ready-to-eat meal main courses packed in thin metal cans somewhat like oversized sardine tins, and an hors d'oeuvre in a more conventional can or tin. Current main courses include items such as beef salad, tuna and potatoes, salmon with rice and vegetables, shepherd's pie, rabbit casserole, chili con carne, paella, veau marengo (veal), navarin d'agneau (lamb), poultry and spring vegetables, etc. Hors d'oeuvres include: salmon terrine, chicken liver, tuna in sauce, fish terrine, duck mousse, etc. Each meal box also contains a package of instant soup, hard crackers, cheese spread, chocolate, caramels or boiled sweets, instant café-au-lait, sugar, cocoa powder, matches, a disposable folding ration heater and fuel tablets, and water purifying tablets.
Germany
Germany uses the Einmannpackung to provide two substantial meals to each soldier; whenever possible, the remaining meal of the day is provided as a hot cooked meal. A heater or oven is not included, since an Esbit cooker is part of each soldier's personal equipment. Enough food items are contained within the Einmannpackung to sustain the soldier for 24 hours. Currently there are three menus; each includes two meals out of a selection of 19, with several heavy-duty foil trays containing items such as lentils with sausages, Yugoslav sausage, goulash, beef patties in tomato sauce, Italian pasta, or tofu stir-fry. There are also three smaller foil "cans" of bread spreads such as cheese spread, liver sausage, dried-meat sausage, or cheese spread with green peppers. The meal box also includes: thinly sliced rye bread (170 g), hard crackers (1100 kcal), a foil can of fruit salad, instant cream of wheat, instant fruit juice powder, instant coffee, instant tea, powdered cream, a chocolate bar, sugar, salt, gum, jam, water purifying tablets, two plastic bags, matches, paper towels and a user guide.
Bundeswehr Rations
The Einmannpackung ration of the Bundeswehr is supplied in two types, rations 1 to 5 are packaged in a grey cardboard box with the meals packaged in sealed heavy duty foil trays which may be heated by immersing in hot water. The trays are opened using a knife or other sharp implement. Rations 6 to 19 are packaged in a resealable carry pouch, which is NATO Olive, desert brown or transparent. The meals are packed in retort pouches.
Individual EPa Rations I-V
Day Rations XV-XIX
Like other German rations, the day ration is packed in a resealable carry pouch with the meals in retort pouches.
Earlier versions of German Rations
Wehrmacht troops in the field were provided rations from field kitchens based on the garrison ration; however, additional classes of ration were available. The march ration was a cold food ration issued for not more than three or four consecutive days to units in transit, either by carrier or on foot. It consisted of approximately 700 grams of bread, 200 grams of cold meat or cheese, 60 grams of bread spreads, 9 grams of coffee (or 4 grams of tea), 10 grams of sugar, and six cigarettes, for a total weight of about 980 grams. An iron ration consisted of 250 grams of biscuits, 200 grams of cold meat, 150 grams of preserved vegetables, 25 grams of coffee, and 25 grams of salt; total weight was 650 grams without packing and 825 grams with packing. An iron half-ration was composed of 250 grams of biscuits and 200 grams of preserved meat; its total weight was thus 450 grams without packing and 535 grams with packing.
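For reference, the component weights listed above can be checked with a few lines of arithmetic (weights in grams; the march ration's six cigarettes are excluded, which accounts for the "about" in the 980-gram total):

```python
# Sum the listed components of the WWII German ration classes (grams).
march_ration = {"bread": 700, "cold meat or cheese": 200,
                "bread spreads": 60, "coffee": 9, "sugar": 10}
iron_ration = {"biscuits": 250, "cold meat": 200,
               "preserved vegetables": 150, "coffee": 25, "salt": 25}
iron_half_ration = {"biscuits": 250, "preserved meat": 200}

print(sum(march_ration.values()))      # 979, i.e. "about 980 grams"
print(sum(iron_ration.values()))       # 650 grams without packing
print(sum(iron_half_ration.values()))  # 450 grams without packing
```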
Greece
The primary operational ration used in Greece is the so-called "Merida Eidikon Dynameon" (Special Forces' Ration, also known as a 4B-ration), a 24-hour ration pack inside a cardboard box. Most items are commercially procured, with the main meals in round pull-ring cans. Typical contents include: a 200 g can of meat ("SPAM"); a 280 g can of meat with vegetables (beef and potatoes, etc.) (termed Prepared Food With Meat or ΠΦΜΚ); a 280 g can of cooked vegetables (green peas, etc.) (Prepared Food Without Meat or ΠΦΑΚ); an 85 g can of cheese; 6 hard biscuits; 40 g honey; three 50 g packages of raisins or chocolate; 30 g sugar; 1.5 g black tea; 2 g instant coffee; 19 g instant milk powder; two small packets of salt; a multivitamin tablet; 4 water purification tablets; a pack of tissues; a disposable ration heater with 5 fuel tablets; and a box of matches. In wartime, packs of locally commandeered cigarettes may also be issued.
Ireland
Ireland fields a 24-hour ration pack somewhat similar to that used by the British. It is packed in a large ziplock plastic bag and contains two pre-cooked main meals and items to be eaten throughout the day. Included are: instant soup, ramen noodles, an oatmeal block, a high-energy protein bar, both brown and fruit biscuits, sweets, and a selection of beverage mixes. Breakfast (bacon and beans or sausage and beans) is packaged in a retort pouch while dinner (Beef Casserole, Irish Stew, Chicken Curry, or a vegetarian main course) comes in either a flat tin or microwaveable plastic tray. Desserts consist of a retort-pouched dessert (chocolate pudding, syrup pudding, fruit dumplings), a Kendal mint cake, and a roll of fruit lozenges. Beverages include tea bags, instant coffee, hot cocoa, and a powdered isotonic drink mix. Also included are a pack of tissues, a small scouring pad, matches, water purification tablets, salt and pepper packets, sugar, dry cream powder, moist towelettes, and individual packets of foot powder.
Italy
Italy uses the "Razione Viveri Speciali da Combattimento", consisting of a heavy duty brownish-green plastic bag with three thin white cardboard cartons inside (one for breakfast, one for lunch and one for dinner), each containing meal items plus accessories.
There are seven menus, called "modules", identified by colors: yellow, red, grey, green, white, pink and blue.
Typically, breakfast consists of: a chocolate bar, fruit candy, crackers or sweet bread, instant coffee, sugar, and a tube of sweetened condensed milk. A lunch will have: two pull-ring cans with precooked foods (Tortellini al Ragù, Pasta e Fagioli, Wurstel, Tacchino in Gelatina, Insalata di Riso, etc.), a small can of fruit cocktail, a multivitamin tablet, energy and fiber tablets, instant coffee, sugar, and a plastic spoon wrapped with a napkin.
Dinner will consist of two more meal cans plus crackers, an energy bar, instant coffee, and sugar.
Accessories are: a folding stove, fuel tablets, water purification tablets, toothpick, matches, and three small disposable toothbrushes with pre-applied tooth powder.
Lithuania
Lithuanian field rations are based on the US Army's MRE – they come in 10 menus packed in a dark green plastic bag, and besides the main meal in a retort pouch they also include two small dark chocolate bars, honey or jam, four hard-tack biscuits, a handful of almonds or hazelnuts, instant drink mix, tea or coffee, sugar, an antiseptic wipe, matches, solid fuel tablets, a flat disposable stove, a flameless heater (similar to the US one) and a cable-tie used to seal waste packaging back into the outer bag after use.
Netherlands
The Netherlands version of the 24-hour ration, the "Gevechtsrantsoen," (Combat ration) includes canned or retort pouched items, plus hard biscuits, jam, cheese spread, 3 cans of meat spread and 1 can of tuna spread, a chocolate bar, a roll of mints, instant coffee, tea, hot chocolate, lemon-flavour energy drink powder, instant soup, a vitamin pill, and supplementary items. The canned main course is packed in a thin aluminium can rather like a large sardine tin, containing 400 g of a precooked item such as rice with vegetables and beef, chicken with rice and curry, potatoes with sausage and green vegetables, or sauerkraut with sausage and green vegetables. The newer retort-pouches contain a 350 g serving of dishes such as brown beans with pork, chili con carne, corned beef hash, or chicken and pasta in tomato sauce. The ration pack provides breakfast and lunch only; the two canned or pouched main meals are issued separately.
Norway
Norway utilizes a 24-hour ration pack (Norwegian "feltrasjon") designed by Drytech, consisting of 2 freeze-dried main meals, a packet of compressed breakfast cereal, packets of instant soup, and supplements. These are packed in 3 green polylaminate bags labelled "Breakfast", "Lunch", or "Dinner", overwrapped in clear plastic and issued as one day's ration. Depending on the soldier's activity, the rations are delivered in two sizes of either 3800 kcal or 5000 kcal. Included are a substantial assortment of beverages (cocoa mix, instant coffee, energy drink powder, and herbal teas), plus thin-sliced rye bread and chocolate, chewing gum, a vitamin tablet, and litter bags. There are 7 completely different menus, with ongoing development to meet different nations' requirements. Example main meals include chili con carne, various pasta dishes, beef stew, beef and potato casserole, lamb mulligatawny, cod and potato casserole, pasta bolognese, wolf-fish with prawns and dill, sweet and sour chicken, and rice in basil sauce. Small tins of fish are often provided separately.
Poland
The current Polish combat ration (Zestaw Żywnościowy Indywidualnej Racji Suchej) is packed in a green plastic-foil bag containing: 2 small cans of meat or meat spread or cheese, 2 packages of hard crackers, a tube of sweetened condensed milk, 2 packets of instant coffee, a packet of instant tea, 3 sugar packets, individually wrapped Vitamin C-fortified boiled sweets, a stick of chewing gum, safety matches, a menu and instruction sheet, a plastic bag, and 2 paper towels.
Field ration (24h) "RB1" / "RB2" / "RB3"
Meal A (breakfast):
- goulash 400 g / beans with sausage and meat in tomato sauce 400 g / pork shoulder with rice and vegetables 400 g
- pâté 100 g
- jam 25 g
- crispbread 50 g
- instant tea 30 g
- fruit bar
- flameless heater
- sachet water 45 ml
Meal B (lunch):
- chicken with rice and vegetables 400 g / spaghetti with meat 400 g / bogracz (a thick Hungarian beef stew) 400 g
- crackers 45 g
- instant tea 30 g
- condensed milk tube 100 g
- dark chocolate 50 g
- flameless heater
- sachet water 45 ml
Meal C (dinner):
- canned meat 100 g
- crackers 45 g
- honey 25 g
- instant tea 30 g
- fruit bar
Accessories:
- sugar 10 g x3
- coffee candy x3
- vitamin C candy x3
- chewing gum x3
- salt, pepper x3
- dried fruits 50 g
- instant tea
- instant borsch
- plastic bag
- matches
- toilet paper
- wet wipe tissue x3
- cutlery
Energy value: 3496.15 kcal / 3693.82 kcal / 3459.6 kcal
Weight
Russian Federation
Since the turn of the millennium, Russia has issued the Individual Food Ration (Individual'nyi Ratsion Pitaniya, IRP; Индивидуальный рацион питания/ИРП), a self-contained ration containing the whole daily food intake for an individual soldier in the field. However, in its most frequent form it is not nutritionally complete, and is intended only as a stop-gap measure, issued for no more than six days straight, until normal supply lines (with their field kitchens) are established and hot food delivery starts. The Russian Ministry of Defence does not strictly prescribe the contents of the ration, only some basic packaging and inventory requirements, so every producer issues its own version. Most commonly it is packaged in a sturdy plastic blister box (nicknamed "The Frog" in the field for its olive-green color), or a plastic-sealed cardboard box, that contains five to six entrees in laminated foil cans or retort pouches, four to six packs of crackers or preserved bread, two to three dessert items in the form of a spread or fruit bar, four beverage concentrate pouches, some seasonings (salt, pepper, sugar, ketchup), and various sundry items such as sanitizing wipes/paper towels, spoons, a can opener, four hexamine fuel tablets, a folding heater, matches and water purifier tablets. The types of entrees vary with the producer and the issued menu (of which there are usually 7 to 12), but the common set is based on traditional Russian outdoorsmen's fare, is largely formed from commercially available canned food, and usually includes 1 portion of stewed beef or pork, two meat-with-vegetables dishes, such as various porridges, stews or canned fish, and one or two spreads, such as liver pâté, sausage stuffing or processed cheese. Desserts may include fruit jams, chocolate and/or walnut spreads, chocolate bars, sweetened condensed milk, etc., but baked goods are usually avoided out of concerns about their shelf life.
Other variants may add canned speck and/or dried fish or exchange the hexamine tablets for the flameless heater.
Spain
The Spanish Army issues an individual meal pack, available in 5 different menus, comprising a small cardboard box overwrapped with drab green polyethylene. Inside are 3 canned meals, plus accessories. Typical contents (Menu B) include: stewed steak, pickled mackerel, liver pâté with red peppers, an envelope of instant soup, a can of fruit, 2 salt tablets, 2 water purification tablets, a large multivitamin tablet, 10 sheets of general purpose paper, a book of matches, a folding can opener, a small folding ration heater and 2 fuel tablets, and an instruction sheet in three languages (Spanish, English and French). Crackers or bread are issued separately.
Sweden
The Swedish armed forces use ration packs from the Swedish-developed 24 hour meals range, which offers a wide selection of menus (approx. 200) and can deliver both freeze-dried and wet meals. The Swedish concept (combat edition) consists of several versions for different uses, in all climate zones, and for various types of missions. Examples of different types of rations: 1-course (patrol ration), 2-course, 3-course and 4-course versions with a variety of 40 different meals, both wet and dry. The rations vary from 1300 kcal to 5000 kcal. The ration is packed in a transparent durable plastic bag that is resealable with a ziplock. The contents are 1–4 main meals with, for example, energy bars, protein bars, nuts, energy drinks, whole wheat bread, peanut butter, desserts and spices. The durable bag changes size depending on the version for optimal space usage in cartons and soldiers' backpacks. 24 hour meals has developed at a rapid pace and is currently producing its 5th generation (the first appeared in 2008). R&D works closely with soldiers in Scandinavia and on various missions around the world.
United Kingdom
12 Hour Operational Ration Pack
The 12 hour operational ration pack (ORP) is designed for patrols lasting 4–12 hours and is suitable for remote guard posts, drivers, and as a supplement to normal rations where daily calorie expenditure is likely to exceed 6000 kcal (25,120 kJ), for instance for troops undergoing arduous duties.
The 12 hour ORP contains a main meal packed in a retort pouch, a number of snack items, drink powders and a flameless ration heater (FRH). However it does not contain any hot beverage items.
There are 10 menu choices including one vegetarian.
The 12 hour ORP provides a minimum of 2000 kcals (8,374 kJ).
24 Hour Operational Ration Pack
The UK provides the Operational Ration Pack, General Purpose. Packed inside a small cardboard box, each ration has enough retort-pouched and canned foods to feed one soldier for 24 hours. Seven menus (plus vegetarian and religious variants) provide two precooked meals (Breakfast and Main Meal) plus a midday snack. Example (Menu A) Breakfast: Hamburger and beans, Instant Porridge. All ration packs also contain an Oatmeal Block, Fruit Biscuits, Biscuits Brown (a more compact alternative to bread), a sachet of instant soup, and jam or yeast extract (a Marmite-like spread) for a lunchtime snack; boiled sweets (hard candy) for snacking whilst on patrol or in free time; and chocolate (a specially made Yorkie bar flatter than civilian bars or, more recently, a simple unbranded bar of milk chocolate), though the chocolate has been phased out with the introduction of the more recent multi-climate ration packs. Example Main Meal: Instant soup, Chicken with Mushroom and Pasta, Treacle Pudding. Each pack also contains instant coffee, tea bags, creamer, sugar, hot cocoa mix, beef/vegetable stock powder, lemon/orange powder or Lucozade electrolyte powder, matches, a packet of tissues, chewing gum, a small bottle of Tabasco sauce, and water purifying tablets. They sometimes also contain chicken and herb pâté. Also available are Kosher/Halal, Vegetarian, and Hindu/Sikh specific menus. Regardless of their contents, these ration packs are referred to as Rat-Packs or Compo (short for Composite Rations) by the soldiers who eat them. In addition to containing the 24-hour ration, the cardboard box has a range card printed on its side for use by the soldier to record key features and their range from their position. Other variations designed for specific environments exist.
The rations are now issued with a new folding cooker and a fire-lighting fuel called FireDragon, made in Wales by BCB International Ltd.
24 Hour Multi Climate Ration Box A
24 Hour Multi Climate Ration Box B
24 Hour Multi Climate Ration Sikh/Hindu
24 Hour Multi Climate Ration Halal/Kosher
24 Hour Multi Climate Ration Vegetarian
24 Hour Jungle Ration
The 24 Hour Jungle ration is based on the standard 24 Hour ration with additional supplements and a Flameless Ration Heater (FRH). The Jungle ration is designed for use by the special forces and other specialist units.
The 24 Hour Jungle Ration provides a minimum of 4500 kcals (18,840 kJ) a day.
Cold Climate Ration
The Cold Climate Ration (CCR) is a specialist and lightweight, high calorie 24 hour ration designed for use by troops above the snow line or in the high Arctic. It comprises mainly dehydrated main meals with a range of snacks designed to be eaten on the go.
There are 8 menu choices available.
The cold climate ration provides a minimum of 5500 kcals (23,030 kJ) a day.
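As an aside, the kilojoule figures quoted for the UK rations follow from the standard conversion 1 kcal = 4.1868 kJ. A minimal sketch of the arithmetic (note that the article rounds some results to the nearest 10 kJ):

```python
# Convert the quoted UK ration energy minimums from kcal to kJ
# using the International Table calorie: 1 kcal = 4.1868 kJ.
KCAL_TO_KJ = 4.1868

for kcal in (2000, 4500, 5500, 6000):
    kj = kcal * KCAL_TO_KJ
    print(f"{kcal} kcal = {kj:.0f} kJ")
```

The exact products (8374, 18841, 23027 and 25121 kJ) match the article's 8,374, 18,840, 23,030 and 25,120 kJ once the latter three are rounded to the nearest 10 kJ.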
10 Man Operational Ration Pack
The UK also fields a larger pack of rations intended to feed ten soldiers for 24 hours from centralised but basic preparation; generally similar in content to the single issue ORP but tending to contain larger quantities of food in cylindrical tin cans to be divided up on preparation, rather than individual retort pouches or packets. Even dry materials like sugar or biscuits are often packed in these cans. They contain ingredients for baking bread and tinned food, including vegetables, corned beef and sausages in lard. Also included are chocolate, pre-cooked chicken or beef in gravy and soya mince. Ten boxed one-man ORPs are supplied in larger boxes identical in shape to the single ten-man pack.
10 Man Operational Ration Pack Menus A-E
Emergency Flying Rations (EFR) Mark 4
The Mark 4 EFR is designed for crews of fast jets. It consists of a flat tin containing 100 g of fruit-flavoured sweets (9 to be eaten each day), 2 spring handles and a plastic bag. The container can be used for boiling water, and hot drinks can be made by dissolving the sweets in hot water. The Mark 4 EFR is built into ejector seats.
Emergency Flying Rations (EFR) Mark 9
The Mark 9 EFR is designed for crews of multi-engine aircraft. It consists of a two piece aluminium container, four wire spring handles, two emergency food packs (eight portions per pack), one packet of beef stock drinking cubes (six cubes per pack), two packets of sugar cubes (twelve cubes per pack), one beverage pack (containing seven sachets of instant coffee, four sachets of instant tea and seven sachets of vegetable creamer), two spatulas, one polythene bag and an instruction leaflet.
Costs of rations
The cost of a 10-man ration pack is £55.00. The cost of a 24-hour operational ration pack is £10.00.
Earlier Versions of British Field Rations
1940s
In 1943, the 24 Hour Ration was devised as a direct replacement for the 48 Hour Mess Tin Ration. It contained only dried goods (no tins), to save weight and tinplate, which had been the criticism of the earlier mess tin ration. It was first issued to troops on D-Day to provide interim food before supply lines were established that would permit 14 Man Composite Rations to be brought ashore.
There were two packs (contents identical), the standard 24 Hour Ration and the 24 Hour Ration (Assault), the former fitting into the larger portion of the mess tin and the latter fitting into the smaller portion. The pack provided approximately 4000 calories.
The contents of the ration pack, most of which were wrapped either in cellophane or in white, heat-sealed wax paper with royal blue writing, were as follows:
1 block of dried meat (beef or lamb),
2 sweetened oatmeal blocks,
Tea, milk and sugar cubes,
10 biscuits (plain, service),
2 bars of raisin chocolate,
1 bar of vitamin-enriched chocolate (vitamins A, B, C, D and calcium),
Boiled sweets,
2 packets of peppermint chewing gum,
4 meat extract cubes,
4 cubes of sugar,
Salt,
4 sheets of latrine paper.
Information taken from DSIR 26-344 ration specifications, 1944 and Supplies and Transport, 1954.
1970s
The 24 hour GS (General Service) ration pack was supplied with the contents in cans or packets.
*Biscuits AB stands for "Biscuits-Alternative Bread"; they were given more colourful names by members of the British Army because they caused constipation.
1980s
The 24 hour GS (General Service) ration pack was supplied with the contents in cans or packets.
Arctic Rations
Arctic rations were dehydrated and issued to troops serving in arctic areas, where snow could be melted to rehydrate the contents.
1990s
In the 1990s cans were replaced with retort pouches and menu options improved and expanded. The ration was redesignated as the 24 Hour General Purpose (GP) Ration Pack.
Ukraine
The Ukrainian combat ration is based on a previous Russian version, consisting of commercially available cans and dried foods packed together in a sectioned box (resembling a takeout tray) made of very thin green plastic. Inside are: two 250 g main meal cans (boiled buckwheat groats and buckwheat with beef); two 100 g cans of meat spread (liver pâté and beef in lard); a 160 g can of herring or mackerel; six 50 g packages of small, hard crackers (resembling oyster crackers); two foil pouches (20 g each) of jam or jelly; six boiled sweets; two tea bags; an envelope of instant cherry juice powder; a chicken-flavour bouillon cube; two packets of sugar; and three dining packets, each with a plastic spoon, a napkin, and a moist towelette.
Portugal
Portugal has developed and fielded the Ração Individual de Combate (RIC). Packed in a camouflage cardboard box, the ration provides three meals per day. Maximum use is made of off-the-shelf commercial items, including canned main menu items (still with their original labels). A typical RIC (menu 4) will contain: two 415 g "pop-top" cans (beef with vegetables and chili con carne), a flat 115 g can of sardines, a round 65 g can of liver paste, sweet bread, crackers, packaged bread, 2 pouches of fruit jam, a pouch of quince cream, hot chocolate or instant coffee, isotonic drink mix, instant milk powder, chewing gum, boiled sweets, sugar, salt, water purification tablets, matches, 6 fuel tablets, a folding stove, plastic cutlery, a pack of tissues, a plastic bag, and an instruction/menu sheet.
Middle East
Israel
The Israeli "battle ration" (Manat Krav) is designed to be shared by four soldiers. It contains: 1 can of rice-filled vine leaves, 8 small cans of tuna, canned olives, a can of sweet corn, a can of pickled cucumbers, 1 can of halva spread and 1 of chocolate spread, a can of peanuts, fruit-flavored drink powder, and bread or matzoh crackers. There is also an "ambush pack" of candy and high-energy protein bars.
In 2008, Israel introduced a new field ration to supplement the traditional Manat Krav. Unlike previous rations, the new Battle Ration consists of individual, self-heating, ready-to-eat meals packed inside plastic-aluminum trays. They are designed to be carried and used by infantry troops for up to 24 hours, until regular supply lines can be established. Ten menus are available, including chicken, turkey and kebab; each meal pack is supplemented with dry salami, dried fruit, tuna, halva, sweet roll, and preserved dinner rolls. However, as of 2012, the older rations were still in use.
In 2011, as a result of the manufacturer going bankrupt, the IDF phased out the can of corned beef (known as 'Loof'), which had been part of the battle ration since the nation's founding. It would be replaced by "ground meat with tomato sauce".
Many different recipes and ways of serving the rations have developed in Israel. The can of tuna, for example, is traditionally cooked using toilet paper soaked in oil.
Saudi Arabia
Saudi Arabia uses a combat meal that is packed inside a brown plastic bag about the size and shape of a US MRE pouch. It contains a small can of tuna, a small can of sardines or salmon or beef, a small can of cheese or thickened cream, an envelope of instant noodle soup, hard crackers and dry toast (like Zwieback), a small bag of raisins or dried fruit, a small package of dates, a small bag of nuts, plus instant coffee, tea bags, sugar packets, matches, and a bag of spiced dried chickpea powder.
United Arab Emirates
The UAE uses a European-style combat ration pack containing food and accessories for one soldier for 24 hours. Packed in the UAE from imported components, the ration box measures 245 mm × 195 mm × 115 mm and weighs 2.0 kg. Inside are 4 resealable (ziplock-type) plastic bags, labelled in both Arabic and English, containing Breakfast, Lunch, Dinner, and Miscellaneous.
A typical Breakfast bag has 2 foil-wrapped packages of hard brown biscuits, 1 small jar of apricot jam, a can of tuna, and an accessory pack (plastic spoon, salt, pepper, and napkin).
Lunch contains a retort pouch of precooked rice, a retort pouch of chicken curry, a pouch of date pudding, and another accessory pack.
Dinner has a retort pouch of pasta rigatoni, an envelope of instant soup, and a third accessory pack.
The Miscellaneous bag contains a small bag of hard candy, 4 packets of sugar, 4 tea bags, 2 small envelopes of milk powder, and 3 foil envelopes of instant orange juice powder.
Also included are: a can of fruit, a package of ramen noodles, 2 flameless chemical ration heaters, a menu/instruction sheet, 1 pack dried hummus powder, and a book of matches.
Oceania
Australia
Australia currently supplies three different types of military ration packs – Combat Ration One Man, Combat Ration Five Man and Patrol Ration One Man.
Combat Ration One Man is a complete 24-hour ration pack that provides two substantial meals per day and a wide variety of drinks and snacks for the remainder of the day. Most items, such as Beef Kai Si Ming, Dutch-style Beef with Vegetables, Beef with Spaghetti, Baked Beans, Sausages with Vegetables, or Chicken with pasta and vegetables, are packed in 250 gram sized plastic-foil retort pouches. Included with every meal pack is a pouch of instant rice or instant mashed potatoes, a fruit and grain bar, 2 envelopes of instant drink powder, some biscuits, an "Anzac Biscuit", a chocolate bar, M&M's, coffee, tea, sugar, crackers, cheese spread, jam, sweetened condensed milk, hard sweets, and Vegemite. It is packed in a tough clear polyethylene bag and weighs around . In practical use, these packs are "stripped": components unlikely to be consumed by the person carrying the pack are removed and traded with other soldiers. This also reduces the weight of the packs, allowing more to be carried. There are eight menu choices, one of which is vegetarian. None is allergen-free, since Defence Force members are typically selected, among many other attributes, for having no known allergies.
Combat Ration Five Man contains a similar array of components as the Combat Ration One Man. However, it is provided in a tough fibreboard carton rather than in individual unitised polyethylene bags. It is a group feeding solution and is impractical to use on an individual basis for main meals. There are multiple group-sized retort pouches – 500 gram as opposed to 250 gram – several of which must be heated in order to provide a complete meal. Examples include Beef & Blackbean Sauce and Chicken Satay. Common elements include rice and vegetables such as corn, potatoes and carrots. The accessories such as snacks are consumable and can be carried individually. There are five menu choices, and each Combat Ration Five Man weighs around .
Patrol Ration One Man is a complete 24-hour ration pack that contains freeze-dried main meals, meaning that the total weight of each pack is reduced; however, a correspondingly higher quantity of water must be carried in order to reconstitute the main meal. Otherwise, it is similar to the Combat Ration One Man. It is packed in tough clear polyethylene bags and is available in five menu choices.
New Zealand
New Zealand issues an Operational Ration Pack designed to provide one soldier with three complete meals. Based around two ready-to-eat retort pouches (e.g. Lamb Casserole, Chicken Curry), the ORP comes in 4 menus. Also included are: Anzac biscuits, chocolate bars, URC fruit grains, muesli bars, instant soup powder, instant noodles, muesli cereal, a tube of condensed milk, hard crackers, tinned cheese, cocoa powder, instant coffee, tea bags, instant sport drink powder, sugar, salt, pepper, glucose sweets, Marmite, jam, ketchup, onion flakes, waterproofed matches, a resealable plastic bag, and a menu sheet.
Operational Ration Pack 1 Man-24 Hours
The Patrol Ration Pac (PRP) is a shelf-stable product that provides an efficient, flexible and nutritionally robust feeding method. The PRP is designed to cover activities in which personnel have access to other food sources during the day, and is ideal for replacing a single meal or providing snack options. The PRP provides approximately one-third of the energy and nutrient requirements of most military personnel during moderate, prolonged-intensity physical activity in a temperate environment; it is therefore desirable that all of the food in the pack be eaten.
Menus A, B and C contain main meals that can be heated using a flameless ration heater along with other ready to eat foods and a beverage powder. Menu D provides ready to eat snack foods and no beverage powder.
Asia
Brunei
The Royal Brunei Army uses a 24-hour ration pack that provides a soldier with an entire day's supply of food, plus a limited number of health and hygiene items. Maximum use is made of plastic-foil laminate pouches, and most items can be eaten without further preparation. Currently, four menus are fielded, and all menus are compatible with Muslim dietary restrictions. Example Menu (F): 5 x 170-gram retort pouches (Biriani Chicken, Mutton Curry, Sardines in Tomato Sauce, Bubur Jagong/ Corn Porridge, Pineapple Pajeri); plus individual servings of pineapple jam, instant coffee, teabags, sugar, salt, pepper, steminder powder, hot chili sauce, MSG, a multivitamin energy tablet, tissue paper, scouring pad with soap, and matches.
India
The Indian Armed Forces have a range of Meals Ready to Eat (MRE), including the One Man Combo Pack Ration, the Mini Combo Pack, a Survival Ration, a ration for marine commandos, and Main Battle Tank (MBT) rations. The shelf life of the ration is 12 months. India has adopted retort processing technology for combat rations.
The MREs use pre-cooked thermostabilized entrees in a plastic-foil laminate retort pouch. The ration does not require cooking and the contents may be eaten cold, though warming is preferred. An entire day's worth of food, plus accessory items, is packed inside a heavy-duty olive green plastic bag with a pasted-on label. The menu consists of several different vegetarian and non-vegetarian products that cater to Indian tastes, such as sooji halwa, chapatis, tea mix, chicken biryani, chicken curry, kebab, tandoori, paneer, organic egg, butter naan, mutton biryani, mutton curry, vegetable biryani, rajma curry, dal fry, jeera rice, dal makhani, vegetable pulav and mixed vegetable curry, alongside pickled hot seasoning, in small plastic pouches.
The One Man Combo Pack consists of early morning tea, breakfast, mid morning tea, lunch, evening tea, and dinner. The menus feature both dehydrated and ready-to-eat products, and include a folding stove and hexamine fuel tablets. The ration weighs 880 grams and provides . The Mini Combo Pack is a simplified version of the One Man Combo Pack, weighing 400 g and providing .
The survival ration consists of a soft bar and chikki. The daily survival ration per man consists of: Soft bar 100 g x 2, Chikki (sugar base) 50 g x 3, Chikki (Jaggery base) 50 g x 3. This provides around , which is more than the normal survival ration used by most nations.
Uniquely, India also developed an operational ration pack specifically for Main Battle Tank (MBT) and other Armored vehicle crews. Designed to sustain four soldiers for 72 h in closed-in battle conditions, the MBT ration is based on instant/ready to eat foods and ration/survival bars. First and second day ration packs weigh 2 kg each and provide per soldier, while the third day ration pack weighs 1.5 kg and supplies .
Indonesia
The Indonesian National Armed Forces (TNI) introduced the Ransum TNI (Indonesian for "TNI ration") in the mid-1970s in order to standardize nutrition for soldiers in the field. There are three types of ration, and each daily ration consists of three menus (breakfast, lunch, and dinner) plus a pack of supplementary drinks, providing approximately in total. Primary menus are often fried rice, often with regional variants such as Javanese or Balinese fried rice. The supplementary drinks are instant coffee, powdered fruit juice or vitamin supply, tea bags and powdered milk. All rations should be heated for 10–15 minutes over fire with the included stove and solid fuel tablets (for canned meals), or by submerging in boiling water (for meals packed in retort pouches). Virtually all products are made in Indonesia and manufactured according to Indonesian military standard.
Japan
The Japan Self-Defense Forces utilize two types of combat rations, Type I combat ration (戦闘糧食 I型) and Type II combat ration (戦闘糧食 II型). The older Type I ration consists almost entirely of canned foods weighing a total of 780 g per meal; a normal three-day ration has up to 36 cans weighing more than 7 kilograms. Eight menus are available, based around a 400 g can of rice and 2–3 smaller supplemental cans. Typical contents include: rice (white rice, sekihan (rice with red beans), mixed rice with vegetables, or rice with mushrooms), a main meal can (chicken and vegetables, beef with vegetables, fish and vegetables, or hamburger patties), pickled vegetables (Takuan (yellow radish) or red cabbage) and sometimes a supplemental can (tuna in soy or beef in soy). In the latest Type I combat rations, cans have been replaced by retort pouches.
The newer, lighter Type II ration was originally intended to replace the Type I and consists of pre-cooked, ready-to-eat items in plastic-foil laminate retort pouches, packed in turn inside a drab green polyethylene meal bag. Each meal consists of two 200 g pouches of rice (white rice, rice with red beans, mixed rice with meat and vegetables, fried rice, curried rice pilaf, rice with green peas, or rice with wild herbs) plus 2–3 supplementary pouches. Main meal pouches contain: hamburger patties, frankfurters, beef curry, grilled chicken, Chinese meatballs, Sweet and Sour pork, grilled salmon, Yakitori chicken, mackerel in ginger sauce, chicken and vegetables, and tuna. Also included are pouches of pickled vegetables (yellow radish, red cabbage, Takana pickles, pickled hari-hari, or bamboo shoots) or salad (potato salad or tuna salad) and instant soup (Miso, Egg Drop, Wakame seaweed, or mushroom).
Type I Combat Ration (Old model)
The old model type I combat rations were supplied in cans, with one large can containing the rice portion and 2 or 3 smaller cans containing other portions.
Type I Combat Ration (New model)
The new model type I combat ration is supplied in olive drab retort pouches and overwrapped in an olive drab bag.
The acquisition cost to the SDF is 554 yen.
Type II Combat Ration (Old model)
Type II Combat Ration (Improved version)
The acquisition cost to the SDF is 329 yen.
Malaysia
The Malaysian Army version of the 24-hour ration pack is intended to provide one man with sufficient food and supplements for one day. Most items are domestically procured and cater to local tastes and religious dietary requirements. The ration makes extensive use of commercially available canned and dehydrated items. Wherever possible, plastic-foil pouches are used instead of cans. The ration is supplemented with precooked or freeze-dried rice. Example menu C: Beef Kurma, Chicken Masak Merah, Fish Curry, and Sambal Shrimp; Bean Curd and Vegetable mix; long bean stew; canned pineapple and canned papaya; 2 packages of quick-cooking porridge (black bean porridge and flour porridge); military biscuits; jam; instant coffee; tea; instant milk powder; sugar; salt; vitamin tablets; matches; and napkins.
People's Republic of China
The Chinese People's Liberation Army introduced a new set of rations in 2018, consisting of pre-packaged single-person meals sealed in hard plastic retort pouches. The Chinese military rations are of two types: Instant Meal Individual (three-item menu) and Self-Heating Individual (twelve-item menu) (Type 13 and 09). A typical Chinese breakfast ration contains roughly and includes a compressed food packet, an energy bar, an egg roll with pork, pickled mustard tuber, and a powdered beverage pack. Each Self-Heating package comes with an insulated flameless heater that is activated by water.
Philippines
The Philippine Army formerly had a combat ration similar to those of the United States Army. Typically, they include a small can of sardines or tuna, instant noodles, crackers, instant coffee, a small packet of peanuts, ginger tea, and a biscuit or cookie. Chocolate manufactured for hot conditions is sometimes issued. Canned rice is also issued.
In 2016, it was announced that the Armed Forces of the Philippines would receive new "Ready-to-Eat" rations. They are packed in green plastic retort pouches and considered fit to eat for Muslim service members (halal). For example, Menu #2 has four packs of cooked rice, one tuna rice with sisig, one pack of chicken sausage with sauce, one pack of chicken lechon paksiw, one pack of powdered milk, one pack of 3-in-1 coffee, one pack of plain crackers, a spork and wet tissues.
Singapore
The Singapore Armed Forces issues three types of combat rations – Type M (Muslim), Type N (Non-Muslim), and Type V (Vegetarian). Each type comes in 4 or 5 different menus, packed in a heavy-duty green plastic bag similar to a US MRE bag, but measuring 205 mm x 190 mm x 115 mm (8" x 7.5" x 4.5") and weighing . Most items are retort-pouched (in the form of a watery paste and eaten straight from the pouch) and (except for the hot beverages) can be eaten without further preparation. The ration provides three meals and a variety of between-meal snacks, averaging per day. Each ration bag includes 2 retort-pouched main courses, a dessert, and an accessory pack containing 2 fruit bars, 4 packages of cookies, an envelope of isotonic drink mix powder, an envelope of instant flavored tea mix, a hot beverage (coffee, cocoa, or tea), an envelope of cereal mix, candy, matches, fuel tablets, and tissue paper. A package of instant noodles is provided with every meal pack, but is issued separately. Typical Type M (Menu #1): Rendang Mutton with rice; Tandoori Chicken with rice; Red Bean dessert. Typical Type N (Menu #5): Pasta Bolognese; Yellow Rice with Chicken; Barley Dessert with milk. Typical Type V (Menu #1): Mock Chicken Curry with rice; Vegetarian Fried Noodle; Green Bean dessert with coconut milk.
Sri Lanka
The primary operational ration used in Sri Lanka is the "jungle ration," a 24-hour ration pack whose components are produced and assembled in Sri Lanka. It is issued to soldiers at the rate of one per soldier per day, and contains both food and sundry items designed to sustain troops where food storage and preparation facilities are not practical. All meals are precooked, requiring neither cooking nor preparation, and all items are packaged inside sealed plastic packages or lightweight aluminium cans. Precooked rice is included as part of every meal. Typical contents are: chicken curry with potatoes, vegetable curry, precooked rice, hard crackers, processed cheese, soup cubes, instant milk powder, orange drink powder, and dates or dried pineapple. A sundry pack containing tea bags, sugar, salt, glucose tablets, seasonings, matches, plastic bags, and toilet paper is included with every ration pack.
South Korea
The modern Korean army issues 2 types of field rations, Type I and Type II. Type I ration has ready-to-eat foods packed in foil-plastic trilaminate pouches, placed in turn inside a thin cardboard box. Typical contents include: 1 pouch (250 g) precooked white rice with meat and vegetables, plus a separate seasoning packet; 1 pouch (250 g) precooked rice with red beans; 1 packet (100 g) of 6 pork sausages in BBQ; 1 packet (100 g) kimchi; and 1 packet (50 g) cooked black beans. The Type II ration is a smaller, lighter, freeze-dried single-meal ration consisting of several small pouches packed inside a larger gray plastic pouch measuring 225 mm x 200 mm x 90 mm and weighing 278 g. Typical contents include: freeze dried rice (various flavors, usually with meat and vegetables included), a pouch of instant soup, flavored sesame oil, seasoning and spice packets, dried chives and chocolate.
Taiwan
The ROC Armed Forces issues two types of field rations. One of them is called "field combat ration pack" (野戰口糧), which contains crackers, bakkwa, dry mango, nuts, chocolate paste, candy and energy drink. The other one is "field combat retort pouch" (野戰加熱式餐盒), which has 13 types of flavor in total.
Vietnam
During the Vietnam War, Viet Cong forces often carried dried cooked vegetables, bags of pork floss, nutrition tablets, and ginger candy as ration food. In modern Vietnam, the field ration is very popular with soldiers, sailors and travellers. Some popular field rations in Vietnam today are the Army Field Ration BB107, the Paratrooper Dry Provision (for pilots), and the Chinese 3-star field ration, which comes in an easy-to-carry iron box.
The Vietnamese term for "field ration" is "lương khô" (Han-Nom: 糧枯).
The Ministry of Defense is also developing new MREs for the Special Operations Force and the Border Patrol Force. The new MREs cover three main meals: breakfast, lunch and dinner. They mostly contain Vietnamese braised pork, meat stews, sticky rice, white rice, dried vegetables that can be rehydrated with water, nutrition drinks, fruit juices, nutrition snacks, eating utensils (spoon, fork and straw), napkins and toothpicks.
United Nations
During peacekeeping operations in Lebanon, UN peacekeepers were reported to make use of a ration, packaged similarly to the U.S. MRE, designated the "Individual Food Ration" (French: Ration Alimentaire Individuelle). These rations are meant to be consumed over a period of 24 hours, and are notoriously difficult for civilians to acquire. There are 12 available menus: 3 "Western" (Pasta with Beef and Chickpea Stew, Vegetables with Beef and Tomato and Cheese Pasta, Chilli Con Carne and Baked Beans), 3 Halal, 3 Kosher, and 3 Vegetarian. Each ration also comes with both sweet and salty biscuits and an accessory pack containing fruit muesli, fruit jelly, fruit jam, dark chocolate, cheese spread, chewing gum, eight pouches of sugar, salt, pepper, ketchup and Mexican sauce. Another accessory pack with instant coffee, tea, an instant fruit drink and a hyperprotein drink is also included.
NATO Standard
NATO defines a General Purpose Individual Operational Ration as "A self-contained combat ration that provides adequate food for 24 hours for one person to maintain health, physical performance, and cognitive function under routine training or operational conditions. This ration is shelf stable and may require water to rehydrate some of the contents. The contents may be eaten hot or cold. General purpose individual operational rations are intended to be used during standard military operations in very broad but typically moderate operational conditions."
Shelf life
The shelf life of the ration from the time of delivery to the contracting authority must be at least 24 months at a storage temperature of 25 °C.
Nutritional content
NATO bases the nutritional content requirement on a reference soldier weighing , who on normal operations would have an energy expenditure of approximately 3,600 kcal per day. For combat operations, i.e., missions involving sustained, dismounted light-infantry or Special Forces operations, energy expenditure is estimated at 4,900 kcal per day; however, this is seen as a worst-case scenario. Operational individual rations are designed to be used for a period of up to 30 days, after which supplements of fresh food should be given and medical screening for nutritional deficiencies increased.
Menu Fatigue
To avoid menu fatigue resulting from lack of variety in the ration, all 24-hour operational rations should, at a minimum, include:
Main courses (breakfast, lunch, dinner, or unspecified) generally intended to be eaten heated
Snacks, savoury and sweet (bars, chocolates, caramels, dried meat, nuts, crackers, cookies, etc.)
Beverages, hot and cold (coffee, tea, hot chocolate, sports drinks, etc.)
Spreads (cheese, jam, peanut butter, etc.) and breads.
The 21-meal, 7-day menu cycle shall offer sufficient variety to at least allow a soldier to have two different meals each day for a period of 7 days, without repetition, although a breakfast meal and a variety of small snack and beverage items may be repeated. It is recommended that coffee and/or tea be provided for each meal.
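The 21-meal, 7-day cycle rule above is mechanical enough to verify programmatically. The sketch below is a hypothetical checker, not part of the NATO standard: it confirms that a week's plan gives each day two different non-breakfast meals with no repeats across the week (breakfasts may repeat), which implies at least 14 distinct main courses.

```python
# Hypothetical checker for the NATO menu-cycle rule described above:
# over 7 days, two different (non-breakfast) meals per day with no
# main-course repeats; breakfasts are allowed to repeat.
def cycle_ok(daily_meals):
    """daily_meals: list of 7 (breakfast, meal1, meal2) tuples."""
    if len(daily_meals) != 7:
        return False
    mains = [m for _, m1, m2 in daily_meals for m in (m1, m2)]
    # Each day's two meals must differ, and the 14 mains must all be distinct.
    return all(m1 != m2 for _, m1, m2 in daily_meals) and len(set(mains)) == 14

# Illustrative plan: the same breakfast every day, 14 distinct mains.
plan = [("breakfast", f"main{2 * i}", f"main{2 * i + 1}") for i in range(7)]
ok = cycle_ok(plan)
```

A plan shorter than seven days, or one that reuses a main course, fails the check.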
Serving Temperature
It is desirable that the main course components and hot beverages be provided with a heater capable of raising them to at least 62 °C from an ambient temperature of 20 °C within 12 minutes. However, main course components or entrées must be consumable without heating.
Accessories
It is desirable that all necessary equipment to heat and consume an individual ration be included in the ration pack. Every ration or meal shall contain at least a spoon, unless all food items are intended as eat-out-of-hand and do not require any eating utensils for consumption.
Water treatment items are not required in the individual ration pack. If any water treatment items are included in the individual ration packs, their primary packaging should clearly mention that these items are for water treatment only (and not for direct consumption).
Some rations include a separate bag intended for collection and disposal of packing materials or packaging waste generated from consuming/using the ration components. However, if no separate refuse bag is provided with the ration, some portion of the ration packaging shall be usable or easily adaptable as a means to collect miscellaneous packaging waste that is generated.
Food Packaging
Protective packaging of components or items in a ration that are typically in contact with the product or food items is referred to as primary packaging. Secondary packaging is that packaging which is outside the primary packaging layer and in the case of general purpose individual operational rations this packaging is used to group several primary packages together. Lastly, tertiary packaging is that used to support bulk storage, shipping, and handling of product in the distribution supply chain. Rations are grouped at this level in fibreboard boxes or cases and subsequently palletised as unit loads for ease and efficiency of handling and distribution.
It is preferable that the packaging be easily opened without specific tools. If specific tools are required, they should be included in the ration pack, or alternatively a set should exist containing the ration pack and all necessary specific tools.
Primary and/or secondary packaging should be waterproof. Secondary packaging should be insect resistant. Tertiary packaging must be water resistant.
It is recommended that the rations be stacked on NATO type pallets (1200 x 1000 millimeters) for standardisation purposes. The minimum number of rations on a pallet position (i.e., on one single or on two stacked pallets) shall be 150 days of supply (DOS). The height and weight for a single pallet position, including the pallet(s), shall not exceed 2.2 meters and 1000 kg respectively. A pallet of rations must contain different menus.
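The pallet constraints above (at least 150 days of supply per position, with 2.2 m height and 1000 kg weight ceilings on a 1200 × 1000 mm footprint) lend themselves to a simple feasibility calculation. The sketch below is illustrative only; the case dimensions, case weight, and pallet tare in the example are assumptions, not figures from the standard.

```python
# Hypothetical feasibility check for the NATO pallet-position constraints
# described above: >= 150 days of supply (DOS), height <= 2.2 m, and
# weight <= 1000 kg on a 1200 x 1000 mm pallet footprint.
# Case dimensions, case weight, and pallet tare are illustrative assumptions.
def pallet_position_ok(rations_per_case, case_l_mm, case_w_mm, case_h_mm,
                       case_kg, pallet_h_mm=150, pallet_kg=25,
                       stacked_pallets=1):
    cases_per_layer = (1200 // case_l_mm) * (1000 // case_w_mm)
    usable_height_mm = 2200 - stacked_pallets * pallet_h_mm
    layers = usable_height_mm // case_h_mm
    # The weight ceiling can also cap the number of cases.
    max_cases_by_weight = int((1000 - stacked_pallets * pallet_kg) // case_kg)
    cases = min(cases_per_layer * layers, max_cases_by_weight)
    dos = cases * rations_per_case  # one 24-hour ration = one day of supply
    return dos, dos >= 150

# Example: cases of 10 rations, 400 x 300 x 250 mm, 12 kg each.
dos, ok = pallet_position_ok(10, 400, 300, 250, 12)
```

With those assumed figures the position holds 720 days of supply, comfortably over the 150 DOS minimum.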
See also
History of military nutrition
History of military nutrition in the United States
List of military food topics
Military ration
Steve1989MREInfo
References
External links
Operational Rations of the Department of Defense, 7th Edition
'Rations information from around the world – MREInfo'
Military food |
40894235 | https://en.wikipedia.org/wiki/Helix%20ALM | Helix ALM | Helix ALM, formerly called TestTrack, is application lifecycle management (ALM) software developed by Perforce. The software allows developers to manage requirements, defects, issues and testing during software development.
History
Helix ALM's precursor, TestTrack Pro, was developed by Seapine Software, and first shipped in 1996. In November 2016, Perforce acquired Seapine, and rebranded the software as Helix ALM.
Functionality
The software tracks software development processes, from feature requests and requirements through design revisions to actual changes in the code. It keeps track of what tests were done, what was tested, who performed the test and when, on what platform, under which configuration and in what language. It offers the ability to create, manage, and link artifacts from the beginning to the end of a design and development project, providing end-to-end traceability of all development artifacts and giving managers a better handle on the shifting requirements that define their projects. It supports regulatory compliance requirements, including 21 CFR Part 11 and Sarbanes-Oxley.
Architecture
Helix ALM has a client–server architecture. The server manages a central database of requirements, test cases, testing evidence, defects, feature requests, work items, test configurations, users, and security groups. The client and server communicate via a TCP/IP connection using 512-bit encryption.
Server
Helix ALM stores data in a variety of relational database management systems including SQL Server, Oracle, and Postgres.
Clients
There are several different categories of Helix ALM clients: GUI, Web UI, SOAP, REST API, and plugin.
The cross-platform GUI client is developed with Qt and available on Windows, Mac OS X, and Linux. It fully supports all end-user and administration operations.
The unified web application allows software developers and testers to create and review requirements, work with issues, and execute and track tests from their web browser.
Helix ALM's SOAP SDK provides a language- and platform-independent way to extend built-in functionality by writing applications that access and manipulate its data.
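As a rough illustration of the kind of language-independent access a SOAP SDK enables, the sketch below constructs a SOAP 1.1 request envelope by hand using Python's standard library. The operation name (`getDefect`), the service namespace, and the `recordID` field are hypothetical placeholders; they are not taken from the actual Helix ALM SOAP API.

```python
# Sketch of building a SOAP 1.1 request envelope. The operation name,
# namespace, and field names are hypothetical placeholders, NOT the
# actual Helix ALM SOAP API.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
APP_NS = "http://example.com/alm"  # placeholder service namespace

def build_request(operation, **fields):
    # SOAP messages are an XML Envelope wrapping a Body with the operation.
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{APP_NS}}}{operation}")
    for name, value in fields.items():
        ET.SubElement(op, f"{{{APP_NS}}}{name}").text = str(value)
    return ET.tostring(env, encoding="unicode")

xml = build_request("getDefect", recordID=42)
```

In practice a SOAP client library would generate such envelopes from the service's WSDL and POST them over HTTP, which is what makes the approach language- and platform-independent.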
The plugin interfaces integrate with popular IDEs to perform actions, such as closing a defect or manually assigning a work item to another team member, from within third-party applications. Helix ALM plugins are available for Eclipse, Visual Studio, Outlook, Excel, and QA Wizard. Helix ALM also integrates with various SCM tools including Git, CVS, Perforce, Subversion, Surround SCM, and SourceSafe.
See also
Comparison of issue tracking systems
References
External links
Helix ALM page on Perforce website
Proprietary version control systems
Project management software |