1471470
https://en.wikipedia.org/wiki/Novell%20S-Net
Novell S-Net
S-Net (also known as ShareNet) was a network operating system and the set of network protocols it used to talk to client machines on the network. Released by Novell in 1983, S-Net was an entirely proprietary operating system written for the Motorola 68000 processor. It used a star network topology. S-Net has also been called NetWare 68, with the 68 denoting the 68000 processor. It was superseded in 1985 by NetWare 86, which was written for the Intel 8086 processor. References Network operating systems S-Net Proprietary operating systems
56166481
https://en.wikipedia.org/wiki/BambooHR
BambooHR
BambooHR is an American technology company that provides human resources software as a service. Founded in 2008 by Ben Peterson and Ryan Sanders, the company is based in Lindon, Utah. BambooHR's services include an applicant tracking system and an employee benefits tracker. In 2019, Gadjo C Sevilla and Brian T. Horowitz wrote in PC Magazine that BambooHR is "pricier than competing products" and "lacking in benefits administration (BA) features compared to rival solutions" but its "solid feature set and user-friendly interface push it to the top of our list". History BambooHR was founded in 2008 by Ben Peterson and Ryan Sanders. Based in Lindon, Utah, the company has a dancing panda mascot. BambooHR had 470 employees in November 2019. In 2019, BambooHR served 11,000 clients based in 100 countries. BambooHR customers in 2017 included Shopify, Foursquare, and Reddit. Most of BambooHR's employees are salaried and do not qualify for overtime. BambooHR has an "anti-workaholic policy" in which employees are forbidden from working more than 40 hours a week. Company co-founder Ryan Sanders' rationale was that occupational burnout has a detrimental effect on the health of his employees and negatively affects their families and his company. He started taking this view during his organizational leadership studies as a graduate student at Gonzaga University. BambooHR started a "paid paid vacation" policy in 2015. Employees who have worked at BambooHR for at least six months are eligible for $2,000 in reimbursements for vacation expenses like airline tickets, hotel rooms, and other tourist activities. Joe Fryer profiled BambooHR's vacation policy on the Today show on September 16, 2016. According to a 2015 article in The Wall Street Journal, BambooHR terminated an employee for violating the workweek policy by working more than 40 hours. In October 2019, BambooHR appointed Brad Rencher, who had previously been a marketing executive at Adobe Inc., to replace Ben Peterson as the company's CEO. Peterson became a co-chairman of the board of directors with fellow co-founder Ryan Sanders. Software BambooHR provides companies with human resources management software as a service. The service has a dashboard homepage with different sections for employee information, vacation time record keeping, and reports. The sections are: "My Info", "Employees", "Job Openings", "Reports", and "Files". The dashboard also contains an employee's image, contact details, and vacation time remaining. It has areas that show scheduled lessons and business communications. Users can see their coworkers' birthdays and scheduled time off. BambooHR's software includes an applicant tracking system. The system has a catalogue of job opportunities and data about each opportunity, including the hiring manager, how many people have applied, and how long it has been posted. BambooHR's integrations allow job openings to be posted simultaneously to the company's jobs page and to career sites like Glassdoor and Indeed. BambooHR allows companies to record employee benefits. It includes reports that help employees fill out Affordable Care Act compliance forms. In 2017, Juan Martinez and Rob Marvin wrote in PC Magazine that BambooHR's benefits administration functionalities are inferior to Zenefits'. They concluded that although the software is "easy to get up and running", it is more expensive than a large number of its competitors and its website is "pretty but lacks functionality". It also has a performance review feature.
It provides an open API to allow integration with other HR software services. In 2017, it launched the BambooHR Marketplace to allow software developers to market HR apps they have integrated with BambooHR. References External links Official website American companies established in 2008 Companies based in Utah County, Utah Human resource management software Online companies of the United States Privately held companies of the United States Software companies based in Utah 2008 establishments in Utah Software companies established in 2008 Business services companies established in 2008
44820520
https://en.wikipedia.org/wiki/1998%20Sun%20Bowl
1998 Sun Bowl
The 1998 Norwest Sun Bowl was played by the TCU Horned Frogs and the USC Trojans. This was the 65th Sun Bowl held and the last sponsored by Norwest Corporation, as the following year's game was sponsored by Wells Fargo. Background Paul Hackett was in his first year (of three) with USC, having led them to a bowl game for the first time since the 1996 Rose Bowl. This would be Hackett's only bowl game with USC. TCU hadn't been to a bowl game since the 1994 Independence Bowl, nor won one since the 1957 Cotton Bowl Classic. Despite having only a 6–5 record, they were invited to a bowl game due to the Big Ten not having enough bowl-eligible teams. They were coached by first-year head coach Dennis Franchione. Game summary Basil Mitchell had only 19 carries but ran for 185 yards and two touchdowns as TCU scored on their first three possessions and dominated the time of possession in the first half (having the ball for 20:15). TCU quarterback Patrick Batteaux also scored two touchdowns, both rushing. But Carson Palmer (who had 280 yards passing) threw two touchdowns (one to Billy Miller and the other to Petros Papadakis) that made it 28–16 going into the fourth quarter. After stopping TCU on a drive, the Trojans got the ball back and drove to the TCU 20 early in the fourth quarter. But TCU's defense stuffed their offense, and USC settled for a field goal to make it 28–19. From that point on, TCU ate up most of the clock and USC did not score again, giving TCU their first bowl win since 1957 in what would be the first of six consecutive bowl appearances. Defensive lineman London Dunlap was named Jimmy Rogers, Jr. Most Valuable Lineman and running back Basil Mitchell was named C.M. Hendricks Most Valuable Player. References 1998–99 NCAA football bowl games Sun Bowl TCU Horned Frogs football bowl games USC Trojans football bowl games December 1998 sports events in the United States 1998 in sports in Texas
19572250
https://en.wikipedia.org/wiki/Locus%20Computing%20Corporation
Locus Computing Corporation
Locus Computing Corporation was formed in 1982 by Gerald J. Popek, Charles S. Kline and Gregory I. Thiel to commercialize the technologies developed for the LOCUS distributed operating system at UCLA. Locus was notable for commercializing single-system image software and producing the Merge package which allowed the use of DOS and Windows 3.1 software on Unix systems. Locus was acquired by Platinum Technology Inc in 1995. Products AIX for IBM PS/2 and System/370 Locus was commissioned by IBM to produce a version of the AIX UNIX-based operating system for the PS/2 and System/370 ranges. The single-system image capabilities of LOCUS were incorporated under the name of AIX TCF (transparent computing facility). OSF/1 AD for the Intel Paragon Locus was commissioned by Intel to produce a multiprocessor version of OSF/1 for the Intel Paragon, a massively parallel NORMA (No Remote Memory Access) system. The system was known as OSF/1 AD, where AD stood for "Advanced Development". To allow inter-processor process migration and communication between the individual nodes of the Paragon system, they re-worked the TCF technology from LOCUS as Transparent Network Computing, or TNC, inventing the concept of the VPROC (virtual process), an analogy of the VNODE (virtual inode) from the SunOS virtual file system (a toy software sketch of this idea appears below). UnixWare NonStop Clusters Locus was commissioned by Tandem Computers to include their TNC technology in a highly available single-system image clustering system based on SCO UnixWare, UnixWare NonStop Clusters. During the course of the project Locus was acquired by Platinum Technology Inc, which transferred the team working on NonStop Clusters to Tandem. Tandem was later bought by Compaq. The UnixWare product was acquired from SCO by Caldera Systems/Caldera International, who discontinued commercialization of the NonStop Clusters product in favor of the simpler Reliant HA system. Compaq then decided to release the NonStop Clusters code as open source software, porting it to Linux as the OpenSSI project. Merge Merge was a system developed by Locus in late 1984 for the AT&T 6300+ computer, which allowed DOS (and hence DOS applications) to be run under the native UNIX SVR2 operating system. The 6300+ used an Intel 80286 processor and included special-purpose circuitry to allow virtualization of the 8086 instruction set used by DOS. Merge was later modified to use the virtual 8086 mode provided by Intel 80386 processors. It was sold for Microport SVR3 and later SCO Unix and UnixWare. In the late 1980s, the main commercial competitor of Merge was VP/IX, developed by Interactive Systems Corporation and Phoenix Technologies. Around 1994, Merge included an innovative socket API that used Intel ring 2 for virtualization. Although this was the fastest network access of any Windows virtualization system then on the market, it did not increase sales enough to make Locus independent. This socket API was designed and developed by Real Time, Inc. of Santa Barbara. Locus eventually joined the Microsoft WISE program, which gave them access to Windows source code and allowed later versions of Merge to run shrink-wrapped Windows applications without a copy of Windows. PC-Interface PC-Interface was a popular LAN-based cross-platform integration toolkit for Unix, providing MS-DOS/Windows/Macintosh and Unix integration using Unix as the file system. It supported AIX, Santa Cruz Operation (SCO) Unix, UnixWare, Motorola 9000 and many other Unixes, and came with one Mac and one MS-DOS/Windows client.
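The VPROC abstraction described above can be pictured, very loosely, in software terms: a handle that lets callers operate on a process without knowing which node of the cluster currently hosts it, much as a vnode hides where a file's data lives. The following Python sketch is only an illustration of that idea under assumed names (VProc, a transport object); it is not based on the actual LOCUS, TNC, or OSF/1 AD source code.

import os
import signal

class VProc:
    """Toy 'virtual process' handle: callers act on a process without knowing
    which cluster node currently hosts it (compare a vnode for files)."""

    def __init__(self, pid, home_node, local_node, transport):
        self.pid = pid              # cluster-wide process identifier
        self.home_node = home_node  # node currently executing the process
        self.local_node = local_node
        self.transport = transport  # stand-in for the cluster messaging layer

    def deliver_signal(self, signum=signal.SIGTERM):
        if self.home_node == self.local_node:
            os.kill(self.pid, signum)   # ordinary local delivery
        else:
            # forward the operation to the node that owns the process
            self.transport.send(self.home_node, ("kill", self.pid, signum))

    def migrate(self, new_node):
        # migration updates only the handle's notion of "home";
        # callers keep using the same VProc object
        self.home_node = new_node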
References Defunct computer companies based in California Technology companies based in Greater Los Angeles Companies based in Los Angeles County, California Software companies established in 1982 Software companies disestablished in 1995 1982 establishments in California 1995 disestablishments in California Defunct software companies of the United States Defunct companies based in Greater Los Angeles 1995 mergers and acquisitions
50404485
https://en.wikipedia.org/wiki/Greek%20epic%20in%20film
Greek epic in film
Greek mythology has consistently served as a source for many filmmakers due to its artistic appeal. Antiquity has been reimagined in many ways, and these recreations have met with great public success regardless of their individual achievements. The plot lines of epic poetry are even more appealing with their enthralling battles, heroic characters, monsters, and gods. With modern technology and computer-generated imagery (CGI), the ability to recreate Greek mythology on screen has improved greatly. As a scholar of Homer put it, "At the beginning of literature, when heroic poetry reached society as a whole...society listened; in the twentieth century society views... the modern heroic medium is film, and not necessarily the productions that are held in highest critical regard." Homer's Iliad The ancient Greek epic poem the Iliad (Ἰλιάς) details the final year of the decade-long Trojan War. The war began due to divine interaction with mortals: Eris, personification of strife and discord, presented a golden apple to the goddesses Aphrodite, Athena and Hera but instructed that only the fairest of them could keep it. Zeus sent the three goddesses to Paris, the prince of Troy, who decided to award the apple to Aphrodite, who had promised him the beautiful Helen, wife of the Greek king Menelaus. Paris' taking of Helen began the fighting between the Greeks and Trojans. Although the initial rivalry exists between King Menelaus and Paris, the epic poem highlights the argument between the Greek hero Achilles and the Greek king Agamemnon, Achilles' inner turmoil about whether or not he wants to engage in the fighting, and the overall heroic desire for kleos (κλέος, "glory, fame") and eventual nostos (νόστος, "homecoming"). Helen of Troy (1956) Helen of Troy is a 1956 epic film based on Homer's Iliad and Odyssey. Filmed in parts of Rome, the film retells the story of the Trojan War, albeit with some major changes from the Iliad storyline. Troy (2004) Troy is an American epic adventure war film directed by Wolfgang Petersen. Based on Homer's Iliad, the film tells the story of the Trojan War, following the Greek attack on Troy as well as the stories of those involved. Although the Iliad describes the rivalry between Achilles and Agamemnon in the war's ninth year, Troy tells the story of the entire decade-long conflict. The film's ending is not taken from the Iliad, which closes with Hector's death and funeral, but draws on later sources such as Virgil's Aeneid for the fall of Troy. The film's cast includes actors Brad Pitt, Orlando Bloom, and Eric Bana. Homer's Odyssey Another Homeric epic often used in modern film, the Odyssey (Ὀδύσσεια) serves as a sort of sequel to the Iliad. The poem narrates the journey home (or nostos) of the Greek hero Odysseus after ten years of fighting in the Trojan War. Like the Iliad, the poem begins in medias res, or "in the middle of things." Ulysses (1954) Ulysses is a fantasy-adventure film based on Homer's Odyssey. The film tells the story of Ulysses' attempt to return home after the Trojan War, as well as the adventures that ensue. In the film, actress Silvana Mangano plays both Penelope, the wife of Ulysses, and the sorceress Circe. The 1954 film also features actors Kirk Douglas and Anthony Quinn. Ulysses was a tremendous success, which later led to the production of the film Hercules in 1958, a film credited with igniting the Italian sword-and-sandal craze of the 1960s.
L'Odissea (1968) The Odyssey (L'Odissea in Italian) is an eight-episode European TV miniseries broadcast on RAI (Italian state TV) in 1968 and based on Homer's Odyssey. An Italian, Yugoslavian, German and French (Radiodiffusion-Télévision Française) coproduction, it was directed by Franco Rossi, assisted by Piero Schivazappa and Mario Bava; the cast includes Bekim Fehmiu as Odysseus, Irene Papas as Penelope, Samson Burk as the Cyclops, Barbara Bach as Nausicaa, and Gérard Herter. Several critics consider the series to be a masterful representation of the ancient world. The adaptation is considered by some to be the most faithful rendering of Homer's epic on screen, both for including most of the characters and events and for attempting to depict them in graphic detail. The Odyssey (1997) The Odyssey is a British-American fantasy-adventure television miniseries based on Homer's ancient Greek epic poem, The Odyssey. The miniseries won a 1997 Emmy Award and was nominated for a Golden Globe. The two-part NBC miniseries was filmed in Malta, Turkey and parts of England, as well as places around the Mediterranean where the story takes place. Apollonius of Rhodes' Argonautika Jason and the Argonauts (1963) Jason and the Argonauts is an American-British fantasy film made in 1963. Known for its fantasy creatures and iconic fight scenes, the film was released by Columbia Pictures in association with Morningside Productions. The film tells the story of the Greek hero as he leads a team of adventurers in a quest for the legendary Golden Fleece. Hercules (1958) Hercules is a 1958 Italian epic fantasy film based upon both the Hercules and the Quest for the Golden Fleece myths. The film's screenplay is based loosely upon the myths of Hercules and the Greek epic poem Argonautika by Apollonius of Rhodes. Hercules tells the tale of the Greek hero as he sails with the Argonauts, experiencing adventure and romance. Film History of film
5230712
https://en.wikipedia.org/wiki/Daniel%20J.%20Barrett
Daniel J. Barrett
Daniel J. Barrett is a writer, software engineer, and musician. He is best known for his technology books. Writing Barrett has written a number of technical books on computer topics. The most well-known are Linux Pocket Guide and SSH, The Secure Shell: The Definitive Guide. His books have been translated into Chinese, Czech, French, German, Hungarian, Italian, Japanese, Korean, Polish, Portuguese, Russian, and Spanish. Corporate use of MediaWiki Barrett, author of the book MediaWiki (), has received media coverage for his deployment of MediaWiki in corporate environments. Gentle Giant Barrett has been active in the resurgence of 1970s progressive rock band Gentle Giant from the 1990s onward. He created the official Gentle Giant Home Page in 1994, and though it began as a fan site, it was adopted by the band and is listed as the "Official Gentle Giant website" on the band's CD re-releases. In 1996, Barrett compiled a 2-CD set of their songs for PolyGram entitled Edge of Twilight. Later, he also helped to coordinate the creation of the boxed sets Under Construction and Unburied Treasure. Humor In 1988, Barrett wrote and recorded the song "Find the Longest Path," a parody incorporating an NP-complete problem in computer science and the frustrations of graduate school. It has been played at mathematics conferences, incorporated into several YouTube videos by other people, and independently performed by a choral ensemble at ACM SIGCSE 2013. Computer scientist Robert Sedgewick ends his algorithms course on Coursera with this song. Bibliography Barrett, Daniel J., Bandits on the Information Superhighway, 1996, . Barrett, Daniel J., NetResearch: Finding Information Online, 1997, . Barrett, Daniel J., Polylingual Systems: An Approach to Seamless Interoperability, Doctoral dissertation, University of Massachusetts Amherst, February 1998. Barrett, Daniel J., and Silverman, Richard E., SSH, The Secure Shell: The Definitive Guide, 2001, . Barrett, Daniel J., Silverman, Richard E., Byrnes, Robert A., Linux Security Cookbook, 2003, . Barrett, Daniel J., Linux Pocket Guide, 2004, . Barrett, Daniel J., Silverman, Richard E., Byrnes, Robert A., SSH, The Secure Shell: The Definitive Guide, Second Edition, 2005, . Barrett, Daniel J., MediaWiki, October 2008, . Barrett, Daniel J., Linux Pocket Guide, Second Edition, March 2012, . Barrett, Daniel J., Macintosh Terminal Pocket Guide, June 2012, . Barrett, Daniel J., Linux Pocket Guide, Third Edition, June 2016, . Barrett, Daniel J., Efficient Linux at the Command Line, March 2022, . Translations Bandits on the Information Superhighway: Barrett, Daniel J., Gauner Und Ganoven Im Internet, 1998, . (German) Barrett, Daniel J., Bandité na informační dálnici, 1999, . (Czech) NetResearch: Finding Information Online: Barrett, Daniel J., 網路搜尋寶典, 1998, . (Simplified Chinese) SSH, the Secure Shell: The Definitive Guide: Barrett, Daniel J., and Silverman, Richard E., SSH: Secure Shell - Ein umfassendes Handbuch, December 2001, . (German) Barrett, Daniel J., and Silverman, Richard E., SSH, le shell sécurisé: La référence, January 2002, . (French) Barrett, Daniel J., and Silverman, Richard E., SSH, KompletnÍ průvodce, April 2003, . (Czech) Barrett, Daniel J., Silverman, Richard E., Byrnes, Robert A., 実用SSH 第2版—セキュアシェル徹底活用ガイド, November 2006, . (Japanese) Linux Security Cookbook: Barrett, Daniel J., Silverman, Richard E., Byrnes, Robert A., Linux Sicherheits-Kochbuch, October 2003, . 
(German) Barrett, Daniel J., Silverman, Richard E., Byrnes, Robert A., Linuxセキュリティ クックブック――システム防御のためのレシピ集, November 2003, . (Japanese) Barrett, Daniel J., Silverman, Richard E., Byrnes, Robert A., Linux Bezpieczenstwo Receptury, 2003, . (Polish) Barrett, Daniel J., Silverman, Richard E., Byrnes, Robert A., Linux Biztonsági Eljárások, 2004, . (Hungarian) Linux Pocket Guide: Barrett, Daniel J., Linux - kurz & gut, September 2004, . (German) Barrett, Daniel J., Linux: основнЫе командЫ, 2005, . (Russian) Barrett, Daniel J., Linux: Guida pocket, May 2005, . (Italian) Barrett, Daniel J., Linuxハンドブック――機能引きコマンドガイド, August 2005, . (Japanese) Barrett, Daniel J., Linux: Kapresní přehled, 2006, . (Czech) Barrett, Daniel J., Linux - précis & concis, February 2006, . (French) Barrett, Daniel J., Linux - Guia de Bolso, July 2006, . (Portuguese) Barrett, Daniel J., Guia de Bolsillo - Linux, September 2012, . (Spanish) Barrett, Daniel J., Linux - kurz & gut (2. Auflage), September 2012, . (German) Barrett, Daniel J., Linux Leksykon Kieszonkowy, Wydanie II, 2013, . (Polish) Barrett, Daniel J., Linux口袋书(第2版), July 2014, (Chinese) Barrett, Daniel J., Linux - kurz & gut, Die wichtigen Befehle (3. Auflage), February 2017, . (German) Barrett, Daniel J., Linux Leksykon Kieszonkowy, Wydanie III, 2017, . (Polish) Barrett, Daniel J., Linux命令速查手册(第3版), January 2018, . (Simplified Chinese) Barrett, Daniel J., 리눅스 핵심 레퍼런스, February 2018, . (Korean) MediaWiki: Barrett, Daniel J., MediaWiki efficace: Installer, utiliser et administrer un wiki, March 2009, . (French) References American computer programmers American technology writers American humorists American rock musicians Amiga people Usenet people 1963 births Living people
60708433
https://en.wikipedia.org/wiki/Jill%20Macoska
Jill Macoska
Jill A. Macoska is an American scientist and professor. She is the Alton J. Brann endowed chair and distinguished professor of science and mathematics at the University of Massachusetts Boston. Education Macoska earned her Ph.D. in biochemistry from the City University of New York in 1988. She completed postdoctoral work at Harvard University in molecular genetics and at the Michigan Cancer Foundation. Career and research Macoska is the Alton J. Brann Distinguished Professor in Science and Mathematics and Professor of Biological Sciences at the University of Massachusetts Boston. For the past 20 years, her research has focused on elucidating the molecular genetic alterations and dysfunctional intracellular signaling mechanisms that promote prostate pathobiology. Macoska serves as the first director of the Center for Personalized Cancer Therapy. References 20th-century American women 21st-century American women 20th-century American biologists 21st-century American biologists 20th-century American women scientists 21st-century American women scientists Year of birth missing (living people) Living people City University of New York alumni University of Massachusetts Boston faculty American biochemists Women biochemists American geneticists Women geneticists American women academics
10006
https://en.wikipedia.org/wiki/Electronic%20musical%20instrument
Electronic musical instrument
An electronic musical instrument or electrophone is a musical instrument that produces sound using electronic circuitry. Such an instrument produces sound by outputting an electrical, electronic or digital audio signal that is ultimately fed into a power amplifier which drives a loudspeaker, creating the sound heard by the performer and listener. An electronic instrument might include a user interface for controlling its sound, often by adjusting the pitch, frequency, or duration of each note. A common user interface is the musical keyboard, which functions similarly to the keyboard on an acoustic piano, except that with an electronic keyboard, the keyboard itself does not make any sound. An electronic keyboard sends a signal to a synth module, computer or other electronic or digital sound generator, which then creates a sound. However, it is increasingly common to separate user interface and sound-generating functions into a music controller (input device) and a music synthesizer, respectively, with the two devices communicating through a musical performance description language such as MIDI or Open Sound Control. All electronic musical instruments can be viewed as a subset of audio signal processing applications. Simple electronic musical instruments are sometimes called sound effects; the border between sound effects and actual musical instruments is often unclear. In the 21st century, electronic musical instruments are widely used in most styles of music. In popular music styles such as electronic dance music, almost all of the instrument sounds used in recordings are created with electronic instruments (e.g., bass synth, synthesizer, drum machine). Development of new electronic musical instruments, controllers, and synthesizers continues to be a highly active and interdisciplinary field of research. Specialized conferences, notably the International Conference on New Interfaces for Musical Expression, have been organized to report cutting-edge work, as well as to provide a showcase for artists who perform or create music with new electronic music instruments, controllers, and synthesizers. Classification In musicology, electronic musical instruments are known as electrophones. Electrophones are the fifth category of musical instrument under the Hornbostel-Sachs system. Musicologists typically classify an instrument as an electrophone only if its sound is initially produced by electricity, excluding electronically controlled acoustic instruments such as pipe organs and amplified instruments such as electric guitars. The category was added to the Hornbostel-Sachs musical instrument classification system by Sachs in his 1940 book The History of Musical Instruments; the original 1914 version of the system did not include it. Sachs divided electrophones into three subcategories:
51 = electrically actuated acoustic instruments (e.g., pipe organ with electronic tracker action)
52 = electrically amplified acoustic instruments (e.g., acoustic guitar with pickup)
53 = instruments which make sound primarily by way of electrically driven oscillators
The last category included instruments such as theremins or synthesizers, which he called radioelectric instruments. Francis William Galpin provided such a group in his own classification system, which is closer to Mahillon than Sachs-Hornbostel.
For example, in Galpin's 1937 book A Textbook of European Musical Instruments, he lists electrophones with three second-level divisions for sound generation ("by oscillation," "electro-magnetic," and "electro-static"), as well as third-level and fourth-level categories based on the control method. Present-day ethnomusicologists, such as Margaret Kartomi and Terry Ellingson, suggest that, in keeping with the spirit of the original Hornbostel Sachs classification scheme, if one categorizes instruments by what first produces the initial sound in the instrument, that only subcategory 53 should remain in the electrophones category. Thus, it has been more recently proposed, for example, that the pipe organ (even if it uses electric key action to control solenoid valves) remain in the aerophones category, and that the electric guitar remain in the chordophones category, and so on. Early examples In the 18th-century, musicians and composers adapted a number of acoustic instruments to exploit the novelty of electricity. Thus, in the broadest sense, the first electrified musical instrument was the Denis d'or keyboard, dating from 1753, followed shortly by the clavecin électrique by the Frenchman Jean-Baptiste de Laborde in 1761. The Denis d'or consisted of a keyboard instrument of over 700 strings, electrified temporarily to enhance sonic qualities. The clavecin électrique was a keyboard instrument with plectra (picks) activated electrically. However, neither instrument used electricity as a sound-source. The first electric synthesizer was invented in 1876 by Elisha Gray. The "Musical Telegraph" was a chance by-product of his telephone technology when Gray accidentally discovered that he could control sound from a self-vibrating electromagnetic circuit and so invented a basic oscillator. The Musical Telegraph used steel reeds oscillated by electromagnets and transmitted over a telephone line. Gray also built a simple loudspeaker device into later models, which consisted of a diaphragm vibrating in a magnetic field. A significant invention, which later had a profound effect on electronic music, was the audion in 1906. This was the first thermionic valve, or vacuum tube and which led to the generation and amplification of electrical signals, radio broadcasting, and electronic computation, among other things. Other early synthesizers included the Telharmonium (1897), the Theremin (1919), Jörg Mager's Spharophon (1924) and Partiturophone, Taubmann's similar Electronde (1933), Maurice Martenot's ondes Martenot ("Martenot waves", 1928), Trautwein's Trautonium (1930). The Mellertion (1933) used a non-standard scale, Bertrand's Dynaphone could produce octaves and perfect fifths, while the Emicon was an American, keyboard-controlled instrument constructed in 1930 and the German Hellertion combined four instruments to produce chords. Three Russian instruments also appeared, Oubouhof's Croix Sonore (1934), Ivor Darreg's microtonal 'Electronic Keyboard Oboe' (1937) and the ANS synthesizer, constructed by the Russian scientist Evgeny Murzin from 1937 to 1958. Only two models of this latter were built and the only surviving example is currently stored at the Lomonosov University in Moscow. It has been used in many Russian movies—like Solaris—to produce unusual, "cosmic" sounds. Hugh Le Caine, John Hanert, Raymond Scott, composer Percy Grainger (with Burnett Cross), and others built a variety of automated electronic-music controllers during the late 1940s and 1950s. 
In 1959 Daphne Oram produced a novel method of synthesis, her "Oramics" technique, driven by drawings on a 35 mm film strip; it was used for a number of years at the BBC Radiophonic Workshop. This workshop was also responsible for the theme to the TV series Doctor Who a piece, largely created by Delia Derbyshire, that more than any other ensured the popularity of electronic music in the UK. Telharmonium In 1897 Thaddeus Cahill patented an instrument called the Telharmonium (or Teleharmonium, also known as the Dynamaphone). Using tonewheels to generate musical sounds as electrical signals by additive synthesis, it was capable of producing any combination of notes and overtones, at any dynamic level. This technology was later used to design the Hammond organ. Between 1901 and 1910 Cahill had three progressively larger and more complex versions made, the first weighing seven tons, the last in excess of 200 tons. Portability was managed only by rail and with the use of thirty boxcars. By 1912, public interest had waned, and Cahill's enterprise was bankrupt. Theremin Another development, which aroused the interest of many composers, occurred in 1919–1920. In Leningrad, Leon Theremin built and demonstrated his Etherophone, which was later renamed the Theremin. This led to the first compositions for electronic instruments, as opposed to noisemakers and re-purposed machines. The Theremin was notable for being the first musical instrument played without touching it. In 1929, Joseph Schillinger composed First Airphonic Suite for Theremin and Orchestra, premièred with the Cleveland Orchestra with Leon Theremin as soloist. The next year Henry Cowell commissioned Theremin to create the first electronic rhythm machine, called the Rhythmicon. Cowell wrote some compositions for it, which he and Schillinger premiered in 1932. Ondes Martenot The 1920s have been called the apex of the Mechanical Age and the dawning of the Electrical Age. In 1922, in Paris, Darius Milhaud began experiments with "vocal transformation by phonograph speed change." These continued until 1927. This decade brought a wealth of early electronic instruments—along with the Theremin, there is the presentation of the Ondes Martenot, which was designed to reproduce the microtonal sounds found in Hindu music, and the Trautonium. Maurice Martenot invented the Ondes Martenot in 1928, and soon demonstrated it in Paris. Composers using the instrument ultimately include Boulez, Honegger, Jolivet, Koechlin, Messiaen, Milhaud, Tremblay, and Varèse. Radiohead guitarist and multi-instrumentalist Jonny Greenwood also uses it in his compositions and a plethora of Radiohead songs. In 1937, Messiaen wrote Fête des belles eaux for 6 ondes Martenot, and wrote solo parts for it in Trois petites Liturgies de la Présence Divine (1943–44) and the Turangalîla-Symphonie (1946–48/90). Trautonium The Trautonium was invented in 1928. It was based on the subharmonic scale, and the resulting sounds were often used to emulate bell or gong sounds, as in the 1950s Bayreuth productions of Parsifal. In 1942, Richard Strauss used it for the bell- and gong-part in the Dresden première of his Japanese Festival Music. This new class of instruments, microtonal by nature, was only adopted slowly by composers at first, but by the early 1930s there was a burst of new works incorporating these and other electronic instruments. Hammond organ and Novachord In 1929 Laurens Hammond established his company for the manufacture of electronic instruments. 
He went on to produce the Hammond organ, which was based on the principles of the Telharmonium, along with other developments including early reverberation units. The Hammond organ is an electromechanical instrument, as it uses both mechanical elements and electronic parts. A Hammond organ uses spinning metal tonewheels to produce different sounds. A magnetic pickup similar in design to the pickups in an electric guitar is used to transmit the pitches in the tonewheels to an amplifier and speaker enclosure. While the Hammond organ was designed to be a lower-cost alternative to a pipe organ for church music, musicians soon discovered that the Hammond was an excellent instrument for blues and jazz; indeed, an entire genre of music developed around this instrument, known as the organ trio (typically Hammond organ, drums, and a third instrument, either saxophone or guitar). The first commercially manufactured synthesizer was the Novachord, built by the Hammond Organ Company from 1938 to 1942, which offered 72-note polyphony using 12 oscillators driving monostable-based divide-down circuits, basic envelope control and resonant low-pass filters. The instrument featured 163 vacuum tubes and weighed 500 pounds. The instrument's use of envelope control is notable, since this is perhaps the most significant distinction between the modern synthesizer and other electronic instruments. Analogue synthesis 1950–1980 The most commonly used electronic instruments are synthesizers, so-called because they artificially generate sound using a variety of techniques. All early circuit-based synthesis involved the use of analogue circuitry, particularly voltage controlled amplifiers, oscillators and filters. An important technological development was the invention of the Clavivox synthesizer in 1956 by Raymond Scott with subassembly by Robert Moog. French composer and engineer Edgard Varèse created a variety of compositions using electronic horns, whistles, and tape. Most notably, he wrote Poème électronique for the Philips Pavilion at the Brussels World Fair in 1958. Modular synthesizers RCA produced experimental devices to synthesize voice and music in the 1950s, culminating in the Mark II Music Synthesizer, housed at the Columbia-Princeton Electronic Music Center in New York City. Designed by Herbert Belar and Harry Olson at RCA, with contributions from Vladimir Ussachevsky and Peter Mauzey, it was installed at Columbia University in 1957. Consisting of a room-sized array of interconnected sound synthesis components, it was only capable of producing music by programming, using a paper tape sequencer punched with holes to control pitch sources and filters, similar to a mechanical player piano but capable of generating a wide variety of sounds. The vacuum tube system had to be patched to create timbres. In the 1960s synthesizers were still usually confined to studios due to their size. They were usually modular in design, their stand-alone signal sources and processors connected with patch cords or by other means and controlled by a common controlling device. Harald Bode, Don Buchla, Hugh Le Caine, Raymond Scott and Paul Ketoff were among the first to build such instruments, in the late 1950s and early 1960s. Buchla later produced a commercial modular synthesizer, the Buchla Music Easel. Robert Moog, who had been a student of Peter Mauzey, one of the RCA Mark II engineers, created a synthesizer that could reasonably be used by musicians, designing the circuits while he was at Columbia-Princeton.
The Moog synthesizer was first displayed at the Audio Engineering Society convention in 1964. It required experience to set up sounds but was smaller and more intuitive than what had come before, less like a machine and more like a musical instrument. Moog established standards for control interfacing, using a logarithmic 1-volt-per-octave scale for pitch control and a separate triggering signal. This standardization allowed synthesizers from different manufacturers to operate simultaneously. Pitch control was usually performed either with an organ-style keyboard or a music sequencer producing a timed series of control voltages. During the late 1960s hundreds of popular recordings used Moog synthesizers. Other early commercial synthesizer manufacturers included ARP, who also started with modular synthesizers before producing all-in-one instruments, and British firm EMS. Integrated synthesizers In 1970, Moog designed the Minimoog, a non-modular synthesizer with a built-in keyboard. The analogue circuits were interconnected with switches in a simplified arrangement called "normalization." Though less flexible than a modular design, normalization made the instrument more portable and easier to use. The Minimoog sold 12,000 units and further standardized the design of subsequent synthesizers with its integrated keyboard, pitch and modulation wheels, and VCO->VCF->VCA signal flow. It has become celebrated for its "fat" sound—and its tuning problems. Miniaturized solid-state components allowed synthesizers to become self-contained, portable instruments that soon appeared in live performance and quickly became widely used in popular music and electronic art music. Polyphony Many early analog synthesizers were monophonic, producing only one tone at a time. Popular monophonic synthesizers include the Moog Minimoog. A few, such as the Moog Sonic Six, ARP Odyssey and EML 101, could produce two different pitches at a time when two keys were pressed. Polyphony (multiple simultaneous tones, which enables chords) was only obtainable with electronic organ designs at first. Popular electronic keyboards combining organ circuits with synthesizer processing included the ARP Omni and Moog's Polymoog and Opus 3. By 1976 affordable polyphonic synthesizers began to appear, notably the Yamaha CS-50, CS-60 and CS-80, the Sequential Circuits Prophet-5 and the Oberheim Four-Voice. These remained complex, heavy and relatively costly. The recording of settings in digital memory allowed storage and recall of sounds. The first practical polyphonic synth, and the first to use a microprocessor as a controller, was the Sequential Circuits Prophet-5 introduced in late 1977. For the first time, musicians had a practical polyphonic synthesizer that could save all knob settings in computer memory and recall them at the touch of a button. The Prophet-5's design paradigm became a new standard, slowly pushing out more complex and recondite modular designs. Tape recording In 1935, another significant development was made in Germany. Allgemeine Elektricitäts Gesellschaft (AEG) demonstrated the first commercially produced magnetic tape recorder, called the Magnetophon. Audio tape, which had the advantage of being fairly light as well as having good audio fidelity, ultimately replaced the bulkier wire recorders.
The term "electronic music" (which first came into use during the 1930s) came to include the tape recorder as an essential element: "electronically produced sounds recorded on tape and arranged by the composer to form a musical composition". It was also indispensable to Musique concrète. Tape also gave rise to the first, analogue, sample-playback keyboards, the Chamberlin and its more famous successor the Mellotron, an electro-mechanical, polyphonic keyboard originally developed and built in Birmingham, England in the early 1960s. Sound sequencer During the 1940s–1960s, Raymond Scott, an American composer of electronic music, invented various kind of music sequencers for his electric compositions. Step sequencers played rigid patterns of notes using a grid of (usually) 16 buttons, or steps, each step being 1/16 of a measure. These patterns of notes were then chained together to form longer compositions. Software sequencers were continuously utilized since the 1950s in the context of computer music, including computer-played music (software sequencer), computer-composed music (music synthesis), and computer sound generation (sound synthesis). Digital era 1980–2000 Digital synthesis The first digital synthesizers were academic experiments in sound synthesis using digital computers. FM synthesis was developed for this purpose; as a way of generating complex sounds digitally with the smallest number of computational operations per sound sample. In 1983 Yamaha introduced the first stand-alone digital synthesizer, the DX-7. It used frequency modulation synthesis (FM synthesis), first developed by John Chowning at Stanford University during the late sixties. Chowning exclusively licensed his FM synthesis patent to Yamaha in 1975. Yamaha subsequently released their first FM synthesizers, the GS-1 and GS-2, which were costly and heavy. There followed a pair of smaller, preset versions, the CE20 and CE25 Combo Ensembles, targeted primarily at the home organ market and featuring four-octave keyboards. Yamaha's third generation of digital synthesizers was a commercial success; it consisted of the DX7 and DX9 (1983). Both models were compact, reasonably priced, and dependent on custom digital integrated circuits to produce FM tonalities. The DX7 was the first mass market all-digital synthesizer. It became indispensable to many music artists of the 1980s, and demand soon exceeded supply. The DX7 sold over 200,000 units within three years. The DX series was not easy to program but offered a detailed, percussive sound that led to the demise of the electro-mechanical Rhodes piano, which was heavier and larger than a DX synth. Following the success of FM synthesis Yamaha signed a contract with Stanford University in 1989 to develop digital waveguide synthesis, leading to the first commercial physical modeling synthesizer, Yamaha's VL-1, in 1994. The DX-7 was affordable enough for amateurs and young bands to buy, unlike the costly synthesizers of previous generations, which were mainly used by top professionals. Sampling The Fairlight CMI (Computer Musical Instrument), the first polyphonic digital sampler, was the harbinger of sample-based synthesizers. Designed in 1978 by Peter Vogel and Kim Ryrie and based on a dual microprocessor computer designed by Tony Furse in Sydney, Australia, the Fairlight CMI gave musicians the ability to modify volume, attack, decay, and use special effects like vibrato. Sample waveforms could be displayed on-screen and modified using a light pen. 
The Synclavier from New England Digital was a system similar to the Fairlight CMI. Jon Appleton (with Jones and Alonso) invented the Dartmouth Digital Synthesizer, later to become the New England Digital Corp's Synclavier. The Kurzweil K250, first produced in 1983, was also a successful polyphonic digital music synthesizer, noted for its ability to reproduce several instruments synchronously and for having a velocity-sensitive keyboard. Computer music An important new development was the advent of computers for the purpose of composing music, as opposed to manipulating or creating sounds. Iannis Xenakis began what is called musique stochastique, or stochastic music, which is a method of composing that employs mathematical probability systems. Different probability algorithms were used to create a piece under a set of parameters. Xenakis used graph paper and a ruler to aid in calculating the velocity trajectories of glissando for his orchestral composition Metastasis (1953–54), but later turned to the use of computers to compose pieces like ST/4 for string quartet and ST/48 for orchestra (both 1962). The impact of computers continued in 1956. Lejaren Hiller and Leonard Isaacson composed Illiac Suite for string quartet, the first complete work of computer-assisted composition using algorithmic composition. In 1957, Max Mathews at Bell Labs wrote MUSIC, the first of the MUSIC-N family of computer programs for generating digital audio waveforms through direct synthesis. Then Barry Vercoe wrote MUSIC 11 based on MUSIC IV-BF, a next-generation music synthesis program (later evolving into Csound, which is still widely used). In the mid-1980s, Miller Puckette at IRCAM developed graphic signal-processing software for the 4X called Max (after Max Mathews), and later ported it to Macintosh (with Dave Zicarelli extending it for Opcode) for real-time MIDI control, bringing algorithmic composition within reach of most composers with a modest computer programming background. MIDI In 1980, a group of musicians and music merchants met to standardize an interface by which new instruments could exchange control instructions with other instruments and the prevalent microcomputer. This standard was dubbed MIDI (Musical Instrument Digital Interface). A paper was authored by Dave Smith of Sequential Circuits and proposed to the Audio Engineering Society in 1981. Then, in August 1983, the MIDI Specification 1.0 was finalized. The advent of MIDI technology allows a single keystroke, control wheel motion, pedal movement, or command from a microcomputer to activate every device in the studio remotely and in synchrony, with each device responding according to conditions predetermined by the composer (the messages themselves are short byte sequences; a sketch appears below). MIDI instruments and software made powerful control of sophisticated instruments easily affordable by many studios and individuals. Acoustic sounds became reintegrated into studios via sampling and sampled-ROM-based instruments. Modern electronic musical instruments The increasing power and decreasing cost of sound-generating electronics (and especially of the personal computer), combined with the standardization of the MIDI and Open Sound Control musical performance description languages, has facilitated the separation of musical instruments into music controllers and music synthesizers. By far the most common musical controller is the musical keyboard.
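A MIDI channel message such as the note-on triggered by a keystroke is just such a short byte sequence: a status byte carrying the message type and channel, followed by two data bytes. The Python sketch below builds these messages; how the bytes actually reach a device (USB-MIDI, a DIN serial port, a software bus) is left out, since that depends on the interface in use.

def note_on(channel, note, velocity):
    """Build the 3-byte MIDI note-on message (channel 0-15, note/velocity 0-127)."""
    status = 0x90 | (channel & 0x0F)   # 0x9n = note-on for channel n
    return bytes([status, note & 0x7F, velocity & 0x7F])

def note_off(channel, note):
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])

# Middle C (note number 60) at moderate velocity on the first channel:
msg = note_on(0, 60, 100)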
Other controllers include the radiodrum, Akai's EWI and Yamaha's WX wind controllers, the guitar-like SynthAxe, the BodySynth, the Buchla Thunder, the Continuum Fingerboard, the Roland Octapad, various isomorphic keyboards including the Thummer, the Kaossilator Pro, and kits like I-CubeX. Reactable The Reactable is a round translucent table with a backlit interactive display. By placing and manipulating blocks called tangibles on the table surface, while interacting with the visual display via finger gestures, a virtual modular synthesizer is operated, creating music or sound effects. Percussa AudioCubes AudioCubes are autonomous wireless cubes powered by an internal computer system and rechargeable battery. They have internal RGB lighting, and are capable of detecting each other's location, orientation and distance. The cubes can also detect distances to the user's hands and fingers. Through interaction with the cubes, a variety of music and sound software can be operated. AudioCubes have applications in sound design, music production, DJing and live performance. Kaossilator The Kaossilator and Kaossilator Pro are compact instruments where the position of a finger on the touch pad controls two note-characteristics; usually the pitch is changed with a left-right motion and the tonal property, filter or other parameter changes with an up-down motion. The touch pad can be set to different musical scales and keys. The instrument can record a repeating loop of adjustable length, set to any tempo, and new loops of sound can be layered on top of existing ones. This lends itself to electronic dance music but is more limited for controlled sequences of notes, as the pad on a regular Kaossilator is featureless. Eigenharp The Eigenharp is a large instrument resembling a bassoon, which can be interacted with through big buttons, a drum sequencer and a mouthpiece. The sound processing is done on a separate computer. XTH Sense The XTH Sense is a wearable instrument that uses muscle sounds from the human body (known as mechanomyogram) to make music and sound effects. As a performer moves, the body produces muscle sounds that are captured by a chip microphone worn on the arms or legs. The muscle sounds are then live sampled using a dedicated software program and a library of modular audio effects. The performer controls the live sampling parameters by varying the force, speed and articulation of the movement. AlphaSphere The AlphaSphere is a spherical instrument that consists of 48 tactile pads that respond to pressure as well as touch. Custom software allows the pads to be programmed individually or in groups in terms of function, note, and pressure parameter among many other settings. The primary concept of the AlphaSphere is to increase the level of expression available to electronic musicians, by allowing for the playing style of a traditional musical instrument. Chip music Chiptune, chipmusic, or chip music is music written in sound formats where many of the sound textures are synthesized or sequenced in real time by a computer or video game console sound chip, sometimes including sample-based synthesis and low bit sample playback. Many chip music devices featured synthesizers in tandem with low rate sample playback (a software sketch of one such pulse-wave voice appears below).
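Chip sound channels of the kind described above typically produce simple waveforms, most famously the pulse (square) wave, computed sample by sample in real time. The following Python sketch generates one such channel in software; the frequencies, duty cycle and note lengths are arbitrary example values, not the registers of any particular sound chip.

def square_wave(freq, duration, sr=44100, duty=0.5, volume=0.25):
    """Generate samples for one pulse-wave channel, the staple chiptune timbre."""
    samples = []
    period = sr / freq                  # samples per waveform cycle
    for n in range(int(duration * sr)):
        phase = (n % period) / period   # position within the current cycle
        samples.append(volume if phase < duty else -volume)
    return samples

# A short three-note figure: (frequency in Hz, length in seconds) pairs.
voice = []
for f, d in [(440.0, 0.2), (660.0, 0.2), (880.0, 0.4)]:
    voice.extend(square_wave(f, d))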
DIY culture During the late 1970s and early 1980s, DIY (do-it-yourself) designs were published in hobby electronics magazines (notably the Formant modular synth, a DIY clone of the Moog system, published by Elektor) and kits were supplied by companies such as Paia in the US, and Maplin Electronics in the UK. Circuit bending In 1966, Reed Ghazala discovered and began to teach "circuit bending"—the application of the creative short circuit, a process of chance short-circuiting, creating experimental electronic instruments, exploring sonic elements mainly of timbre and with less regard to pitch or rhythm, and influenced by John Cage's aleatoric music concept. Much of this manipulation of circuits directly, especially to the point of destruction, was pioneered by Louis and Bebe Barron in the early 1950s, such as their work with John Cage on the Williams Mix and especially in the soundtrack to Forbidden Planet. Modern circuit bending is the creative customization of the circuits within electronic devices such as low voltage, battery-powered guitar effects, children's toys and small digital synthesizers to create new musical or visual instruments and sound generators. Emphasizing spontaneity and randomness, the techniques of circuit bending have been commonly associated with noise music, though many more conventional contemporary musicians and musical groups have been known to experiment with "bent" instruments. Circuit bending usually involves dismantling the machine and adding components such as switches and potentiometers that alter the circuit. With the revived interest in analogue synthesizers, circuit bending became a cheap solution for many experimental musicians to create their own individual analogue sound generators. Nowadays many schematics can be found to build noise generators such as the Atari Punk Console or the Dub Siren, as well as simple modifications for children's toys such as the famous Speak & Spell, which are often modified by circuit benders. Modular synthesizers The modular synthesizer is a type of synthesizer consisting of separate interchangeable modules (a software analogy of this patching model is sketched below). These are also available as kits for hobbyist DIY constructors. Many hobbyist designers also make bare PCBs and front panels available for sale to other hobbyists. 2010s According to a forum post in December 2010, Sixense Entertainment was working on musical control with the Sixense TrueMotion motion controller. Immersive virtual musical instruments, or immersive virtual instruments for music and sound, aim to represent musical events and sound parameters in a virtual reality so that they can be perceived not only through auditory feedback but also visually in 3D and possibly through tactile as well as haptic feedback, allowing the development of novel interaction metaphors beyond manipulation such as prehension.
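The patching model of the modular synthesizer mentioned above (and the VCO->VCF->VCA signal flow named earlier for integrated instruments) can be mimicked in software by composing small processing stages. The Python sketch below is only an analogy: the module set, parameter values and filter design are illustrative, not a model of any specific hardware.

import math

def vco(freq, sr=44100):
    """Oscillator module: an endless stream of sine samples."""
    n = 0
    while True:
        yield math.sin(2 * math.pi * freq * n / sr)
        n += 1

def vcf(source, alpha=0.1):
    """Very crude one-pole low-pass filter module."""
    y = 0.0
    for x in source:
        y += alpha * (x - y)
        yield y

def vca(source, gain=0.5):
    """Amplifier module scaling whatever is patched into it."""
    for x in source:
        yield gain * x

# "Patching" is just composition: VCO -> VCF -> VCA.
patch = vca(vcf(vco(220.0)))
first_samples = [next(patch) for _ in range(5)]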
See also Electronic music Experimental musical instrument Live electronic music Visual Music Organizations STEIM Technologies Oscilloscope Stereophonic sound Individual techniques Chiptune Circuit bending Instrument families Drum machine Synthesizer Analog synthesizer Vocoder Individual instruments (historical) Electronic Sackbut Continuum Fingerboard Spharophon Individual instruments (modern) Atari Punk Console Kraakdoos Metronome Razer Hydra Electronic music-instruments in Indian & Asian traditional music Electronic tanpura Shruti box References External links 120 Years of Electronic Music A chronology of computer and electronic music (including instruments) History of Electronic Music (French) Tons of Tones !! : Site with technical data on Electronic Modelling of Musical Tones DIY DIY Hardware and Software Discussion forum at Electro-music.com The Synth-DIY email list Ken Stone's Do-It-Yourself Page Music From Outer Space Information and parts to self-build a synthesizer. SDIY wiki a wiki about DIY electronic musical instruments Visit-able museums and collections Horniman Museum's music gallery, London, UK. Has one or two synths behind glass. Moogseum, Asheville, North Carolina, USA Musical Museum, Brentford, London, UK. Mostly electro-mechanical instruments. Musical Instrument Museum, Phoenix, Arizona, USA Staatliches Institut für Musikforschung, Berlin, Germany Swiss Museum & Center for Electronic Music Instruments The National Music Centre Collection, Canada Vintage Synthesizer Museum, California, USA Washington And Lee University Synthesizer Museum , Washington, USA Popular music Electronic dance music Audio engineering
30858735
https://en.wikipedia.org/wiki/Dave%20Buchwald
Dave Buchwald
Dave Buchwald (born September 4, 1970) is a filmmaker and former phone phreak, hacker, and leader of the Legion of Doom in the mid-1980s, then known as Bill From RNOC. Hacker In the late 1980s, as a teenager, Buchwald was a social engineer, known for manipulating phone system employees anywhere in the United States into performing tasks that gave him access to systems. He used the handle Bill From RNOC, with "RNOC" (pronounced "Arnock") being a nod to Armonk, the town where IBM is headquartered. He was also known for hacking Bell and AT&T systems (specifically COSMOS, SCCS, and LMOS), which allowed him virtually unrestricted access to phone lines, including the ability to monitor conversations, throughout the United States. He was the lead author of the popular PENIX suite of hacking tools. Many of his techniques are outdated but some of his original ideas are still in use by social engineers and security professionals today. In 1995, Buchwald was introduced to filmmaking when he served as a technical consultant on the movie Hackers, editing the screenplay and personally coaching many members of the principal cast. Security career In 1997, Buchwald co-founded Crossbar Security with Mark Abene (a.k.a. Phiber Optik) and Andrew Brown. Crossbar provided information security services for a number of large corporations, but became a casualty of the dot-com bubble. Crossbar Security went defunct in 2002, largely due to cuts in corporate security spending and an increase in the cost of corporate computer security advertising. Arts career Buchwald works as a film editor, freelance photographer, and graphic designer in New York City. He has been regularly producing cover art for 2600: The Hacker Quarterly since 2000 using the pseudonym Dabu Ch'wald. In August 2006, he completed his first feature film, Urchin. He has recently produced and edited the independent film Love Simple and is in pre-production on the film Kuru, the second movie by the production company The Enemy. A short he edited entitled Floating Sunflowers won the Gold Remi award for Best Comedy Short at the 47th Worldfest-Houston International Film and Video Festival in April 2014. In August 2018, Buchwald and his collaborator, Michael Lee Nirenberg, presented sample scenes from a documentary series they are producing, Reverse Engineering, at the annual DEF CON hacking conference in Las Vegas. He currently resides in the Bay Ridge area of Brooklyn, New York. References External links Official site for Urchin (film) Profile in Forbes Magazine Love Simple website Dave Buchwald's website Living people American film editors RNOC, Bill from 1970 births People from Bay Ridge, Brooklyn
3067437
https://en.wikipedia.org/wiki/Priam%27s%20Treasure
Priam's Treasure
Priam's Treasure is a cache of gold and other artifacts discovered by classical archaeologists Frank Calvert and Heinrich Schliemann at Hissarlik, on the northwestern coast of modern Turkey. The majority of the artifacts are currently in the Pushkin Museum in Moscow. Schliemann claimed the site to be that of Homeric Troy, and assigned the artifacts to the Homeric king Priam. This assignment is now thought to be a result of Schliemann's zeal to find sites and objects mentioned in the Homeric epics, which take place in what is now northwestern Turkey. At the time the stratigraphy at Troy had not been established; this was done subsequently by the archaeologist Carl Blegen. The layer in which Priam's Treasure was alleged to have been found was assigned to Troy II, whereas Priam would have been king of Troy VI or VII, occupied hundreds of years later. Background With the rise of modern critical history, Troy and the Trojan War were consigned to the realms of legend. As early as 1822, however, the famed Scottish journalist and geologist Charles Maclaren had identified the mound at Hissarlik, near the town of Chanak (Çanakkale) in north-western Anatolia, Turkey, as a possible site of Homeric Troy. Later, starting in the 1840s, Frank Calvert (1828–1908), an English expatriate who was an enthusiastic amateur archaeologist as well as a consular official in the eastern Mediterranean region, began exploratory excavations on the mound, part of which was on a farm belonging to his family, and ended up amassing a large collection of artifacts from the site. Meanwhile, Heinrich Schliemann, a well-heeled international entrepreneur and passionate antiquities hunter, had begun searching in Turkey for the site of the historical Troy, starting at Pınarbaşı, a hilltop at the south end of the Trojan Plain. Disappointed there, Schliemann was about to give up his explorations when Calvert suggested excavating the mound of Hissarlik. Guided to the site by Calvert, Schliemann conducted excavations there in 1871–73 and 1878–79, uncovering (and substantially destroying) the ruins of a series of ancient cities, dating from the Bronze Age to the Roman period. Schliemann declared one of these cities—at first Troy I, later Troy II—to be the city of Troy, and this identification was widely accepted at that time. His and Calvert's findings included thousands of artifacts -- such as diadems of woven gold, rings, bracelets, intricate earrings and necklaces, buttons, belts and brooches -- which Schliemann chose to call "Priam's treasure." Schliemann described one great moment of discovery, which supposedly occurred on or about May 27, 1873, in his typically colorful, if unreliable, manner. Schliemann's oft-repeated story of the treasure being carried by his wife, Sophie, in her shawl was untrue. Schliemann later admitted making it up, saying that at the time of the discovery Sophie was in fact with her family in Athens, following the death of her father. 
The treasure A partial catalogue of the treasure is approximately as follows: a copper shield; a copper cauldron with handles; an unknown copper artifact, perhaps the hasp of a chest; a silver vase containing two gold diadems (the "Jewels of Helen"), 8,750 gold rings, buttons and other small objects, six gold bracelets, and two gold goblets; a copper vase; a wrought gold bottle; two gold cups, one wrought, one cast; a number of red terracotta goblets; an electrum cup (a mixture of gold, silver, and copper); six wrought silver knife blades (which Schliemann put forward as money); three silver vases with fused copper parts; more silver goblets and vases; thirteen copper lance heads; fourteen copper axes; seven copper daggers; and other copper artifacts, along with the key to a chest. The treasure as an art collection Apparently, Schliemann smuggled Priam's Treasure out of Anatolia. Ottoman officials were informed when his wife, Sophia, wore the jewels in public. The Ottoman official assigned to watch the excavation, Amin Effendi, received a prison sentence. The Ottoman government revoked Schliemann's permission to dig and sued him for its share of the gold. Schliemann went on to Mycenae. There, however, the Greek Archaeological Society sent an agent to monitor him. Later, Schliemann traded some of the treasure to the government of the Ottoman Empire in exchange for permission to dig at Troy again; that portion is now located in the Istanbul Archaeology Museum. The rest was acquired in 1881 by the Royal Museums of Berlin (Königliche Museen zu Berlin). After the capture of the Zoo Tower by the Red Army during the Battle of Berlin, Professor Wilhelm Unverzagt turned the treasure over to the Soviet Art Committee, saving it from plunder and division. The artifacts were then flown to Moscow. During the Cold War, the Soviet government denied any knowledge of the fate of Priam's Treasure. However, in 1994 the Pushkin Museum admitted it possessed the Trojan gold. Russia keeps the treasure, which the West terms looted art, as compensation for the destruction of Russian cities and the looting of Russian museums by Nazi Germany in World War II. A 1998 Russian law, the Federal Law on Cultural Valuables Displaced to the USSR as a Result of the Second World War and Located on the Territory of the Russian Federation, legalizes the seizures in Germany as compensation and prevents Russian authorities from proceeding with restitutions. Authenticity of the treasure There have always been doubts about the authenticity of the treasure. Within the last few decades these doubts have found fuller expression in articles and books. Notes References Silberman, Neil Asher (1989). Between Past and Present: Archaeology, Ideology and Nationalism in the Modern Middle East, Doubleday, . Smith, Philip, editor (1976). Heinrich Schliemann: Troy and Its Remains: A Narrative of Researches and Discoveries Made on the Site of Ilium, and in the Trojan Plain, Arno Press, New York, . A catalog of artifacts from Schliemann's excavations at Troy, with photographs. Traill, David (1997). Schliemann of Troy: Treasure and Deceit, St. Martin's Press, Wood, Michael (1987). In Search of the Trojan War, New American Library, . External links Art News article, originally published in April 1991 revealing the secret Soviet collections of looted art, including the Schliemann collection. 
Calvert's Heirs Claim Schliemann Treasure Looted Art BBC radio documentary on art looted by the Soviets at the end of World War II, with special mention of the Schliemann collection Pushkin Museum of Fine Arts collection of Schliemann's treasure Art collections in Germany Art collections in Russia Troy Treasure troves of Turkey Archaeological discoveries in Turkey 1873 archaeological discoveries Art and cultural repatriation after World War II Tourist attractions in Moscow Gold objects Art crime Antiquities of the Pushkin Museum Russia–Turkey relations
63403754
https://en.wikipedia.org/wiki/A%20Short%20Hike
A Short Hike
A Short Hike is an adventure video game by Canadian indie game designer Adam Robinson-Yu (also known as adamgryu). It is an open world exploration game in which the player is tasked with reaching the summit of a mountain to get cellphone reception. The game was released for Microsoft Windows, macOS, and Linux in July 2019, for Nintendo Switch in August 2020, and for PlayStation 4 and Xbox One in November 2021. A Short Hike received positive reviews from critics, who widely praised its relaxing gameplay, freedom of exploration, and flying mechanics, while some criticized the short length and the handling of some story elements. It won the Seumas McNally Grand Prize at the 2020 Independent Games Festival. Gameplay The player has the ability to jog, climb, swim, and glide through an open-world park while controlling Claire, an anthropomorphic bird. To reach the peak, the player must find or purchase golden feathers that afford extra jumps and the ability to climb rock faces. According to a sign on Hawk Peak Trail, the peak can be reached with as few as seven golden feathers. Once the player reaches the peak, they are free to explore the park and complete side activities as they please. In addition to the main goal, the island is populated with other animals, who offer side-quests and activities including fishing, finding lost items, and playing a volleyball-like mini-game called "beachstickball". Rewards for these activities include items which improve the player's ability to explore the park, such as running shoes or a compass. The player can also collect shells, sticks, coins, and other items to help complete these side-quests. The game has an adaptive soundtrack that changes based on world events such as the weather, or player actions such as flying. It also features a dynamic camera created with the Unity tool, Cinemachine. Plot The protagonist and player character is Claire, a young bird who spends her days off by traveling to Hawk Peak Provincial Park, where her Aunt May works as a ranger. In an opening cutscene, Claire's mother drives her to a ferry that will take her to the park for the summer. When Claire arrives, her Aunt informs her that there's no cellphone reception in the park except for at Hawk Peak. Claire has never hiked the Hawk Peak Trail before, but is expecting an important call, so she decides to go to the summit. It is then up to the player whether Claire helps the other animals on the island or heads straight for Hawk Peak. A sign at the mountain's base warns it is a strenuous hike, and other characters will remark that the trail is too difficult for them. When Claire reaches the peak, she congratulates herself for making it and sits in view of an aurora. Soon her cell phone rings, revealing the caller is her mother. She acknowledges that she had a surgery after sending Claire away. Claire is upset she wasn't there for her, but Claire's mother says she is proud of Claire for climbing Hawk Peak. The call is interrupted when an updraft emerges from the mountain. It makes Claire nervous, but her mother urges her to ride it before it disappears. Claire rides the updraft, soaring over the park. Claire can then return to her Aunt, whereupon she explains to her all of the side activities she did on her hike. Development In December 2018, Robinson-Yu took a break from developing his Untitled Paper RPG and started work on A Short Hike. Playing the games The Haunted Island, a Frog Detective Game and Minit convinced him that short games can be successful, too. 
He later shared a prototype of A Short Hike on Twitter. Rendering the game's world using "big crunchy pixels" and simple models allowed Robinson-Yu to expand the scope of the game in spite of his limited art skills. The color palette for the game is sampled directly from photos of the Canadian Shield in autumn. Composer Mark Sparling created an adaptive soundtrack system that combines layers of melodies and ambient music depending on where the player is and how they are traversing the terrain. Sparling cited as influences the Studio Ghibli composer, Joe Hisaishi; the soundtracks for Animal Crossing: New Leaf and Firewatch; the Sufjan Stevens folk album, Carrie & Lowell; and the Steve Reich minimalist album, Music for 18 Musicians. After receiving funding via the Humble Original program, Robinson-Yu committed to releasing the game in three months. He tracked his progress using a simplified version of the scrum framework, and used existing Unity tools such as InControl and Cinemachine, as well as assets from previous projects. He also prioritized adding content over fixing small bugs. Release The game was first released for subscribers of the Humble Monthly program on April 5, 2019, and later as a standalone game for Microsoft Windows, macOS, and Linux on July 30, 2019. A version for Nintendo Switch was released on August 18, 2020. Versions for PlayStation 4 and Xbox One were released on November 16, 2021. Reception A Short Hike received "generally favorable reviews" according to the review aggregator website Metacritic, receiving aggregate scores of 80/100 for the PC version, and 88/100 for the Switch version. Reviewers praised the peacefulness and replay value of the setting, as well as the open-ended gameplay, with several critics comparing the game's open world to The Legend of Zelda: Breath of the Wild. Reviewing the game for Nintendo Life, Stuart Gipp awarded A Short Hike a 10/10, calling it "a truly complete game" and "a milestone in indie games" for its freedom of gameplay and what he felt was "a fat-free experience". Cathlyn Vania of Adventure Gamers rated the game 4.5/5 stars, calling it a "relaxing adventure filled with not only humor but the tenderness of personal connections". Matthew Reynolds of Eurogamer named A Short Hike one of his Games of the Year, stating that the density of the island resulted in "an adventure far richer than games many times its length". Khee Hoon Chan of GameSpot similarly praised the game for its controls, free-roaming gameplay, and overall "comforting, even pastoral allure", awarding it a 9/10. Kevin Mersereau of Destructoid was slightly more mixed, but overall positive, rating the game a 7.5/10. He called it a "palate-cleanser" that is "relaxed" and "unique", albeit "far from perfect", praising the game's flying mechanics, but criticizing its short length and "anvil-dropped" plot elements. GameCentral review similarly critiqued the game's length and the reveal of who Claire's phone call is from, but awarded the game an 8/10, calling it "utterly charming and perfectly paced". Washington Post critic, Christopher Byrd, described the game as "built to foster a spirit of comfort rather than risk", criticizing the easiness of player tasks while still recommending the game as "an achievement". 
Awards References External links Official website of Adam Robinson-Yu 2019 video games Indie video games Adventure games Independent Games Festival winners Linux games MacOS games Nintendo Switch games Open-world video games PlayStation 4 games Seumas McNally Grand Prize winners Single-player video games Video games about birds Video games developed in Canada Video games set in Canada Video games featuring female protagonists Video games set in forests Video games set on fictional islands Video games with cel-shaded animation Windows games Xbox One games
23443433
https://en.wikipedia.org/wiki/Grid%20Systems%20Corporation
Grid Systems Corporation
Grid Systems Corporation (stylized as GRiD) was an early portable computer manufacturer based in the United States that focused on producing rugged and semi-rugged machines; the Grid computer brand still exists today as Grid Defence Systems Ltd. in the United Kingdom. History Early history Grid Systems Corporation was founded in late 1979 by John Ellenby, who left his job at Xerox PARC and joined Glenn Edens, Dave Paulsen and Bill Moggridge to form one of Silicon Valley's first stealth companies. The company went public in March 1981. It was located at 47211 Lakeview Boulevard, Fremont, California, 94537. The "Grid" name, with its unusual lowercase "i" in the middle, was the result of discussion between John Ellenby, Glenn Edens and John Ellenby's wife, Gillian Ellenby, who pushed for the final choice. The lowercase "i" was a note of thanks to Intel for helping in the early days. Sale of company and US division In 1988, Tandy Corporation purchased the Grid company. AST Computer acquired the US wing of the company and was itself later acquired by Samsung. Grid continued to produce the GRiDCASE laptops, and the first GRiDPad tablet was released in 1989. A few rebranded models from other manufacturers were also released, including the Tandy/Victor Technologies Grid 386 (a Compaq SLT clone), the GRiDPad SL 2050 (a Samsung PenMaster clone) and the AST GRiDPad 2390 (a Casio Zoomer/Tandy Z-PDA clone). Edens co-founded Waveform Corp and in 2003 joined the board of F5 Networks Inc., and John Ellenby went on to co-found the companies Agilis and GeoVector, a pioneer of augmented reality. GRiD Defence Systems Grid Defence Systems was formed in London, England, in 1993 by former employees during a management buyout of GRiD Computer Systems UK Ltd. The UK Grid company started with a rugged laptop line branded simply "GRiD 1###", and in 1995 it reintroduced the GRiDCASE line. Innovations Grid developed and released several pioneering ideas: the first portable computer, marketed almost exclusively to CEOs; the Grid Compass 1100, the first clamshell laptop computer; a patent on the "clamshell" laptop design; the first portable to use non-volatile bubble memory; Grid-OS, a multi-tasking text-based user interface (TUI) and operating system; the first use of electro-luminescent displays in a portable; the first use of magnesium for the case; the first use of the Intel 8086 and the 8087 floating-point co-processor in a commercial product; a pioneering "bus" concept for connecting peripherals (using GPIB); and the first computer to include a fully functional telephone and telephone handset. The first commercially available tablet-type portable computer was the GRiDPad, released in September 1989. Its operating system was based on MS-DOS. A GRiD Compass 1101 was the first laptop in space; it required special modification to add a fan to pull air through the case. Subsequently, a GRiD 1530 flew on STS-29 in March 1989. OldComputers.net called the 1982 GRiD Compass 1101 the "grand-daddy of all present-day laptop computers". It had 256k RAM, an 8086, a 320x240 screen, and 384k of internal 'bubble memory' that held data with power off. 
See also Grid Compass Grid GridCase GRiDPad References External links GRiD history 1979–2020 (on the website of GRiD Defence Systems, UK) Pioneering the Laptop – The GRiD Compass Annotated bibliography of references to handwriting recognition and tablet and touch computers Notes on the History of Pen-based Computing (YouTube) 1000BiT in english and Italian Computer companies established in 1979 Computer companies established in 1993 Computer companies disestablished in 1988 Computer companies disestablished in 1993 Corporate spin-offs 1979 establishments in California 1988 disestablishments in California 1993 establishments in the United Kingdom Laptops
48204802
https://en.wikipedia.org/wiki/TOPCIT
TOPCIT
Test of Practical Competency in ICT (TOPCIT) is a performance-evaluation-centered test designed to diagnose and assess the competencies that Information Technology specialists and software developers critically need in order to perform their jobs in the field. TOPCIT was developed and is administered by Korea's Ministry of Science, ICT and Future Planning (MSIP) and the Institute for Information and Communications Technology Promotion. These are government agencies that oversee and manage ICT-related R&D, policy, and HR development. Background Companies and higher education institutions voiced the need for a standardized and objective competency index that could reinforce the on-site competency of ICT/SW college students and narrow the gap between the viewpoints of industrial and academic circles regarding the qualifications of a competent specialist in the field. Objective TOPCIT has been developed to objectively assess the competency of those planning to enter the ICT field, with the aim of improving the quality of ICT/SW education at universities, resolving the manpower shortage experienced by ICT companies, and expanding the growth potential of the ICT/SW industry and education system. The analyzed data will assist universities and industries in admitting students or hiring new recruits, respectively. TOPCIT measures competency by evaluating the test-takers' answers to a series of creative problem-solving questions and by assessing their executive ability. Participating Companies and Universities A total of 269 people from 231 companies and educational institutions participated in founding TOPCIT (August 2013). Through this joint effort of companies and academia, TOPCIT was created with the objective of closing the gap between industry and academic circles regarding the practical qualifications of a competent specialist in the field. The systematic network between companies and schools is also intended to close the gap between the skilled workers that companies demand and the graduates that educational institutions produce. Contents TOPCIT has a total of 65 questions worth up to 1,000 points. There are 4 types of questions in the test: multiple-choice, short-answer, descriptive-writing, and critical-thinking questions. The test is divided into a technical field and a business field. Technical Field The Technical Field tests the ability to develop software, to construct and operate databases, and to understand and utilize networks and security. Software: The software module tests the test-takers' understanding of software and their ability to analyze and design software, develop and test software, manage software, and implement integrated technology. Database: The database module tests knowledge of database concepts and structures, the ability to design, program, and operate databases, and the understanding of database applications. Network and Security: The network and security module tests the examinees' knowledge of network concepts, network infrastructure technology, network application technology, and IT security, their ability to operate IT security, and their knowledge of the latest IT security technologies and standards. Business Field The Business Field tests the understanding of IT business, technical communication skills, and project management. 
IT Business: The IT business module consists of understanding IT business and utilizing IT business. Technical Communications: The technical communication module consists of understanding business communications and utilizing technical documentation. Project Management: The project management module consists of understanding of projects, project management, and project tools and evaluation. Test scores Test results are categorized into five TOPCIT competency levels. Notes and references External links Test of Practical Competency in ICT Institute for Information and Communications Technology Promotion Information technology qualifications
188371
https://en.wikipedia.org/wiki/Reconfigurable%20computing
Reconfigurable computing
Reconfigurable computing is a computer architecture combining some of the flexibility of software with the high performance of hardware by processing with very flexible high speed computing fabrics like field-programmable gate arrays (FPGAs). The principal difference when compared to using ordinary microprocessors is the ability to make substantial changes to the datapath itself in addition to the control flow. On the other hand, the main difference from custom hardware, i.e. application-specific integrated circuits (ASICs) is the possibility to adapt the hardware during runtime by "loading" a new circuit on the reconfigurable fabric. History The concept of reconfigurable computing has existed since the 1960s, when Gerald Estrin's paper proposed the concept of a computer made of a standard processor and an array of "reconfigurable" hardware. The main processor would control the behavior of the reconfigurable hardware. The latter would then be tailored to perform a specific task, such as image processing or pattern matching, as quickly as a dedicated piece of hardware. Once the task was done, the hardware could be adjusted to do some other task. This resulted in a hybrid computer structure combining the flexibility of software with the speed of hardware. In the 1980s and 1990s there was a renaissance in this area of research with many proposed reconfigurable architectures developed in industry and academia, such as: Copacobana, Matrix, GARP, Elixent, NGEN, Polyp, MereGen, PACT XPP, Silicon Hive, Montium, Pleiades, Morphosys, and PiCoGA. Such designs were feasible due to the constant progress of silicon technology that let complex designs be implemented on one chip. Some of these massively parallel reconfigurable computers were built primarily for special subdomains such as molecular evolution, neural or image processing. The world's first commercial reconfigurable computer, the Algotronix CHS2X4, was completed in 1991. It was not a commercial success, but was promising enough that Xilinx (the inventor of the Field-Programmable Gate Array, FPGA) bought the technology and hired the Algotronix staff. Later machines enabled first demonstrations of scientific principles, such as the spontaneous spatial self-organisation of genetic coding with MereGen. Theories Tredennick's Classification The fundamental model of the reconfigurable computing machine paradigm, the data-stream-based anti machine is well illustrated by the differences to other machine paradigms that were introduced earlier, as shown by Nick Tredennick's following classification scheme of computing paradigms (see "Table 1: Nick Tredennick’s Paradigm Classification Scheme"). Hartenstein's Xputer Computer scientist Reiner Hartenstein describes reconfigurable computing in terms of an anti-machine that, according to him, represents a fundamental paradigm shift away from the more conventional von Neumann machine. Hartenstein calls it Reconfigurable Computing Paradox, that software-to-configware (software-to-FPGA) migration results in reported speed-up factors of up to more than four orders of magnitude, as well as a reduction in electricity consumption by up to almost four orders of magnitude—although the technological parameters of FPGAs are behind the Gordon Moore curve by about four orders of magnitude, and the clock frequency is substantially lower than that of microprocessors. This paradox is partly explained by the Von Neumann syndrome. 
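To make the contrast with an instruction-stream machine a little more tangible, here is a toy Python sketch of the "load a new circuit, then stream data through it" idea described above. It is purely illustrative: the class and method names are invented for this example and do not correspond to any real FPGA device or toolchain.

```python
# Toy model of the reconfigurable-computing idea: the datapath itself is
# swapped at runtime by loading a new "configuration" onto a fabric, and data
# is then streamed through that fixed structure rather than being driven by an
# instruction stream. All names here are illustrative only.

class ReconfigurableFabric:
    """Stands in for an FPGA-like device that accepts configurations at runtime."""

    def __init__(self):
        self._pipeline = []          # the currently loaded "circuit" (a chain of stages)

    def load_configuration(self, stages):
        """Analogous to loading a new bitstream: replaces the whole datapath."""
        self._pipeline = list(stages)

    def stream(self, data):
        """Push a data stream through the configured datapath (no instruction fetch)."""
        for item in data:
            for stage in self._pipeline:
                item = stage(item)
            yield item


fabric = ReconfigurableFabric()

# Configuration 1: a small image-processing-style pipeline (scale, then clamp).
fabric.load_configuration([lambda x: x * 2, lambda x: min(x, 255)])
print(list(fabric.stream([10, 100, 200])))      # [20, 200, 255]

# Reconfigure at runtime for a different task: pattern matching against a constant.
fabric.load_configuration([lambda x: x == 200])
print(list(fabric.stream([10, 100, 200])))      # [False, False, True]
```

The point of the sketch is only the shape of the paradigm: the "program" is a structure loaded once, and the data flows through it, which is the anti-machine view described above.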
High-performance computing High-Performance Reconfigurable Computing (HPRC) is a computer architecture combining reconfigurable computing-based accelerators like field-programmable gate arrays with CPUs or multi-core processors. The increase of logic in an FPGA has enabled larger and more complex algorithms to be programmed into the FPGA. The attachment of such an FPGA to a modern CPU over a high-speed bus, like PCI Express, has enabled the configurable logic to act more like a coprocessor than a peripheral. Furthermore, replicating an algorithm on a single FPGA or using multiple FPGAs has enabled reconfigurable SIMD systems to be produced, in which several computational devices can concurrently operate on different data, a form of highly parallel computing. This heterogeneous systems technique is used in computing research and especially in supercomputing. A 2008 paper reported speed-up factors of more than 4 orders of magnitude and energy-saving factors of up to almost 4 orders of magnitude. Some supercomputer firms offer heterogeneous processing blocks including FPGAs as accelerators. One research area is the productivity of twin-paradigm programming tool flows for such heterogeneous systems. The US National Science Foundation has a center for high-performance reconfigurable computing (CHREC). In April 2011 the fourth Many-core and Reconfigurable Supercomputing Conference was held in Europe. Commercial high-performance reconfigurable computing systems are beginning to emerge with the announcement of IBM integrating FPGAs with its IBM Power microprocessors. Partial re-configuration Partial re-configuration is the process of changing a portion of reconfigurable hardware circuitry while the other portion keeps its former configuration. Field-programmable gate arrays are often used as a platform for partial reconfiguration. Electronic hardware, like software, can be designed modularly, by creating subcomponents and then higher-level components to instantiate them. In many cases it is useful to be able to swap out one or several of these subcomponents while the FPGA is still operating. Normally, reconfiguring an FPGA requires it to be held in reset while an external controller reloads a design onto it. Partial reconfiguration allows for critical parts of the design to continue operating while a controller, either on the FPGA or off of it, loads a partial design into a reconfigurable module. Partial reconfiguration also can be used to save space for multiple designs by only storing the partial designs that change between designs. A common example of when partial reconfiguration would be useful is the case of a communication device. If the device is controlling multiple connections, some of which require encryption, it would be useful to be able to load different encryption cores without bringing the whole controller down. Partial reconfiguration is not supported on all FPGAs. A special software flow with an emphasis on modular design is required. Typically the design modules are built along well-defined boundaries inside the FPGA that require the design to be specially mapped to the internal hardware. 
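A software analogy of the communication-device example above may help picture what partial reconfiguration swaps. The Python sketch below is illustrative only: it does not model any vendor's partial-reconfiguration flow, the region and function names are invented, and the XOR "core" is a stand-in rather than real encryption.

```python
# Toy model of partial reconfiguration, following the communication-device
# example above: one reconfigurable region holds a swappable "encryption core"
# while the static controller logic keeps servicing other connections.
# Purely illustrative Python; region and module names are invented.

class ReconfigurableRegion:
    def __init__(self, name):
        self.name = name
        self.module = None           # the partial design currently loaded

    def load_partial_bitstream(self, module):
        # Only this region is rewritten; nothing else on the "device" stops.
        self.module = module

    def process(self, data):
        if self.module is None:
            raise RuntimeError(f"region {self.name} is not configured")
        return self.module(data)


def xor_core(key):
    """Stand-in for an encryption core (a trivial XOR cipher, not real crypto)."""
    return lambda payload: bytes(b ^ key for b in payload)


crypto_region = ReconfigurableRegion("crypto")
static_controller_log = []           # the static part of the design keeps running

crypto_region.load_partial_bitstream(xor_core(key=0x5A))
static_controller_log.append(crypto_region.process(b"hello"))

# A new connection needs a different cipher: swap only the crypto region.
crypto_region.load_partial_bitstream(xor_core(key=0x21))
static_controller_log.append(crypto_region.process(b"world"))

print(static_controller_log)
```

In real hardware the analogous step is writing a partial bitstream into one physical partition while the rest of the device continues to run, as the section above describes.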
From the functionality of the design, partial reconfiguration can be divided into two groups. Dynamic partial reconfiguration, also known as active partial reconfiguration, permits part of the device to be changed while the rest of the FPGA is still running. Static partial reconfiguration means the device is not active during the reconfiguration process: while the partial data is sent into the FPGA, the rest of the device is stopped (in shutdown mode) and brought up after the configuration is completed. Current systems Computer emulation With the advent of affordable FPGA boards, students' and hobbyists' projects seek to recreate vintage computers or implement more novel architectures. Such projects are built with reconfigurable hardware (FPGAs), and some devices support emulation of multiple vintage computers using a single piece of reconfigurable hardware (C-One). COPACOBANA A fully FPGA-based computer is the COPACOBANA, the Cost-Optimized Codebreaker and Analyzer, and its successor RIVYERA. SciEngines GmbH, a spin-off company of the COPACOBANA project of the universities of Bochum and Kiel in Germany, continues the development of fully FPGA-based computers. Mitrionics Mitrionics has developed an SDK that enables software written using a single-assignment language to be compiled and executed on FPGA-based computers. The Mitrion-C software language and Mitrion processor enable software developers to write and execute applications on FPGA-based computers in the same manner as with other computing technologies, such as graphical processing units ("GPUs"), cell-based processors, parallel processing units ("PPUs"), multi-core CPUs, and traditional single-core CPU clusters. (The company is now out of business.) National Instruments National Instruments has developed a hybrid embedded computing system called CompactRIO. It consists of a reconfigurable chassis housing the user-programmable FPGA, hot-swappable I/O modules, a real-time controller for deterministic communication and processing, and graphical LabVIEW software for rapid real-time and FPGA programming. Xilinx Xilinx has developed two styles of partial reconfiguration of FPGA devices: module-based and difference-based. Module-based partial reconfiguration permits the reconfiguration of distinct modular parts of the design, while difference-based partial reconfiguration can be used when a small change is made to a design. Intel Intel supports partial reconfiguration of its FPGA devices on 28 nm devices such as Stratix V and on the 20 nm Arria 10 devices. The Intel FPGA partial reconfiguration flow for Arria 10 is based on the hierarchical design methodology in the Quartus Prime Pro software, where users create physical partitions of the FPGA that can be reconfigured at runtime while the remainder of the design continues to operate. The Quartus Prime Pro software also supports hierarchical partial reconfiguration and simulation of partial reconfiguration. Classification of systems As reconfigurable computing is an emerging field, classifications of reconfigurable architectures are still being developed and refined as new architectures appear; no unifying taxonomy has been suggested to date. However, several recurring parameters can be used to classify these systems. Granularity The granularity of the reconfigurable logic is defined as the size of the smallest functional unit (configurable logic block, CLB) that is addressed by the mapping tools. High granularity, also known as fine-grained logic, often implies greater flexibility when implementing algorithms in the hardware. 
However, there is a penalty associated with this in terms of increased power, area and delay due to the greater quantity of routing required per computation. Fine-grained architectures work at the level of bit-level manipulation, whilst coarse-grained processing elements (reconfigurable datapath units, rDPUs) are better optimised for standard data-path applications. One of the drawbacks of coarse-grained architectures is that they tend to lose some of their utilisation and performance if they need to perform computations smaller than their granularity provides; for example, a one-bit add on a four-bit-wide functional unit would waste three bits. This problem can be solved by having a coarse-grained array (reconfigurable datapath array, rDPA) and an FPGA on the same chip. Coarse-grained architectures (rDPAs) are intended for the implementation of algorithms needing word-width data paths (rDPUs). As their functional blocks are optimized for large computations and typically comprise word-wide arithmetic logic units (ALUs), they will perform these computations more quickly and with more power efficiency than a set of interconnected smaller functional units; this is due to the connecting wires being shorter, resulting in less wire capacitance and hence faster and lower-power designs. A potential undesirable consequence of having larger computational blocks is that, when the size of the operands does not match the algorithm, an inefficient utilisation of resources can result. Often the types of applications to be run are known in advance, allowing the logic, memory and routing resources to be tailored to enhance the performance of the device whilst still providing a certain level of flexibility for future adaptation. Examples of this are domain-specific arrays aimed at gaining better performance in terms of power, area and throughput than their more generic, finer-grained FPGA cousins by reducing their flexibility. Rate of reconfiguration Configuration of these reconfigurable systems can happen at deployment time, between execution phases or during execution. In a typical reconfigurable system, a bit stream is used to program the device at deployment time. Fine-grained systems by their own nature require greater configuration time than more coarse-grained architectures, due to more elements needing to be addressed and programmed. Therefore, more coarse-grained architectures gain from potentially lower energy requirements, as less information is transferred and utilised. Intuitively, the slower the rate of reconfiguration, the smaller the energy consumption, as the associated energy cost of reconfiguration is amortised over a longer period of time. Partial re-configuration aims to allow part of the device to be reprogrammed while another part is still performing active computation. Partial re-configuration allows smaller reconfigurable bit streams, thus not wasting energy on transmitting redundant information in the bit stream. Compression of the bit stream is possible, but careful analysis should be carried out to ensure that the energy saved by using smaller bit streams is not outweighed by the computation needed to decompress the data. Host coupling Often the reconfigurable array is used as a processing accelerator attached to a host processor. The level of coupling determines the type of data transfers, latency, power, throughput and overheads involved when utilising the reconfigurable logic. Some of the most intuitive designs use a peripheral bus to provide a coprocessor-like arrangement for the reconfigurable array. 
However, there have also been implementations where the reconfigurable fabric is much closer to the processor; some are even implemented in the data path, utilising the processor registers. The job of the host processor is to perform the control functions, configure the logic, schedule data and provide external interfacing. Routing/interconnects The flexibility in reconfigurable devices mainly comes from their routing interconnect. One style of interconnect, made popular by the FPGA vendors Xilinx and Altera, is the island-style layout, where blocks are arranged in an array with vertical and horizontal routing. A layout with inadequate routing may suffer from poor flexibility and resource utilisation, therefore providing limited performance. If too much interconnect is provided, this requires more transistors than necessary, and thus more silicon area, longer wires and more power consumption. Challenges for operating systems One of the key challenges for reconfigurable computing is to enable higher design productivity and provide an easier way to use reconfigurable computing systems for users who are unfamiliar with the underlying concepts. One way of doing this is to provide standardization and abstraction, usually supported and enforced by an operating system. One of the major tasks of an operating system is to hide the hardware and present programs (and their programmers) with nice, clean, elegant, and consistent abstractions to work with instead. In other words, the two main tasks of an operating system are abstraction and resource management. Abstraction is a powerful mechanism to handle complex and different (hardware) tasks in a well-defined and common manner. One of the most elementary OS abstractions is a process. A process is a running application that has the perception (provided by the OS) that it is running on its own on the underlying virtual hardware. This can be relaxed by the concept of threads, allowing different tasks to run concurrently on this virtual hardware to exploit task-level parallelism. To allow different processes and threads to coordinate their work, communication and synchronization methods have to be provided by the OS. In addition to abstraction, resource management of the underlying hardware components is necessary because the virtual computers provided to the processes and threads by the operating system need to share available physical resources (processors, memory, and devices) spatially and temporally. See also Computing with Memory Glossary of reconfigurable computing iLAND project M-Labs One chip MSX PipeRench PSoC Sprinter References Further reading Cardoso, João M. P.; Hübner, Michael (Eds.), Reconfigurable Computing: From FPGAs to Hardware/Software Codesign, Springer, 2011. S. Hauck and A. DeHon, Reconfigurable Computing: The Theory and Practice of FPGA-Based Computing, Morgan Kaufmann, 2008. J. Henkel, S. Parameswaran (editors): Designing Embedded Processors. A Low Power Perspective; Springer Verlag, March 2007 J. Teich (editor) et al.: Reconfigurable Computing Systems. Special Topic Issue of Journal it — Information Technology, Oldenbourg Verlag, Munich. Vol. 49(2007) Issue 3 T.J. Todman, G.A. Constantinides, S.J.E. Wilton, O. Mencer, W. Luk and P.Y.K. Cheung, "Reconfigurable Computing: Architectures and Design Methods", IEEE Proceedings: Computer & Digital Techniques, Vol. 152, No. 2, March 2005, pp. 193–208. A. 
Zomaya (editor): Handbook of Nature-Inspired and Innovative Computing: Integrating Classical Models with Emerging Technologies; Springer Verlag, 2006 J. M. Arnold and D. A. Buell, "VHDL programming on Splash 2," in More FPGAs, Will Moore and Wayne Luk, editors, Abingdon EE & CS Books, Oxford, England, 1994, pp. 182–191. (Proceedings,International Workshop on Field-Programmable Logic, Oxford, 1993.) J. M. Arnold, D. A. Buell, D. Hoang, D. V. Pryor, N. Shirazi, M. R. Thistle, "Splash 2 and its applications, "Proceedings, International Conference on Computer Design, Cambridge, 1993, pp. 482–486. D. A. Buell and Kenneth L. Pocek, "Custom computing machines: An introduction," The Journal of Supercomputing, v. 9, 1995, pp. 219–230. External links Lectures on Reconfigurable Computing at Brown University Introduction to Dynamic Partial Reconfiguration ReCoBus-Builder project for easily implementing complex reconfigurable systems DRESD (Dynamic Reconfigurability in Embedded System Design) research project Digital electronics
38668094
https://en.wikipedia.org/wiki/Autopsy%20%28software%29
Autopsy (software)
Autopsy is computer software that makes it simpler to deploy many of the open source programs and plugins used in The Sleuth Kit. The graphical user interface displays the results from the forensic search of the underlying volume, making it easier for investigators to flag pertinent sections of data. The tool is largely maintained by Basis Technology Corp. with the assistance of programmers from the community. The company sells support services and training for using the product. The tool is designed with these principles in mind: Extensible — the user should be able to add new functionality by creating plugins that can analyze all or part of the underlying data source. Centralised — the tool must offer a standard and consistent mechanism for accessing all features and modules. Ease of Use — the Autopsy Browser must offer the wizards and historical tools to make it easier for users to repeat their steps without excessive reconfiguration. Multiple Users — the tool should be usable by one investigator or coordinate the work of a team. The core browser can be extended by adding modules that help scan the files (called "ingesting"), browse the results (called "viewing") or summarize results (called "reporting"). A collection of open-source modules allows customization. Process Autopsy analyzes major file systems (NTFS, FAT, ExFAT, HFS+, Ext2/Ext3/Ext4, YAFFS2) by hashing all files, unpacking standard archives (ZIP, JAR, etc.), extracting any EXIF values and putting keywords in an index. Some file types, like standard email formats or contact files, are also parsed and cataloged. Users can search these indexed files for recent activity or create a report in HTML or PDF summarizing important recent activity. If time is short, users may activate triage features that use rules to analyze the most important files first. Autopsy can save a partial image of these files in the VHD format. Correlation Investigators working with multiple machines or file systems can build a central repository of data allowing them to flag phone numbers, email addresses, files or other pertinent data that might be found in multiple places. An SQLite or PostgreSQL database stores the information so investigators can find all occurrences of names, domains, phone numbers or USB registry entries. Language Version 2 of Autopsy is written in Perl and it runs on all major platforms including Linux, Unix, macOS, and Windows. It relies upon The Sleuth Kit to analyze the disk. Version 2 is released under the GNU GPL 2.0. Autopsy 3.0 is written in Java using the NetBeans platform. It was released under the Apache License 2.0. Autopsy 4.0 runs on Windows, Linux, and macOS. Autopsy depends on a number of libraries with various licenses. It works with SQLite and PostgreSQL databases to store information. The indices for keyword searching are built with Lucene/Solr. References External links Autopsy official website The Sleuth Kit official website Computer forensics Free security software Unix security-related software Hard disk software Digital forensics software Software using the Apache license
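For readers unfamiliar with what an ingest pass involves, the sketch below illustrates the general shape of the work described above: hash every file, catalogue basic metadata, and build a keyword index. It is not Autopsy's code or API; it is a rough, self-contained approximation using only the Python standard library, with all names chosen for this example.

```python
# A rough, self-contained illustration of what a forensic "ingest" pass does:
# hash every file, record basic metadata, and build a simple keyword index.
# This is NOT Autopsy's code or API -- just a sketch of the general idea using
# only the Python standard library.

import hashlib
import os
import re
from collections import defaultdict

def ingest(root):
    catalog = []                              # one record per file
    keyword_index = defaultdict(set)          # keyword -> set of file paths
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as fh:
                    data = fh.read()
            except OSError:
                continue                      # skip unreadable files in this sketch
            catalog.append({
                "path": path,
                "size": len(data),
                "md5": hashlib.md5(data).hexdigest(),
                "sha256": hashlib.sha256(data).hexdigest(),
            })
            # Very naive keyword extraction from the raw bytes.
            for word in re.findall(rb"[A-Za-z0-9@.]{4,}", data):
                keyword_index[word.decode("ascii", "ignore").lower()].add(path)
    return catalog, keyword_index

if __name__ == "__main__":
    files, index = ingest(".")
    print(f"catalogued {len(files)} files")
    print(sorted(index.get("autopsy", set())))   # where does a keyword appear?
```

A real ingest pipeline does far more (archive unpacking, EXIF extraction, file-type parsing, persistent storage, Lucene/Solr indexing), but the hash-catalogue-index loop is the core pattern.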
62305142
https://en.wikipedia.org/wiki/Marine%20Tactical%20Data%20System
Marine Tactical Data System
Marine Tactical Data System, commonly known as MTDS, was a mobile, ground based, aviation command and control system developed by the United States Marine Corps for the execution of anti-air warfare in support of the Fleet Marine Force (FMF). It was the Marine Corps' first semi-automated system capable of collecting, processing, computing and displaying aircraft surveillance data while also sharing that information with other participating units via tactical data link. The system was developed in the late 1950s/early 1960s when it was recognized that due to the speed, range and complexity of fighter aircraft operations effective air control and air defense demanded enhanced situational awareness. MTDS was a spiral development of the United States Navy's Navy Tactical Data System (NTDS). At the time it was developed, it was the largest research and development project ever undertaken by the Marine Corps. Produced by Litton Systems Inc. in Van Nuys, California, MTDS took almost a decade to develop. When finally fielded in September 1966, it was the premier air defense command and control system in the United States Military. It saw its widest operational use during the Vietnam War where it was utilized to great effect controlling and deconflicting aircraft in the Northern portion of South Vietnam from July 1967 through January 1971. MTDS remained the backbone of Marine Corps air defense operations until it was replaced by the AN/TYQ-23 Tactical Air Operations Module in the early 1990s. Background Marine Corps Air Warning Program The Marine Corps’ air warning program was developed during World War II to provide early warning and fighter control for Marine Corps forces ashore during amphibious operations. Through the 1950s Marine Corps air defense equipment and tactics continued to rely on manual plotting of air tracks based on voice calls from ground control intercept (GCI) controllers. By the mid-1950s, early warning, fighter control, and ground controlled intercept (GCI) were performed by the Marine Air Control Squadrons as part of Marine Aviation. There were three MACS assigned to each Marine Amphibious Force. These squadrons provided air defense command and control centers known as Counter Air Operations Centers (CAOC) that relied on Marines to manually plot aircraft tracks on a large map based on voice or telephone calls from radar operators. Controllers manually calculated intercepts using vectors, headings, and speed. Precursor systems and early development In 1944, the British Air Force installed analog computers at Chain Home stations in order to automatically convert radar plots into map locations. After the war, the Royal Navy began to develop a command and control system known as the Comprehensive Display System (CDS) which further allowed operators the ability to assign identifications to objects on their radar screens. This made it easier for operators to vector friendly fighters onto intercept courses during ground controlled intercept. The United States Navy became very interested in the Comprehensive Display System after seeing a demonstration. At the same time that the US Navy was looking to improve the air defense capabilities of the fleet, air defense in the United States also took on a much greater priority after the Soviet Union exploded its first nuclear weapon on 29 August 1949. The United States established a committee chaired by MIT professor George Valley, later to be known as the Valley Committee. 
The committee determined that the greatest threat to the nation's air defense was low-flying aircraft capable of avoiding widely dispersed GCI radars. To counter this, the committee recommended that a large number of ground-based radar systems be installed all over the United States to provide complete coverage. This large number of stations required a command and control center that could aggregate radar track data in real time. The amount of data necessary for this meant that it could not be done manually and would require a computer. Thus was born the Semi-Automatic Ground Environment, also known as "SAGE." SAGE was a system of large computers and associated networking equipment that coordinated data from many radar sites and processed it to produce a single unified image of the airspace over a wide area. In 1953, at the same time that the USAF was developing SAGE, the Naval Research Laboratory showed Marine Corps representatives the findings of its Electronic Tactical Data Systems program to determine the services' interest in any further development. There was no money at the time; however, the Marine Corps continued to refine requirements for its future automated tactical data system for air defense operations. When the United States Navy eventually wrote the requirements for NTDS, it also included specifications for a ground-based unit to be developed by the Marine Corps. The Chief of Naval Operations officially authorized the development of NTDS in April 1956. Concurrently, Headquarters Marine Corps began to budget for the system and further refine its requirements. The requirements were developed by the Marine Corps Electronics Branch, a department under the Electronics Development Division within the Navy's Bureau of Ships. When funds became available in 1957, a contract was awarded to Litton Systems Incorporated for the development of the Marine Tactical Data System. System Major Components & Subcomponents
(AN/TYQ-1) - Tactical Air Command Center (TACC), produced by Philco Ford - Furnished the automatic displays and communications to provide for the overall command and coordination of all air operations in an area of operations.
(AN/TYQ-2) - Tactical Air Operations Center (TAOC), produced by Litton Industries - An operational complex of 14 shelters containing computers to track and process radar information and communications equipment for the execution of anti-air warfare.
(AN/TYA-5) - Central Computer Group - shelter that contains the electronic data processing equipment that forms the core of the AN/TYQ-2.
(AN/TYA-6) - Data Processor Group - transportable shelter containing 2D radar and electronic data processing equipment.
(AN/TYA-7) - Geographic Display Generations Group - transportable shelter containing electronic scanning, mapping, and processing equipment.
(AN/TYA-9) - Operator Group - shelter containing electronic data processing, display, and communications equipment.
(AN/TYA-12) - Communications Group - shelter containing electronic digital and communication equipment.
(AN/TYA-23) - Two shelters comprising the primary facilities for the test and repair of circuit plug-in cards and analog modules.
(AN/TYA-25) - Photographic Transport Group - shelter containing commercial photographic equipment and developing facilities.
(AN/TYA-26) - Ancillary Group - shelter containing the consoles and displays of associated radars and radio direction finding equipment. 
(AN/TYA) - Maintenance Group - Two shelters for test and repair of the magnetic drum assembly, micropositioner, power supplies, and communications modules.
(AN/TYQ-3) - Tactical Data Communications Central (TDCC), produced by Litton Industries - The system employed a UNIVAC CP-808 computer, a light-weight version of the CP-642B utilized by NTDS. The TDCC hosted the critical operational software to drive MTDS and exchange air command and control data with NTDS and other joint command and control systems.
(AN/TYA-20) - Shelter housing the CP-808 computer.
Design MTDS consisted of three major components that worked in concert to automate early warning, fighter direction and the control of surface-to-air missiles within the Marine Air-Ground Task Force (MAGTF). Under MTDS, air defense within the Marine Corps was to be combined into a new agency known as the Tactical Air Operations Center (TAOC). The TAOC automated air defense functions which up to that point had to be completed manually. Operators could control more than 20 simultaneous intercepts while the computer tracked up to 250 air targets. Ensuring that the entirety of the system was helicopter-transportable was a major factor influencing much of the design of MTDS. Marine Corps forces operating from amphibious shipping needed the ability to sling-load MTDS huts underneath helicopters in order to get the system ashore during an amphibious operation. This meant that controlling the weight of each section of the system was critical, and those weights were defined by the cargo-carrying capacity of the Marine Corps helicopter fleet. In its original design, MTDS planned to utilize TADIL-A/Link 11 to communicate between all participating Marine Corps units (TACC, LAAM, etc.) and the Navy's NTDS afloat. Early studies determined this would quickly overwhelm TADIL-A and create increased latency in the system. If track latency was too great, then operators would not be able to properly control aircraft. There were also questions about the viability of utilizing high-frequency radio waves in the mountainous terrain where the Marine Corps might need to operate. To overcome these deficiencies, the Marine Corps investigated an emerging technology known as tropospheric scatter. This eventually led to the development of the AN/TRC-97, built by RCA. The TRC-97 provided data, voice and teletype connectivity for MTDS and would grow to become the backbone of USMC and USAF long-haul communications for years to come. Early in the design phase of MTDS, it was decided to use magnetic drum memory computers. Memory drums were used as the digital storage elements and system clock pulse generators in the Central Computer Group and each of the Radar and Identification Data Processors (RIDP). The drums utilized had a capacity of 1,123,200 bits, operated at a speed of 2,667 rpm, and generated a clock frequency of 333 kcs (kilocycles per second, i.e., 333 kHz). There were no issues with this design until shortly after production had begun in 1964. At that time a senior defense official questioned the efficacy of drum memory computers and requested a review of the program. Production was halted in order to examine whether the system would be better off utilizing magnetic-core memory computers. After a delay of a few months, the Marine Corps was able to show that the drum memory system already in place met all reliability requirements set forth for the program. Production of the system was allowed to continue. 
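As a back-of-the-envelope illustration of the drum figures quoted above, the short calculation below derives what they imply for rotational latency and bits per track. It assumes, purely for illustration, that the 333 kcs clock corresponds to one pulse per bit cell passing a read head; that assumption is not stated in the source.

```python
# Back-of-the-envelope arithmetic on the drum-memory figures quoted above.
# Assumption (for illustration only): the 333 kcs clock corresponds to one
# pulse per bit cell passing a read head on the rotating drum.

capacity_bits = 1_123_200        # stated drum capacity
rpm = 2667                       # stated rotation speed
clock_hz = 333_000               # stated clock frequency (333 kcs)

revs_per_second = rpm / 60                              # ~44.5 rev/s
bits_per_rev_per_track = clock_hz / revs_per_second     # ~7,500 bit cells per revolution
implied_tracks = capacity_bits / bits_per_rev_per_track # ~150 tracks, under the assumption
worst_case_latency_ms = 1000 / revs_per_second          # ~22.5 ms for a full revolution

print(f"{revs_per_second:.1f} rev/s")
print(f"~{bits_per_rev_per_track:,.0f} bit cells per track per revolution")
print(f"~{implied_tracks:.0f} tracks implied by the stated capacity")
print(f"worst-case rotational latency ~{worst_case_latency_ms:.1f} ms")
```

Figures on this order, particularly a worst-case rotational wait of tens of milliseconds, give a feel for the kind of trade-off a review of drum versus magnetic-core memory would weigh, as described above.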
Testing In the early 1960s, Marine Air Control Squadron 3 (MACS-3) at Marine Corps Air Station Santa Ana, California was administratively detached from the I Marine Amphibious Force and moved under Air, Fleet Marine Forces Pacific for purposes of testing MTDS equipment and operational concepts. MACS-3 Sub-Unit 1 located at Marine Corps Base Twentynine Palms, CA took delivery of the second MTDS system in September 1962. In 1963, the program was in serious trouble and then Commandant of the Marine Corps General David M. Shoup named Colonel Earl E. Anderson as the first program manager for MTDS. In March 1965, MACS-3 accepted the first production model of MTDS for operational testing. System development and testing were completed in October 1965. Training & Fielding MACS-3 graduated its first class of MTDS operators and maintainers on 8 October 1963. The initial course was twenty weeks long for maintainers and six weeks long for operators. Classroom instructions was provided by the Marines of MACS-3, field representatives from Litton Industries and civilians from the US Navy's Naval Aviation Engineering Service Unit. MTDS began fielding to the operating forces in July 1966 with the last system fielding in August 1973. MTDS was replaced by the AN/TYQ-23 Tactical Air Operations Module (TAOM), a joint USAF/USMC program. Development of the AN/TYQ-23 began in the early 1980s and coincided with the development of the USMC's next generation long range radar, the AN/TPS-59. Operational testing lasted from 1985 through 1991. Testing was not complete before the beginning of Gulf War in 1990 therefore Marine Air Control Squadron 2 utilized MTDS when combat operations commenced in January 1991. Fielding of the AN/TYQ-23 began shortly after the end of the Gulf War and MTDS was concurrently removed from the Marine Corps inventory. Operational Use The first MTDS system fielded to the Fleet Marine Force was given to Marine Air Control Squadron 4 at Marine Corps Base Camp Pendleton, California in September 1966. Shortly thereafter the squadron was informed that they were deploying to South Vietnam to replace Marine Air Control Squadron 7 (MACS-7). In November 1966 they sent an advanced party to scout the best locations for MTDS in country. They eventually decided upon the Monkey Mountain Facility near Danang. This site was chosen because of it was co-located with the HAWK Missile Batteries of the 1st Light Antiaircraft Missile Battalion and the United States Air Force's Panama Air Control Facility. The site also provided excellent line of sight to United States Seventh Fleet ships operating in Yankee Station in the Gulf of Tonkin. MACS-4 arrived in Vietnam in June 1967 and was established and operating on top of Monkey Mountain beginning 6 July 1967. On 13 January 1971 at 0001, MACS-4 made its last tactical transmission in support of operations during the Vietnam War. During its time in Vietnam utilizing MTDS, MACS-4 controlled or assisted 472,146 aircraft. Even though MACS-4 departed Vietnam on 31 January 1971 it maintained a small detachment of twenty Marines on top of Monkey Mountain to man the AN/TYQ-3 - Tactical Data Communications Central (TDCC). The AN/TYQ-3 facilitated critical data exchange between the USAF and USN during the later stages of the Vietnam War. This detachment remained in support of operations until 14 February 1973. In 1969, the Marine Corps fielded the AN/TPS-32 radar which was the service's first three dimensional radar and was optimized for operations with MTDS. 
Legacy The development of MTDS coincided with the fielding of the MIM-23 HAWK Missile system, and the AN/TPQ-10 Radar Course Directing Central. The arrival of these highly technical systems, and the concurrent need for specialists to operate them, was a catalyst for the professionalization of aviation command and control in the Marine Corps. Recognizing the need for a separate headquarters to oversee these specialized units and the agencies and equipment they provide, the Marine Corps recommissioned the Marine Air Control Groups in September 1967. This laid the foundation for what is now known as the Marine Air Command and Control System (MACCS). Testing and fielding of MTDS along with various other automated systems in the 1960s highlighted that the Marine Corps was not properly staffed to develop, test and acquire new digital equipment. Lessons learned from MTDS's testing and development and the recognized need to support current and future tactical data systems led to the development of the Marine Corps Tactical Systems Support Activity (MCTSSA) based at MCB Camp Pendleton, CA. MCTSSA was organized in 1970 and its structure came from MACS-3 which was concurrently decommissioned. See also United States Marine Corps Aviation List of United States Marine Corps aviation support squadrons Citations References Military computers United States Marine Corps equipment Cold War military computer systems of the United States
3005170
https://en.wikipedia.org/wiki/Bell%27s%20law%20of%20computer%20classes
Bell's law of computer classes
Bell's law of computer classes, formulated by Gordon Bell in 1972, describes how types of computing systems (referred to as computer classes) form, evolve and may eventually die out. New classes of computers create new applications, resulting in new markets and new industries. Bell considers the law to be partially a corollary to Moore's law, which holds that the number of transistors per chip doubles every 18 months. Unlike Moore's law, a new computer class is usually based on lower-cost components that have fewer transistors or fewer bits on a magnetic surface, etc. A new class forms about every decade. It also takes up to a decade to understand how the class formed, evolved, and is likely to continue. Once formed, a lower-priced class may evolve in performance to take over and disrupt an existing class. This evolution has caused clusters of scalable personal computers, with one to thousands of computers, to span a price and performance range from a single PC, through mainframes, to the largest supercomputers of the day. Scalable clusters became a universal class beginning in the mid-1990s; it was projected that by 2010 clusters of at least one million independent computers would constitute the world's largest cluster. Definition: Roughly every decade a new, lower-priced computer class forms based on a new programming platform, network, and interface, resulting in new usage and the establishment of a new industry. Established market-class computers, also known as platforms, are introduced and continue to evolve at roughly a constant price (subject to learning-curve cost reduction) with increasing functionality (or performance) based on Moore's law, which gives more transistors per chip, more bits per unit area, or increased functionality per system. Roughly every decade, technology advances in semiconductors, storage, networks, and interfaces enable a new, lower-cost computer class (platform) to form to serve a new need enabled by smaller devices, e.g. fewer transistors per chip, less expensive storage, displays, I/O, networking, and a unique interface to people or some other information-processing sink or source. Each new lower-priced class is then established and maintained as a quasi-independent industry and market. Such a class is likely to evolve to substitute for an existing class or classes, as described above with computer clusters. Computer classes that conform to the law include: mainframes (1960s); minicomputers (1970s); personal computers and workstations evolving into a network enabled by local area networking or Ethernet (1980s); web browser client–server structures enabled by the Internet (1990s); cloud computing, e.g., Amazon Web Services (2006) or Microsoft Azure (2012); handheld devices, from media players and cell phones to tablets, e.g., Creative players, iPods, BlackBerrys, iPhones, smartphones, Kindles, and iPads (c. 2000–2010); and wireless sensor networks (WSNs) that enable sensor and actuator interconnection, enabling the evolving Internet of Things (c. 2005 onward). Beginning in the 1990s, a single class of scalable computers or mega-servers (built from clusters of a few to tens of thousands of commodity microcomputer-storage-network bricks) began to cover and replace mainframes, minis, and workstations to become the largest computers of the day; when applied to scientific calculation they are commonly called supercomputers.
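The doubling rate quoted above can be written as a simple growth formula: transistors(t) = transistors(0) · 2^(months/18). The sketch below is purely illustrative; the 1971 starting point of roughly 2,300 transistors (the Intel 4004) is an assumption added for the example, not a figure from the article, and the printout only shows what the stated doubling rate implies decade by decade, the cadence at which the law says a new, lower-priced class can form.

```python
# Illustrative sketch of the growth rate quoted above (doubling every
# 18 months). The Intel 4004 starting point (1971, ~2,300 transistors)
# is an assumption added for this example.

def transistors_per_chip(year, base_year=1971, base_count=2300, doubling_months=18):
    """Transistor count implied by doubling every `doubling_months` months."""
    months = (year - base_year) * 12
    return base_count * 2 ** (months / doubling_months)

for year in range(1971, 2021, 10):
    print(f"{year}: ~{transistors_per_chip(year):,.0f} transistors per chip")

# Each decade of this growth is roughly what, per Bell's law, allows a new,
# lower-priced class to form from smaller, cheaper components.
```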
History Bell's law of computer classes and class formation was first mentioned in 1970, with the introduction of the Digital Equipment PDP-11 minicomputer, to differentiate it from mainframes and the potentially emerging microcomputers. The law was described in 1972 by Gordon Bell. The emergence and observation of a new, lower-priced microcomputer class based on the microprocessor stimulated the creation of the law, which Bell described in articles and in his books. Other computer industry laws See also the several laws (e.g. Moore's law, Metcalfe's law) that describe the computer industry. References Adages Bell's Law of Computer Classes Computer architecture statements
18525913
https://en.wikipedia.org/wiki/HSC%20Manannan
HSC Manannan
HSC Manannan is a wave-piercing high-speed catamaran car ferry built by Incat, Australia, in 1998. After commercial service in Australia and New Zealand, she was chartered to the US military as Joint Venture (HSV-X1). Now owned and operated by the Isle of Man Steam Packet Company, she mainly provides a seasonal service between Douglas Harbour and the Port of Liverpool. Early history Manannan is one of six 96-metre (WPC 96) catamarans built by Incat, Australia. She was built as Incat 050 in 1998. Under the name Devil Cat, she operated for a short period as a commercial ferry for TT-Line. A spell crossing the Cook Strait as Top Cat followed. She was then acquired by the United States Navy and converted for military purposes. United States Navy In 2001, she was contracted by the United States Armed Forces for a five-year, joint Army/Navy program, as Joint Venture (HSV-X1). A flight deck was added to accommodate various US military helicopters. Joint Venture was rapidly re-configurable and could perform a variety of missions, principal among them the ability to ferry up to 325 combat personnel and 400 tons of cargo up to one way at speeds in excess of . In 2003, Joint Venture was assigned to Operation Enduring Freedom in the Horn of Africa. She operated as a fast transport in support of the Combined Joint Task Force and performed a variety of tasks, such as transporting and supplying troops at high speed over long distances, operating as a mobile command centre, working close inshore, and operating as a helicopter carrier. At the end of the five-year charter, she was handed back to Incat in early 2006. She was struck from the Naval Vessel Register and re-listed as "IX-532" (unclassified experimental). She then underwent a refit and was painted in the livery of Express Ferries. Plans for her to enter service as a car and passenger ferry never materialised. Isle of Man Steam Packet Company On 19 May 2008, the Isle of Man Steam Packet Company announced the purchase of the wave-piercing catamaran for £20 million, as the replacement for the fast craft Viking. Because of its previous use, the company said it had significantly fewer hours of service than a vessel of comparable age and was ideally suited for the planned service. She completed the voyage from Hobart to Portsmouth Harbour, with most of the materials for her refit, in 27 days. A £3 million refit, carried out by Burgess Marine, Portsmouth, provided a new aft accommodation module and the "Sky Lounge". The heavy military ramp was replaced with a new stern door and the helideck was removed. Following this, she arrived in Douglas on 11 May 2009. An open day took place at each of the company's ports, and at a renaming ceremony she was renamed after Manannán mac Lir, the Celtic god of the Irish Sea. Manannan made her maiden service voyage with the Steam Packet Company on Friday 22 May 2009 on the 07:30 sailing from Douglas Harbour to Liverpool. During the winter period 2014/2015 Manannan was fitted with a removable mezzanine deck, which created additional space for motorcycles during the Isle of Man TT and Festival of Motorcycling periods, allowing fans who had previously travelled as foot passengers the chance to bring their bikes; by late March 2015, the number of motorcycles booked for the TT Festival was up 10% on the previous year. Service At , Manannan was the largest vessel of its kind on the Irish Sea until Irish Ferries took delivery of the Dublin Swift in 2018.
In the summer season, she operates daily sailings from Douglas to Liverpool, and weekly or twice-weekly sailings to Belfast and Dublin. During the winter, Manannan remains in Douglas on reserve and sails to Liverpool for her annual overhaul before returning for the summer season. Onboard facilities Manannan's passenger facilities are located over two decks. Incidents In April 2015 the Manannan suffered six days of cancelled sailings due to damage to her jet system caused by sea debris. All sailings between the Isle of Man and Liverpool were cancelled and passengers were transferred to sailings on the Ben-my-Chree to and from Heysham, while the P&O Ferries vessel Express was chartered for a sailing to Larne in place of a cancelled Belfast sailing. Express also suffered damage while in Manx waters and P&O were forced to cancel a number of their own sailings as a result. Steam Packet boss Mark Woodward told a local newspaper, "Since 2007 there have been 17 recorded major incidents where our ships have been damaged and passengers have been inconvenienced by disrupted schedules as a result... the damage was incurred seven days after the vessel recommenced seasonal operational service and just three weeks after leaving dry-dock. All of this equipment was fully inspected during the docking period by Steam Packet Company engineering staff, along with Classification Society Surveyors and all found to be in good order." The ferry returned to service on Saturday 11 April 2015 once repairs were completed. The estimated cost of repairs was above £100,000. On 24 March 2016, the Manannan collided with the Victoria Pier in Douglas Harbour on arrival at 22:30. Five passengers were taken to hospital with minor injuries and the following day's sailings were cancelled, with passengers being transferred to the Ben-my-Chree. The vessel suffered damage to the port side, causing the front of the hull to be bent to the left. The collision was caused by a systems control failure. Photo gallery See also List of multihulls References External links Manannan Specifications - steam-packet.com HSV-X1 Joint Venture Ships of the Isle of Man Steam Packet Company Ferries of Australia Ferries of the Isle of Man Merchant ships of the Isle of Man Incat high-speed craft 1998 ships Ships built by Incat Military catamarans High speed vessels of the United States Navy
30279470
https://en.wikipedia.org/wiki/Copper%20Project
Copper Project
Copper Project is a web-based project-management tool, first launched in 2001 by Element Software. As of 2011, the product is in its fourth release and is used predominantly by creative consultancies. Copper is provided either as software as a service (SaaS) or under a proprietary license. In 2007, Element Software CEO Ben Prendergast was featured on Apple.com as a leading entrepreneur in the project management software industry. Reviews The Copper Project was voted one of the top 20 project management software products of 2013. Previous reviews include Web Worker Daily, Venture Beat, Mashable, "Bright Hub", "Top 10 Review", "PM-Sherpa", "Web Based Software.com", "About.com Women in Business" and "PM-Software.Org". Features Project management (with Basecamp import), time tracking with online stopwatch, file and document management, drag-and-drop Gantt chart, resource management, calendar (filterable with iCal export), invoicing with PDF export, timesheets, reports, and integration with Xero. Copper is proprietary software available in English, Spanish, French, Italian, German, Dutch, and Polish. See also Project management software List of project management software Project management Web 2.0 References External links Official site Element Software Project management software 2001 software Projects established in 2001
156891
https://en.wikipedia.org/wiki/Ford%20Model%20T
Ford Model T
The Ford Model T (colloquially known as the "tin Lizzie," "leaping Lena," "jitney" or "flivver") is an automobile that was produced by Ford Motor Company from October 1, 1908, to May 26, 1927. It is generally regarded as the first affordable automobile, which made car travel available to middle-class Americans. The relatively low price was partly the result of Ford's efficient fabrication, including assembly line production instead of individual handcrafting. The Ford Model T was named the most influential car of the 20th century in the 1999 Car of the Century competition, ahead of the BMC Mini, Citroën DS, and Volkswagen Beetle. Ford's Model T was successful not only because it provided inexpensive transportation on a massive scale, but also because the car signified innovation for the rising middle class and became a powerful symbol of the United States' age of modernization. With 15 million sold, it was the most sold car in history before being surpassed by the Volkswagen Beetle in 1972, and still stood eighth on the top-ten list, . Introduction Although automobiles had been produced from the 1880s, until the Model T was introduced in 1908, they were mostly scarce, expensive, and often unreliable. Positioned as reliable, easily maintained, mass-market transportation, the Model T was a great success. In a matter of days after the release, 15,000 orders had been placed. The first production Model T was built on August 12, 1908 and left the factory on September 27, 1908, at the Ford Piquette Avenue Plant in Detroit, Michigan. On May 26, 1927, Henry Ford watched the 15 millionth Model T Ford roll off the assembly line at his factory in Highland Park, Michigan. Henry Ford conceived a series of cars between the founding of the company in 1903 and the introduction of the Model T. Ford named his first car the Model A and proceeded through the alphabet up through the Model T, twenty models in all. Not all the models went into production. The production model immediately before the Model T was the Model S, an upgraded version of the company's largest success to that point, the Model N. The follow-up to the Model T was the Ford Model A, rather than the "Model U." The company publicity said this was because the new car was such a departure from the old that Ford wanted to start all over again with the letter A. The Model T was Ford's first automobile mass-produced on moving assembly lines with completely interchangeable parts, marketed to the middle class. Henry Ford said of the vehicle: I will build a motor car for the great multitude. It will be large enough for the family, but small enough for the individual to run and care for. It will be constructed of the best materials, by the best men to be hired, after the simplest designs that modern engineering can devise. But it will be so low in price that no man making a good salary will be unable to own one – and enjoy with his family the blessing of hours of pleasure in God's great open spaces. Although credit for the development of the assembly line belongs to Ransom E. Olds, with the first mass-produced automobile, the Oldsmobile Curved Dash, having begun in 1901, the tremendous advances in the efficiency of the system over the life of the Model T can be credited almost entirely to Ford and his engineers. Characteristics The Model T was designed by Childe Harold Wills, and Hungarian immigrants Joseph A. Galamb and Eugene Farkas. Henry Love, C. J. Smith, Gus Degner and Peter E. Martin were also part of the team. 
Production of the Model T began in the third quarter of 1908. Collectors today sometimes classify Model Ts by build years and refer to these as "model years," thus labeling the first Model Ts as 1909 models. This is a retroactive classification scheme; the concept of model years as understood today did not exist at the time. The nominal model designation was "Model T," although design revisions did occur during the car's two decades of production. Engine The Model T has a front-mounted inline four-cylinder engine, producing , for a top speed of . According to Ford Motor Company, the Model T had fuel economy on the order of . The engine was capable of running on gasoline, kerosene, or ethanol, although the decreasing cost of gasoline and the later introduction of Prohibition made ethanol an impractical fuel for most users. The engines of the first 2,447 units were cooled with water pumps; the engines of unit 2,448 and onward, with a few exceptions prior to around unit 2,500, were cooled by thermosiphon action. The ignition system used in the Model T was an unusual one, with a low-voltage magneto incorporated in the flywheel, supplying alternating current to trembler coils to drive the spark plugs. This was closer to that used for stationary gas engines than the expensive high-voltage ignition magnetos that were used on some other cars. This ignition also made the Model T more flexible as to the quality or type of fuel it used. The system did not need a starting battery, since proper hand-cranking would generate enough current for starting. Electric lighting powered by the magneto was adopted in 1915, replacing acetylene gas flame lamp and oil lamps, but electric starting was not offered until 1919. The Model T engine was produced for replacement needs as well as stationary and marine applications until 1941, well after production of the Model T had ended. The Fordson Model F tractor engine, that was designed about a decade later, was very similar to, but larger than, the Model T engine. Transmission and drive train The Model T is a rear-wheel drive vehicle. Its transmission is a planetary gear type billed as "three speed." In today's terms it is considered a two-speed, because one of the three speeds is reverse. The Model T's transmission is controlled with three floor-mounted pedals and a lever mounted to the road side of the driver's seat. The throttle is controlled with a lever on the steering wheel. The left-hand pedal is used to engage the transmission. With the floor lever in either the mid position or fully forward and the pedal pressed and held forward, the car enters low gear. When held in an intermediate position, the car is in neutral. If the left pedal is released, the Model T enters high gear, but only when the lever is fully forward – in any other position, the pedal only moves up as far as the central neutral position. This allows the car to be held in neutral while the driver cranks the engine by hand. The car can thus cruise without the driver having to press any of the pedals. In the first 800 units, reverse is engaged with a lever; all units after that use the central pedal, which is used to engage reverse gear when the car is in neutral. The right-hand pedal operates the transmission brake – there are no brakes on the wheels. The floor lever also controls the parking brake, which is activated by pulling the lever all the way back. This doubles as an emergency brake. 
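The pedal-and-lever behaviour described above amounts to a small piece of state logic, and it can be easier to follow written out as code. The sketch below models only that description, simplified: it ignores the reverse pedal, the transmission-brake pedal, and the throttle, and the position names are labels invented for the example rather than Ford terminology.

```python
def model_t_forward_gear(hand_lever: str, left_pedal: str) -> str:
    """Simplified model of the Model T pedal/lever logic described above.

    hand_lever: "back" (fully back), "mid", or "forward"
    left_pedal: "pressed", "mid", or "released"
    (Position names are labels chosen for this sketch.)
    """
    if hand_lever == "back":
        # Lever pulled all the way back sets the parking brake; the car is
        # held out of high gear (a simplification of the description above).
        return "neutral (parking brake set)"
    if left_pedal == "pressed":
        # Pedal pressed and held forward with the lever mid or forward: low gear.
        return "low gear"
    if left_pedal == "mid":
        return "neutral"
    # Pedal released: high gear, but only if the hand lever is fully forward;
    # otherwise the pedal can only rise as far as the neutral position.
    return "high gear" if hand_lever == "forward" else "neutral"


# Example: hand-cranking safely requires neutral - lever mid, pedal released.
print(model_t_forward_gear("mid", "released"))      # neutral
print(model_t_forward_gear("forward", "pressed"))   # low gear
print(model_t_forward_gear("forward", "released"))  # high gear
```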
Although it was uncommon, the drive bands could fall out of adjustment, allowing the car to creep, particularly when cold, adding another hazard when attempting to start the car: a person cranking the engine could be forced backward while still holding the crank as the car crept forward, although it was nominally in neutral. As the car utilizes a wet clutch, this condition could also occur in cold weather, when the thickened oil prevents the clutch discs from slipping freely. Power reaches the differential through a single universal joint attached to a torque tube which drives the rear axle; some models (typically trucks, but available for cars as well) could be equipped with an optional two-speed rear Ruckstell axle, shifted by a floor-mounted lever, which provides an underdrive gear for easier hill climbing. Chassis / frame The heavy-duty Model TT truck chassis came with a special worm-gear rear differential with lower gearing than the normal car and truck, giving more pulling power but a lower top speed (the frame is also stronger; the cab and engine are the same). A Model TT is easily identifiable by the cylindrical housing for the worm drive over the axle differential. All gears are vanadium steel running in an oil bath. Transmission bands and linings Two main types of band lining material were used: Cotton – Cotton woven linings were the original type fitted and specified by Ford. Generally, the cotton lining is "kinder" to the drum surface, with damage to the drum caused only by the retaining rivets scoring the drum surface. Although this in itself did not pose a problem, a dragging band resulting from improper adjustment caused overheating of the transmission and engine, diminished power, and – in the case of cotton linings – rapid destruction of the band lining. Wood – Wooden linings were originally offered as a "longer life" accessory part during the life of the Model T. They were a single piece of steam-bent wood and metal wire, fitted to the normal Model T transmission band. These bands give a very different feel to the pedals, with much more "bite"; the sensation is of a definite grip of the drum, particularly noticeable on the brake drum. Suspension and wheels Model T suspension employed a transversely mounted semi-elliptical spring for each of the front and rear beam axles, which allowed a great deal of wheel movement to cope with the dirt roads of the time. The front axle was drop-forged as a single piece of vanadium steel. Ford twisted many axles through eight full rotations (2880 degrees) and sent them to dealers to be put on display to demonstrate their superiority. The Model T did not have a modern service brake. The right foot pedal applied a band around a drum in the transmission, thus stopping the rear wheels from turning. The previously mentioned parking brake lever operated band brakes acting on the inside of the rear brake drums, which were an integral part of the rear wheel hubs. Optional brakes that acted on the outside of the brake drums were available from aftermarket suppliers. Wheels were wooden artillery wheels, with steel welded-spoke wheels available in 1926 and 1927. Tires were pneumatic clincher type, in diameter, wide in the rear, in the front. Clinchers needed much higher pressure than today's tires, typically , to prevent them from leaving the rim at speed. Flat tires were a common problem. Balloon tires became available in 1925. They were all around.
Balloon tires were closer in design to today's tires, with steel wires reinforcing the tire bead, making lower pressure possible – typically – giving a softer ride. The steering gear ratio was changed from 4:1 to 5:1 with the introduction of balloon tires. The old nomenclature for tire size changed from measuring the outer diameter to measuring the rim diameter so (rim diameter) × (tire width) wheels has about the same outer diameter as clincher tires. All tires in this time period used an inner tube to hold the pressurized air; tubeless tires were not generally in use until much later. Wheelbase is and standard track width was – track could be obtained on special order, "for Southern roads", identical to the pre-Civil War track gauge for many railroads in the former Confederacy. The standard 56-inch track being very near the inch standard railroad track gauge, meant that Model Ts could be and frequently were, fitted with flanged wheels and used as motorized railway vehicles or "speeders". The availability of a version meant the same could be done on the few remaining Southern railways – these being the only nonstandard lines remaining, except for a few narrow-gauge lines of various sizes. Although a Model T could be adapted to run on track as narrow as gauge (Wiscasset, Waterville and Farmington RR, Maine has one), this was a more complex alteration. Colors By 1918, half of all the cars in the U.S. were Model Ts. In his autobiography, Ford reported that in 1909 he told his management team, "Any customer can have a car painted any color that he wants so long as it is black." However, in the first years of production from 1908 to 1913, the Model T was not available in black, but rather only in gray, green, blue, and red. Green was available for the touring cars, town cars, coupes, and Landaulets. Gray was available for the town cars only and red only for the touring cars. By 1912, all cars were being painted midnight blue with black fenders. Only in 1914 was the "any color so long as it is black" policy finally implemented. It is often stated Ford suggested the use of black from 1914 to 1925 due to the low cost, durability, and faster drying time of black paint in that era. However, there is no evidence that black dried any faster than any other dark varnishes used at the time for painting. Paint choices in the American automotive industry, as well as in others (including locomotives, furniture, bicycles, and the rapidly expanding field of electrical appliances), were shaped by the development of the chemical industry. These included the disruption of dye sources during World War I and the advent, in the mid-1920s, of new nitrocellulose lacquers that were faster-drying and more scratch-resistant, and obviated the need for multiple coats; understanding the choice of paints for the Model T era and the years immediately following requires an understanding of the contemporaneous chemical industry. During the lifetime production of the Model T, over 30 types of black paint were used on various parts of the car. These were formulated to satisfy the different means of applying the paint to the various parts, and had distinct drying times, depending on the part, paint, and method of drying. Body Although Ford classified the Model T with a single letter designation throughout its entire life and made no distinction by model years, enough significant changes to the body were made over the production life that the car may be classified into several style generations. 
The most immediately visible and identifiable changes were in the hood and cowl areas, although many other modifications were made to the vehicle. 1909–1914 – Characterized by a nearly straight, five-sided hood, with a flat top containing a center hinge and two side sloping sections containing the folding hinges. The firewall is flat from the windshield down with no distinct cowl. For these years, acetylene gas flame headlights were used because the flame is resistant to wind and rain. Thick concave mirrors combined with magnifying lenses projected the acetylene flame light.The fuel tank is placed under the front seat. 1915–1916 – The hood design is nearly the same five-sided design with the only obvious change being the addition of louvers to the vertical sides. A significant change to the cowl area occurred with the windshield relocated significantly behind the firewall and joined with a compound-contoured cowl panel. In these years electric headlights replaced carbide headlights. 1917–1923 – The hood design was changed to a tapered design with a curved top. The folding hinges were now located at the joint between the flat sides and the curved top. This is sometimes referred to as the "low hood" to distinguish it from the later hoods. The back edge of the hood now met the front edge of the cowl panel so that no part of the flat firewall was visible outside of the hood. This design was used the longest and during the highest production years, accounting for about half of the total number of Model Ts built. 1923–1925 – This change was made during the 1923 calendar year, so models built earlier in the year have the older design, while later vehicles have the newer design. The taper of the hood was increased and the rear section at the firewall is about an inch taller and several inches wider than the previous design. While this is a relatively minor change, the parts between the third and fourth generations are not interchangeable. 1926–1927 – This design change made the greatest difference in the appearance of the car. The hood was again enlarged, with the cowl panel no longer a compound curve and blended much more with the line of the hood. The distance between the firewall and the windshield was also increased significantly. This style is sometimes referred to as the "high hood". The styling on the last "generation" was a preview for the following Model A, but the two models are visually quite different, as the body on the A is much wider and has curved doors as opposed to the flat doors on the T. Diverse applications When the Model T was designed and introduced, the infrastructure of the world was quite different from today's. Pavement was a rarity except for sidewalks and a few big-city streets. (The sense of the term "pavement" as equivalent with "sidewalk" comes from that era, when streets and roads were generally dirt and sidewalks were a paved way to walk along them.) Agriculture was the occupation of many people. Power tools were scarce outside factories, as were power sources for them; electrification, like pavement, was found usually only in larger towns. Rural electrification and motorized mechanization were embryonic in some regions and nonexistent in most. Henry Ford oversaw the requirements and design of the Model T based on contemporary realities. Consequently, the Model T was (intentionally) almost as much a tractor and portable engine as it was an automobile. It has always been well regarded for its all-terrain abilities and ruggedness. 
It could travel a rocky, muddy farm lane, cross a shallow stream, climb a steep hill, and be parked on the other side to have one of its wheels removed and a pulley fastened to the hub for a flat belt to drive a bucksaw, thresher, silo blower, conveyor for filling corn cribs or haylofts, baler, water pump, electrical generator, and many other applications. One unique application of the Model T was shown in the October 1922 issue of Fordson Farmer magazine. It showed a minister who had transformed his Model T into a mobile church, complete with small organ. During this era, entire automobiles (including thousands of Model Ts) were hacked apart by their owners and reconfigured into custom machinery permanently dedicated to a purpose, such as homemade tractors and ice saws. Dozens of aftermarket companies sold prefab kits to facilitate the T's conversion from car to tractor. The Model T had been around for a decade before the Fordson tractor became available (1917–18), and many Ts had been converted for field use. (For example, Harry Ferguson, later famous for his hitches and tractors, worked on Eros Model T tractor conversions before he worked with Fordsons and others.) During the next decade, Model T tractor conversion kits were harder to sell, as the Fordson and then the Farmall (1924), as well as other light and affordable tractors, served the farm market. But during the Depression (1930s), Model T tractor conversion kits had a resurgence, because by then used Model Ts and junkyard parts for them were plentiful and cheap. Like many popular car engines of the era, the Model T engine was also used on home-built aircraft (such as the Pietenpol Sky Scout) and motorboats. An armored-car variant (called the "FT-B") was developed in Poland in 1920 due to the high demand during the Polish-Soviet war in 1920. Many Model Ts were converted into vehicles that could travel across heavy snows with kits on the rear wheels (sometimes with an extra pair of rear-mounted wheels and two sets of continuous track to mount on the now-tandemed rear wheels, essentially making it a half-track) and skis replacing the front wheels. They were popular for rural mail delivery for a time. The common name for these conversions of cars and small trucks was "snowflyers". These vehicles were extremely popular in the northern reaches of Canada, where factories were set up to produce them. A number of companies built Model T–based railcars. In The Great Railway Bazaar, Paul Theroux mentions a rail journey in India on such a railcar. The New Zealand Railways Department's RM class included a few. The American LaFrance company modified more than 900 Model Ts for use in firefighting, adding tanks, hoses, tools and a bell. Model T fire engines were in service in North America, Europe, and Australia. A 1919 Model T equipped to fight chemical fires has been restored and is on display at the North Charleston Fire Museum in South Carolina. Production Mass production The knowledge and skills needed by a factory worker were reduced to 84 areas. When introduced, the T used the building methods typical at the time, assembly by hand, and production was small. The Ford Piquette Avenue Plant could not keep up with demand for the Model T, and only 11 cars were built there during the first full month of production. More and more machines were used to reduce the complexity within the 84 defined areas. In 1910, after assembling nearly 12,000 Model Ts, Henry Ford moved the company to the new Highland Park complex. 
During this time the Model T production system (including the supply chain) transitioned into an iconic example of assembly-line production. In subsequent decades it would also come to be viewed as the classic example of the rigid, first-generation version of assembly line production, as opposed to flexible mass production of higher quality products. As a result, Ford's cars came off the line in three-minute intervals, much faster than previous methods, reducing production time from 12.5 hours before to 93 minutes by 1914, while using less manpower. In 1914, Ford produced more cars than all other automakers combined. The Model T was a great commercial success, and by the time Ford made its 10 millionth car, half of all cars in the world were Fords. It was so successful Ford did not purchase any advertising between 1917 and 1923; instead, the Model T became so famous, people considered it a norm. More than 15 million Model Ts were manufactured in all, reaching a rate of 9,000 to 10,000 cars a day in 1925, or 2 million annually, more than any other model of its day, at a price of just $260 ($ today). Total Model T production was finally surpassed by the Volkswagen Beetle on February 17, 1972, while the Ford F-Series (itself directly descended from the Model T roadster pickup) has surpassed the Model T as Ford's all-time best-selling model. Henry Ford's ideological approach to Model T design was one of getting it right and then keeping it the same; he believed the Model T was all the car a person would, or could, ever need. As other companies offered comfort and styling advantages, at competitive prices, the Model T lost market share and became barely profitable. Design changes were not as few as the public perceived, but the idea of an unchanging model was kept intact. Eventually, on May 26, 1927, Ford Motor Company ceased US production and began the changeovers required to produce the Model A. Some of the other Model T factories in the world continued a short while, with the final Model T produced at the Cork, Ireland plant in December 1928. Model T engines continued to be produced until August 4, 1941. Almost 170,000 were built after car production stopped, as replacement engines were required to service already produced vehicles. Racers and enthusiasts, forerunners of modern hot rodders, used the Model Ts' blocks to build popular and cheap racing engines, including Cragar, Navarro, and, famously, the Frontenacs ("Fronty Fords") of the Chevrolet brothers, among many others. The Model T employed some advanced technology, for example, its use of vanadium steel alloy. Its durability was phenomenal, and some Model Ts and their parts are in running order over a century later. Although Henry Ford resisted some kinds of change, he always championed the advancement of materials engineering, and often mechanical engineering and industrial engineering. In 2002, Ford built a final batch of six Model Ts as part of their 2003 centenary celebrations. These cars were assembled from remaining new components and other parts produced from the original drawings. The last of the six was used for publicity purposes in the UK. Although Ford no longer manufactures parts for the Model T, many parts are still manufactured through private companies as replicas to service the thousands of Model Ts still in operation today. On May 26, 1927, Henry Ford and his son Edsel drove the 15-millionth Model T out of the factory. This marked the famous automobile's official last day of production at the main factory. 
Price and production The moving assembly line system, which started on October 7, 1913, allowed Ford to reduce the price of his cars. As he continued to fine-tune the system, Ford was able to keep reducing costs significantly. As volume increased, he was also able to lower prices, since fixed costs, including large supply-chain investments, were spread over a larger number of vehicles. Other factors, such as material costs and design changes, also reduced the price. As Ford had market dominance in North America during the 1910s, competitors reduced their prices to stay competitive, while offering features that were not available on the Model T, such as a wide choice of colors, body styles, and interior appearance; competitors also benefited from the reduced costs of raw materials and from infrastructure benefits to supply-chain and ancillary manufacturing businesses. In 1909, the cost of the Runabout started at . By 1925 it had been lowered to . The figures below are US production numbers compiled by R.E. Houston, Ford Production Department, August 3, 1927. The figures between 1909 and 1920 are for Ford's fiscal year. From 1909 to 1913, the fiscal year ran from October 1 to September 30 of the following calendar year, with the year number being the year in which it ended. For the 1914 fiscal year, the year was October 1, 1913, through July 31, 1914. Starting in August 1914, and through the end of the Model T era, the fiscal year was August 1 through July 31. Beginning with January 1920, the figures are for the calendar year. The above tally includes a total of 14,689,525 vehicles. Ford said the last Model T was the 15 millionth vehicle produced. Recycling Henry Ford used wood scraps from the production of Model Ts to make charcoal briquettes. Originally named Ford Charcoal, the name was changed to Kingsford Charcoal after the Iron Mountain Ford Plant closed in 1951 and the Kingsford Chemical Company was formed and continued the wood distillation process. E. G. Kingsford, Ford's cousin by marriage, brokered the selection of the new sawmill and wood distillation plant site. Lumber for production of the Model T came from the same location: the Iron Mountain Ford plant, built in 1920, which incorporated a sawmill where lumber from Ford-purchased land in the Upper Peninsula of Michigan was cut and dried. Scrap wood was distilled at the Iron Mountain plant for its wood chemicals, with the end by-product being lump charcoal. This lump charcoal was modified and pressed into briquettes and mass-marketed by Ford. First global car The Ford Model T was the first automobile built in various countries simultaneously: it was produced in Walkerville, Canada, and in Trafford Park, Greater Manchester, England, starting in 1911, and was later assembled in Germany, Argentina, France, Spain, Denmark, Norway, Belgium, Brazil, Mexico, and Japan, as well as several locations throughout the US. Ford made use of the knock-down kit concept almost from the beginning of the company, as freight and production costs from Detroit had Ford assembling vehicles in major metropolitan centers of the US. The Aeroford was an English automobile manufactured in Bayswater, London, from 1920 to 1925. It was a Model T with a distinct hood and grille to make it appear to be a totally different design, in what was later called badge engineering. The Aeroford sold from £288 in 1920, dropping to £168–214 by 1925. It was available as a two-seater, four-seater, or coupé.
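The fiscal-year conventions behind the production figures (described in the Price and production passage above) can be restated as a small date-mapping rule. The function below is only a sketch of those stated conventions, not an official Ford definition; the function name and period labels are chosen for the example, and it assumes the August–July fiscal years are, like the earlier ones, named for the year in which they end.

```python
from datetime import date

def ford_reporting_year(d: date) -> tuple[str, int]:
    """Map a date to the reporting period used for the production figures
    described above (a sketch of the stated conventions only)."""
    if d >= date(1920, 1, 1):
        return ("calendar year", d.year)
    if d >= date(1914, 8, 1):
        # From August 1914 onward, fiscal years ran August 1 - July 31
        # (assumed here to be named for the year in which they end).
        return ("fiscal year", d.year + 1 if d.month >= 8 else d.year)
    if d >= date(1913, 10, 1):
        # Shortened transition year: October 1, 1913 - July 31, 1914.
        return ("fiscal year", 1914)
    # 1909-1913: October 1 - September 30, named for the year in which it ended.
    return ("fiscal year", d.year + 1 if d.month >= 10 else d.year)


print(ford_reporting_year(date(1912, 11, 15)))  # ('fiscal year', 1913)
print(ford_reporting_year(date(1915, 7, 1)))    # ('fiscal year', 1915)
print(ford_reporting_year(date(1922, 3, 1)))    # ('calendar year', 1922)
```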
Advertising and marketing Ford created a massive publicity machine in Detroit to ensure every newspaper carried stories and advertisements about the new product. Ford's network of local dealers made the car ubiquitous in virtually every city in North America. A large part of the success of Ford's Model T stems from the innovative strategy which introduced a large network of sales hubs making it easy to purchase the car. As independent dealers, the franchises grew rich and publicized not just the Ford but the very concept of automobiling; local motor clubs sprang up to help new drivers and to explore the countryside. Ford was always eager to sell to farmers, who looked on the vehicle as a commercial device to help their business. Sales skyrocketed – several years posted around 100 percent gains on the previous year. 24 Hours of Le Mans Parisian Ford dealer Charles Montier and his brother-in-law Albert Ouriou entered a heavily modified version of the Model T (the "Montier Special") in the first three 24 Hours of Le Mans. They finished 14th in the inaugural 1923 race. Car clubs Today, four main clubs exist to support the preservation and restoration of these cars: the Model T Ford Club International, the Model T Ford Club of America and the combined clubs of Australia. With many chapters of clubs around the world, the Model T Ford Club of Victoria has a membership with a considerable number of uniquely Australian cars. (Australia produced its own car bodies, and therefore many differences occurred between the Australian bodied tourers and the US/Canadian cars.) In the UK, the Model T Ford Register of Great Britain celebrated its 50th anniversary in 2010. Many steel Model T parts are still manufactured today, and even fiberglass replicas of their distinctive bodies are produced, which are popular for T-bucket style hot rods (as immortalized in the Jan and Dean surf music song "Bucket T", which was later recorded by The Who). In 1949, more than twenty years after the end of production, 200,000 Model Ts were registered in the United States. In 2008, it was estimated that about 50,000 to 60,000 Ford Model Ts remain roadworthy. In popular media A 1920 Ford Model T is featured in the 1920 Harold Lloyd comedy short Get Out and Get Under. The Ford Model T was the car of choice for comedy duo Stan Laurel and Oliver Hardy. It was used in most of their short and feature films. In 1966, Belgian comic book authors Maurice Tillieux and Francis created the comic adventures of a character named Marc Lebut and his Model T. Another Belgian comic strip, Piet Pienter en Bert Bibber, has the protagonists driving a model T in the earliest stories; they later move with the times and change into more recent cars, but always by Ford. The phrase to "go the way of the Tin Lizzie" is a colloquialism referring to the decline and elimination of a popular product, habit, belief, or behavior as a now outdated historical relic that has been replaced by something new. In Aldous Huxley's Brave New World, Henry Ford is regarded as a messianic figure, Christian crosses have been truncated to Ts, and vehicles are called "flivvers" (from a slang reference to the Model T). Moreover, the calendar is converted to an A.F. ("After Ford") system, wherein the calendar begins (AF 1) with the introduction of the Model T (AD 1908). The well-known phrase God is in his heaven, all is right with the world (originating from Robert Browning's Pippa Passes) is changed into Ford is in his flivver, all is right with the world. 
In 1953, The Ford 50th Anniversary Show was broadcast live on NBC and CBS attracting an audience of 50 million. Edward R. Murrow introduced a comic sketch featuring puppets "Kukla, Fran and Ollie" driving and singing in a Ford Model T automobile. The segment also includes clips of the Model T in silent movies, including Harold Lloyd and Keystone Cops comedies. Murrow then reviewed the history of the Model T and Ford's development of the assembly line. Forty years after the broadcast, television critic Tom Shales recalled the 1953 broadcast as both "a landmark in television" and "a milestone in the cultural life of the '50s". The Model T has a featured role in the Walt Disney sci-fi comedy The Absent-Minded Professor, in which the classic car flies. In a 1964 episode of Hazel, sponsored by Ford and featuring the company's current models in the stories, Hazel acquires a 1920 Model T. For the 1965–66 sitcom My Mother the Car, a 1924 Model T was modified to represent the show's fictional "1928 Porter" touring car. Lizzie from the Cars franchise is based on a 1923 Ford Model T coupe. In the alternate history series Southern Victory by Harry Turtledove, the Model T was the most popular car in the United States before the Great War. It was so desirable that it was even exported to the Confederate States, which won the American Civil War in 1862. However, Model Ts in the Confederacy proved difficult to maintain as none had been built locally. During the Great War, many Model Ts served as staff cars for US generals and officers. Archie Andrews from the Archie Comics drove a 1916 Model T from 1941 (when such a car would have been 25 years old) until 1983. In the 1947 Donald Duck cartoon, Wide Open Spaces, Donald's car (based on a 1947 Buick Roadmaster) is crushed by a large boulder and is somehow molded into a Model T. In Gerry Anderson's The Secret Service, a Ford Model T was used regularly in the show. Gallery Model T chronology See also New Zealand RM class (Model T Ford) – a 1925 experimental railcar based on a Model T powertrain Piper J-3 Cub, the 1930s/40s American light aircraft that developed a similar degree of ubiquity in general aviation circles to the Model T Lakeside Foundry Notes and references Bibliography External links FordModelT.net – Resource for Model T Owners and Enthusiasts Model T Ford Club of America (USA) Model T Ford Club International First and second web pages of Old Rhinebeck Aerodrome's vintage vehicle collection, featuring five Model T-based vehicles 1900s cars 1910s cars 1920s cars Brass Era vehicles Cars introduced in 1908 Convertibles Model T Full-size vehicles History of Detroit 1908 establishments in the United States Motor vehicles manufactured in the United States Pickup trucks Front mid-engine, rear-wheel-drive vehicles
28170760
https://en.wikipedia.org/wiki/OS/VS2
OS/VS2
Operating System/Virtual Storage 2 (OS/VS2) is the successor operating system to OS/360 MVT in the OS/360 family. SVS refers to OS/VS2 Release 1; MVS refers to OS/VS2 Release 2 and later. IBM mainframe operating systems
41908522
https://en.wikipedia.org/wiki/Nokia%20Fastlane
Nokia Fastlane
Nokia Fastlane is a user interface from Nokia, used on the Nokia Asha platform and Nokia X platform. Fastlane is a timeline of the user's recent activity on the phone. Fastlane can be accessed in Asha OS by swiping left or right from the Start screen. The first device to run Fastlane was the Nokia Asha 501. Usage Fastlane includes a status-update feature similar to Facebook's "What's On Your Mind" prompt, as well as an "Upcoming Events" feature. Compatible with Nokia Asha software platform Nokia Asha Software 1.0 Nokia Asha Software 1.1 Nokia Asha Software 1.2 Nokia Lumia Nokia X platform Nokia X Software 1.0.1 Nokia X Software 1.1.1 Nokia X Software 1.1.2.2 Nokia X Software 2.0 Compatible phones Nokia Nokia Asha 230 Nokia Asha 500 Nokia Asha 501 Nokia Asha 502 Nokia Asha 503 Nokia X (1st version) Nokia X+ Nokia XL Nokia X2 (2014) See also Nokia Nokia Asha 501 Nokia Asha platform Nokia X Graphical user interfaces
220684
https://en.wikipedia.org/wiki/Index%20of%20Internet-related%20articles
Index of Internet-related articles
This page provides an index of articles thought to be Internet or Web related topics. A AARNet - Abilene Network - Access control list - Ad hoc network - Address resolution protocol - ADSL - AirPort - All your base are belong to us - AOL - APNIC - AppleTalk - Application Configuration Access Protocol - Archimedes Plutonium - Archie search engine - ARIN - ASN.1 - Asynchronous Transfer Mode - Auction - Authentication - Automatic teller machine - Autonomous system - Awards B Yahoo! Babel Fish - Backbone cabal - Base - Bet exchange - Biefeld-Brown effect - Blank media tax - Bogon filtering - Bomb-making instructions on the internet - Book - Bookmark - Border gateway protocol - Broadband Integrated Services Digital Network - Broadband Internet - Bulletin board system C Cable modem - Carrier sense multiple access - Carrier sense multiple access with collision detection - Carrier sense multiple access with collision avoidance - CDDB - Content-control software - Chain letter - Channel access method - Charles Stark Draper Prize - Cisco Systems, Inc. - Classless Inter-Domain Routing - Code Red worm - Common Gateway Interface - Communications protocol - Component object model - Computer - Computer addiction - Computer-assisted language learning - computer network - Computer worm - Computing technology - Concurrent Versions System - Consumer privacy - Content-control software - Content delivery - Coordinated Universal Time - Core router - CSMA/CARP - Customer privacy - Cyber law - Cyberpunk - Cybersex - Cyberspace D Darknet - DDP - Defense Advanced Research Projects Agency - del.icio.us - Delivermail - Demilitarized zone (computing) - Denial of service - DHCP - Dial-up - Dial-up access - DiffServ - Digital divide - Digital literacy - Digital Equipment Corporation - Digital subscriber line - DirecTV - DISH Network - Disk image - Distance-vector routing protocol - DNS - Domain forwarding - Domain name registry - DVB - Dynamic DNS E E-card - E-democracy - E-mail - E-Services - EBay - Eldred v. 
Ashcroft - Electronic mailing list - Electronic money - Embrace, extend and extinguish - End-to-end connectivity - Enterprise content management - Entropy - Epoch - Ethernet - European Installation Bus - EverQuest - Everything2 - Extended ASCII - Extranet - F Fan fiction - FAQ - Federal Standard 1037C - Fiber optic - Fidonet - File sharing - File transfer protocol - Finger protocol - Firefox - Firewall - Flaming - Floppy disk - Focus group - Form - FORscene - Frame relay - FTP G Gecko - Geocaching - G.hn - GIMPS - Global Internet usage - Glossary of Internet-related terminology - GNU - Gnutella - Google - Gopher protocol H Hacker ethic - Hate sites - HDLC - Head end - Hierarchical routing - High speed internet - Hilary Rosen - History of radio - History of the Internet - Homepage - HomePNA - Hop (telecommunications) - HTML - HTTP - HTTPS - Human–computer interaction I ICANN - ICQ - Identity theft - IEEE 802.11 - IMAP - IMAPS - Indigenous Dialogues - Infocom - Information Age - Information Awareness Office - Instant messaging - Integrated Services Digital Network - Internet - Internet access in the United States - Internet Archive - Internet as a source of prior art - Internet backbone - Internet Capitalization Conventions - Internet censorship - Internet censorship circumvention - Internet Chess Club - Internet child pornography - Internet Control Message Protocol - Internet democracy - Internet Engineering Task Force - Internet friendship - Internet Group Management Protocol - Internet minute - Internet organizations - Internet phone - Internet pornography - Internet Protocol - Internet protocol suite - Internet radio - Internet-related terminology - Internet Relay Chat - Internet romance - Internet service provider - Internet slang - Internet Society - Internet standard - Internet Storm Center - Internet time - Internet troll - Internet2 - Internetworking - InterNIC - Interpedia - Interplanetary Internet - InterWiki - Intranet - iOS - IP address - IP protocol - IPv4 - IPv6 - IPX - IRC - ISCSI - ISDN - ISO 8601 - ISO 8859-1 J JAIN - James H. 
Clark - Java applet - Java platform - JavaScript - Jon Postel - JPEG - JSTOR K KA9Q - Knowledge Aided Retrieval in Activity Context - Ken McVay - Kerberos - KNX L LACNIC - Large Technical System - Larry Page - Legal aspects of computing - Lightweight Directory Access Protocol - Link-state routing protocol - Linux Network Administrators' Guide - List of the oldest currently registered Internet domain names - LiveJournal - Load balancing - Local area network - Loopback - Lycos M Mailbomb - Make money fast - Matt Drudge - Media player (application software) - Medium - Melissa worm - MenuetOS - Metcalfe's law - Metropolitan area network - Microsoft .NET - Microsoft SQL Server - Miller test - Mirror - Modem - Modulation - Morris worm - Mozilla Firefox - Mozilla Thunderbird - MPEG-1 Audio Layer II - Multichannel video programming distributor - Multicast - MUMPS MYSPACE - N Napster - National Broadband Network - NetBIOS - Netiquette - Netscape Communicator - Netwar - Network address translation - Network Control Program - Network File System - Network Information Centre - Network mapping - Network News Transfer Protocol - Network time protocol - News agency - News aggregator - News client (newsreader) - News server - Non-repudiation - Novell - NSD - NSFNet - NTLM - Nude celebrities on the Internet O Online - online banking - Online Books Page - Open mail relay - Open shortest path first - Open Site - Opera (web browser) - Organizations - OS/390 - OSI model - OSPF - Out-of-band data P Packet radio - Packet switching - Parasitic computing - Parental controls - Paul Mockapetris - Paul Vixie - PayPal - Peer-to-peer - Peering - Pen pal - Perl - Personal area network - Ping - PKZIP - Plug-and-play - Point-to-Point Protocol - Political media - POP - POP3 - Port forwarding - Port scan - Pretty Good Privacy - Primary mirror - Private IP address - Project Gutenberg - Protocol - Protocol stack - Pseudonymous remailer - PSOS (real-time operating system) - Psychological effects of Internet use - Public switched telephone network - Publishing Q QNX - QOTD - Quality of service - QuickTime R Red Hat - Regex - Regulation of Investigatory Powers Act 2000 - RPC - Resource Reservation Protocol - Request for Comments - Reverse Address Resolution Protocol - RIPE - RISC OS - Root nameserver - Route analytics - Router (computing) - Routing - Rooster Teeth - Routing information protocol - RSA - RTP - RTSP S SCADA - Scientology vs. 
the Internet - scp - Script kiddie - Secret identity - Secure copy - Secure file transfer program - Secure shell - Sequenced packet exchange - Sergey Brin - Serial line IP - Serial Line Internet Protocol - Serial (podcast) - SMB - SFTP - Signalling System 7 - Simple network management protocol - Slashdot effect - Smiley - Simple File Transfer Protocol - SMTP - Social engineering (security) - Social impact of YouTube - Social media - Software development kit - Sohonet - Spam - SPX - Spyware - SQL slammer worm - SSH - SSH File Transfer Protocol - Stateful firewall - Stateless firewall - Steganography - Stub network T TCP - TCP and UDP port numbers - Ted Nelson - Telecommunication - Telecommunications network - Telecommunications traffic engineering - Teledesic - Telegraphy - Teleprinter - Telnet - The Cathedral and the Bazaar - Think tank - Thunderbird - Time to live - Timeline of communication technology - Timeline of computing 1950-1979 - Timeline of computing 1980-1989 - Tiscali - Token Ring - Top-level domain - Traceroute - Transmission Control Protocol - Transmission system - Transport Layer Security - Trusted computing - TTL U UDDI - Ubiquitous Knowledge Processing Lab - Ultrix - Ungermann-Bass - Uniform Resource Identifier - Uniform Resource Locator - Universal Plug and Play - University of California, Berkeley - Usenet - Usenet cabal - USENET Cookbook - User datagram protocol - UTF-16 - UUCP V vCard - Victorian Internet - Vint Cerf - Virtual community - Voice over IP W WAP - WAI - War driving - Warez - Warhol worm - WAV - Web 2.0 - Web annotation - Web application - Web browser - Webcomic - Web commerce - Web design - Web directory - Web hosting - Web index - Web portal - Web search engine - Web server - Web service - Web traffic - Web television - Webcam - WebDAV - Webmail - Webpage - WebQuest - Website - Whois - Wi-Fi - Wide area information server - Wide area network - Wiki software - Wikipedia - WikiWikiWeb - Windows 3.x - Winsock - Wireless access point - Wireless Application Protocol - Wireless broadband - Wireless community network - World Wide Web - WorldForge X X.25 - XDR - Xerox Network Systems - XML - XS4ALL Y YTMND - Yahoo! - Yahoo! Internet Life Z Zephyr See also :Category:Computing terminology List of computing topics Index Index Internet
1125773
https://en.wikipedia.org/wiki/VOB
VOB
VOB (for video object) is the container format in DVD-Video media. VOB can contain digital video, digital audio, subtitles, DVD menus and navigation contents multiplexed together into a stream form. Files in VOB format may be encrypted. File format Files in VOB format have a .vob filename extension and are typically stored in the VIDEO_TS directory at the root of a DVD. The VOB format is based on the MPEG program stream format, but with additional limitations and specifications in the private streams. The MPEG program stream has provisions for non-standard data (as used in VOB files) in the form of so-called private streams. VOB files are a very strict subset of the MPEG program stream standard. While all VOB files are MPEG program streams, not all MPEG program streams comply with the definition for a VOB file. Analogous to the MPEG program stream, a VOB file can contain H.262/MPEG-2 Part 2 or MPEG-1 Part 2 video, MPEG-1 Audio Layer II or MPEG-2 Audio Layer II audio, but usage of these compression formats in a VOB file has some restrictions in comparison to the MPEG program stream. In addition, VOB can contain linear PCM, AC-3 or DTS audio and subpictures (subtitles). VOB files cannot contain AAC audio (MPEG-2 Part 7), MPEG-4 compression formats and others, which are allowed in the MPEG program stream standard. On the DVD, all the content for one title set is contiguous, but broken up into 1 GB VOB files in order to be compatible with all operating systems, as some cannot read files larger than that size. Each VOB file must be less than or equal to one GB. Companion files VOB files may be accompanied with IFO and BUP files. These files respectively have .ifo and .bup filename extensions. IFO (information) files contain all the information a DVD player needs to know about a DVD so that the user can navigate and play all DVD content properly, such as where a chapter starts, where a certain audio or subtitle stream is located, information about menu functions and navigation. BUP (backup) files are exact redundant copies of IFO files, supplied to help in case of corruption. Video players may not allow DVD navigation when IFO or BUP files are absent. Images, video and audio used in DVD menus are stored in VOB files. Copy protection Almost all commercially produced DVD-Video titles use some restriction or copy protection method, which also affects VOB files. Copy protection is usually used for copyrighted content. Many DVD-Video titles are encrypted with Content Scramble System (CSS). This is a data encryption and communications authentication method designed to prevent copying video and audio data directly from the DVD-Video discs. Decryption and authentication keys needed for playing back encrypted VOB files are stored in the normally inaccessible lead-in area of the DVD and are used only by CSS decryption software (e.g., in a DVD player or software player). If someone is trying to copy the contents of an encrypted DVD-Video (e.g., VOB files) to a hard drive, an error can occur, because the DVD was not authenticated in the drive by CSS decryption software. Authentication of the disc allows the copying of individual VOB files without error, but the encryption keys will not be copied. If the copied undecrypted VOB files are opened in a player, they will request the keys from the DVD-ROM drive and will fail. 
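The program-stream structure described in the file format discussion above can be illustrated with a short script. The following is a minimal, illustrative Python sketch (not taken from any DVD specification or existing tool) that samples the start of an unencrypted .vob file, searches for the MPEG start-code prefix 0x000001 and tallies the byte that follows it; the stream-ID values used here (0xE0–0xEF for video, 0xC0–0xDF for MPEG audio, 0xBD for private stream 1 carrying AC-3, DTS, LPCM and subpictures) come from the MPEG-2 program stream standard, while the file path and the amount of data sampled are arbitrary assumptions.

# Illustrative sketch only: tally MPEG program-stream start codes in an unencrypted VOB.
# The sample size below is an assumption, not part of the VOB format itself.
from collections import Counter
import sys

PACK_HEADER = 0xBA        # pack start code (0x000001BA)
SYSTEM_HEADER = 0xBB      # system header start code
PRIVATE_STREAM_1 = 0xBD   # carries AC-3/DTS/LPCM audio and subpictures on DVD-Video
PRIVATE_STREAM_2 = 0xBF   # DVD navigation data

def tally_start_codes(path, sample_bytes=32 * 1024 * 1024):
    """Count the byte following each 0x000001 prefix in the first part of the file."""
    counts = Counter()
    with open(path, "rb") as f:
        data = f.read(sample_bytes)
    i = data.find(b"\x00\x00\x01")
    while i != -1 and i + 3 < len(data):
        counts[data[i + 3]] += 1
        i = data.find(b"\x00\x00\x01", i + 4)
    return counts

def describe(stream_id):
    if stream_id <= 0xB8:
        return "start code inside the video elementary stream"
    if stream_id == PACK_HEADER:
        return "pack header"
    if stream_id == SYSTEM_HEADER:
        return "system header"
    if stream_id == PRIVATE_STREAM_1:
        return "private stream 1 (AC-3/DTS/LPCM/subpictures)"
    if stream_id == PRIVATE_STREAM_2:
        return "private stream 2 (navigation)"
    if 0xC0 <= stream_id <= 0xDF:
        return "MPEG audio stream"
    if 0xE0 <= stream_id <= 0xEF:
        return "MPEG video stream"
    return "other"

if __name__ == "__main__":
    for sid, n in sorted(tally_start_codes(sys.argv[1]).items()):
        print(f"0x{sid:02X}  {describe(sid)}: {n}")

Because payload bytes can themselves happen to contain the 0x000001 pattern, the tallies are only approximate; a real demultiplexer parses the pack and PES packet lengths rather than scanning blindly, which is why dedicated tools are preferred for actual extraction.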
There are many CSS-decrypting programs, or ripping tools, such as libdvdcss, DeCSS, DVD Decrypter, AnyDVD and DVD Shrink, which allow a protected DVD-Video disc to be played without access to the original key, or to be copied to a hard disk unscrambled. In some countries, their use can violate the law (for example, when not limited to personal use). Playback A player of generic MPEG-2 files can usually play unencrypted VOB files that contain MPEG-1 Audio Layer II audio; other audio compression formats such as AC-3 or DTS are less widely supported. KMPlayer, VLC media player, GOM Player, Media Player Classic and more platform-specific players such as ALLPlayer can play VOB files. Other DVD containers Some DVD recorders use the DVD-VR format and store multiplexed audiovisual content in VRO containers. A VRO file is equivalent to a collection of DVD-Video VOB files, and can be played directly like a VOB if no editing is intended. Fragmented VRO files are not widely supported by software players and video editing software. Enhanced VOB (EVO) is also an extension of VOB, originally intended for the now-discontinued HD DVD video format. It can contain additional video and audio formats such as H.264 and AAC. See also Comparison of video container formats List of video editing software References External links doom9.org - What is on a DVD? DVD Digital container formats Filename extensions
619009
https://en.wikipedia.org/wiki/PlayStation%20Portable
PlayStation Portable
The PlayStation Portable (PSP) is a handheld game console developed and marketed by Sony Computer Entertainment. It was first released in Japan on December 12, 2004, in North America on March 24, 2005, and in PAL regions on September 1, 2005, and is the first handheld installment in the PlayStation line of consoles. As a seventh generation console, the PSP competed with the Nintendo DS. Development of the PSP was announced during E3 2003, and the console was unveiled at a Sony press conference on May 11, 2004. The system was the most powerful portable console when it was introduced, and was the first real competitor of Nintendo's handheld consoles after many challengers such as Nokia's N-Gage had failed. The PSP's advanced graphics capabilities made it a popular mobile entertainment device, which could connect to the PlayStation 2 and PlayStation 3, any computer with a USB interface, other PSP systems, and the Internet. The PSP also had a vast array of multimedia features such as video playback, and has been considered a portable media player as well. The PSP is the only handheld console to use an optical disc format – in this case, Universal Media Disc (UMD) – as its primary storage medium; both games and movies have been released on the format. The PSP was received positively by critics, and sold over 80 million units during its ten-year lifetime. Several models of the console were released, before the PSP line was succeeded by the PlayStation Vita, released in Japan in 2011 and worldwide a year later. The Vita has backward compatibility with PSP games that were released on the PlayStation Network through the PlayStation Store, which became the main method of purchasing PSP games after Sony shut down access to the store from the PSP on March 31, 2016. Hardware shipments of the PSP ended worldwide in 2014; production of UMDs ended when the last Japanese factory producing them closed in late 2016. History Sony Computer Entertainment first announced development of the PlayStation Portable at a press conference preceding E3 2003. Although samples were not presented, Sony released extensive technical details. CEO Ken Kutaragi called the device the "Walkman of the 21st century", a reference to the console's multimedia capabilities. Several gaming websites were impressed with the handheld's computing capabilities, and looked forward to its potential as a gaming platform. In the 1990s, Nintendo had dominated the handheld market since launching its Game Boy in 1989, experiencing close competition only from Bandai's WonderSwan (1999–2003) in Japan and Sega's Game Gear (1990-2001). In January 1999, Sony had released the briefly successful PocketStation in Japan as its first foray into the handheld gaming market. The SNK Neo Geo Pocket and Nokia's N-Gage also failed to cut into Nintendo's share. According to an IDC analyst in 2004, the PSP was the "first legitimate competitor to Nintendo's dominance in the handheld market". The first concept images of the PSP appeared at a Sony corporate strategy meeting in November 2003, and featured a model with flat buttons and no analog joystick. Although some reviewers expressed concern about the lack of an analog stick, these fears were allayed when the PSP was officially unveiled at the Sony press conference during E3 2004. Sony released a list of 99 developer companies that pledged support for the new handheld. Several game demos such as Konami's Metal Gear Acid and Studio Liverpool's Wipeout Pure were also shown at the conference. 
Launch On October 17, 2004, Sony announced that the PSP base model would be launched in Japan on December 12 that year for ¥19,800 (about US$181 in 2004) while the Value System would launch for ¥24,800 (about US$226). The launch was a success, with over 200,000 units sold on the first day of sales. Color variations were sold in bundle packs that cost around $200. On February 3, 2005, Sony announced that the PSP would be released in North America on March 24 in one configuration for an MSRP of US$249/CA$299. Some commentators expressed concern over the high price, which was almost US$20 higher than that of the Japanese model and over $100 higher than the Nintendo DS. Despite these concerns, the PSP's North American launch was a success; Sony said 500,000 units were sold in the first two days of sales, though it was also reported that this figure was below expectations. The PSP was originally intended to have a simultaneous PAL and North American launch, but on March 15, 2005, Sony announced that the PAL launch would be delayed due to high demand for the console in Japan and North America. The next month, Sony announced that the PSP would be launched in the PAL region on September 1, 2005 for €249/£179. Sony defended the high price by saying North American consumers had to pay local sales taxes and that the Value Added Tax (sales tax) was higher in the UK than the US. Despite the high price, the PSP's PAL launch was a success, with the console selling over 185,000 units in the UK. All stock of the PSP in the UK sold out within three hours of its launch, more than doubling the previous first-day sales record of 87,000 units set by the Nintendo DS. The system also enjoyed great success in other areas of the PAL region; over 25,000 units were pre-ordered in Australia and nearly one million units were sold across Europe in the system's first week of sales. Hardware The PlayStation Portable uses the common "bar" form factor. The original model measures approximately and weighs . The front of the console is dominated by the system's LCD screen, which is capable of 480 × 272 pixel display resolution with 24-bit color, outperforming the Nintendo DS. Also on the unit's front are four PlayStation face buttons (, , , ); the directional pad, the analog "nub", and several other buttons. The system also has two shoulder buttons, a USB 2.0 mini-B port on the top of the console, and a wireless LAN switch and power cable input on the bottom. The back of the PSP features a read-only Universal Media Disc (UMD) drive for access to movies and games, and a reader compatible with Sony's Memory Stick PRO Duo flash cards is located on the left of the system. Other features include an IrDA-compatible infrared port and a two-pin docking connector (this was discontinued in PSP-2000 and later); built-in stereo speakers and headphone port; and IEEE 802.11b Wi-Fi for access to the Internet, free online multiplayer gaming via PlayStation Network, and data transfer. The PSP uses two 333 MHz MIPS32 R4000 R4k-based CPUs, as a main CPU and Media Engine, a GPU running at 166 MHz, and includes 32 MB main RAM (64MB on PSP-2000 and later models), and 4 MB embedded DRAM split between the aforementioned GPU and Media Engine. The hardware was originally forced to run more slowly than it was capable of; most games ran at 222 MHz. With firmware update 3.50 on May 31, 2007, however, Sony removed this limit and allowed new games to run at 333 MHz. 
The PSP is powered by an 1800 mAh battery (1200 mAh on the 2000 and 3000 models) that provides between about three and six hours of gameplay, between four and five hours of video playback, or between eight and eleven hours of audio playback. To make the unit slimmer, the capacity of the PSP's battery was reduced from 1800 mAh to 1200 mAh in the PSP-2000 and 3000 models. Due to more efficient power use, however, the expected playing time is the same as that of older models. The original high-capacity batteries work on the newer models, giving increased playing time, though the battery cover does not fit. The batteries take about 1.5 hours to charge and last for between four-and-a-half and seven hours depending on factors such as screen brightness settings, the use of WLAN, and volume levels. In March 2008, Sony released the Extended Life Battery Kit in Japan, which included a bulkier 2200 mAh battery with a fitting cover. In Japan, the kit was sold with a specific-colored cover matching the many PSP variations available. The North American kit released in December 2008 was supplied with two new covers; one black and one silver. Revisions PSP-2000 The PSP-2000, marketed in PAL countries as the "PSP Slim", is the first redesign of the PlayStation Portable. The PSP-2000 system is slimmer and lighter than the original PSP, reduced from and from . At E3 2007, Sony released information about a slimmer and lighter version for the device, which was first released in Hong Kong on August 30, 2007, in Europe on , in North America on , in South Korea on , and in Australia on . The UK release for the PSP 2000 was 14 September. The serial port was modified to accommodate a new video-out feature, making it incompatible with older PSP remote controls. On the PSP-2000, games only output to external monitors and televisions in progressive scan mode. Non-game video outputs work in either progressive or interlaced mode. USB charging was introduced and the D-Pad was raised in response to complaints of poor performance and the responsiveness of the buttons was improved. Other changes include improved WLAN modules and micro-controller, and a thinner, brighter LCD screen. To improve the poor loading times of UMD games on the original PSP, the internal memory (RAM and Flash ROM) was doubled from 32 MB to 64 MB, part of which now acting as a cache, also improving the web browser's performance. PSP-3000 In comparison with the PSP-2000, the 3000, marketed in PAL areas as "PSP Slim & Lite" or "PSP Brite", has an improved LCD screen with an increased color range, five times the contrast ratio, a halved pixel response time, new sub-pixel structure, and anti-reflective technology to reduce outdoor glare. The disc tray, logos, and buttons were all redesigned, and a microphone was added. Games could now be output in either component or composite video using the video-out cable. Some outlets called this model "a minor upgrade". The PSP-3000 was released in North America on October 14, 2008, in Japan on , in Europe on , and in Australia on . In its first four days on sale in Japan, the PSP-3000 sold over 141,270 units, according to Famitsu; it sold 267,000 units during October. On its release, a problem with interlacing when objects were in motion on the PSP-3000 screen was noticed. Sony announced this problem would not be fixed. PSP Go (N1000) The PSP Go (model PSP-N1000) was released on October 1, 2009, in North American and European territories, and on November 1 in Japan. 
It was revealed prior to E3 2009 through Sony's Qore video on demand service. Its design is significantly different from other PSP models. The unit is 43% lighter and 56% smaller than the original PSP-1000, and 16% lighter and 35% smaller than the PSP-3000. Its rechargeable battery is not intended to be removed by the user. It has a 480 × 272 pixel LCD screen, which slides up to reveal the main controls. The overall shape and sliding mechanism are similar to those of Sony's mylo COM-2 Internet device. The PSP Go features 802.11b Wi-Fi like its predecessors, although the USB port was replaced with a proprietary connector. A compatible cable that connects to other devices' USB ports is included with the unit. The new multi-use connector allows video and sound output with the same connector using an optional composite or component AV cable. As with previous models, Sony also offers a cradle (PSP-N340) for charging, video out, and USB data transfer on the PSP Go. This model adds support for Bluetooth connectivity, which enables the playing of games using a Sixaxis or DualShock 3 controller. The use of the cradle with the controller allow players to use the PSP Go as a portable device and as a console, although the output is not upscaled. PlayStation 1 games can be played in full screen using the AV/component cable or the cradle. The PSP Go lacks a UMD drive, and instead has 16 GB of internal flash memory, which can be extended by up to 32 GB with the use of a Memory Stick Micro (M2). Games must be downloaded from the PlayStation Store. The removal of the UMD drive effectively region-locks the unit because it must be linked to a single, region-locked PlayStation Network account. While the PSP Go can download games to itself, users can also download and transfer games to the device from a PlayStation 3 console, or the Windows-based software Media Go. All downloadable PSP and PlayStation games available for older PSP models are compatible with the PSP Go. Sony confirmed that almost all UMD-based PSP games released after October 1, 2009, would be available to download and that most older UMD-only games would also be downloadable. In February 2010, it was reported that Sony might re-launch the PSP Go due to the lack of consumer interest and poor sales. In June 2010, Sony began bundling the console with 10 free downloadable games; the same offer was made available in Australia in July. Three free games for the PSP Go were offered in America. In October that year, Sony announced it would reduce the price of the unit. On April 20, 2011, the manufacturer announced that the PSP Go would be discontinued outside of North America so it could concentrate on the PlayStation Vita. PSP Street (E1000) The PSP-E1000, which was announced at Gamescom 2011, is a budget-focused model that was released across the PAL region on October 26 of that year. The E1000 lacks Wi-Fi capability and has a matte, charcoal-black finish similar to that of the slim PlayStation 3. It has a monaural speaker instead of the previous models' stereo speakers and lacks a microphone. This model also lacked the physical brightness buttons from the front of the handheld, instead offering brightness controls in the System Software's 'Power Save Settings' menu. An ice-white version was released in PAL territories on July 20, 2012. Bundles and colors The PSP was sold in four main configurations. The Base Pack, called the Core Pack in North America, contained the console, a battery, and an AC adapter. 
This version was available at launch in Japan and was released later in North America and Europe. Many limited editions of the PSP were bundled with accessories, games, or movies. The first Slim bundle released in North America, on September 6, 2007, was a Daxter package that included an ice-silver PSP, a Daxter UMD, the Family Guy: Freaking Sweet Collection, and a 1 GB Memory Stick. Limited-edition models were released in Japan on September 12, 2007; in North America and Europe on September 5; in Australia on September 12; and in the UK on October 26. The PSP-2000 was made available in piano black, ceramic white, ice silver, mint green, felicia blue, lavender purple, deep red, matte bronze, metallic blue, and rose pink as standard colors. Several special-edition consoles were colored and finished to sell with certain games, including Final Fantasy VII: Crisis Core (ice silver engraved), Star Ocean: First Departure (felicia blue engraved), Gundam (red gloss/matte black), and Monster Hunter Freedom (gold silkscreened) in Japan; Star Wars (Darth Vader silkscreened) and God of War: Chains of Olympus (Kratos silkscreened) in North America; The Simpsons (bright yellow with white buttons, analog stick and disc tray) in Australia and New Zealand; and Spider-Man (red gloss/matte black) in Europe. The PSP-3000 was made available in piano black, pearl white, mystic silver, radiant red, vibrant blue, spirited green, blossom pink, turquoise green and lilac purple. The limited-edition "Big Boss Pack" of Metal Gear Solid: Peace Walker had a camouflage pattern, while the God of War: Ghost of Sparta bundle pack included a black-and-red two-toned PSP. The Dissidia 012 Final Fantasy Cosmos & Chaos edition, released on March 3, 2011, features Amano artwork on its face plate. Comparison Below is a comparison of the different PlayStation Portable models: Software System software The PSP runs a custom operating system referred to as the System Software, which can be updated over the Internet or by loading an update from a Memory Stick or UMD. Sony offers no method for downgrading the System Software. While System Software updates can be used with consoles from any region, Sony recommends only downloading updates released for the model's region. System Software updates have added many features, including a web browser, Adobe Flash support, additional codecs for various media, PlayStation 3 (PS3) connectivity, and patches against security exploits and the execution of homebrew programs. The most recent version, numbered 6.61, was released on January 15, 2015. Apps and functionality Web browser The PSP Internet Browser is a version of the NetFront browser and was added to the system via an update. The browser supports most common web technologies, such as HTTP cookies, forms, CSS, and basic JavaScript, and offers basic tabbed browsing with a maximum of three tabs. Remote Play Remote Play allows the PSP to access many of the features of the PlayStation 3 console from a remote location using the PS3's WLAN capabilities, a home network, or the Internet. Using Remote Play, users can view photographs, listen to music, and watch videos stored on the PS3 or on connected USB devices. Remote Play also allows the PS3 to be turned on and off remotely and lets the PSP control audio playback from the PS3 to a home theater system.
Although most of the PS3's capabilities are accessible with Remote Play, playback of DVDs, Blu-ray Discs, PlayStation games, PlayStation 2 games, most PS3 games, and copy-protected files stored on the hard drive are not supported. VoIP access Starting with System Software version 3.90, the PSP-2000, 3000, and Go can use the Skype VoIP service. Due to hardware constraints it is not possible to use the service on the PSP-1000. The service allows Skype calls to be made over Wi-Fi and – on the Go – over the Bluetooth modem. Users must purchase Skype credit to make telephone calls. Room for PlayStation Portable At Tokyo Game Show 2009, Sony announced that a service similar to PlayStation Home, the PS3's online community-based service, was being developed for the PSP. Named "Room" (stylized R∞M), it was being beta-tested in Japan from October 2009 to April 2010. It could be launched directly from the PlayStation Network section of the XMB. As in Home, PSP owners would have been able to invite other PSP owners into their rooms to "enjoy real time communication". Development of Room halted on , 2010, due to feedback from the community. Digital Comics Reader Sony partnered with publishers such as Rebellion Developments, Disney, IDW Publishing, Insomnia Publications, , Marvel Comics, and Titan Books to release digitized comics on the PlayStation Store. The Digital Comics Reader application required PSP firmware 6.20. The PlayStation Store's "Comic" section premiered in Japan on , 2009, with licensed publishers ASCII Media Works, Enterbrain, Kadokawa, Kodansha, Shueisha, Shogakukan, Square-Enix, Softbank Creative (HQ Comics), Hakusensha, Bandai Visual, Fujimishobo, Futabasha, and Bunkasha. It launched in the United States and in English-speaking PAL countries on , 2009, though the first issues of Aleister Arcane, Astro Boy: Movie Adaptation, Star Trek: Enterprise Experiment and Transformers: All Hail Megatron were made available as early as through limited-time PlayStation Network redemption codes. In early 2010 the application was expanded to the German, French, Spanish and Italian languages. The choice of regional Comic Reader software is dictated by the PSP's firmware region; the Japanese Comic Reader will not display comics purchased from the European store, and vice versa. Sony shut down the Digital Comics service in September 2012. Homebrew development and custom firmware On June 15, 2005, hackers disassembled the code of the PSP and distributed it online. Initially the modified PSP allowed users to run custom code and a limited amount of protected software, including custom-made PSP applications such as a calculator or file manager. Sony responded to this by repeatedly upgrading the software. Some users were able to unlock the firmware to allow them to run more custom content and DRM-restricted software. Hackers were able to run protected software on the PSP through the creation of ISO loaders that could load copies of UMD games from a memory stick. Custom firmware including the M33 Custom Firmware, Minimum Edition (ME/LME) CFWm, and PRO CFWl were commonly seen in PSP systems. Also, there was an unsigned program made for the PSP, which allowed the handheld to downgrade its firmware to previous versions. Games There were 1,370 games released for the PSP during its 10-year lifespan. 
Launch games for the PSP included: Ape Escape: On the Loose (North America, Europe, Japan), Darkstalkers Chronicle: The Chaos Tower (North America, Europe, Japan), Dynasty Warriors (all regions), Lumines (North America, Europe, Japan), Metal Gear Acid (North America, Europe, Japan), Need for Speed: Underground Rivals (North America, Europe, Japan), NFL Street 2: Unleashed (North America, Europe), Ridge Racer (North America, Europe, Japan), Spider-Man 2 (2004) (North America, Europe, Japan), Tiger Woods PGA Tour (North America, Europe, Japan), Tony Hawk's Underground 2 Remix (North America, Europe), Twisted Metal: Head-On (North America, Europe), Untold Legends: Brotherhood of the Blade (North America, Europe, Japan), Wipeout Pure (all regions), and World Tour Soccer: Challenge Edition (North America, Europe). Additionally, Gretzky NHL and NBA were North America-exclusive launch titles. The best-selling PSP game is Grand Theft Auto: Liberty City Stories, which had sold 7.6 million copies as of October 2015. Other top-selling PSP games include Grand Theft Auto: Vice City Stories, Monster Hunter Portable 3rd, Gran Turismo, and Monster Hunter Freedom Unite. Retro City Rampage DX, released in July 2016, was the final PSP game to be released. The best-rated PSP games on Metacritic are God of War: Ghost of Sparta, Grand Theft Auto: Vice City Stories, and Daxter; Metal Gear Solid: Peace Walker is the only PSP game to receive a perfect score from Famitsū. During E3 2006, Sony Computer Entertainment America announced that the Greatest Hits range of budget titles would be extended to the PSP system. On , 2006, Sony Computer Entertainment America released the first batch of Greatest Hits titles, which included Ape Escape: On the Loose, ATV Offroad Fury: Blazin' Trails, Hot Shots: Open Tee, Twisted Metal: Head-On, and Wipeout Pure. The PSP Greatest Hits lineup consists of games that have sold 250,000 copies or more and have been on the market for nine months; PSP games in this lineup retail for $19.99 each. Downloadable games were limited to 1.8 GB, most likely to guarantee a potential UMD release. A section of the PlayStation Store is dedicated to "Minis": smaller, cheaper games available as downloads only. Demos and emulation In late 2004, Sony released a series of PSP demo games, including Duck In Water, world/ball, Harmonic City, and Luga City. Demos for commercial PSP games could be downloaded and booted directly from a Memory Stick. Demos were sometimes issued in UMD format and mailed out or given to customers at retail outlets. In addition, several older PlayStation games were re-released; these can be played on the PSP using emulation. This feature could be officially accessed through the PlayStation Network service for PlayStation 3, PSP, PlayStation Vita (or PlayStation TV), or a personal computer. Emulation of the PSP itself is well developed; one of the first emulators was JPCSP, which runs on Java. PPSSPP is currently the fastest and most compatible PSP emulator, supporting all major games. Data Install In mid-2009, as larger storage became available for the PSP, the ability to install game data became a feature in certain games. It remained mainly beneficial to UMD users: the large majority of such games only improved load times, while a small number added features, such as speech in Metal Gear Solid: Peace Walker.
Peripherals Official accessories for the console include an AC adapter, car adapter, headset, headphones with remote control, extended-life 2200 mAh battery, battery charger, carrying case, accessories pouch and cleaning cloth, and system pouch and wrist strap. A 1seg television tuner peripheral (model PSP-S310), designed specifically for the PSP-2000, was released in Japan on September 20, 2007. Sony sold a GPS accessory for the PSP-2000; this was released first in Japan and announced for the United States in 2008. It features maps on a UMD and offers driving directions and city guides. After the discontinuation of PSP, the Chinese electronics company Lenkeng released a PSP-to-HDMI converter called the LKV-8000. The device is compatible with the PSP-2000, PSP-3000 and PSP Go. To overcome the problem of PSP games being displayed in a small window surrounded by a black border, the LKV-8000 has a zoom button on the connector. A few other Chinese companies have released clones of this upscaler under different names, like the Pyle PSPHD42. The LKV-8000 and its variants have become popular among players and reviewers as the only means of playing and recording PSP gameplay on a large screen. Reception The PSP received generally positive reviews soon after launch; most reviewers noted similar strengths and weaknesses. CNET awarded the system 8.5 out of 10 and praised the console's powerful hardware and its multimedia capabilities but lamented the lack of a guard to cover the screen and the reading surface of UMD cartridges. Engadget praised the console's design, stating that "it is definitely one well-designed, slick little handheld". PC World commended the built-in Wi-Fi capability but criticized the lack of a web browser at launch, and the glare and smudges that resulted from the console's glossy exterior. Most reviewers also praised the console's large, bright viewing screen and its audio and video playback capabilities. In 2008, Time listed the PSP as a "gotta have travel gadget", citing the console's movie selection, telecommunications capability, and upcoming GPS functionality. The PlayStation Portable was initially seen as superior to the Nintendo DS when both devices were revealed in early 2004 because of the designers' emphasis on the technical accomplishments of the system. Nintendo of America President Reggie Fils-Aime, however, focused on the experience aspect of the Nintendo DS. The DS started to become more popular than the PSP early on because it attracted more third-party developers. The DS sold more units partly because of its touchscreen, second display, and wireless elements. From a multimedia perspective, the PSP has also been seen as a competitor to portable media players, notably the iPod Video that was released in the same year. Reviews of the PSP Go were mixed. It was mainly criticized for its initial pricing; Ars Technica called it "way too expensive" and The Guardian stated that cost was the "biggest issue" facing the machine. Engadget said the Go cost only $50 less than the PS3, which has a Blu-ray player. Wired said the older PSP-3000 model was cheaper and supports UMDs, and IGN stated that the price increase made the PSP Go a "hard sell". The placement of the analog stick next to the D-pad was also criticized. Reviewers also commented on the change from a mini-USB port to a proprietary port, making hardware and cables bought for previous models incompatible. 
The Go's screen was positively received by Ars Technica, which called the screen's image "brilliant, sharp and clear" and T3 stated that "pictures and videos look great". The controls received mixed reviews; The Times described them as "instantly familiar" whereas CNET and Stuff called the position of the analog stick "awkward". The device's capability to use a PS3 controller was praised by The New Zealand Herald but Ars Technica criticized the need to connect the controller and the Go to a PS3 for initial setup. Sales By March 31, 2007, the PlayStation Portable had shipped 25.39 million units worldwide with 6.92 million in Asia, 9.58 million in North America, and 8.89 million Europe. In Europe, the PSP sold 4 million units in 2006 and 3.1 million in 2007, according to estimates by Electronic Arts. In 2007, the PSP sold units in the US, according to the NPD Group and 3,022,659 in Japan according to Enterbrain. In 2008, the PSP sold 3,543,171 units in Japan, according to Enterbrain. In the United States, the PSP had sold 10.47 million units by January 1, 2008, according to the NPD Group. In Japan, during the week –30, 2008, the PSP nearly outsold all of the other game consoles combined, selling 129,986 units, some of which were bundled with Monster Hunter Portable 2nd G, which was the bestselling game in that week, according to Media Create. As of , 2008, the PSP had sold 11,078,484 units in Japan, according to Enterbrain. In Europe, the PSP had sold units as of , 2008, according to SCE Europe. In the United Kingdom, the PSP had sold units as of , 2009, according to GfK Chart-Track. From 2006 to the third quarter of 2010, the PSP sold 53 million units. In a 2009 interview, Peter Dillon, Sony's senior vice-president of marketing, said piracy of video games was leading to lower sales than hoped. Despite being aimed at a different audience, the PSP competed directly with the Nintendo DS. During the last few years of its life cycle, sales of the PSP models started to decrease. Shipments to North America ended in January 2014, later in Europe, and on June 3, 2014, Sony announced sales of the device in Japan would end. Production of the device and sales to the rest of Asia would continue. During its lifetime, the PSP sold 80 million fewer units than the Nintendo DS. Marketing controversies In late 2005, Sony said it had hired graffiti artists to spray-paint advertisements for the PSP in seven major U.S. cities, including New York City, Atlanta, Philadelphia, and San Francisco. According to Sony, it was paying businesses and building owners for the right to spray-paint their walls. A year later, Sony ran a poster campaign in England; a poster bearing the slogan "Take a running jump here" was removed from a Manchester Piccadilly station tram platform due to concerns it might encourage suicide. Later in 2006, news of a billboard advertisement released in the Netherlands depicting a white woman holding a black woman by the jaw, saying "PlayStation Portable White is coming", spread. Two similar advertisements existed; one showed the two women facing each other on equal footing in fighting stances, the other showed the black woman in a dominant position on top of the white woman. Sony's stated purpose was to contrast the white and black versions of the PSP but the advertisements were interpreted as being racially charged. These advertisements were never released in the rest of the world and were withdrawn from the Netherlands after the controversy. 
The advertisement attracted international press coverage; Engadget said Sony may have hoped to "capitalize on a PR firestorm". Sony came under scrutiny online in December 2006 for a guerrilla marketing campaign in which advertisers posed as young bloggers who desperately wanted a PSP. The site was created by advertising firm Zipatoni. See also Sony Ericsson Xperia Play Notes References External links Official Australia website Official New Zealand website Official UK PSP website Official US website Official Canada website Products and services discontinued in 2014 Handheld game consoles PlayStation (brand) Portable media players Products introduced in 2004 Discontinued products Discontinued handheld game consoles Regionless game consoles Sony consoles Seventh-generation video game consoles
47580458
https://en.wikipedia.org/wiki/Football%20Manager%202016
Football Manager 2016
Football Manager 2016 (abbreviated to FM16) is a football management simulation video game developed by Sports Interactive and published by Sega. It was released for Microsoft Windows, OS X and Linux on 13 November 2015. Gameplay FM16 features gameplay similar to that of previous entries in the Football Manager series. The player takes charge of a professional association football team as its manager: signing players to contracts, managing the club's finances, and giving team talks. FM16 is a simulation of real-world management, with the player judged on various factors by the club's AI owners and board. For the first time, players can customise the appearance of their manager as seen on the pitch. Two new modes are introduced: Fantasy Draft, in which multiple players play together and draft squads with a fixed budget, and Create-A-Club, which originated in the game's Editor but is now included in the main game. In Create-A-Club, players can create their own club and customise its kits, logo, stadium and transfer budget. Football Manager 2016 also features ProZone Match Analysis, which provides analysis of matches; the feature was developed by Sports Interactive in conjunction with ProZone, a real-life match analysis company. Improvements were made to the game's artificial intelligence, animation and character movement, board requests, competition rules, and financial module, and the Match Tactics and Set Piece Creator tools were overhauled. New social media features were also added. Development The game was developed by Sports Interactive and was announced on 7 September 2015. It was released on 13 November 2015 for Microsoft Windows, Mac and Linux. Two other Football Manager games were also planned for release in 2015: Football Manager Touch, which features content from FM 2015's Classic mode and was to be released for PC and high-end tablets, offering a more streamlined experience; and Football Manager Mobile, to be released for iOS and Android. A demo of the game was released on 15 November 2015; players' career progress carries over to the full version if they decide to purchase the game. Reception Football Manager 2016 received positive reviews from critics upon release. The aggregate review website Metacritic assigned it a score of 81 out of 100 based on 34 reviews. PC Gamer praised the game's vast depth but criticized its lack of accessibility, concluding, "Still untouchable on the footy front but shelf life and that inconsistent 3D engine chip away at its tender achilles." Tom Hatfield of GameSpot praised the game's Create-A-Club mode and improved user interface, but wrote unfavorably about its inaccessibility and lack of speed. The Guardian wrote positively about the game's depth and praised the iterative steps the franchise was making towards accessibility, stating, "The moreish management sim is back featuring its usual tactical depth, but with a more user-friendly road to success – which is bad news for your loved ones." Sales Sports Interactive studio director Miles Jacobson announced that Football Manager 2016 had sold its 1 millionth copy on 15 September 2016. References External links 2015 video games Android (operating system) games 2015 MacOS games Linux games IOS games Sega video games Video games developed in the United Kingdom Windows games Video games with Steam Workshop support
48779176
https://en.wikipedia.org/wiki/TechWell%20Corporation
TechWell Corporation
TechWell Corporation (formerly Software Quality Engineering, SQE) was founded in 1986 by Bill Hetzel and David Gelperin as a consulting company to help organizations improve their software testing practices and produce higher-quality software. Company During the late 1980s, Hetzel and Gelperin developed a software testing methodology, the Software Test and Evaluation Process (STEP), and an accompanying training course called Systematic Software Testing. During the 1990s, more than 10,000 testers from around the world took this course and learned the STEP approach to testing. SQE coined the term "Test Then Code" in 1987, many years before approaches such as test-driven development (TDD) became widespread. SQE launched its first industry conference, Applications of Software Measurement (ASM), in 1991, followed by the Software Testing, Analysis and Review (STAR) conference in 1992 and EuroSTAR in 1993. In 1998, when the STAR conference in the United States had grown to attract more than 1,000 attendees, it was split into STAREAST and STARWEST. In 1999, the company created a publishing division with the launch of Software Testing and Quality Engineering (STQE) magazine and a companion website (STQE.net). In 2001, StickyMinds.com was launched; the name "StickyMinds" was inspired by the STQE acronym, which read aloud sounds like "sticky". In January 2004, the magazine was renamed Better Software magazine to reflect a broader focus on the entire software lifecycle. The company launched the Better Software conference in 2004, followed by the Agile Development Practices conference in 2007. Conferences TechWell runs industry conferences covering the software lifecycle: STAREAST Testing Conference STARWEST Testing Conference STARCANADA Testing Conference Agile+DevOps East Agile+DevOps West Agile Testing Days USA Training In addition to conferences, SQE Training (a TechWell company) provides software improvement training across the entire software lifecycle. SQE Training offers courses in the following topic areas: agile development, configuration management, DevOps, software testing, security, mobile development and testing, project management, software requirements, and development and testing tools. SQE Training is a registered education provider for the PMI, as well as a provider of certifications and continuing education for ScrumAlliance, ICAgile, and ISTQB. Online Resources TechWell also provides free online communities for software professionals with information on emerging trends, ideas, and industry news. AgileConnection offers how-to advice on agile development principles, technologies and practices; community members get access to articles, interviews, presentations, and Q&A discussions. StickyMinds is a resource for software testers, SQA professionals, and anyone interested in improving software quality; it features in-depth articles, interviews, and how-to advice on the latest in software testing. TechWell Hub is a Slack community where software professionals discuss agile, testing, DevOps, security, and other topics. References Companies established in 1986
55587621
https://en.wikipedia.org/wiki/2018%20in%20science
2018 in science
A number of significant scientific events occurred in 2018. Events January 1 January – Researchers at Harvard, writing in Nature Nanotechnology, report the first single lens that can focus all colours of the rainbow in the same spot and in high resolution, previously only achievable with multiple lenses. 2 January – Physicists at Cornell University report the creation of "muscle" for shape-changing, cell-sized robots. 3 January Computer researchers report discovering two major security vulnerabilities, named "Meltdown" and "Spectre," in the microprocessors inside almost all computers in the world. Scientists in Rome unveil the first bionic hand with a sense of touch that can be worn outside a laboratory. 4 January – MIT researchers devise a new method to create stronger and more resilient nanofibers. 5 January – Researchers report images (including image-1) taken by the Curiosity rover on Mars showing curious rock shapes that may require further study in order to help better determine whether the shapes are biological or geological. Later, an astrobiologist made a similar claim based on a different image (image-2) taken by the Curiosity rover. 8 January – The National Oceanic and Atmospheric Administration (NOAA) reports that 2017 was the costliest year on record for climate and weather-related disasters in the United States. 9 January A pattern in exoplanets is discovered by a team of multinational researchers led by the Université de Montréal: Planets orbiting the same star tend to have similar sizes and regular spacings. This could imply that most planetary systems form differently from the Solar System. Analysis of the stone Hypatia shows it has a different origin than the planets and known asteroids. Parts of it could be older than the solar system. A new study by researchers at Stanford University indicates the genetic engineering method known as CRISPR may trigger an immune response in humans, thus rendering it potentially ineffective in them. 10 January – Researchers at Imperial College London and King's College London publish a paper in the journal Scientific Reports about the development of a new 3D bioprinting technique, which allows the more accurate printing of soft tissue organs, such as lungs. 11 January In a study published in the journal Cell, University of Pennsylvania researchers show a method through which the human innate immune system may possibly be trained to more efficiently respond to diseases and infections. A NASA experiment, Station Explorer for X-ray Timing and Navigation Technology (SEXTANT), shows how spacecraft may possibly determine their location by focusing on millisecond pulsars in space. 15 January Artificial intelligence programs developed by Microsoft and Alibaba achieve better average performance on a Stanford University reading and comprehension test than human beings. University of Washington scientists publish a report in the journal Nature Chemistry of the development of a new form of biomaterial based delivery system for therapeutic drugs, which only release their cargo under certain physiological conditions, thereby potentially reducing drug side-effects in patients. University of Pennsylvania announces in the United States National Library of Medicine human clinical trials, that will encompass the use of CRISPR technology to modify the T cells of patients with multiple myeloma, sarcoma and melanoma cancers, to allow the cells to more effectively combat the cancers, the first of their kind trials in the US. 
17 January – Engineers at the University of Texas at Austin, in collaboration with Peking University scientists, announce the creation of a memory storage device only one atomic layer thick; a so-called 'atomristor'. 18 January NASA and NOAA report that 2017 was the hottest year on record globally without an El Niño, and among the top three hottest years overall. Researchers report developing a blood test (or liquid biopsy) that can detect eight common cancer tumors early. The new test, based on cancer-related DNA and proteins found in the blood, produced 70% positive results in the tumor-types studied in 1005 patients. Sharks are shown to move and feed across the world's oceans in characteristic ways as demonstrated by a global-scale study of stable isotopes in shark tissues led by the University of Southampton and published in the journal Nature Ecology and Evolution. According to a new report published by the US National Science Foundation (NSF), the US is facing increasing competition in scientific endeavours from China, with the latter now publishing more annual scientific papers, but the US still leads in research and development (R&D) and venture capital (VC). Medical researchers at the Gladstone Institutes discover a method of turning skin cells into stem cells, with the use of CRISPR. 19 January – Researchers at the Technical University of Munich report a new propulsion method for molecular machines, which enables them to move 100,000 times faster than biochemical processes used to date. 22 January Amazon opens the first Amazon Go store, the first completely cashier-less grocery store. Engineers at MIT develop a new computer chip, with "artificial synapses," which process information more like neurons in a brain. 24 January – Scientists in China report in the journal Cell the creation of two monkey clones, named Zhong Zhong and Hua Hua, using the complex DNA transfer method that produced Dolly the sheep, for the first time. 25 January Researchers report evidence that modern humans migrated from Africa at least as early as 194,000 years ago, somewhat consistent with recent genetic studies, and much earlier than previously thought. Scientists working for Calico, a company owned by Alphabet, publish a paper in the journal eLife which presents possible evidence that Heterocephalus glaber (naked mole-rat) do not face increased mortality risk due to aging. 29 January – Scientists report, for the first time, that 800 million viruses, mainly of marine origin, are deposited daily from the Earth atmosphere onto every square meter of the planet's surface, as the result of a global atmospheric stream of viruses, circulating above the weather system, but below the altitude of usual airline travel, distributing viruses around the planet. February 2 February – A study published in the journal Science by researchers from the United States Geological Survey and the University of California, Santa Cruz reports the severe degradation of the health of polar bears in the Arctic, due to the effects of climate change. 5 February Researchers find additional evidence for an exotic form of water, called superionic water, which is not found naturally on Earth, but could be common on the planets Uranus and Neptune. Astronomers report evidence, for the first time, that extragalactic exoplanets, much more distant than the exoplanets found within the local Milky Way galaxy, may exist. 
6 February SpaceX successfully conducts its maiden flight of its most powerful rocket to date, and the most powerful rocket since the Space Shuttle program, the Falcon Heavy, from LC-39A at Kennedy Space Center. The National Snow and Ice Data Center (NSIDC) reports that global sea ice extent has fallen to a new record low. 8 February – Astronomers report the first confirmed findings from the Zwicky Transient Facility (ZTF) project, with the discovery of 2018 CL, a small near-Earth asteroid. 9 February – Human eggs are grown in the laboratory for the first time, by researchers at the University of Edinburgh. 13 February – Scientists at Rockefeller University, writing in the journal Nature Microbiology, describe how compounds in soil known as malacidins can overcome antibiotic resistance in mice with MRSA. 14 February By studying the orbits of high-speed stars, researchers in Australia calculate that the Andromeda Galaxy has only one-third as much dark matter as previously thought, making it similar in mass to the Milky Way. A study published by the Journal of Experimental Medicine shows that blocking the enzyme beta-secretase (BACE1) in mice can substantially reduce the formation of plaques responsible for Alzheimer's disease. 16 February – Scientists report, for the first time, the discovery of a new form of light, which may involve polaritons, that could be useful in the development of quantum computers. 19 February – Scientists identify traces of the genes of the indigenous Taíno people in modern-day Puerto Ricans, indicating that the ethnic group was not extinct as previously believed. 21 February – Medical researchers report that e-cigarettes contain chemicals known to cause cancer and brain damage; as well as, contain potentially dangerous (even potentially toxic) levels of metals, including arsenic, chromium, lead, manganese and nickel. 28 February – Astronomers report, for the first time, a signal of the reionization epoch, an indirect detection of light from the earliest stars formed – about 180 million years after the Big Bang. March 5 March Researchers at MIT and Harvard report in the journal Nature of discovering the phenomenon of graphene acting as a superconductor, when its atoms are re-arranged in a specific manner. Google announces the creation of "Bristlecone", the world's most advanced quantum computer chip, featuring 72 qubits. 8 March – Scientists report the first detection of natural ice VII on Earth, previously it was only produced artificially. It may be common on the moons Enceladus, Europa and Titan. 9 March – NASA medical researchers report that human spaceflight may alter gene expression in astronauts, based on twin studies where one astronaut twin, Scott Kelly, spent nearly one year in space while the other, Mark Kelly, remained on Earth. 13 March – Scientists report that Archaeopteryx, a prehistoric feathered dinosaur, was likely capable of flight, but in a manner substantially different from that of modern birds. 15 March Intel reports that it will redesign its CPU processors (performance losses to be determined) to help protect against the Meltdown and Spectre security vulnerabilities (especially, Meltdown and Spectre-V2, but not Spectre-V1), and expects to release the newly redesigned processors later in 2018. Researchers at the Gladstone Institutes report a new cellular therapy in the journal Neuron which shows promise in combating the effects of Alzheimer's disease. 
19 March – Uber suspends all of its self-driving cars worldwide after a woman is killed by one of the vehicles in Arizona. This is the first recorded fatality using a fully automated version of the technology. 22 March – Scientists at Harvard Medical School identify a key mechanism behind vascular aging and muscle decline in mice. Their study shows that treating the animals with a chemical compound called NMN enhances blood vessel growth and reduces cell death, boosting their stamina and endurance. 26 March A study in Geophysical Research Letters concludes that West Greenland's ice sheet is melting at its fastest rate in centuries. The world's first total transplant of a penis and scrotum is performed by surgeons at Johns Hopkins University in Baltimore, Maryland, operating on a soldier who was wounded in Afghanistan. April 2 April The inoperative Tiangong-1 space lab comes down over the South Pacific Ocean, northwest of Tahiti. Astronomers report the detection of the most distant individual star (actually, a blue supergiant), named Icarus (formally, MACS J1149 Lensed Star 1), at 9 billion light-years (light-travel distance) away from Earth. 5 April – Odilorhabdins, a novel class of naturally-produced antibiotics, is formally described. 10 April – Researchers in Japan report finding centuries' worth of rare-earth metals in deep sea mud, located near Minami-Tori-shima in the northwest Pacific. 11 April – Two studies, both published in Nature, find that the warm Atlantic Gulf Stream is at its weakest for at least 1,600 years. 17 April – Engineers at MIT develop a new more efficient method of producing long strips of graphene. 18 April NASA's Transiting Exoplanet Survey Satellite (TESS) is launched. Nanyang Technological University demonstrates a robot that can autonomously assemble an IKEA chair without interruption. 19 April – The results of a new gene therapy trial of 22 patients with the blood disorder beta thalassemia, published in the New England Journal of Medicine, indicates 15 of the patients being cured entirely while 7 requiring fewer annual blood transfusions. 25 April The Gaia collaboration publishes its second data release containing 1.7 billion light sources, with positions, parallaxes and proper motions for about 1.3 billion of them. Scientists publish evidence that asteroids may have been primarily responsible for bringing water to Earth. 26 April Scientists report that a letter of intent was signed by NASA and ESA which may provide a basis for sample return missions to other planets, including Mars sample return missions, with the purpose of better studying the possible existence of past or present extraterrestrial primitive life forms, including microorganisms. Scientists identify 44 gene variants linked to increased risk for depression. The Belle II experiment starts taking data to study B mesons. 27 April – Stephen Hawking's final paper – A smooth exit from eternal inflation? – is published in the Journal of High Energy Physics. 30 April – Researchers report identifying 6,331 groups of genes that are common to all living animals, and which may have arisen from a single common ancestor that lived 650 million years ago in the Precambrian. May 1 May – The Genome Project-Write announces a new 10 year initiative to attempt to make human cells immune to viral infections. 2 May – Scientists discover that Helium is present in the exoplanet WASP-107b. 
5 May – The InSight spacecraft, designed to study the interior and subsurface of the planet Mars, successfully launches at 11:05 UTC, with an expected arrival on 26 November 2018. 9 May – Scientists report that the curious physical phenomenon of quantum entanglement is even more supported based on recent rigorous Bell test experimentations. 10 May – NASA's Carbon Monitoring System (CMS) is cancelled by the Trump administration. 11 May – NASA approves the Mars Helicopter for the Mars 2020 mission. 14 May Astronomers publish supporting evidence of water plume activity on Europa, moon of the planet Jupiter, based on an updated critical analysis of data obtained from the Galileo space probe, which orbited Jupiter between 1995 and 2003. Such plume activity, similar to that found on Saturn's moon Enceladus, could help researchers search for life from the subsurface European ocean without having to land on the moon. Anthropologists provide evidence that the brain of Homo naledi, an extinct hominid which is thought to have lived between 226,000 and 335,000 years ago, was small, but nonetheless complex, sharing structural similarities with the modern human brain. 17 May – Scientists warn that banned CFC-11 gas emissions are originating from an unknown source somewhere in East Asia, with potential to damage the ozone layer. 22 May Scientists report another CPU security vulnerability, related to the Spectre and Meltdown vulnerabilities, called Speculative Store Bypass (SSB), and affecting the ARM, AMD and Intel families of cpu processors. Scientists from Purdue University and the Chinese Academy of Sciences report the use of CRISPR/Cas9 to develop a variety of rice producing 25-31% more grain than traditional breeding methods. Significant asteroid data arising from the Wide-field Infrared Survey Explorer and NEOWISE missions is questioned. 23 May – Paleontologists report finding the skull of a new species of haramiyida (a long lived lineage of mammaliaform cynodonts), called Cifelliodon wahkarmooshuh, underneath the fossilized foot of a large dinosaur that lived 130 million years ago in North America. 24 May Based largely on government data, including data from NASA, FEMA and others, The New York Times reports an exhaustive overview of recurrent natural disasters in the United States since 1900. Astronomers claim that the dwarf planet Pluto may have been formed as a result of the agglomeration of numerous comets and related Kuiper belt objects. Researchers at the University of Leeds report that climate change could increase arable land in boreal regions by 44% by the year 2100, while having a negative impact everywhere else. 30 May The first 3D printed human corneas are created at Newcastle University. The FDA approves the first artificial iris. Physicists of the MiniBooNE experiment report a stronger neutrino oscillation signal than expected, a possible hint of sterile neutrinos, an elusive particle that may pass through matter without any interaction whatsoever. 
June 1 June – NASA scientists detect signs of a dust storm on the planet Mars which may affect the survivability of the solar-powered Opportunity rover, since the dust may block the sunlight (see image) needed to operate; as of 12 June, the storm spanned an area about the size of North America and Russia combined (about a quarter of the planet); as of 13 June, Opportunity was reported to be experiencing serious communication problems due to the dust storm; a NASA teleconference about the dust storm was presented on 13 June 2018 at 1:30 p.m. ET and is available for replay. On 20 June, NASA reported that the dust storm had grown to completely cover the entire planet. 4 June – Direct coupling of the Higgs boson with the top quark is observed for the first time by the ATLAS experiment and the CMS experiment at CERN. 5 June – Researchers at Case Western Reserve University School of Medicine synthesise the first artificial human prion. 6 June Footprints in the Yangtze Gorges area of South China, dating back 546 million years, are reported to be the earliest known record of an animal with legs. The spacecraft Dawn assumes a final (and much closer) orbit around the dwarf planet Ceres: as close as and as far away as (see images). 7 June – NASA announces that the Curiosity rover has detected a cyclical seasonal variation in atmospheric methane (see image) on the planet Mars, as well as the presence of kerogen and other complex organic compounds. 8 June – The U.S. Department of Energy's Oak Ridge National Laboratory unveils Summit as the world's most powerful supercomputer, with a peak performance of 200,000 trillion calculations per second, or 200 petaflops. 11 June – KATRIN, an experiment designed to measure the absolute mass of neutrinos, starts data-taking. 14–15 June – The Japanese Hayabusa2 probe returns images of the asteroid 162173 Ryugu from a distance of 650–700 km. It arrives at the asteroid on 27 June. 16 June – Astronomers detect AT2018cow (ATLAS name: ATLAS18qqn), a powerful astronomical explosion, 10–100 times brighter than a normal supernova, that may be a cataclysmic variable star (CV), gamma-ray burst (GRB), gravitational wave (GW), supernova (SN) or something else. By 22 June 2018, this astronomical event had generated significant interest among astronomers throughout the world, and may, as of 22 June 2018, be considered a supernova, tentatively named Supernova 2018cow (SN 2018cow). However, the true identity of AT2018cow remains unclear, according to astronomers. 18 June – MIT publishes details of "VoxelMorph", a new machine-learning algorithm, which is over 1,000 times faster at registering brain scans and other 3-D images. 20 June – Scientists at the University of Edinburgh report that gene-edited pigs have been made resistant to porcine reproductive and respiratory syndrome, one of the world's most costly animal diseases. 21 June – The US National Science and Technology Council warns that America is unprepared for an asteroid impact event, and has developed and released the "National Near-Earth Object Preparedness Strategy Action Plan" to better prepare. 26 June – Researchers at the University of California, Los Angeles, develop synthetic T cells that mimic the form and function of real human versions. 27 June Astronomers report that ʻOumuamua, an object from interstellar space passing through the Solar System, is a mildly active comet, and not an asteroid, as previously thought. 
This was determined by measuring a non-gravitational boost to ʻOumuamua's acceleration, consistent with comet outgassing. (image) (animation) Astronomers report the detection of complex macromolecular organics on Enceladus, moon of the planet Saturn. July 2 July Astronomers report taking the first confirmed image (see image) of a newborn planet. The name of the nascent exoplanet is PDS 70b and is a few times larger than the planet Jupiter. The koala genome is completely sequenced. 10 July – Researchers at the University of Michigan show that increased atmospheric CO2 reduces the medicinal properties of milkweed plants that protect monarch butterflies from disease. 11 July – Scientists report the discovery in China of the oldest stone tools outside of Africa, estimated at 2.12 million years old. 12 July The IceCube Neutrino Observatory announces that they have traced a neutrino that hit their Antarctica-based research station in September 2017 back to its point of origin in a blazar 3.7 billion light-years away. This is the first time that a neutrino detector has been used to locate an object in space. Using NASA's Hubble and ESA's Gaia, astronomers make the most precise measurements to date of the universe's expansion rate – a figure of 73.5 km (45.6 miles) per second per megaparsec – reducing the uncertainty to just 2.2 percent. 16 July – A study by the University of Wisconsin-Madison concludes that thousands of miles of buried Internet infrastructure could be damaged or destroyed by rising sea levels within 15 years. 17 July – Scientists led by Scott S. Sheppard report the discovery of 12 new moons of Jupiter, taking its total number to 79. This includes an "oddball", Valetudo (originally known as S/2016 J 2; Roman-numeral designation Jupiter LXII), that is predicted to eventually collide with a neighbouring moon. 19 July – A complete fruit fly connectome is mapped at nanoscale resolution for the first time, using two high-speed electron microscopes on 7,000 brain slices and 21 million images. 20 July Researchers report that the largest single source of dust on the planet Mars comes from the Medusae Fossae Formation. Scientists at the University of Alabama at Birmingham announce the reversal of aging-associated skin wrinkles and hair loss in a mouse model. 23 July A study published in Nature Climate Change finds that the death toll from suicide in the United States and Mexico has risen between 0.7 and 2.1 percent with each degree (Celsius) of increased monthly average temperature. By 2050, this could lead to an additional 21,000 suicides. Scientists at the University of Alberta report a new technique, based on quickly removing or replacing single hydrogen atoms, which can provide a thousand-fold increase in solid-state memory density. 25 July Scientists report the discovery, based on MARSIS radar studies, of a subglacial lake on Mars, below the southern polar ice cap (see image), and extending sideways about , the first known stable body of water on the planet. NASA's Transiting Exoplanet Survey Satellite (TESS) begins science operations. Researchers in Brazil describe a new two-dimensional material called "hematene", derived from hematite, with application as a photocatalyst. 27 July – The longest total lunar eclipse of the 21st century occurs. 28 July – Artificial intelligence is used to demonstrate a link between personality type and eye movements. 
30 July Using high-resolution satellite images, researchers from the Chizé Centre for Biological Studies report an 88% reduction in the world's biggest colony of king penguins, found on Île aux Cochons in the subantarctic Crozet Archipelago. A study by NASA's Goddard Space Flight Center concludes that terraforming of Mars is physically impossible with present-day technology. 31 July – Astronomers report the detection of an extremely strong magnetic field and aurora around a brown dwarf, which may possibly be a rogue planet, designated SIMP J01365663+0933473. August 1 August Earth Overshoot Day 2018 is reached. Astronomers report that FRB 180725A is the first detection of a Fast radio burst (FRB) under 700 MHz – as low as 580 MHz. Lab-grown lungs are successfully transplanted into pigs for the first time. 7 August – NASA researchers report confirmation by the New Horizons spacecraft of a "hydrogen wall" at the outer edges of the Solar System that was first detected in 1992 by the two Voyager spacecraft. 8 August Biologists report that Stromatoveris psygmoglena, an Ediacaran organism that dominated oceans half a billion years ago, was a member of Animalia, based on phylogenetic analysis. Computer researchers report that Artificial Intelligence (AI) programs have found thousands of prominent scientists overlooked by Wikipedia editors. 9 August – Researchers in China establish a new record for organic photovoltaic cells, boosting their maximum efficiency from 15 to 17.3 percent. 12 August – A Delta IV Heavy launches the Parker Solar Probe to study the Sun and the solar wind. 13 August – Astronomers at the Chandra X-ray Observatory report that the X-ray afterglow from a one-year-old neutron star merger—associated with GW170817 (gravitational wave), GRB 170817A (gamma ray burst) and AT 2017gfo (visible transient)—is fading at an increasingly rapid rate at 358.6 days after the event. 14 August Computer researchers report discovering another security vulnerability, named "Foreshadow", that may affect Intel processors inside personal computers and in third party clouds. Groundbreaking begins on the Giant Magellan Telescope in Chile. It is expected to be operational by 2024. 15 August – Astronomers report the detection of iron and titanium vapours in the atmosphere of an 'ultra-hot Jupiter' in close orbit around the large B-type star, KELT-9. 16 August Scientists announce the transformation of gaseous deuterium into a liquid metallic form. This may help researchers better understand giant gas planets, such as Jupiter, Saturn and related exoplanets, since such planets are thought to contain a lot of liquid metallic hydrogen, which may be responsible for their observed powerful magnetic fields. The wheat genome is fully sequenced after a 13-year effort. Scientists at Sandia National Laboratories reveal a platinum-gold alloy believed to be the most wear-resistant metal in the world, 100 times more durable than high-strength steel. 18 August – Research presented at the Goldschmidt conference in Boston concludes that water is likely to be a common feature of exoplanets between two and four times the size of Earth, with implications for the search of life in our Galaxy. 20 August Scientists report that life, based on genetic and fossil evidences, may have begun on Earth nearly 4.5 billion years ago, much earlier than thought before. 
Researchers report that the skyglow of STEVE ("Strong Thermal Emission Velocity Enhancement"), an atmospheric optical phenomenon appearing as a purple and green light ribbon in the sky, and not an aurora, is not associated with particle precipitation (electrons or ions) and, as a result, could be generated in the ionosphere. 21 August – Scientists announce the first direct evidence for exposed water-ice on the Moon's surface, which is found in permanently shaded regions. 22 August Scientists report evidence of a 13-year-old hominin female, nicknamed Denny, estimated to have lived 90,000 years ago, and who was determined to be half Neanderthal and half Denisovan, based on genetic analysis of a bone fragment discovered in Denisova Cave; the first time an ancient individual was discovered whose parents belonged to distinct human groups. Researchers report evidence of rapid shifts (in geological-time terms), nearly 30 times faster than known previously, of geomagnetic reversals, where the north magnetic pole of Earth becomes the south magnetic pole and vice versa, including a chronozone that lasted only 200 years, much shorter than any other such reversal found earlier. 28 August – Physicists officially report, for the first time, observing the Higgs boson decay into a pair of bottom quarks, an interaction that is primarily responsible for the "natural width" (range of masses with which a particle is observed) of the boson. 30 August – Researchers from the Chinese University of Hong Kong report a new way of controlling nanobots, using swarm behaviours to do complex tasks in minimally invasive surgeries. September 3 September – Astronomers present evidence that the wide hexagon at the north pole of the planet Saturn (possibly a jet stream of atmospheric gases moving at ) may be high, well into the stratosphere, at least during the northern spring and summer, rather than lower in the troposphere as thought earlier. 6 September – A study by the University of Illinois at Urbana–Champaign finds that large-scale solar panels and wind turbines in the Sahara desert would have a major impact on rainfall, vegetation and temperatures – potentially greening the region. 7 September Researchers at the National Geospatial-Intelligence Agency release a high resolution terrain map (detail down to the size of a car, and less in some areas) of Antarctica, named the "Reference Elevation Model of Antarctica" (REMA). A group of Japanese and American scientists publish a research paper which concludes that "space weathering" on the surface of Phobos, in tandem with its eccentric orbit, has caused its surface to be divided into two distinct geologic units, known as the red and blue units. 9 September – Astronomers report detecting another 72 Fast Radio Bursts (FRBs), using artificial intelligence, from FRB 121102 that had been missed earlier, resulting in about 300 total FRBs from this object. FRB 121102 is the only known repeating fast radio source which is very unusual since all other currently known FRBs (very powerful and extremely short-lived astronomical objects) have not been found to repeat, occurring one time only. 10 September NASA wins an Emmy Award for Outstanding Original Interactive Program for its presentation of the Cassini mission's Grand Finale at Saturn. The Massachusetts Institute of Technology (MIT) announces "Dense Object Nets" (DON), a new system that allows robots to pick up any object after visually inspecting it. 
An international team of researchers predicts the entire set of beneficial 3-D distortions for controlling edge localised modes (ELMs) in tokamak plasma, without creating more problems. 12 September – Scientists report the discovery of the earliest known drawing by Homo sapiens, which is estimated to be 73,000 years old, much earlier than the 43,000-year-old artifacts previously understood to be the earliest known modern human drawings. 15 September – NASA launches ICESat-2, the agency's most technologically advanced ice-monitoring spacecraft to date. 16 September Astronomers report determining that the warm-hot intergalactic medium (or WHIM) may be where the missing matter (not dark matter) has been hiding in the universe. Medical researchers conclude, based on a 19,114-person study conducted over five years, that use of low-dose aspirin by older healthy people may not be beneficial and, in some cases, may be harmful. 17 September – NASA releases the first light image (see image) (taken on 7 August 2018) by the Transiting Exoplanet Survey Satellite (TESS), a space telescope designed to search for exoplanets in an area 400 times larger than that covered by the Kepler mission. 20 September Researchers at Cincinnati Children's Hospital Medical Center report the first human oesophageal tissue grown entirely from pluripotent stem cells. Researchers identify human skeletal stem cells for the first time. Scientists discover molecules of fat in an ancient fossil, revealing the earliest confirmed animal in the geological record, which lived on Earth 558 million years ago. A paper in the Cryosphere journal, from the European Geosciences Union, suggests that building walls on the seafloor could halt the slide of undersea glaciers, which are melting due to warmer ocean temperatures. Using data from the European Space Agency's X-ray observatory XMM-Newton, astronomers report the first detection of matter falling into a black hole at 30% of the speed of light, located in the centre of the billion-light-year-distant galaxy PG 1211+143. 21 September – The Japanese Hayabusa2 probe deploys two landers on the surface of the large asteroid Ryugu. 24 September Data from the Cassini–Huygens spacecraft, which explored Saturn and its moons between 2004 and 2017, reveals what appear to be three giant dust storms (see image), for the first time, in the equatorial regions of the moon Titan between the years 2009–2010. Astronomers describe several possible home star systems from which the interstellar object 'Oumuamua, found passing through the Solar System in October 2017, may have begun its interstellar journey. Studies suggest that the interstellar object is neither an asteroid nor a comet. 25 September Medical researchers report that omega-3 fatty acids may significantly reduce the risk of cardiovascular events in some patients with a history of heart disease or type 2 diabetes. Scientists determine that Vorombe titan, an extinct elephant bird from the island of Madagascar which reached weights of and heights of tall, is the largest bird known to have existed. 26 September – Researchers provide evidence that phosphorus compounds, key components for life, are made in interstellar space and distributed throughout outer space, including the early Earth. 27 September – A study in the journal Science concludes that polychlorinated biphenyls (PCBs) could halve killer whale populations in the most heavily contaminated areas within 30–50 years. October 1 October James P. 
Allison from the United States and Tasuku Honjo from Japan win the Nobel Prize in Physiology or Medicine "for their discovery of cancer therapy by inhibition of negative immune regulation." NASA-funded researchers find that lengthy journeys into outer space, including travel to the planet Mars, may substantially damage the gastrointestinal tissues of astronauts. The studies support earlier work that found such journeys could significantly damage the brains of astronauts, and age them prematurely. However, unlike the conditions in space, the study administered the full radiation doses over short periods. Astronomers announce the discovery of 2015 TG387 (also known as "The Goblin"), a trans-Neptunian object and sednoid in the outermost part of the Solar System, which may help explain some apparent effects of a hypothetical planet named Planet Nine (or Planet X). 2 October Arthur Ashkin from the United States, Gérard Mourou from France and Donna Strickland from Canada win the Nobel Prize in Physics "for groundbreaking inventions in the field of laser physics". Astronomers using data from the Gaia mission report the discovery of rogue, high-velocity stars hurtling towards the Milky Way, possibly originating from another galaxy. 3 October Frances H. Arnold from the United States, George P. Smith from the United States and Gregory P. Winter from the United Kingdom win the Nobel Prize in Chemistry for harnessing the principles of evolution to develop enzymes and antibodies. Astronomers publish details of a candidate exomoon, Kepler-1625b I, suggesting it has a mass and radius similar to Neptune, and orbits the exoplanet Kepler-1625b. 4 October – Researchers at McMaster University announce the development of a new technology, called a Planet Simulator, to help study the origin of life on planet Earth and beyond. 5 October – The Hubble Space Telescope is hit by a mechanical failure as it loses one of the gyroscopes needed for pointing the spacecraft. It is placed into "safe" mode while scientists attempt to fix the problem. 8 October The IPCC releases its Special Report on Global Warming of 1.5 °C, warning that "rapid, far-reaching and unprecedented changes in all aspects of society" are needed to keep global warming below 1.5 °C. Researchers report low-temperature chemical pathways from simple organic compounds to complex polycyclic aromatic hydrocarbon (PAH) chemicals. Such chemical pathways may help explain the presence of PAHs in the low-temperature atmosphere of Titan, a moon of the planet Saturn, and may be significant pathways, in terms of the PAH world hypothesis, in producing precursors to biochemicals related to life as we know it. 10 October Astronomers report 19 more new non-repeating fast radio bursts (FRBs) detected by the Australian Square Kilometre Array Pathfinder (ASKAP). Physicists report producing quantum entanglement using living organisms, particularly between living bacteria and quantized light. 11 October Physicists report that quantum behavior can be explained with classical physics for a single particle, but not for multiple particles as in quantum entanglement and related nonlocality phenomena ("spooky action at a distance", as Albert Einstein described it). Harvard astronomers present an analytical model that suggests matter—and potentially dormant spores—can be exchanged across the vast distances between galaxies, a process termed 'galactic panspermia', and not be restricted to the limited scale of solar systems. 
The world's fastest camera, able to capture 10 trillion frames per second, is announced by the Institut national de la recherche scientifique (INRS) in Quebec, Canada. 15 October – A study by the Rensselaer Polytechnic Institute finds that insect populations in Puerto Rico have crashed since the 1970s, with some species witnessing a 60-fold decrease in numbers. The fall is attributed to a 2.0 °C rise in tropical forest temperatures. 16 October The final published book by physicist Stephen Hawking, entitled Brief Answers to the Big Questions, is released. A comprehensive analysis of demographic trends published in The Lancet predicts that all countries are likely to experience at least a slight increase in life expectancy by 2040. Spain is expected to overtake Japan as it rises from fourth to first place, with an average lifespan of 85.8 years. Astronomers report that GRB 150101B, a gamma-ray burst event detected in 2015, may be directly related to the historic GW170817, a gravitational wave event detected in 2017, and associated with the merger of two neutron stars. The similarities between the two events, in terms of gamma ray, optical and x-ray emissions, as well as to the nature of the associated host galaxies, are "striking", suggesting the two separate events may both be the result of the merger of neutron stars, and both may be a kilonova (i.e., a luminous flash of radioactive light that produces elements like silver, gold, platinum and uranium), which may be more common in the universe than previously understood, according to the researchers. 17 October Researchers report possible transgenerational epigenetic inheritance (i.e., transmission of information from one generation of an organism to the next that affects the traits of offspring without alteration of the primary structure of DNA) in the form of paternal transmission of epigenetic memory via of sperm chromosomes in the roundworm Caenorhabditis elegans, a laboratory test organism. A study by Stanford University finds that the use of virtual reality can induce greater compassion in people than other forms of media. 20 October – The joint ESA/JAXA BepiColombo probe is launched to the planet Mercury. 22 October A study by the University at Albany forecasts that Peru's Quelccaya ice cap will reach a state of irreversible retreat by the mid-2050s, if current warming trends continue. Researchers at the University of Queensland recreated 450 million-year-old enzymes with thermostable proteins, which can withstand higher temperatures, and could be used to improve drugs and gene therapy. 24 October – Scientists report discovering the oldest weapons found in North America, ancient spear points, dated to 13,500 – 15,500 years ago, made of chert, predating the clovis culture (typically dated to 13,000 years ago), in the state of Texas. 26 October – Astronomers confirm the existence of dust cloud satellites, called Kordylewski clouds, in semi-stable regions (the L4 and L5 Lagrangian points of the Earth–Moon system) about above the planet Earth. 30 October NASA announces that the Kepler space telescope, having run out of fuel, and after nine years of service and the discovery of over 2,600 exoplanets, has been officially retired, and will maintain its current, safe orbit, away from Earth. Scientists announce the 3-D virtual reconstruction, for the first time, of a Neanderthal rib cage, which may help researchers better understand how this ancient human species moved and breathed. 
November 1 November The Earth BioGenome Project is launched, a 10-year global effort to sequence the genomes of all 1.5 million known animal, plant, protozoan and fungal species on Earth. NASA announces the official retirement, due to the depletion of fuel, of the Dawn spacecraft mission, which lasted 11 years and studied two protoplanets, Vesta and Ceres. The spacecraft will remain in a relatively stable orbit around Ceres for at least the next 20 years, serving as a "monument" to the mission. Russian scientists release a video recording of the Soyuz MS-10 manned spaceflight mission involving a Soyuz-FG rocket after launch on 11 October 2018 that, due to a faulty sensor, resulted in the destruction of the rocket. The crew, NASA astronaut Nick Hague and Russian cosmonaut Aleksey Ovchinin, escaped safely. Astronomers from Harvard University suggest that the interstellar object 'Oumuamua may be an extraterrestrial solar sail from an alien civilization, in an effort to help explain the object's "peculiar acceleration". 2 November Two independent teams of astronomers both conclude, based on numerous observations from other astronomers around the world, that the unusual AT2018cow event (also known as Supernova 2018cow, SN 2018cow, and "The Cow"), a very powerful astronomical explosion, 10–100 times brighter than a normal supernova, detected on 16 June 2018, was "either a newly formed black hole in the process of accreting matter, or the frenetic rotation of a neutron star." The world's largest neuromorphic supercomputer, the million-core 'SpiNNaker' machine, is switched on by the University of Manchester, England. 4 November – Geologists present evidence, based on studies in Gale Crater by the Curiosity rover, that there was plenty of water on early Mars. 5 November Astronomers report the discovery of one of the oldest stars in the universe, named 2MASS J18082002-5104378 B, about 13.5 billion years old, possibly one of the first stars, a tiny ultra metal-poor (UMP) star made almost entirely of materials released from the Big Bang. The discovery of the star in the Milky Way galaxy suggests that the galaxy may be at least 3 billion years older than thought earlier. A new assessment of the ozone hole, published by the UN, shows it to be recovering faster than previously thought. At projected rates, the Northern Hemisphere and mid-latitude ozone is expected to heal completely by the 2030s, followed by the Southern Hemisphere in the 2050s and polar regions by 2060. Scientists report the discovery of the smallest known ape, Simiolus minutus, which weighed approximately eight pounds, and lived about 12.5 million years ago in Kenya in East Africa. 7 November – Scientists report the discovery of the oldest known figurative art painting, over 40,000 (perhaps as old as 52,000) years old, of an unknown animal, in the cave of Lubang Jeriji Saléh on the Indonesian island of Borneo (see image). 12 November – China's Institute of Plasma Physics announces that plasma in the Experimental Advanced Superconducting Tokamak (EAST) has reached 100 million degrees Celsius. 14 November – Astronomers report the discovery of GJ 699 b, a super-Earth orbiting near the snow line of Barnard's Star, just six light years from Earth. 16 November The 26th General Conference on Weights and Measures (CGPM) votes unanimously in favour of revised definitions of the SI base units, which the International Committee for Weights and Measures (CIPM) had proposed earlier that year. 
The new definitions come into force on 20 May 2019. Researchers at Japan's National Institute of Advanced Industrial Science and Technology (AIST) reveal a humanoid robot prototype, HRP-5P, intended to autonomously perform heavy labor or work in hazardous environments. Astronomers conclude that the many grooves on Phobos, one of two moons orbiting Mars, were caused by boulders, ejected from the asteroid impact that created Stickney crater (which takes up a substantial portion of the moon's surface), that rolled around on the surface of the moon. 19 November – NASA chooses Jezero crater on the planet Mars as the landing site for the Mars 2020 rover, which is to launch on 17 July 2020, and touch down on Mars on 18 February 2021. 20 November Astronomers report the use of a powerful new method, NIRSpec in adaptive optics (AO) mode (NIRSPAO), to search for biosignatures on exoplanets. The World Meteorological Organization (WMO) publishes its latest Greenhouse Gas Bulletin, showing record-high concentrations of heat-trapping greenhouse gases, with levels of carbon dioxide (CO2) reaching 405.5 parts per million (ppm) in 2017, up from 403.3 ppm in 2016 and 400.1 ppm in 2015. The WMO reports that "there is no sign of a reversal in this trend, which is driving long-term climate change, sea level rise, ocean acidification and more extreme weather." 22 November 35 genes that predispose people to chronic kidney disease are discovered by scientists at the University of Manchester. Research published in Environmental Research Letters concludes that stratospheric aerosol injection to curb global warming is "technically possible" and would be "remarkably inexpensive" at $2 to 2.5 billion per year over the first 15 years. 23 November Volume II of the Fourth National Climate Assessment (NCA4) is released by the U.S. government. The Brazilian government reports that deforestation in the Amazon rainforest has reached its highest rate for a decade, with 7,900 km² (3,050 sq mi) destroyed between August 2017 and July 2018, largely due to illegal logging. After detecting five strains of Enterobacter bugandensis bacteria on the International Space Station (ISS), none of them pathogenic to humans, researchers report that microorganisms on the ISS should be carefully monitored to continue assuring a medically healthy environment for astronauts. 24 November – Scientists report that nearly all extant populations of animals, including humans, may be the result of a population expansion that began between one and two hundred thousand years ago, based on mitochondrial DNA studies. 25 November – Chinese scientists report the birth of twin human girls, Lulu and Nana, as the world's first genetically edited babies. The human genes were edited to resist HIV. 26 November – NASA reports that the InSight lander landed successfully on the planet Mars. Two touchdown images are received. Also, from additional received transmissions, the sounds of the winds on Mars can be heard for the first time. 27 November – Researchers at the University of Southern California publish details of a freeze-dried polio vaccine that does not require refrigeration. 30 November – Astronomers report that the extragalactic background light (EBL), the total amount of light that has ever been released by all the stars in the observable universe, amounts to 4 × 10⁸⁴ photons. December 2–14 December – COP24 United Nations Climate Change conference in Katowice. 
3 December – NASA reports the arrival of the OSIRIS-REx spacecraft at the carbonaceous asteroid Bennu after a two-year journey, and determines that the asteroid interacted with water early in its history. 4 December – Physicists report the discovery of superconductivity at 250 K under a pressure of 170 GPa. 5 December An astronomer from the University of Oxford advances a new theory, related, in part, to notions of gravitationally repulsive negative masses, presented earlier by Albert Einstein, that may help better understand, in a testable manner, the considerable amounts of unknown dark matter and dark energy in the cosmos. Researchers create a new algorithm, based on deep learning, that is able to solve text-based CAPTCHA tests in less than 0.05 seconds. Scientists in the United Kingdom announce completion of the 100,000 Genomes Project. Research published by the Global Carbon Project shows record-high carbon dioxide emissions of 37.1 billion metric tons in 2018, driven by a booming market for cars and ongoing coal use in China. 8 December – China launches Chang'e 4, the first mission to land a robotic craft on the far side of the Moon. 10 December Voyager 2, a space probe launched in 1977, is confirmed (image of onboard detections) to have left the Solar System for interstellar space on 5 November 2018, six years after its sister probe, Voyager 1 (related image). Four glaciers in the Vincennes Bay region of Antarctica are found to be thinning at surprisingly fast rates, casting doubt on the idea that the eastern part of the icy continent is stable. Researchers announce the discovery of considerable amounts of life forms, including 70% of bacteria and archaea on Earth, comprising up to 23 billion tonnes of carbon, living up to at least deep underground, including below the seabed, according to a ten-year Deep Carbon Observatory project. 11 December – A report on the impact of climate change in the Arctic, published during the latest American Geophysical Union meeting, concludes that populations of wild reindeer, or caribou, have crashed from almost 5 million to just 2.1 million animals in the last two decades. 17 December Astronomers led by Scott Sheppard announce the discovery of 2018 VG18, nicknamed "Farout", the most distant body ever observed in the Solar System at approximately 120 AU. Scientists announce that the earliest feathers may have originated 250 million years ago, 70 million years earlier than previously thought. 18 December Scientists report that the earliest flowers began about 180 million years ago, 50 million years earlier than previously thought. The Kamchatka superbolide falls over the Bering Sea, near the east coast of Russia, the third largest asteroid to hit Earth since 1900. The event would not be recognized and announced until March 2019, however. 19 December – NASA reports that the InSight lander has deployed a seismometer on Mars, the first time a seismometer has been placed onto the surface of another planet. 24 December NASA celebrates the 50th anniversary of the 1968 Christmas Eve trip around the Moon by the Apollo 8 astronauts (Earthrise image). Researchers at Tel Aviv University describe a process to make bioplastic polymers that don't require land or fresh water. Awards Fields Medal – Caucher Birkar, Alessio Figalli, Peter Scholze, Akshay Venkatesh Nobel Prize in Physiology or Medicine – James P. 
Allison and Tasuku Honjo, "for their discovery of cancer therapy by inhibition of negative immune regulation" Nobel Prize in Physics – Arthur Ashkin, Gérard Mourou and Donna Strickland, "for groundbreaking inventions in the field of laser physics" Nobel Prize in Chemistry – Frances H. Arnold, "for the directed evolution of enzymes", George P. Smith and Gregory P. Winter, "for the phage display of peptides and antibodies" Deaths January 5 – Thomas Bopp, American astronomer (b. 1949) February 1 – Barys Kit, Belarusian-American rocket scientist (b. 1910) February 2 – Joseph Polchinski, American theoretical physicist (b. 1954) February 4 – Alan Baker, British mathematician (b. 1939) February 5 – Donald Lynden-Bell, British astrophysicist (b. 1935) February 10 – Alan R. Battersby, British organic chemist (b. 1925) February 18 – Günter Blobel, German-American biologist and Nobel Prize laureate (b. 1936) February 21 – Richard E. Taylor, Canadian physicist and Nobel Prize laureate (b. 1929) March 6 – John Sulston, British biologist and Nobel Prize laureate (b. 1942) March 14 – Stephen Hawking, British theoretical physicist and cosmologist (b. 1942) April 7 – Peter Grünberg, German physicist and Nobel Prize laureate (b. 1939) May 26 – Ted Dabney, American engineer and computer scientist (b. 1937) June 29 – Arvid Carlsson, Swedish neuropharmacologist and Nobel Prize laureate (b. 1923) July 18 – Burton Richter, American physicist and Nobel Prize laureate (b. 1931) September 23 – Charles K. Kao, Hong Kong-American-British physicist and Nobel Prize laureate (b. 1933) October 3 – Leon M. Lederman, American physicist and Nobel Prize laureate (b. 1922) October 9 – Thomas A. Steitz, American biochemist and Nobel Prize laureate (b. 1940) December 9 – Riccardo Giacconi, Italian-American astrophysicist and Nobel Prize laureate (b. 1931) December 22 – Jean Bourgain, Belgian mathematician and Fields Medal laureate (b. 1954) December 23 – Elias M. Stein, American mathematician (b. 1931) December 26 – Peter Swinnerton-Dyer, English mathematician (b. 1927) December 26 – Roy J. Glauber, American theoretical physicist and Nobel Prize laureate (b. 1925) See also 2018 in spaceflight List of emerging technologies List of years in science Notes References External links 2018-related lists 21st century in science 2018-related timelines Science timelines by year
56930116
https://en.wikipedia.org/wiki/Block%27hood
Block'hood
Block'hood is a city-building video game developed by Plethora Project and published by Devolver Digital. It was released on 11 May 2017 for Microsoft Windows, MacOS and Linux. Gameplay Block'hood is a neighbourhood building simulator. It involves building a vertical tower for people to live in by combining square building blocks. There are over 200 different building blocks, each serving a different purpose and requiring different resources. Every block has inputs, which are resources it consumes, and outputs, which are resources it produces. Resources such as Energy and Food must be managed, and if a building block does not have all its required resources it will decay and need to be replaced. To produce more resources, blocks such as Wind Turbines, Farms and Water Towers can be built. Other available blocks include Flats, which house inhabitants; Parks, which provide fresh air; Shops, which produce money; and Clinics, which reduce sickness. The game is intended to be partially educational, so it incorporates real-world mechanics. When building, all blocks must be accessible, which can be achieved by adding stairs and corridors. The buildings must also be architecturally sound; for example, corridors must be supported before they can be built upon. There is a story mode with 5 chapters that also serves as the initial tutorial to the game. Additional tutorials are included that cover more complex mechanics of the game. There is a challenge mode with 24 challenges to complete, which involve producing a specified number of resources given a limited amount of money or other resources. The sandbox mode allows a neighbourhood to be built in an area of customisable size; inhabitants' demands and random events can be left on or turned off in this mode. Development Block'hood was first released for early access on 10 March 2016 for Microsoft Windows and MacOS. On 13 March 2016, a bug fix was published. On 30 March 2016, new features were released, including a new UI to inspect block properties; Pigs and Cows were added, as well as new farms. On 27 April 2016, inhabitants were added to the game along with new building blocks. On 25 May 2016, more inhabitants were added, as well as new challenge levels and more new building blocks. Plethora Project went to E3 in June 2016 to showcase the game on MacOS. The game won an award for Best Gameplay at the "Games for Change" festival in June 2016. More new features, including a world system enabling larger neighbourhoods, were released on 11 July 2016. On 11 August 2016, an update improving the menu system and user interface was released. On 14 October 2016, all 200 building blocks were made available, and dynamic effects such as wind and rain were added. On 5 December 2016, inhabitants' demands were added. The first Linux version of the game was released on 18 December 2016. The full version of the game was released on 11 May 2017 and included a new story mode. Further bug fixes were applied on 11 and 21 July 2017. The game is built on the Unity3D game engine. Reception Cogconnected gave the game 70 out of 100, Destructoid gave it 7 out of 10, Digitally Downloaded gave it 5 out of 5, and Hey Poor Critic gave it 3 out of 5. Based on 6 reviews, the score from Metacritic is 75 out of 100. 
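The input/output/decay loop described under Gameplay can be illustrated with a small simulation sketch. The C program below is purely hypothetical: the block names, resource set, tick ordering and decay threshold are assumptions made for the example and are not taken from the game's actual code or data.

```c
#include <stdio.h>
#include <stdbool.h>

/* Illustrative resource indices -- not the game's real data model. */
enum { ENERGY, FOOD, MONEY, NUM_RESOURCES };

typedef struct {
    const char *name;
    int inputs[NUM_RESOURCES];   /* resources consumed per tick */
    int outputs[NUM_RESOURCES];  /* resources produced per tick */
    int decay;                   /* accumulates while inputs are unmet */
} Block;

/* Advance one tick: a block only produces if all of its inputs can be
   paid from the shared pool; otherwise it decays and eventually fails. */
static void tick(Block *blocks, int n, int pool[NUM_RESOURCES]) {
    for (int i = 0; i < n; i++) {
        bool satisfied = true;
        for (int r = 0; r < NUM_RESOURCES; r++)
            if (pool[r] < blocks[i].inputs[r]) { satisfied = false; break; }

        if (satisfied) {
            for (int r = 0; r < NUM_RESOURCES; r++) {
                pool[r] -= blocks[i].inputs[r];
                pool[r] += blocks[i].outputs[r];
            }
        } else if (++blocks[i].decay > 3) {
            printf("%s has decayed and must be replaced\n", blocks[i].name);
        }
    }
}

int main(void) {
    int pool[NUM_RESOURCES] = {0};
    Block blocks[] = {
        {"Wind Turbine", {0},                        {[ENERGY] = 2}, 0},
        {"Farm",         {[ENERGY] = 1},             {[FOOD] = 1},   0},
        {"Flat",         {[ENERGY] = 1, [FOOD] = 1}, {[MONEY] = 1},  0},
    };
    for (int t = 0; t < 5; t++)
        tick(blocks, 3, pool);
    printf("energy=%d food=%d money=%d\n", pool[ENERGY], pool[FOOD], pool[MONEY]);
    return 0;
}
```

Removing the Wind Turbine from the array starves the Farm and the Flat, which then begin to decay, mirroring the chain of dependencies the game builds its puzzles around.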
References External links Block'hood on Steam Plethora Project Best Gameplay Awards 2017 video games Indie video games Simulation video games City-building games Strategy video games Video games developed in the United States Windows games MacOS games Linux games
20541754
https://en.wikipedia.org/wiki/TRS-80%20Model%204
TRS-80 Model 4
The TRS-80 Model 4 is the last Z80-based home computer family by Radio Shack, sold from April 1983 through late 1991. Model 4 Tandy Corporation introduced the TRS-80 Model 4 in April 1983 as the successor to the TRS-80 Model III. The Model 4 has a faster Z80A 4 MHz CPU, larger video display of 80 columns by 24 rows, bigger keyboard, and can be upgraded to 128KB of RAM. It is compatible with Model III software and CP/M application software. The Model 4 was announced in the same April 1983 press release as was the TRS-80 Model 100 laptop. The two computers were often marketed by Tandy/Radio Shack as a complementary pair. A diskless Model 4 with 16KB RAM cost $999; with 64KB RAM and one single-sided 180K disk drive it cost $1699; with 64KB RAM and two drives it cost $1999. An upgrade for Model III owners cost $799 and provided a new motherboard and keyboard. The Model 4's first appearance in the Radio Shack catalog stated: "Yes, it looks like a Model III, but it's much much more. Compare the price and features of our amazing new Model 4 to any other computer in its class. You'll find that for power, versatility, and convenience it is a true breakthrough. To add the same features to other computers, you'd have to pay a whole lot more." Commenting on its unexpected longevity as a Radio Shack product and object of aftermarket support by third-party companies, in May 1987 80 Micro magazine remarked, "Even when it was introduced in 1983, the Model 4 was seen as a last gasp for the TRS-80 line." Overview The computer has the same all-in-one cabinet as the Model III, adopting a more contemporary-looking beige color scheme instead of the black and gray used on the Models I/III. The Model 4 uses WD1770/1773 floppy controllers instead of the WD1791, which allows for a larger gap between the index hole and first sector; later releases of TRSDOS and LDOS were modified for compatibility with the controller. The Model 4 shipped with TRSDOS 6, identical to Logical Systems's LDOS 6.00 third-party operating system (itself an enhancement to older versions of TRSDOS). When the Model 4 boots into TRSDOS 6, the video display switches into 80×24 mode and the entire 64KB address space is mapped as RAM. Misosys Inc. sold a Model 4 Hardware Interface Kit which enables the extra keys on the Model 4 keyboard, and in a 128 KB Model 4, the banked memory. Intellitech sold a program called Supermod4 that allows Model III programs running on a Model 4 to activate the 4 megahertz CPU clock, larger video display, the speaker and the function keys. In August 1985 80 Micro magazine published a DoubleDuty-like task switching program that activates the external RAM banks on a 128 KB Model 4 from within Model III mode. The Model 4 can run CP/M without modification, unlike the Model I and III. Digital Research produced a version of CP/M 3.0 for the Model 4. Montezuma Micro sold a version of CP/M 2.2 that was customized for the Model 4's hardware: banked RAM, reverse video and assignable codes for the function keys. It has a utility for reading and writing CP/M disk formats of many other brands of computer. Montezuma sold a terminate-stay-resident program they called Monte's Window, which provides functionality similar to Borland Sidekick. Its code resided entirely in the banked RAM of a 128K Model 4; no user memory was occupied. DoubleDuty was made only for the Model 4, marketed by Radio Shack. This is one of the first task-switching programs available for a microcomputer. 
It uses the upper 64KB of a 128KB machine to keep resident a second TRSDOS application, which can be switched instantly with another application loaded into the main 64KB. A third partition is available for TRSDOS library commands, such as DIR. DoubleDuty first appeared in Radio Shack's 1985 Computer Catalog (RSC-12), the same year that IBM's Topview, Apple's Switcher, and Quarterdeck's DESQview first became available. DoubleDuty was written by Randy Cook, the author of the first version of TRSDOS for the original Model I. Early versions of the Model 4 mainboard were designed to accept a Zilog Z800 16 bit CPU upgrade board to replace the Z80 8 bit CPU but this option was never released. In 1987 H.I. Tech produced an enhanced CPU board, the XLR8er, using the Hitachi HD64180 Z80-compatible processor. Reception Tandy sold 71,000 Model 4 computers in 1984. BYTE in October 1983 noted the lack of native software, but praised the Model 4's backwards compatibility and TRSDOS 6's new features. The magazine concluded that the Model 4 "provides a lot of flexible computing power ... Radio Shack has a guaranteed winner". Creative Computing chose the Model 4 as the best desktop computer under $2000 for 1984, stating that the $1299 price for a system with two disk drives was "a real bargain". Gate Array Model 4 The original version of the Model 4 (Radio Shack catalog number 26-1069) does not use gate array logic chips on its CPU board, but rather Programmable Array Logic chips (PALs). Starting from late 1984, a revised version was produced which came to be known as the Gate Array Model 4 (catalog number 26-1069A). This change greatly reduced the chip count and allows the circuitry for the Floppy Disk Controller and the RS-232 serial port to be included on the CPU board (making this new Model 4 a single-board computer, unlike the original 26-1069). Model 4P The Model 4P (September 1983, Radio Shack catalog number 26-1080), is a self-contained luggable unit. It has all the features of the desktop Model 4 except for the ability to add two outboard floppy disk drives and the interface for cassette tape storage (audio sent to the cassette port in Model III mode goes to the internal speaker). It was sold with the two internal single-sided 180KB drives. It was later made with the Gate Array technology (catalog number 26-1080A). 80 Micro published an article describing a simple motherboard modification to enable the installation of two external floppy drives. The 4P's video monitor is 9" in size compared to the Model 4's 12". The smaller size, and sharper dots, produce better video output. The computer is compatible with popular internal Model 4 peripherals, and has a slot for an internal modem board. The Radio Shack modem uses its own proprietary command set and only supports communications at 300 baud. Teletrends produced a 1200 baud that uses the Hayes command set. Tandy discontinued the 4P by early 1985, stating that "even though you won't find a more enthusiastic and devoted group of owners than our Model 4P folks, transportables just weren't moving well for any company that also sold a desktop version". Reception InfoWorld in 1983 predicted that the 4P would be a "smashing success" as a "substantial improvement" on the Model 4's video and keyboard. The magazine said that it was "truly a transportable computer" and approved of the "carefully thought-out mechanical design", not too large or small. 
Although criticizing the computer's lack of advanced documentation or double-sided drives, InfoWorld concluded that the 4P "is an outstanding product at an excellent price". Model 4D The final version of the Model 4 is the Model 4D (Radio Shack catalog number 26-1070), first sold in 1985. It is a Gate Array desktop machine featuring dual TEC FB-503 disk drives with a capacity of 360KB each (double density sectors, 40 track, double-sided). Rather than using a lever-style latch as had previous Model 4 drives, these drives use a twist-style latch that provides for more reliable clamping. They are half-height drives mounted with full-height faceplates. The DeskMate productivity suite is bundled with the 4D. It supplies simple applications including a word processor, filer, spreadsheet, calendar, and mail manager. Later Misosys, Inc. updated LS-DOS 6.3 to support dates through December 31, 2011 (as well as a few other enhancements). The Model III LDOS 5.1.4 was also updated to version 5.3, supporting the same feature set as LS-DOS 6.3. The Model 4D is the last computer descended from Radio Shack's original Model I from 1977 but it is not branded as a Radio Shack product. The badge mounted on its front cover brands it as the "Tandy TRS-80 Model 4D". This change in marketing resulted from Tandy corporation's desire to enhance its stature in the marketplace, because it was perceived by some in the computer press that the old "Radio Shack" moniker connoted an image of inferior quality. The Model 4D is the last computer to bear the "TRS-80" name. It retailed for $1199 at its introduction in 1985. During 1987–1988 the retail stores removed the Model 4Ds from display but they were kept in the yearly computer catalog and were available by special order through 1991, when they were closed out for $599. Parts and repair service remained available for several years longer. References External links 80 Micro review of the Model 4: "Once More, With Feeling" 80 Micro review of the Model 4D: "The Model 4D: Tandy's 8-Bit Burro Gets A Boost" Byte magazine review of the Model 4 80 Micro review of LS-DOS 6.3 upgrade by Hardin Brothers Logical Systems advertisement in 80 Micro for LS-DOS 6.3 upgrade Model 4 Technical Reference Manual (Non-Gate Array hardware & software) Model 4 and 4P Technical Reference Manual (Gate Array versions, hardware only) The Programmer's Guide to TRSDOS Version 6 by Roy Soltoff, Misosys Inc. The Source to TRSDOS 6.2 Volume 1 (commented assembler source to resident system, excluding libraries SYS6 & SYS7 (Volume 2), and system utilities (Volume 3) 80 Micro advertisement for Montezuma Micro CP/M 2.2 for the Model 4 System Programmer's Guide for the TRS-80 Model 4/4P Using Montezuma Micro CP/M 2.2 Owner's manual for Montezuma Micro CP/M for the TRS-80 Model 4 TRSDOS/LS-DOS 6.x User Command Summary TRS-80
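The banked-memory task switching that DoubleDuty performs on a 128 KB Model 4, described in the sections above, can be sketched in outline. The fragment below is a conceptual illustration in C rather than the original Z80 code, and the port number, bank-select bit and saved-context layout are invented for the example; they do not describe the Model 4's actual memory-control register, which is documented in the technical reference manuals listed above.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical memory-control port and bank-select bit -- illustration only. */
#define MEM_CTRL_PORT  0xFF
#define BANK_SELECT    0x01

/* Stub for a port write; on the real machine this would be a Z80 OUT. */
static void out_port(uint8_t port, uint8_t val) { (void)port; (void)val; }

static uint8_t ctrl_shadow;          /* last value written to the control port */

typedef struct {
    uint16_t sp;                     /* stack pointer of the suspended task */
} task_context;

static task_context ctx[2];          /* one saved context per RAM bank */
static int current;                  /* which bank/task is mapped right now */

/* Map the other RAM bank into the CPU's address space and return the stack
   pointer of the task that was left suspended there. */
static uint16_t switch_task(uint16_t saved_sp) {
    ctx[current].sp = saved_sp;      /* remember where this task stopped */
    current ^= 1;                    /* flip to the other bank */

    ctrl_shadow ^= BANK_SELECT;      /* toggle which bank is mapped */
    out_port(MEM_CTRL_PORT, ctrl_shadow);

    return ctx[current].sp;          /* caller reloads SP and resumes there */
}

int main(void) {
    ctx[1].sp = 0xF000;                        /* pretend task B was left here */
    uint16_t sp = switch_task(0x8000);         /* suspend task A, resume task B */
    printf("resuming other task at SP=0x%04X\n", sp);
    return 0;
}
```

The essential point is that the switch amounts to a register write plus a saved stack pointer, which is why the swap between the two resident applications could be described as instantaneous.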
93467
https://en.wikipedia.org/wiki/User%20space%20and%20kernel%20space
User space and kernel space
A modern computer operating system usually segregates virtual memory into user space and kernel space. Primarily, this separation serves to provide memory protection and hardware protection from malicious or errant software behaviour. Kernel space is strictly reserved for running a privileged operating system kernel, kernel extensions, and most device drivers. In contrast, user space is the memory area where application software and some drivers execute. Overview The term userland (or user space) refers to all code that runs outside the operating system's kernel. Userland usually refers to the various programs and libraries that the operating system uses to interact with the kernel: software that performs input/output, manipulates file system objects, application software, etc. Each user space process normally runs in its own virtual memory space, and, unless explicitly allowed, cannot access the memory of other processes. This is the basis for memory protection in today's mainstream operating systems, and a building block for privilege separation. A separate user mode can also be used to build efficient virtual machines – see Popek and Goldberg virtualization requirements. With enough privileges, processes can request the kernel to map part of another process's memory space to its own, as is the case for debuggers. Programs can also request shared memory regions with other processes, although other techniques are also available to allow inter-process communication. Implementation The most common way of implementing a user mode separate from kernel mode involves operating system protection rings. Protection rings, in turn, are implemented using CPU modes. Typically, kernel space programs run in kernel mode, also called supervisor mode; normal applications in user space run in user mode. Many operating systems are single address space operating systems—they have a single address space for all user-mode code. (The kernel-mode code may be in the same address space, or it may be in a second address space). Many other operating systems have a per-process address space, a separate address space for each and every user-mode process. Another approach taken in experimental operating systems is to have a single address space for all software, and rely on a programming language's semantics to make sure that arbitrary memory cannot be accessed – applications simply cannot acquire any references to the objects that they are not allowed to access. This approach has been implemented in JXOS, Unununium as well as Microsoft's Singularity research project. See also BIOS CPU modes Early user space Memory protection OS-level virtualization Notes References External links Linux Kernel Space Definition Operating system technology Device drivers
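As a concrete illustration of the boundary described above, the short C program below (assuming a POSIX system such as Linux) performs its output by asking the kernel through a system call rather than by touching kernel memory directly.

```c
#include <string.h>
#include <unistd.h>

int main(void) {
    const char msg[] = "hello from user space\n";

    /* write() traps into kernel mode; the kernel copies the buffer to the
       file behind the descriptor and then returns control to user mode. */
    write(STDOUT_FILENO, msg, strlen(msg));
    return 0;
}
```

If the same program instead tried to dereference an address belonging to the kernel, the memory-management hardware would block the access and the process would receive a fault (on Linux, SIGSEGV) rather than being allowed to read or corrupt kernel memory.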
1950210
https://en.wikipedia.org/wiki/Tim%20Willits
Tim Willits
Tim Willits is the former studio director, co-owner, and level designer of id Software, the American video game developer company. As of August 2019, Willits is the chief creative officer at Saber Interactive. He became a Director of 3D Realms with Saber Interactive’s acquisition of the company. Biography Willits is a computer science and business graduate of the University of Minnesota and a former member of the University of Minnesota Army ROTC program. Willits was the battalion cadet-command sergeant major (C/CSM) during his junior year and attended ROTC Advanced Camp at Fort Lewis, Washington during the summer between his junior and senior years of college. After an injury during the summer, Willits completed two rotations, being assigned to both the first and seventh cadet regiments during that summer. He held the rank of cadet-major (C/MAJ) during his senior year and was assigned as the battalion training officer. Personal life Married for the second time in 2009, Willits currently lives in a Dallas suburb with his wife, Alison Barron Willits. Together, both of them have triplets. Career Willits has stated in numerous interviews that he was inspired to make video games when he downloaded a shareware version of Doom. He played the first room of E1M1, thinking that was the entire demo, then, discovering a door that led the player to the other rooms. It was that moment when the door opened that Willits decided he wanted to make video games. He joined id Software in 1995 after impressing the owners and development team with Doom levels he forged in his spare time and distributed free over the Internet. Willits has worked on Strife, The Ultimate Doom, Quake, Quake II, Quake III Arena, Quake III: Team Arena and Doom 3. Willits was lead designer on Doom 3, and executive producer on Quake 4. He was the creative director on Rage and Quake Live. Willits was referenced in the Doom movie as Dr. Willits. Willits is the only id Software employee who has been to every single QuakeCon event since its inception in 1996, which is something he is proud of. That is no longer the case since he left id in 2019. Willits was leading as game director on the arena shooter, Quake Champions. On July 18, 2019, Willits announced he would leave id Software after serving for 24 years. He is now the chief creative officer at Saber Interactive. Controversy Willits received attention in August 2017 for claiming that he created the concept of multiplayer maps during the development of Quake. According to Willits, he approached coworkers John Romero and John Carmack with the idea of maps which could only be played in multiplayer, which Willits claimed the two dismissed as "the stupidest idea they'd ever heard". The following day, Romero refuted Willits' statement on his personal blog, claiming that Willits' alleged encounter between him and Carmack never happened. Carmack said that he does not recall the conversation between Tim Willits, John Romero, and himself, and he trusts Romero's recollection of events, in line with the account detailed on Romero's blog. Romero explained that many hundreds of deathmatch-only maps had been made for Doom prior to Quake's release, including a deathmatch map created by then-id Software employee American McGee. Romero also noted that Marathon and Rise of the Triad, first person shooters which predated Quake by over a year, both shipped with maps exclusive to multiplayer. Tom Hall, co-founder of id Software and director of Rise of the Triad, gave his support for Romero. 
Willits responded by posting to his Instagram an early video of a map fragment named Tim14.bsp, showing elements of Q1DM3, and stated that he stands by what he said. In January 2020, Willits appeared on the Arcade Attack Podcast and clarified that when he talked about multiplayer-only maps he was specifically talking about Quake, not FPS games in general. He also added that Quake was the first FPS game with a dedicated client-server architecture for multiplayer. Willits had been interviewed by Warren Spector in 2007 and gave the same account of creating the concept of multiplayer-only maps. Willits also claimed to have created all of Quake's shareware levels; this was disputed by John Romero. Works The following are Willits' works, consisting mostly of id Software titles: Notes References External links Willits' profile from MobyGames E3 2007: id Into the Future interview from IGN Living people Video game designers Creative directors Id Software people 1971 births
16693098
https://en.wikipedia.org/wiki/Mausezahn
Mausezahn
Mausezahn (German for "mouse tooth") is a fast network traffic generator written in C which allows the user to craft nearly every possible and "impossible" packet. Since version 0.31, Mausezahn has been open source under the GPLv2. Herbert Haas, the original developer of Mausezahn, died on 25 June 2011. The project has been incorporated into the netsniff-ng toolkit, and continues to be developed there. Typical applications of Mausezahn include: Testing or stressing IP multicast networks Penetration testing of firewalls and IDS Finding weaknesses in network software or appliances Creation of malformed packets to verify whether a system processes a given protocol correctly Didactic demonstrations as a lab utility Mausezahn allows sending an arbitrary sequence of bytes directly out of the network interface card. An integrated packet builder provides a simple command-line interface for more complicated packets. Since version 0.38, Mausezahn offers a multi-threaded mode with a Cisco-style command-line interface. Features As of version 0.38 Mausezahn supports the following features: Jitter measurement via Real-time Transport Protocol (RTP) packets VLAN tagging (arbitrary number of tags) MPLS label stacks (arbitrary number of labels) BPDU packets as used by the Spanning Tree Protocol (PVST+ is also supported) Cisco Discovery Protocol messages Link Layer Discovery Protocol messages IGMP version 1 and 2 query and report messages DNS messages ARP messages IP, UDP, and TCP header creation ICMP packets Syslog messages Address, port, and TCP sequence number sweeps Random MAC or IP addresses, FQDN addresses A very high packet transmission rate (approximately 100,000 packets per second) Mausezahn sends only exactly the packets the user has specified. It is therefore less suited for vulnerability audits, where additional algorithms are required to detect open ports behind a firewall and to automatically evade intrusion detection systems (IDS). However, a network administrator could implement audit routines via a script that uses Mausezahn to create the actual packets. Platforms Mausezahn currently runs only on Linux systems, and there are no plans to port it to the Windows operating system. See also Traffic generation model Nessus Nmap References External links Official/new website Computer security software Free network management software Linux-only free software Free software programmed in C
862179
https://en.wikipedia.org/wiki/Watchdog%20timer
Watchdog timer
A watchdog timer (sometimes called a computer operating properly or COP timer, or simply a watchdog) is an electronic or software timer that is used to detect and recover from computer malfunctions. Watchdog timers are widely used in computers to facilitate automatic correction of temporary hardware faults, and to prevent errant or malevolent software from disrupting system operation. During normal operation, the computer regularly restarts the watchdog timer to prevent it from elapsing, or "timing out". If, due to a hardware fault or program error, the computer fails to restart the watchdog, the timer will elapse and generate a timeout signal. The timeout signal is used to initiate corrective actions. The corrective actions typically include placing the computer and associated hardware in a safe state and invoking a computer reboot. Microcontrollers often include an integrated, on-chip watchdog. In other computers the watchdog may reside in a nearby chip that connects directly to the CPU, or it may be located on an external expansion card in the computer's chassis. Applications Watchdog timers are commonly found in embedded systems and other computer-controlled equipment where humans cannot easily access the equipment or would be unable to react to faults in a timely manner. In such systems, the computer cannot depend on a human to invoke a reboot if it hangs; it must be self-reliant. For example, remote embedded systems such as space probes are not physically accessible to human operators; these could become permanently disabled if they were unable to autonomously recover from faults. In robots and other automated machines, a fault in the control computer could cause equipment damage or injuries before a human could react, even if the computer is easily accessed. A watchdog timer is usually employed in cases like these. Watchdog timers are also used to monitor and limit software execution time on a normally functioning computer. For example, a watchdog timer may be used when running untrusted code in a sandbox, to limit the CPU time available to the code and thus prevent some types of denial-of-service attacks. In real-time operating systems, a watchdog timer may be used to monitor a time-critical task to ensure it completes within its maximum allotted time and, if it fails to do so, to terminate the task and report the failure. Architecture and operation Restarting The act of restarting a watchdog timer is commonly referred to as kicking the watchdog. Kicking is typically done by writing to a watchdog control port or by setting a particular bit in a register. Alternatively, some tightly coupled watchdog timers are kicked by executing a special machine language instruction. An example of this is the CLRWDT (clear watchdog timer) instruction found in the instruction set of some PIC microcontrollers. In computers that are running operating systems, watchdog restarts are usually invoked through a device driver. For example, in the Linux operating system, a user space program will kick the watchdog by interacting with the watchdog device driver, typically by writing a zero character to the /dev/watchdog device file or by calling a KEEPALIVE ioctl; a minimal sketch of this interface follows below. The device driver, which serves to abstract the watchdog hardware from user space programs, may also be used to configure the time-out period and start and stop the timer. Some watchdog timers will only allow kicks during a specific time window. 
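For illustration, the following is a minimal C sketch of the driver interface described above; it is not taken from the Linux documentation. It assumes a driver that exposes /dev/watchdog and the standard WDIOC ioctls from <linux/watchdog.h>, and the 30-second timeout and 5-second kick interval are arbitrary illustrative values.

/* Kick the Linux watchdog device at regular intervals. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/watchdog.h>

int main(void)
{
    int fd = open("/dev/watchdog", O_WRONLY);   /* opening typically starts the timer */
    if (fd < 0) {
        perror("open /dev/watchdog");
        return 1;
    }

    int timeout = 30;
    ioctl(fd, WDIOC_SETTIMEOUT, &timeout);      /* configure the time-out period */

    for (int i = 0; i < 12; i++) {
        ioctl(fd, WDIOC_KEEPALIVE, 0);          /* kick via the ioctl ...        */
        write(fd, "\0", 1);                     /* ... or by writing a character */
        sleep(5);                               /* kick well before the timeout  */
    }

    write(fd, "V", 1);   /* "magic close": ask drivers that support it to stop the timer */
    close(fd);
    return 0;
}

Either the write or the ioctl is sufficient on drivers that implement both; the example shows the two calls side by side only to mirror the description above.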
The window timing is usually relative to the previous kick or, if the watchdog has not yet been kicked, to the moment the watchdog was enabled. The window begins after a delay following the previous kick, and ends after a further delay. If the computer attempts to kick the watchdog before or after the window, the watchdog will not be restarted, and in some implementations this will be treated as a fault and trigger corrective action. Enabling A watchdog timer is said to be enabled when operating and disabled when idle. Upon power-up, a watchdog may be unconditionally enabled or it may be initially disabled and require an external signal to enable it. In the latter case, the enabling signal may be automatically generated by hardware or it may be generated under software control. When automatically generated, the enabling signal is typically derived from the computer reset signal. In some systems the reset signal is directly used to enable the watchdog. In others, the reset signal is delayed so that the watchdog will become enabled at some later time following the reset. This delay allows time for the computer to boot before the watchdog is enabled. Without this delay, the watchdog would timeout and invoke a subsequent reset before the computer can run its application software — the software which kicks the watchdog — and the system would become stuck in an endless cycle of incomplete reboots. Single-stage watchdog Watchdog timers come in many configurations, and many allow their configurations to be altered. For example, the watchdog and CPU may share a common clock signal as shown in the block diagram below, or they may have independent clock signals. A basic watchdog timer has a single timer stage which, upon timeout, typically will reset the CPU: Multistage watchdog Two or more timers are sometimes cascaded to form a multistage watchdog timer, where each timer is referred to as a timer stage, or simply a stage. For example, the block diagram below shows a three-stage watchdog. In a multistage watchdog, only the first stage is kicked by the processor. Upon first stage timeout, a corrective action is initiated and the next stage in the cascade is started. As each subsequent stage times out, it triggers a corrective action and starts the next stage. Upon final stage timeout, a corrective action is initiated, but no other stage is started because the end of the cascade has been reached. Typically, single-stage watchdog timers are used to simply restart the computer, whereas multistage watchdog timers will sequentially trigger a series of corrective actions, with the final stage triggering a computer restart. Time intervals Watchdog timers may have either fixed or programmable time intervals. Some watchdog timers allow the time interval to be programmed by selecting from among a few selectable, discrete values. In others, the interval can be programmed to arbitrary values. Typically, watchdog time intervals range from ten milliseconds to a minute or more. In a multistage watchdog, each timer may have its own, unique time interval. Corrective actions A watchdog timer may initiate any of several types of corrective action, including maskable interrupt, non-maskable interrupt, hardware reset, fail-safe state activation, power cycling, or combinations of these. Depending on its architecture, the type of corrective action or actions that a watchdog can trigger may be fixed or programmable. Some computers (e.g., PC compatibles) require a pulsed signal to invoke a hardware reset. 
In such cases, the watchdog typically triggers a hardware reset by activating an internal or external pulse generator, which in turn creates the required reset pulses. In embedded systems and control systems, watchdog timers are often used to activate fail-safe circuitry. When activated, the fail-safe circuitry forces all control outputs to safe states (e.g., turns off motors, heaters, and high-voltages) to prevent injuries and equipment damage while the fault persists. In a two-stage watchdog, the first timer is often used to activate fail-safe outputs and start the second timer stage; the second stage will reset the computer if the fault cannot be corrected before the timer elapses. Watchdog timers are sometimes used to trigger the recording of system state information—which may be useful during fault recovery—or debug information (which may be useful for determining the cause of the fault) onto a persistent medium. In such cases, a second timer—which is started when the first timer elapses—is typically used to reset the computer later, after allowing sufficient time for data recording to complete. This allows time for the information to be saved, but ensures that the computer will be reset even if the recording process fails. For example, the above diagram shows a likely configuration for a two-stage watchdog timer. During normal operation the computer regularly kicks Stage1 to prevent a timeout. If the computer fails to kick Stage1 (e.g., due to a hardware fault or programming error), Stage1 will eventually timeout. This event will start the Stage2 timer and, simultaneously, notify the computer (by means of a non-maskable interrupt) that a reset is imminent. Until Stage2 times out, the computer may attempt to record state information, debug information, or both. The computer will be reset upon Stage2 timeout. Fault detection A watchdog timer provides automatic detection of catastrophic malfunctions that prevent the computer from kicking it. However, computers often have other, less-severe types of faults which do not interfere with kicking, but which still require watchdog oversight. To support these, a computer system is typically designed so that its watchdog timer will be kicked only if the computer deems the system functional. The computer determines whether the system is functional by conducting one or more fault detection tests and will kick the watchdog only if all tests have passed. In computers that are running an operating system and multiple processes, a single, simple test may be insufficient to guarantee normal operation, as it could fail to detect a subtle fault condition and therefore allow the watchdog to be kicked even though a fault condition exists. For example, in the case of the Linux operating system, a user-space watchdog daemon may simply kick the watchdog periodically without performing any tests. As long as the daemon runs normally, the system will be protected against serious system crashes such as a kernel panic. To detect less severe faults, the daemon can be configured to perform tests that cover resource availability (e.g., sufficient memory and file handles, reasonable CPU time), evidence of expected process activity (e.g., system daemons running, specific files being present or updated), overheating, and network activity, and system-specific test scripts or programs may also be run. Upon discovery of a failed test, the computer may attempt to perform a sequence of corrective actions under software control, culminating with a software-initiated reboot. 
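As an illustration of this arrangement, the following minimal sketch again assumes the Linux /dev/watchdog interface; check_system_health() is a hypothetical placeholder for the kind of site-specific tests described above, and the 5-second interval is an arbitrary value.

/* Sketch of a user-space watchdog daemon: the hardware watchdog is
 * kicked only while the health tests pass. */
#include <fcntl.h>
#include <stdbool.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/watchdog.h>

static bool check_system_health(void)
{
    /* Placeholder for tests such as memory and file-handle availability,
     * expected process activity, or temperature checks.  Returning false
     * stops the kicks and lets the hardware timer expire. */
    return true;
}

int main(void)
{
    int fd = open("/dev/watchdog", O_WRONLY);
    if (fd < 0)
        return 1;

    while (check_system_health()) {
        ioctl(fd, WDIOC_KEEPALIVE, 0);  /* kick only while every test passes    */
        sleep(5);                       /* interval must stay below the timeout */
    }

    /* A test has failed: stop kicking.  The daemon could first attempt an
     * orderly, software-initiated reboot; on drivers that support the
     * "magic close" feature, exiting without writing 'V' leaves the timer
     * running, so the hardware reset remains as the last resort. */
    return 0;
}

The hardware reset then serves as the final stage if software-initiated recovery fails, as described below.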
If the software fails to invoke a reboot, the watchdog timer will time out and invoke a hardware reset. In effect, this is a multistage watchdog timer in which the software constitutes the first and intermediate timer stages and the hardware reset constitutes the final stage. In a Linux system, for example, the watchdog daemon may attempt to perform a software-initiated restart, which can be preferable to a hardware reset as the file systems will be safely unmounted and fault information will be logged. However, it is essential to have the insurance of the hardware timer, as a software restart can fail under a number of fault conditions. See also Command Loss Timer Reset a related method to keep a spacecraft commandable Safe mode (spacecraft) Immunity Aware Programming Dead man's switch Heartbeat (computing) Keepalive Notes References External links Building a great watchdog – Article by Jack Ganssle Embedded systems
8425040
https://en.wikipedia.org/wiki/List%20of%20network%20protocol%20stacks
List of network protocol stacks
This is a list of protocol stack architectures. A protocol stack is a suite of complementary communications protocols in a computer network or a computer bus system. See also Lists of network protocols IEEE 802 Network protocols Communications protocols Network protocol stacks
49627
https://en.wikipedia.org/wiki/Universal%20Disk%20Format
Universal Disk Format
Universal Disk Format (UDF) is an open, vendor-neutral file system for computer data storage for a broad range of media. In practice, it has been most widely used for DVDs and newer optical disc formats, supplanting ISO 9660. Due to its design, it is very well suited to incremental updates on both recordable and (re)writable optical media. UDF was developed and maintained by the Optical Storage Technology Association (OSTA). In engineering terms, Universal Disk Format is a profile of the specification known as ISO/IEC 13346 and ECMA-167. Usage Normally, authoring software will master a UDF file system in a batch process and write it to optical media in a single pass. But when packet writing to rewritable media, such as CD-RW, UDF allows files to be created, deleted and changed on-disc just as a general-purpose filesystem would on removable media like floppy disks and flash drives. This is also possible on write-once media, such as CD-R, but in that case the space occupied by the deleted files cannot be reclaimed (and instead becomes inaccessible). Multi-session mastering is also possible in UDF, though some implementations may be unable to read disks with multiple sessions. History The Optical Storage Technology Association standardized the UDF file system to form a common file system for all optical media: both for read-only media and for re-writable optical media. When first standardized, the UDF file system aimed to replace ISO 9660, allowing support for both read-only and writable media. After the release of the first version of UDF, the DVD Consortium adopted it as the official file system for DVD-Video and DVD-Audio. UDF shares the basic volume descriptor format with ISO 9660. A "UDF Bridge" format has been defined since revision 1.50 so that a disc can also contain an ISO 9660 file system that makes references to files on the UDF part. Revisions Multiple revisions of UDF have been released: Revision 1.00 (24 October 1995). Original release. Revision 1.01 (3 November 1995). Added the DVD Appendix and made a few minor changes. Revision 1.02 (30 August 1996). This format is used by DVD-Video discs. Revision 1.50 (4 February 1997). Added support for (virtual) rewritability on CD-R/DVD-R media by introducing the VAT structure. Added sparing tables for defect management on rewritable media such as CD-RW, DVD-RW and DVD+RW. Added the UDF Bridge format. Revision 2.00 (3 April 1998). Added support for Stream Files and real-time files (for DVD recording) and simplified directory management. VAT support was extended. Revision 2.01 (15 March 2000) is mainly a bugfix release to UDF 2.00. Many of the UDF standard's ambiguities were resolved in version 2.01. Revision 2.50 (30 April 2003). Added the Metadata Partition, facilitating metadata clustering, easier crash recovery and optional duplication of file system information: all metadata such as nodes and directory contents are written on a separate partition which can optionally be mirrored. This format is used by some Blu-ray discs and most HD DVD discs. Revision 2.60 (1 March 2005). Added the Pseudo OverWrite method for drives supporting pseudo overwrite capability on sequentially recordable media. Has read-only compatibility with UDF 2.50 implementations. (Some Blu-ray discs use this format.) UDF revisions are internally encoded as binary-coded decimals; Revision 2.60, for example, is represented as 0x0260. 
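As an illustration of this encoding, the following short C sketch, which is not part of the UDF specification, decodes a 16-bit binary-coded-decimal revision value of the kind just described into its human-readable form; the helper name is arbitrary.

/* Decode a UDF revision stored as a 16-bit binary-coded decimal,
 * e.g. 0x0260 for revision 2.60 and 0x0102 for revision 1.02. */
#include <stdint.h>
#include <stdio.h>

static void print_udf_revision(uint16_t bcd)
{
    unsigned major = ((bcd >> 12) & 0xF) * 10 + ((bcd >> 8) & 0xF);  /* high byte */
    unsigned minor = ((bcd >> 4) & 0xF) * 10 + (bcd & 0xF);          /* low byte  */
    printf("UDF %u.%02u\n", major, minor);
}

int main(void)
{
    print_udf_revision(0x0260);  /* prints "UDF 2.60" */
    print_udf_revision(0x0102);  /* prints "UDF 1.02" */
    return 0;
}

The minimum-read, minimum-write, and maximum-write revision fields discussed next use the same revision numbering.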
In addition to declaring its own revision, compatibility for each volume is defined by the minimum read and minimum write revisions, each signalling the requirements for these operations to be possible for every structure on this image. A "maximum write" revision additionally records the highest UDF support level of all the implementations that have written to this image. For example, a UDF 2.01 volume that does not use Stream Files (introduced in UDF 2.00) but uses VAT (UDF 1.50), created by a UDF 2.60-capable implementation, may have the revision declared as 2.01, the minimum read revision set to 1.50, the minimum write to 2.01, and the maximum write to 2.60. Specifications The UDF standard defines three file system variations, called "builds". These are: Plain (Random Read/Write Access). This is the original format supported in all UDF revisions Virtual Allocation Table a.k.a. VAT (Incremental Writing). Used specifically for writing to write-once media Spared (Limited Random Write Access). Used specifically for writing to rewritable media Plain build Introduced in the first version of the standard, this format can be used on any type of disk that allows random read/write access, such as hard disks, DVD+RW and DVD-RAM media. Metadata (up to v2.50) and file data are addressed more or less directly. When writing to such a disk in this format, any physical block on the disk may be chosen for allocation of new or updated files. Since this is the basic format, practically any operating system or file system driver claiming support for UDF should be able to read this format. VAT build Write-once media such as DVD-R and CD-R have limitations when being written to, in that each physical block can only be written to once, and the writing must happen incrementally. Thus the plain build of UDF can only be written to CD-Rs by pre-mastering the data and then writing all data in one piece to the media, similar to the way an ISO 9660 file system gets written to CD media. To enable a CD-R to be used virtually like a hard disk, whereby the user can add and modify files on a CD-R at will (so-called "drive letter access" on Windows), OSTA added the VAT build to the UDF standard in its revision 1.5. The VAT is an additional structure on the disc that allows packet writing; that is, remapping physical blocks when files or other data on the disc are modified or deleted. For write-once media, the entire disc is virtualized, making the write-once nature transparent for the user; the disc can be treated the same way one would treat a rewritable disc. The write-once nature of CD-R or DVD-R media means that when a file is deleted on the disc, the file's data still remains on the disc. It no longer appears in the directory, but it still occupies the original space where it was stored. Eventually, after using this scheme for some time, the disc will be full, as free space cannot be recovered by deleting files. Special tools can be used to access the previous state of the disc (the state before the delete occurred), making recovery possible. Not all drives fully implement version 1.5 or higher of the UDF standard, and some may therefore be unable to handle VAT builds. Spared (RW) build Rewriteable media such as DVD-RW and CD-RW have fewer limitations than DVD-R and CD-R media. Sectors can be rewritten at random (though only in packets at a time). These media can be erased entirely at any time, making the disc blank again, ready for writing a new UDF or other file system (e.g., ISO 9660 or CD Audio) to it. 
However, sectors of -RW media may "wear out" after a while, meaning that their data becomes unreliable, through having been rewritten too often (typically after a few hundred rewrites, with CD-RW). The plain and VAT builds of the UDF format can be used on rewriteable media, with some limitations. If the plain build is used on a -RW media, file-system level modification of the data must not be allowed, as this would quickly wear out often-used sectors on the disc (such as those for directory and block allocation data), which would then go unnoticed and lead to data loss. To allow modification of files on the disc, rewriteable discs can be used like -R media using the VAT build. This ensures that all blocks get written only once (successively), ensuring that there are no blocks that get rewritten more often than others. This way, a RW disc can be erased and reused many times before it should become unreliable. However, it will eventually become unreliable with no easy way of detecting it. When using the VAT build, CD-RW/DVD-RW media effectively appears as CD-R or DVD+/-R media to the computer. However, the media may be erased again at any time. The spared build was added in revision 1.5 to address the particularities of rewriteable media. This build adds an extra Sparing Table in order to manage the defects that will eventually occur on parts of the disc that have been rewritten too many times. This table keeps track of worn-out sectors and remaps them to working ones. UDF defect management does not apply to systems that already implement another form of defect management, such as Mount Rainier (MRW) for optical discs, or a disk controller for a hard drive. The tools and drives that do not fully support revision 1.5 of UDF will ignore the sparing table, which would lead them to read the outdated worn-out sectors, leading to retrieval of corrupted data. The so-called UDF overhead that is spread over the entire disc reserves a portion of the data storage space, limiting the useable capacity of CD-RW with e.g. 650 MB of original capacity to around 500 MB. Character set The UDF specifications allow only one Character Set OSTA CS0, which can store any Unicode Code point excluding U+FEFF and U+FFFE. Additional character sets defined in ECMA-167 are not used. Since Errata DCN-5157, the range of code points was expanded to all code points from Unicode 4.0 (or any newer or older version), which includes Plane 1-16 characters such as Emoji. DCN-5157 also recommends normalizing the strings to Normalization Form C. The OSTA CS0 character set stores a 16-bit Unicode string "compressed" into 8-bit or 16-bit units, preceded by a single-byte "compID" tag to indicate the compression type. The 8-bit storage is functionally equivalent to ISO-8859-1, and the 16-bit storage is UTF-16 in big endian. The reference algorithm neither checks for forbidden code points nor interprets surrogate pairs, so like NTFS the string may be malformed. (No specific form of storage is specified by DCN-5157, but UTF-16BE is the only well-known method for storing all of Unicode while being mostly backward compatible with UCS-2.) Compatibility Many DVD players do not support any UDF revision other than version 1.02. Discs created with a newer revision may still work in these players if the ISO 9660 bridge format is used. Even if an operating system claims to be able to read UDF 1.50, it still may only support the plain build and not necessarily either the VAT or Spared UDF builds. 
Mac OS X 10.4.5 claims to support Revision 1.50 (see man mount_udf), yet it can only mount disks of the plain build properly and provides no virtualization support at all. It cannot mount UDF disks with VAT, as seen with the Sony Mavica issue. Releases before 10.4.11 mount disks with a Sparing Table but do not read their files correctly; version 10.4.11 fixes this problem. Similarly, Windows XP Service Pack 2 (SP2) cannot read DVD-RW discs that use the UDF 2.00 sparing tables as a defect management system. This problem occurs if the UDF defect management system creates a sparing table that spans more than one sector on the DVD-RW disc. Windows XP SP2 can recognize that a DVD is using UDF, but Windows Explorer displays the contents of the DVD as an empty folder. A hotfix is available for this and is included in Service Pack 3. Due to the default UDF versions and options, a UDF partition formatted by Windows cannot be written under macOS. On the other hand, a partition formatted by macOS cannot be directly written by Windows, due to the requirement of an MBR partition table. In addition, Linux only supports writing to UDF 2.01. A third-party script for Linux and macOS handles these incompatibilities by using UDF 2.01 and adding a fake MBR; on Windows, the best solution is to use a command-line formatting tool. See also Comparison of file systems DVD authoring ISO/IEC 13490 References Further reading ISO/IEC 13346 standard, also known as ECMA-167. External links OSTA home page UDF specifications: 1.02, 1.50, 2.00, 2.01, 2.50, 2.60 (March 1, 2005), SecureUDF Wenguang Wang's UDF Introduction Linux UDF support Microsoft Windows UDF Read Troubleshooting AIX - CD-ROM file system and UDFS Disk file systems ISO standards IEC standards Ecma standards Windows components
40205956
https://en.wikipedia.org/wiki/Wolfram%20Language
Wolfram Language
The Wolfram Language is a general multi-paradigm programming language developed by Wolfram Research. It emphasizes symbolic computation, functional programming, and rule-based programming and can employ arbitrary structures and data. It is the programming language of the mathematical symbolic computation program Mathematica. History The Wolfram Language was a part of the initial version of Mathematica in 1988. Symbolic aspects of the engine make it a computer algebra system. The language can perform integration, differentiation, and matrix manipulation, and can solve differential equations using a set of rules. The notebook model and the ability to embed sound and images also date to 1988, according to Theodore Gray's patent. An online frontend for the language, WolframAlpha, was released in 2009. Wolfram implemented this website by translating natural language statements into Wolfram-language queries that link to its database. The work leading to Wolfram Alpha also means that Wolfram's implementation of the language now has built-in access to a knowledge base as well as natural language processing functions. Wolfram also added features for more complex tasks, such as 3D modeling. A name for the language was finally adopted in 2013, when Wolfram Research decided to make a version of the language engine free for Raspberry Pi users and needed a name for it. It was included in the recommended software bundle that the Raspberry Pi Foundation provides for beginners, which caused some controversy due to the Wolfram language's proprietary nature. Plans to port the Wolfram language to the Intel Edison were announced after the board's introduction at CES 2014, but the port was never released. In 2019, a link was added to make Wolfram libraries compatible with the Unity game engine, giving game developers access to the language's high-level functions. Syntax The Wolfram Language syntax is overall similar to the M-expressions of 1960s LISP, with support for infix operators and "function-notation" function calls. Basics The Wolfram language writes basic arithmetic expressions using infix operators. (* This is a comment. *) 4 + 3 (* = 7 *) 1 + 2 * (3 + 4) (* = 15 *) (* Note that Multiplication can be omitted: 1 + 2 (3 + 4) *) (* Divisions return rational numbers: *) 6 / 4 (* = 3/2 *) Function calls are denoted with square brackets: Sin[Pi] (* = 0 *) (* This is the function to convert rationals to floating point: *) N[3 / 2] (* = 1.5 *) Lists are enclosed in curly brackets: Oddlist={1,3,5} (* = {1,3,5} *) Syntax sugar The language may deviate from the M-expression paradigm when an alternative, more human-friendly way of showing an expression is available: A number of formatting rules are used in this language, including TraditionalForm for typeset expressions and InputForm for language input. Functions can also be applied using the prefix expression @ (as in f @ x) and the postfix expression // (as in x // f). Derivatives can be denoted with an apostrophe, as in f'[x]. The infix operators themselves are considered "sugar" for the function notation system. A formatter desugars the input: FullForm[1+2] (* = Plus[1, 2] *) Functional programming Currying is supported. Pattern matching Functions in the Wolfram Language are effectively a case of simple patterns for replacement: F[x_] := x ^ 0 The := is a "SetDelayed" operator, so that the x is not immediately looked for. x_ is syntax sugar for Pattern[x, Blank[]], i.e. a "blank" for any value to replace x in the rest of the evaluation. 
An iteration of bubble sort is expressed as: sortRule := {x___,y_,z_,k___} /; y>z -> {x,z,y,k} (* Rule[Condition[List[PatternSequence[x, BlankNullSequence[]], Pattern[y, Blank[]], Pattern[z, Blank[]], PatternSequence[k, BlankNullSequence[]]], Greater[y, z]], List[x, z, y, k]] *) The /; operator is "condition", so that the rule only applies when y>z. The three underscores are the syntax for a BlankNullSequence, a sequence that can be null. The ReplaceRepeated operator //. can be used to apply this rule repeatedly, until no more change happens: { 9, 5, 3, 1, 2, 4 } //. sortRule (* = ReplaceRepeated[{ 9, 5, 3, 1, 2, 4 }, sortRule] *) (* = {1, 2, 3, 4, 5, 9} *) The pattern matching system also easily gives rise to rule-based integration and differentiation. The following are excerpts from the Rubi package of rules: (* Reciprocal rule *) Int[1/x_,x_Symbol] := Log[x]; (* Power rule *) Int[x_^m_.,x_Symbol] := x^(m+1)/(m+1) /; FreeQ[m,x] && NeQ[m,-1] Implementations The official, and reference, implementation of the Wolfram Language lies in Mathematica and associated online services. These are closed source. Wolfram Research has, however, released a C++ parser of the language under the open source MIT License. The reference book is open access. In the over three-decade-long existence of the Wolfram language, a number of open-source third-party implementations have also been developed. Richard Fateman's MockMMA from 1991 is of historical note, both for being the earliest reimplementation and for having received a cease-and-desist from Wolfram. Modern ones still being maintained include Symja in Java, expreduce in Golang, and the SymPy-based Mathics. These implementations focus on the core language and the computer algebra system that it implies, not on the online "knowledgebase" features of Wolfram. In 2019, Wolfram Research released a freeware Wolfram Engine, to be used as a programming library in non-commercial software. Naming The language was officially named in June 2013 although, as the backend of the computing system Mathematica, it has been in use in various forms for over 30 years since Mathematica's initial release. 
See also Notebook interface References External links Documentation for the Wolfram Language An Elementary Introduction to the Wolfram Language The Wolfram Programming Cloud WolframLanguage.org: a guide to community resources about Wolfram Language Showcase of the "Mathematica language", Code Golf StackExchange Community Wiki Mathematics, Physics & Chemistry with the Wolfram Language (World Scientific, 2022) Array programming languages Audio programming languages Computational notebook Computer algebra systems Computer vision software Concatenative programming languages Cross-platform software Data mining and machine learning software Data visualization software Data-centric programming languages Declarative programming languages Dynamically typed programming languages Educational programming languages Finite element software Formula editors Formula manipulation languages Functional languages Functional programming High-level programming languages Homoiconic programming languages Image processing software Linear algebra Literate programming Multi-paradigm programming languages Neural network software Numerical linear algebra Numerical programming languages Object-oriented programming languages Ontology languages Parallel computing Pattern matching programming languages Programming languages created in 1988 Simulation programming languages Social network analysis software Software modeling language SQL data access Statistical programming languages Technical analysis software Term-rewriting programming languages Theorem proving software systems Wolfram Research
2762024
https://en.wikipedia.org/wiki/HomePak
HomePak
HomePak, published in 1984 by Batteries Included, is an integrated application written for the Atari 8-bit family and ported to the Commodore 64, Commodore 128, IBM PCjr, and Apple II. It includes a word processor (HomeText), database (HomeFind), and terminal communications program (HomeTerm). HomePak was designed by Russ Wetmore (who previously wrote the game Preppie!) for Star Systems Software, Inc. The Commodore 128 version was ported by Sean M. Puckett and Scott S. Smith. The Atari 8-bit version of HomePak is implemented in the Action! programming language from Optimized Systems Software. Reception Ahoy! warned "don't expect more than you pay for", stating that while HomeText was "quite nice" and HomeTerm was "wonderful," HomeFile was "very disappointing. Anyone who needs to use the database for even a mildly sophisticated operation will be frustrated and confused ... a total mess". In a review of the HomeTerm portion of the package, Ron Luks wrote in a 1984 review for ANALOG Computing, "A superb terminal program is rare indeed, but in my collection of over two dozen Atari terminal programs, I have two or three that meet the "superb" criteria. Only one, however, can be the best. Hometerm is, quite simply, the best." In a 1986 Page 6 review, the author had technical problems using HomeTerm in the UK. He called HomeFind, "elegant, friendly and very easy to use," and wrote that HomeText, "might even tempt me away from my trusty old Atariwriter." Legacy With Sparky Starks, Wetmore co-authored a similarly-styled Atari 8-bit application called HomeCard. It was advertised as an "electronic filing box" and "intelligent Rolodex." HomeCard was published by Antic Software in 1985, not Batteries Included. References Word processors Apple II software Atari 8-bit family software Commodore 64 software 1984 software
618816
https://en.wikipedia.org/wiki/Johnny%20Evers
Johnny Evers
John Joseph Evers (July 21, 1881 – March 28, 1947) was an American professional baseball second baseman and manager. He played in Major League Baseball (MLB) from 1902 through 1917 for the Chicago Cubs, Boston Braves, and Philadelphia Phillies. He also appeared in one game apiece for the Chicago White Sox and Braves while coaching them in 1922 and 1929, respectively. Evers was born in Troy, New York. After playing for the local minor league baseball team for one season, Frank Selee, manager of the Cubs, purchased Evers's contract and soon made him his starting second baseman. Evers helped lead the Cubs to four National League pennants, including two World Series championships. The Cubs traded Evers to the Braves in 1914; that season, Evers led the Braves to victory in the World Series, and was named the league's Most Valuable Player. Evers continued to play for the Braves and Phillies through 1917. He then became a coach, scout, manager, and general manager in his later career. Known as one of the smartest ballplayers in MLB, Evers also had a surly temper that he took out on umpires. Evers was a part of a great double-play combination with Joe Tinker and Frank Chance, which was immortalized as "Tinker-to-Evers-to-Chance" in the poem "Baseball's Sad Lexicon". Evers was elected to the Baseball Hall of Fame by the Veterans Committee in 1946. Early life Evers was born on July 21, 1881, in Troy, New York. His father worked as a saloon keeper. Many of Evers' relatives, including his father, brothers, and uncles, played baseball. Evers attended St. Joseph's Elementary School and played sandlot ball in Troy. Career Minor league career Evers made his professional debut in minor league baseball for the Troy Trojans of the Class-B New York State League in 1902 as a shortstop. Evers reportedly weighed less than , and opposing fans thought he was a part of a comedic act. Evers reportedly weighed no more than during his career. Evers batted .285 and led the New York State League with 10 home runs. Frank Selee, manager of the Chicago Cubs, scouted Evers's teammate, pitcher Alex Hardy. Selee, also looking for a second baseman due to an injury to starter Bobby Lowe, purchased Hardy's and Evers's contracts for $1,500 ($ in current dollar terms); the Trojans were willing to sell Evers's services due to his temper. Chicago Cubs Evers made his MLB debut with the Cubs on September 1 at shortstop, as Selee moved Joe Tinker from shortstop to third base. Only three players in the National League (NL) were younger than Evers: Jim St. Vrain, Jimmy Sebring, and Lave Winham. Three days later, Selee returned Tinker to shortstop and assigned Evers to second base. In his month-long tryout with the Cubs, Evers batted .222 without recording an extra-base hit and played inconsistent defense. However, Lowe's injury did not properly heal by spring training in 1903, allowing Evers to win the starting job for the 1903 season. Lowe recovered during the 1903 season, but Evers' strong play made Lowe expendable; Evers finished third in the NL in fielding percentage among second basemen (.937), and finished fifth in assists (245) and putouts (306). The Cubs sold Lowe to the Pittsburgh Pirates after the season. Evers played 152 games in the 1904 season. Defensively, his 518 assists and 381 putouts led the NL, though his 54 errors led all NL second basemen. During the 1906 season, Evers finished fifth in the NL with 49 stolen bases, and led the league with 344 putouts and led all second basemen with 44 errors. 
The Cubs won the NL pennant in 1906, but lost the 1906 World Series to the Chicago White Sox four games to two; Evers batted 3-for-20 (.150) in the series. During the 1907 season, Evers led the NL with 500 assists. The Cubs repeated as NL champions in 1907, and won the 1907 World Series over the Detroit Tigers, four games to none, as Evers batted 7-for-20 (.350). During the 1908 pennant race, Evers alerted the umpires to Fred Merkle's baserunning error in a game against the New York Giants, which became known as "Merkle's Boner". Al Bridwell hit what appeared to be the game-winning single for the Giants, while Merkle, the baserunner on first base, went to the clubhouse without touching second base. Evers called for the ball, and the umpire ruled Merkle out. NL president Harry Pulliam ruled the game a tie, with a makeup to be played. The Cubs won the makeup game, thereby winning the pennant. The Cubs then won the 1908 World Series over Detroit, four games to one, as Evers again batted 7-for-20 (.350). For the 1908 season, Evers had a .300 batting average, good for fifth in the NL, and a .402 on-base percentage, second only to Honus Wagner. Evers drew 108 walks during the 1910 season, trailing only Miller Huggins. However, Evers missed the end of the season with a broken leg. Without Evers, the Cubs won the NL pennant, but lost the 1910 World Series to the Philadelphia Athletics, four games to one. Evers agreed to manage the Navy Midshipmen, a college baseball team, in 1911, despite the opposition of Cubs' manager Frank Chance. He experienced a nervous breakdown in 1911; returning to the Cubs later in the season, he played in only 46 games that year. Evers indicated that this was a result of a business deal that cost Evers most of his savings. Evers rebounded to bat .341 in 1912, good for fourth in the NL, and he led the NL with a .431 on-base percentage. Team owner Charles W. Murphy named Evers manager in 1913, signing him to a five-year contract, succeeding Chance. Boston Braves and Philadelphia Phillies After the 1913 season, Evers was offered $100,000 ($ in current dollar terms) to jump to the Federal League, but he opted to take less money to remain with the Cubs. In February 1914, after Evers signed his players to contracts, Murphy fired Evers as manager and traded him to the Boston Braves for Bill Sweeney and Hub Perdue. Murphy insisted that Evers had resigned as manager, which Evers denied. Evers insisted he was a free agent, but the league assigned him to the Braves. He signed a four-year contract at $10,000 per season ($ in current dollar terms), with a $20,000 signing bonus. During the 1914 season, the Braves fell into last place of the eight-team NL by July 4. However, the Braves came back from last place in the last ten weeks of the season to win the NL pennant. Evers' .976 fielding percentage led all NL second basemen. The Braves defeated the Philadelphia Athletics in the 1914 World Series, four games to none, as Evers batted 7-for-16 (.438). Evers won the Chalmers Award, the forerunner of the modern-day Most Valuable Player award, ahead of teammate Rabbit Maranville. Evers was limited in 1915 by injuries, and also served suspension for arguing with umpires. After a poor season in 1916, Evers began the 1917 season with a .193 batting average. Due to Evers' declining performance, the Braves placed Evers on waivers at mid-season, and he was claimed by the Philadelphia Phillies. 
Evers rejected an offer to become manager of the Jersey City Skeeters of the International League that offseason. He signed with the Boston Red Sox as a player-coach for the 1918 season, but was released without playing a game for them. Not receiving another offer from an MLB team, Evers traveled to Paris as a member of the Knights of Columbus to promote baseball in France. Coaching and managing career In 1920, Evers was slated to become head baseball coach at Boston College, however he instead accepted a last minute offer to join the New York Giants as a coach. He managed the Cubs again in 1921, succeeding Fred Mitchell. With the team struggling, Evers was fired in August and replaced with Bill Killefer. The Cubs finished seventh out of eight in the NL that season. Evers served as a coach for the Chicago White Sox in 1922 and 1923. He returned to second base in 1922, filling in for an injured Eddie Collins. Evers played in one game for the White Sox as Collins recovered. Evers was named the White Sox acting manager for the 1924 season, succeeding Chance, who was ordered home due to poor health. However, Evers suffered from appendicitis during the season, missing time during the year, and the White Sox opened up a managerial search when Chance died in September. The White Sox replaced Evers with Collins after the season. Evers rejoined the Braves as a scout. As Braves owner Emil Fuchs sold manager Rogers Hornsby to the Cubs and assumed managerial duties himself for the 1929 season, Fuchs hired Evers as a coach. Fuchs had no experience as a field manager, and so Evers became captain of the Braves, directing the team during the game and dealing with umpires. Evers and fellow coach Hank Gowdy played in one game in the 1929 season, coming into the bottom of the ninth inning on October 6, 1929. In the process, Evers became the oldest player in the league for the year. Evers remained a coach for the Braves under Bill McKechnie, who succeeded Fuchs as field manager in 1930, and served in the role through 1932. He continued to scout for the Braves, and then became general manager of the Albany Senators of the New York–Pennsylvania League in 1935. He resigned from Albany at the end of the season. Over his managerial career, he posted a 180–192 record. Managerial record Personal Evers married Helen Fitzgibbons. His son, John J. Evers, Jr., served as a Lieutenant in World War II, assigned to the Pacific Theater of Operations. When his son was 11 years old, Evers bought part of the Albany Senators and gave him the stock. Evers' brother, Joe Evers, and uncle, Tom Evers, also played in MLB. His great-nephew is Sports Illustrated writer Tim Layden. Though Evers and Tinker were part of one of the most successful double-play combinations in baseball history, the two despised each other off of the field. They went several years without speaking to each other after one argument. When Chance once named Tinker the smartest ballplayer he knew, Evers took it as a personal affront. Later life Evers operated a sporting goods store in Albany, New York in 1923. However, Evers lost his money and filed for bankruptcy in 1936. The store was passed down to Evers' descendants. He also worked as superintendent of Bleecker Stadium in Albany and spent time teaching baseball to sandlot players. Evers suffered a stroke in August 1942, which paralyzed the right side of his body. He remained bedridden or confined to a wheelchair for most of the next five years. Evers died of a cerebral hemorrhage in 1947 at St. 
Peter's Hospital in Albany, and is buried in Saint Mary's Cemetery in Troy. Legacy Evers retired in 1918, having batted .300 or higher twice in his career, stolen 324 bases and scored 919 runs. He frequently argued with umpires and received numerous suspensions during his career. His combative play and fights with umpires earned him the nickname "The Human Crab". Evers served as the pivot man in the "Tinker-to-Evers-to-Chance" double-play combination, which inspired the classic baseball poem "Baseball's Sad Lexicon", written by New York Evening Mail newspaper columnist Franklin Pierce Adams in July 1910. Evers, Tinker, and Chance were all inducted in the Hall of Fame in the same year. The Merkle play remains one of the most famous in baseball history. The ball used in the Merkle play was sold at an auction in the 1990s for $27,500, making it one of the four most valuable baseballs based on purchase price. Evers' role in Merkle's boner cemented his legacy as a smart ballplayer. Evers is mentioned in the 1949 poem "Line-Up for Yesterday" by Ogden Nash: See also List of Major League Baseball career stolen bases leaders List of members of the Baseball Hall of Fame List of Major League Baseball player-managers References External links 1881 births 1947 deaths Baseball players from New York (state) Boston Braves players Boston Braves scouts Chicago Cubs managers Chicago Cubs players Chicago Orphans players Chicago White Sox coaches Chicago White Sox managers Chicago White Sox players Major League Baseball player-managers Major League Baseball second basemen National Baseball Hall of Fame inductees New York Giants (NL) coaches Philadelphia Phillies players Sportspeople from Troy, New York Troy Trojans (minor league) players
3096875
https://en.wikipedia.org/wiki/Bump%20Elliott
Bump Elliott
Chalmers William "Bump" Elliott (January 30, 1925 – December 7, 2019) was an American football player, coach, and college athletics administrator. He played halfback at Purdue University (1943–1944) and the University of Michigan (1946–1947). Elliott grew up in Bloomington, Illinois, enlisted in the United States Marine Corps as a senior in high school and was assigned to the V-12 Navy College Training Program at Purdue University. He received varsity letters in football, baseball, and basketball at Purdue, before being called into active duty in late 1944, serving with the Marines in China. After being discharged from the military, he enrolled at the University of Michigan in 1946 and joined the football team for whom his brother Pete Elliott played quarterback. In 1947, he played for an undefeated and untied Michigan football team known as the "Mad Magicians", led the Big Nine Conference in scoring, won the Chicago Tribune Silver Football trophy as the Most Valuable Player in the Conference, and was selected as an All-American by the American Football Coaches Association. After graduating from Michigan in 1948, Elliott spent ten years as an assistant football coach at Oregon State, Iowa, and Michigan. He was appointed as Michigan's head football coach in 1959 and held that position until 1968, leading the team to a Big Ten Conference championship and Rose Bowl victory in the 1964 season. For a period of 21 years, from 1970 to 1991, he was the athletic director at the University of Iowa. During his tenure as athletic director, he hired coaches Dan Gable, Hayden Fry, Lute Olson, C. Vivian Stringer, and Dr. Tom Davis, and the Iowa Hawkeyes won 41 Big Ten Conference championships and 11 NCAA titles. In 1989, Elliott was inducted into the College Football Hall of Fame. Early life Chalmers William Elliott was born in Detroit, but grew up in Bloomington, Illinois. His father, J. Norman Elliott, was an ears, nose and throat doctor who also coached football at Illinois Wesleyan University from 1930 to 1934. Elliott's given name is Chalmers, but he was known by the nickname "Bump" since he was six months old, though no one remembered how he received the nickname, "not even his mother." Elliott and his younger brother, Pete Elliott, both played football together for Bloomington High School, where Bump was an All-State halfback in 1942, and Pete made it as a fullback in 1943. Had it not been for World War II, Bump and Pete likely would have attended the University of Illinois, which was about 50 miles from their home in Bloomington. However, both brothers wanted to get into the V-12 Navy College Training Program, and Illinois did not have such a program. Bump enlisted in the United States Marine Corps while still a senior in high school and was called to active duty in 1943. He was assigned to the V-12 officer training program at Purdue University. His brother, Pete, also enlisted and was assigned to officer training at Michigan. Purdue University and military service Elliott attended Purdue from 1943 to 1944. In his freshman year, Elliott earned varsity letters in football, basketball and baseball. He played three games for the unbeaten and untied 1943 Purdue football team where he was described as "a capable triple-threater and stellar defensive performer." He scored a touchdown against Minnesota in his first game, and made a key interception at Purdue's ten-yard line in the season's final game against Indiana. 
A May 1944 newspaper article reported that the 19-year-old Elliott, who had been a "high school sensation last year," had won three major athletic letters in his first year as a Naval V-12 student at Purdue. "A speedy 160-pound, five foot 10-inch performer, he lost little time making his mark in football last fall once he became eligible upon completion of his first V-12 term." Elliott appeared in the final three games of the football season, and his performance in the season's final game against Indiana "provided one of the highlights of the Boilermaker season." In basketball, he was "consistent as a guard on Purdue's cage combination." In baseball, Elliott played shortstop and center field, where he was "a steady fielder with a strong arm." In a May 1944 game, Elliott led the Boilermakers to a 17–4 win over Wisconsin, with five hits, five stolen bases, four RBIs, three runs scored, and four putouts in center field. His performance against Wisconsin was "one of the biggest baseball days ever turned" by a Big Ten baseball player. Elliott played in the first six games of the 1944 football season for Purdue before being transferred by the Marine Corps. In a game against Marquette in late September, he broke up a 7–7 tie with successive touchdown runs of 24 and 71 yards. He was also the only defensive player in 1944 to pull down Illinois' Claude "Buddy" Young from behind. Elliott received orders to report for active duty in October 1944, and he played his last game in a Purdue uniform against the Michigan Wolverines on October 28, 1944. In November 1944, Elliott was sent to Parris Island. He was later sent to China and emerged from the war as a Marine lieutenant. University of Michigan Elliott and his younger brother, Pete Elliott, were teammates at Bloomington High School in 1943 and again at Michigan in 1946 and 1947. After his discharge from the military, Bump joined Pete at Michigan, where Pete played quarterback and Bump was the right halfback for the undefeated 1947 team. Before the 1948 Rose Bowl, one article noted that the two brothers roomed together at Michigan and arranged their programs so that their classes were identical. The article observed: "They look alike, act alike and think alike and in Ann Arbor, Mich., when they walk down the street any Michigan student can recognize Bump and Pete, the inseparable Elliott Brothers, Wolverines right half and quarterback respectively." The brothers shared the same distinctive golden red hair, and the two were so close that they told a reporter in 1947 that a girl had to receive "the Bumper stamp of approval" before passing Pete's test. 1946 season After being discharged from the Marine Corps, Elliott attended the University of Michigan, where he joined his brother, Pete, in Michigan's backfield. Elliott "practically stepped off a World War II transport from Marine Corps duty in China to Michigan's Ferry Field and stardom." With less than a week of conditioning after his discharge from the Marines, he was reported to be giving Michigan's coaching staff "something lovely to look at." In a 14–14 tie with Northwestern in mid-October 1946, Elliott scored all 14 of Michigan's points. He scored the first touchdown late in the first quarter on a 37-yard pass from Bob Chappuis in the corner of the end zone. In the fourth quarter, Michigan fullback Bob Wiese intercepted a pass on Michigan's 1-yard line, and lateralled to Elliott on the Michigan 40-yard line. From that point, Elliott ran it back 60 yards down the sideline for his second touchdown. 
He again scored two touchdowns in Michigan's 21–0 win over Minnesota on November 2, 1946. He also helped Michigan to a 28–6 win over Wisconsin with a bullet pass to end Bob Mann in the end zone. Big Nine MVP in 1947 In 1947, Elliott played for the Wolverines team known as the "Mad Magicians" that went undefeated and untied, and defeated the Southern Cal Trojans, 49–0 in the 1948 Rose Bowl. The team is considered to be the greatest Michigan team of all time. Along with Bob Chappuis, Elliott was one of the key players in Michigan's undefeated season. He led the Big Nine in scoring, made the All-American team picked by the American Football Coaches Association, and was voted Most Valuable Player in the Big Nine Conference to win the Chicago Tribune Silver Football trophy. Elliott was one of two Michigan players in 1947 (the other was fullback Jack Weisenburger) who played both offense and defense. Indeed, Elliott was actually a four-way threat as he contributed in rushing, receiving, punt returns, and defense. He scored a total of 12 touchdowns in 1947—eight rushing, two receiving, one on a punt return, and another on an interception return. He contributed 911 all-purpose yardage – 438 rushing, 318 receiving, and 155 on punt returns. He averaged 6.4 yards per carry as a rusher, 19.9 yards per reception, and 17.2 yards per punt return. Michigan head coach Fritz Crisler called Elliott the greatest right halfback he had ever seen. Elliott had a breakthrough season that began with the team's "Blue" versus "White" exhibition game in mid-September in which he scored four touchdowns, including 50- and 60-yard runs. He scored touchdowns in each of the team's early season wins over Michigan State (55–0), Stanford (49–13), and Pitt (69–0). His touchdown against Pitt came on defense, as he intercepted a pass and ran it back 37 yards. In the Big Nine opener against Northwestern, Elliott scored on a nine-yard run less than two minutes after the game started, as the Wolverines won, 49–21. In Michigan's closest contest of the 1947 season, a 13–6 win over Minnesota, Elliott caught a 40-yard pass from Bob Chappuis on his fingertips at the Minnesota 15-yard line and went on to score with a minute and 15 seconds to go in the first half. Said one reporter: "It was the exceptional speed of Elliott on this play that turned the tide. He completely outmaneuvered the Minnesota secondary." The biggest challenge of the 1947 season came in a 14–7 win over Illinois. The Associated Press described Elliott as Michigan's "Big Cog" in the Illinois game, and the United Press proclaimed: "Bump Elliott Steals Show in 14 to 7 Defeat of Illinois Saturday." In the first quarter, he ran back a punt 75 yards for a touchdown, as Bob Mann "bulldozed the path with a vicious block", and "the Bloomington blaster scampered down the sidelines." Elliott also set up the Wolverines second score with a long reception to the Illinois four-yard line. He also played a key role on defense, intercepting a pass at the Michigan nine-yard line to halt an Illinois drive. Another article concluded: "The individual hero was Bump Elliott, a 168-pound halfback who loped 74 yards for one touchdown and caught a pass for a 52 yard gain to set up the second and winning marker." He finished the season scoring two touchdowns each in games against Indiana and Ohio State. At the end of the season, Elliott and Bob Chappuis both received 16 of 18 possible points in voting by the AP for the All-Big Nine football team. 
Elliott weighed only 168 pounds during his All-American season in 1947. Asked later about how he managed to compete at his weight, Elliott noted, "I was awful lucky to get by at that weight." 1948 Rose Bowl against Southern Cal As the Big Nine Conference champions, the 1947 Wolverines were invited to play in the 1948 Rose Bowl game against the Southern Cal Trojans. Michigan dominated the game, winning 49–0, as "the shifty Chappuis and the speedy Elliott began to fake (the Trojans) out of their shoes." Elliott scored on an 11-yard touchdown pass from Chappuis. In August 1948, Elliott was chosen as the captain of the College All-Stars in their game against the Chicago Cardinals at Soldier Field. Injured in practice, Elliott was unable to play as the Cardinals beat the All-Stars, 28–0. Application for 1948 eligibility denied Elliott applied for an extra year of eligibility in 1948. Due to his military service, he had played in only three games as a freshman and six games in his sophomore season. Under the Big Nine conference code, he was eligible for a fifth season due to a war-caused stay at Purdue in 1943–1944. However, his request was denied by the Big Nine Conference. The decision was criticized by Michigan's representative on the Big Nine faculty committee as a "grave injustice." Nonetheless, Elliott set the Michigan career interception return yards record, which stood for five years until Don Oldham pushed the record from 174 yards to 181 yards. His 174 career yards still ranks fifth in school history. Coaching career The Elliott brothers served as assistant coaches together at Oregon State in 1949 and 1950, before going their separate ways. The Elliotts coached against each other in the early 1960s while Bump was the head football coach at Michigan and Pete held the same position at the University of Illinois. In November 1963, Pete Elliott's Illinois team was ranked No. 2 in the country and the favorite for the Rose Bowl when it faced off against Bump Elliott's Michigan team. Michigan had a record of 2–3–1 when the brothers met in 1963, but Michigan won, 14–8, marking the fourth time in four games that Bump's Wolverines came out on top of brother Pete's Illini. After graduating from Michigan, Bump turned down an offer to play professional football for the Detroit Lions, saying he had obtained a job in Chicago outside of football. Elliott also considered going into medicine as his father had done, but he chose instead to go into coaching. He started his coaching career at Michigan in the fall of 1948 as assistant backfield coach. In the spring of 1949, he was hired as an assistant coach under Kip Taylor at Oregon State, where he remained for three seasons, from 1949 to 1951. Elliott later recalled, "I was only 24 when Kip Taylor hired me as backfield coach at Oregon State, and it bothered me a little because there were two backs on the squad who were older than I was." It was even worse for his brother Pete, who was 22 when he was hired to coach the ends. Bump recalled: "After practice one night some players noticed Pete light up a cigaret. One of his ends drew Pete aside and said in a fatherly voice, 'You shouldn't smoke, coach; I didn't do it when I was your age.'" Oregon State had an overall record of 14–15 in Elliott's three years as an assistant coach. In 1952, Elliott was hired as an assistant at the University of Iowa under its head coach, Forest Evashevski, another former All-American at the University of Michigan. 
On being hired at Iowa, Elliott said, "I should feel at home back in the Big Ten. I grew up in Bloomington – 40 miles from Illinois. I played at Purdue and Michigan and coached at Michigan. My father went to Iowa and Northwestern and now I'm coaching at Iowa." He stayed at Iowa until 1957. Elliott was with the Hawkeyes in 1956 when they went 9–1, won the Big Ten championship, and defeated his former team, Oregon State, 35–19, in the 1957 Rose Bowl game. He returned to Michigan in 1957 as a backfield coach under Bennie Oosterbaan. In 1959, Elliott was elevated to head football coach at Michigan. He was the head coach for ten years, from 1959 to 1968, posting a career record of 51–42–2, for a .547 winning percentage. In Big Ten Conference play, his record was 32–34–2 (.485). Although his tenure at Michigan was unsuccessful by the school's historic standards, he did lead the 1964 Wolverines to a 9–1 record, a Big Ten title and a win in the Rose Bowl against Oregon State. His final team, in 1968, won eight of its first nine games but then suffered a humiliating 50–14 loss against Ohio State. Despite having a 36-point lead, Ohio State coach Woody Hayes passed for, and failed to get, a two-point conversion after the final touchdown, with 1:23 remaining in the game. When asked why he went for the two-point conversion, Hayes reportedly said, "Because we couldn't go for three!" Shortly after the game, Elliott resigned, and athletic director Don Canham hired Bo Schembechler to replace him as head coach. Schembechler would use the memory of the 1968 Ohio State loss to motivate his team the following season. There were reports during the 1968 season that Elliott had been given an ultimatum: "Either win or face the possibility of being kicked upstairs." There were also reports when Don Canham was hired that Elliott had expected to be named athletic director and that there was "bad blood" between Canham and Elliott. However, Canham later denied that Elliott was "eased out" of his job. In an interview with Joe Falls, Canham said: "Bump and I are close personal friends. Bump is not naïve – he knows that when you work at a place for 10 years and you're not winning consistently, it doesn't become fun for anybody – the coach, the alumni, the players or anybody else. We talked about this and we talked about it openly. If Bump had said to me, 'Look, give me a couple of more years,' I would have given it to him. I mean that. I didn't fire Bump Elliott. My first year as director Bump had an 8 and 2 record. Anyone could live with that." According to Canham, he met with Elliott in December 1968 and offered him the job of associate athletic director. Canham told Elliott he could stay on as coach if he wanted, but Canham could not promise him that the job of associate athletic director would still be open in another couple of years. Canham said: "Bump smiled at me and said, 'I don't have to think about it.' He was ready to get out. I did not force him, and I mean that in all honesty. But the job had ceased to be fun for him." Schembechler later recalled that Elliott remained loyal to him when Schembechler took over as Michigan's head coach in 1969. When Schembechler won the Big Ten championship in 1969, he said, "I made certain I let everyone know I won with Bump's kids. Bump was a man of great class and he showed it to me again and again in that first year, never getting in the way, always trying to be helpful, always trying to encourage me." 
After Michigan won the 1969 Ohio State game, the team presented the game ball to Elliott, and Schembechler noted, "I don't remember when I felt happier about anything in my life." From 1969 to 1970, Elliott was the associate director of athletics at Michigan. Athletic director at Iowa Elliott became the men's athletic director at the University of Iowa in 1970, succeeding Forest Evashevski. He came to Iowa in the midst of a feud between athletic director Forest Evashevski and football coach Ray Nagel. Evashevski resigned in May 1970, and Elliott was hired to replace him. On accepting the job, Elliott noted: "It's difficult to leave a town where you've lived for 13 years (Ann Arbor, Michigan), but the opportunity is so good at Iowa with the people and the school that no one could pass it up." During Elliott's tenure, the school's teams won 34 Big Ten championships and 11 NCAA titles, as well as making three Rose Bowl appearances and one trip to the Final Four in basketball. The university also built a basketball arena (Carver-Hawkeye Arena), erected an indoor workout center for football and added more than 10,000 seats to its football stadium. His career at Iowa was marked by a general resurgence in the competitiveness of Iowa athletics. Elliott hired a number of notable coaches, including Lute Olson, Dan Gable, Hayden Fry, and Dr. Tom Davis. During Elliott's 21 years as athletic director, the Iowa Hawkeyes won 41 Big Ten championships in football (1981, 1985, 1990), wrestling (1974–1990), men's basketball (1970, 1979), baseball (1972, 1974, 1990), men's gymnastics (1972, 1974, 1986), and men's swimming (1981, 1982). See Iowa Hawkeyes for a complete list of championships. Elliott was known as "a coach's AD." "He hired coaches he trusted, then gave them the resources, latitude and support they needed to operate as they saw fit – providing they played by the rules." Iowa wrestling coach Dan Gable said his wife cried on learning that Elliott had retired. In 1999, Gable wrote: "Right after I came to coach at the University of Iowa, I had a meeting with Bump Elliott, who was the Athletic Director. I'll never forget what Bump said to me: 'Don't ask for the moon. Strive to get there, sure, but do it wisely through continuing to build upon what you already have. As you build, come see me, and we'll see how I can help you out.' I now call that bit of wisdom the Bump Elliott Rule, and it serves as a good reminder to keep things in perspective. Gradual, solid growth is better than any quick fix." "The one thing we emphasized from the start was that our staff had to make sure we were 100 percent loyal to each other and the university," Elliott said at the time of his retirement. "There could be no jealousy between the coaches and various programs. I wanted no one talking behind anyone's backs. I wanted absolute loyalty. If not, then that person could leave any time." Elliott was also the one who hired Hayden Fry as Iowa's football coach in 1979. Fry later said that Elliott was one of the principal reasons he chose to coach at Iowa. In his autobiography, Fry wrote: "Iowa had one thing in its favor as far as I was concerned: Bump Elliott was its athletic director. Bump had a reputation as being a fair, honest and well-liked administrator." Elliott told Fry that he would be the last football coach Bump ever hired. Fry was puzzled and asked Elliott what he meant. Elliott said, "Simple, I don't think they'll give me a chance to hire another coach, so if you don't make it, neither will I." 
He is the only person to have been with Rose Bowl teams in five capacities – player, assistant coach, head coach, assistant athletic director, and athletic director. Family Elliott and his wife Barbara met while he was with the Marine Corps at Purdue and she was studying pre-school education there. They married in 1949 and had three children: Bill (born October 1951), Bob (1953–2017), and Betsy (born c. 1955). Son Bob Elliott was Iowa's defensive coordinator under Hayden Fry in the 1990s. Elliott lived in his later years at the Oaknoll Retirement Community in Iowa City. He died on December 7, 2019, at age 94. Honors and accolades Elliott received numerous honors and accolades, including the following: Recipient of the Chicago Tribune Silver Football as the Most Valuable Player in the Big Nine Conference in 1947; Selected as an All-American by the American Football Coaches Association in 1947; Inducted into the University of Michigan Hall of Honor in 1986 for his contributions in football, basketball, baseball, and as a football coach; Inducted into the College Football Hall of Fame in 1989; Inducted into the National Iowa Varsity Club Hall of Fame in 1997; Inducted into the Michigan Sports Hall of Fame in 2002; and Elliott Drive, the Iowa City street on which Carver-Hawkeye Arena is located, is named in his honor. The sculpture of the 12-foot stainless steel hawk, Strike Force, is located in a small park just south of Carver-Hawkeye Arena. The street, the sculpture, and a scholarship in Elliott's name were all spearheaded by his good friend Earle Murphy to honor Elliott and future Iowa athletes. Head coaching record See also University of Michigan Athletic Hall of Honor References External links Profile at Bentley Historical Library, University of Michigan Athletics History 1925 births 2019 deaths American football running backs Iowa Hawkeyes athletic directors Michigan Wolverines football coaches Michigan Wolverines football players College Football Hall of Fame inductees United States Marine Corps personnel of World War II United States Marines Sportspeople from Bloomington, Illinois Coaches of American football from Illinois Players of American football from Illinois Players of American football from Detroit Military personnel from Illinois Military personnel from Detroit
35056192
https://en.wikipedia.org/wiki/SS%20Prince%20of%20Wales%20%281887%29
SS Prince of Wales (1887)
PS (RMS) Prince of Wales No. 93381 was a steel-built paddle steamer which was purchased, together with her sister Queen Victoria, by the Isle of Man Steam Packet Company from the Isle of Man, Liverpool and Manchester Steamship Company in 1888; the latter company was referred to as The Manx Line. Construction and dimensions Prince of Wales was built by Fairfield Shipbuilding and Engineering Company, Govan, in 1887, and was launched on Thursday, 14 April 1887. Fairfield's also supplied her engines and boilers. The cost of her construction is not recorded. However, she was purchased by the Steam Packet Company together with PS Queen Victoria for the sum of £155,000. Length 330'; beam 39'1"; depth 15'2". Prince of Wales had a registered tonnage of , was certified to carry 1,546 passengers and had a crew complement of 69. Both sisters were fitted with compound engines developing at 40.5 r.p.m., with a boiler steam pressure of . The engines of both Prince of Wales and Queen Victoria were referred to as coupled two-crankshaft engines. The two crankshafts were connected at the crank by a drag link, the object of which was to get the two cranks at right angles, one driving the valve gear of the other. The high-pressure cylinder was horizontal to, and the low-pressure cylinder diagonal to, the centre of the shaft. The two cylinders were 61 and 112 inches in diameter with a 78-inch stroke. So successful were these two ships that a number of other companies adopted the engine design for cross-channel work. Service life The Manx Line, as the Isle of Man, Liverpool and Manchester Steamship Company was called, commenced service with the Prince of Wales and her sister Queen Victoria. Both ships had been built by Fairfield's to surpass the Mona's Isle and the Mona's Queen, it being the intention of the shipbuilding firm that the Steam Packet Company should be forced to buy these two ships. To counter these rivals, the Steam Packet Company reduced fares, and The Manx Line retaliated. They advertised a 3-hour 30-minute passage from Douglas to Liverpool, and their two ships were certainly capable of keeping such a schedule, being able to complete the passage between the ports 30 minutes quicker than the Steam Packet ships. As in the early days of the Isle of Man Steam Packet Company, racing between the two companies' ships took place. On 19 May 1888, Mona's Isle and Queen Victoria had an exciting race, with the Queen Victoria winning by 32 minutes. As a consequence of reckless price-cutting, both companies lost money, and at the end of 1888, the Steam Packet Company bought the two Manx Line ships, both of which became reliable and valued members of the fleet. The Prince of Wales was considered a very fast ship in her time. During her sea trials she was recorded over a measured mile at a speed of 24.5 knots. She had beaten the best vessels in the Steam Packet fleet in the short rivalry between her original owners and the Steam Packet on the Douglas route, and was a most valuable addition to the fleet upon her acquisition. Prince of Wales once steamed from Rock Light, New Brighton, to Douglas Head (a distance of 68 nautical miles) in 2 hours 59 minutes, an average speed of 23.25 knots. In August 1894, she collided with and sank the steamer Hibernia. Two of the crew of the Hibernia were lost, and a third man was picked up by the Steam Packet ship. Some months later the Manx vessel was held to blame, and a claim of £1,750 was awarded against the company. War service Prince of Wales was sold to the Admiralty in 1915. 
Her name was changed to Prince Edward and she was fitted out as a net-laying anti-submarine ship. Both she and her sister were still considered fast for their day, and although they were getting on in years, naval architects appeared to think that paddlers, if not converted to troop carriers, were well suited to an anti-submarine role. The two ships were soon in the Eastern Mediterranean theatre, in support of troopships and even warships in the submarine-infested seas. At one time during the Gallipoli Campaign they found themselves accompanying their Steam Packet sister, which was landing troops at Suvla Bay. Disposal After the Great War, she was sold under the name Prince Edward to T. C. Pas for £5,600 and was broken up at Scheveningen, the Netherlands. References Bibliography Chappell, Connery (1980). Island Lifeline. T. Stephenson & Sons Ltd. Ships of the Isle of Man Steam Packet Company 1887 ships World War I merchant ships of the United Kingdom Ferries of the Isle of Man Steamships Steamships of the United Kingdom Paddle steamers of the United Kingdom Maritime incidents in 1894 Merchant ships of the United Kingdom Ships built in Govan
1115657
https://en.wikipedia.org/wiki/Laser%20128
Laser 128
The Laser 128 is an Apple II clone, released by VTech in 1986 and comparable to the Apple IIe and Apple IIc. Description The VTech Laser 128 has 128 kB of RAM. Like the Apple IIc, it is a one-piece, semi-portable design with a carrying handle and a built-in 5¼-inch floppy disk drive; it uses the 65C02 microprocessor and supports Apple II graphics. Unlike the Apple IIc, it has a numeric keypad, a Centronics printer port, and 128 kB of dedicated video RAM. The 15-pin D-sub digital video port is compatible with Apple's IIc flat panel display, but unlike the IIc, the Laser 128's port is also RGBI interface compatible with an adapter cable. The first 128 model has a proprietary 560×384 video mode, which was removed from later units. The Laser 128 has a single expansion slot for Apple II peripheral cards, which gives it better expansion capabilities than a IIc, but cards remain exposed; the slot is intended for an $80 expansion chassis with two slots compatible with the Apple IIe's Slot 5 and Slot 7. The computer also has an internal memory-expansion slot that accepts a card adding up to 1 MB of RAM, which can be used as a RAM disk. The Laser 128EX and 128EX/2, also expandable by 1 MB, come with the memory-expansion card. Models History Announced in early 1986, VTech sold the Laser 128 in the US at a suggested retail price of $479, while Central Point Software sold it by mail for $395; by comparison, the Apple IIe sold for $945 in April 1986. Apple filed a lawsuit to stop distribution, but VTech obtained United States Customs approval to export the Laser 128 to the United States in 1986, and the lawsuit reportedly had no effect on demand for the computer. Central Point—the most prominent dealer—sold the Laser 128 and accessories with full-page magazine advertisements, saying that "a computer without expansion slots is a dead-end that stays behind as technology advances". It advertised the Laser 128 in Commodore computer magazines; the name was, Central Point president Mike Brown said, "chosen to sound like the Commodore 128", and the company intended to appeal to those who wanted to use the large Apple software library with a computer that cost the same as the comparable Commodore. By late 1986, other mail-order firms also sold the Laser 128, and at least one peripheral maker advertised its product's compatibility with the clone. By 1988, VTech had purchased a majority share in Central Point Software and formed Laser Computer as a division of the company. It ended Central Point's mail order sales of the 128, only selling through dealers such as Sears. inCider magazine wrote that year that "Laser will never sell as many computers or have as big a distribution network as Apple, but there's no doubt that the 128 [has] won a place in the Apple market, and irritated Apple in the process". VTech subsequently released the Laser 128EX (1987), with a 3.6 MHz CPU, and the $549 Laser 128EX/2 (mid-1988), with a 3.5-inch disk drive and MIDI port. (A $499 version of the 128EX/2 with a 5.25-inch drive was available.) Apple soon released the Apple IIc Plus. Compatibility While the Apple II clones from Franklin were discontinued after the company lost Apple Computer, Inc. v. Franklin Computer Corp. (1983), VTech reverse-engineered the Apple Monitor ROM using a clean room design rather than copying it, and licensed an Applesoft BASIC-compatible version of Microsoft BASIC. Apple carefully studied the Laser 128 but was unable to force the clone off the market. 
Despite its physical resemblance to the IIc, software sees the Laser 128 as an enhanced IIe with 128 kB of RAM and an Extended 80-Column Text Card. Apple said in 1984 that the IIc was compatible with 90% of all Apple II software. Central Point said in 1986 that testing had found that only Choplifter, David's Midnight Magic, and Serpentine did not run on the clone, because of Broderbund's copy protection. "We think it safe to surmise that the latest and best software is 90 percent likely to run on the Laser 128", InfoWorld wrote in 1986. Compatible software included AppleWorks, Quicken, Apple Writer, VisiCalc, Flight Simulator II, The Print Shop, and Where in the World is Carmen Sandiego?, sometimes with slightly different colors. 12% of 129 tested software packages were incompatible, mostly educational software or games. The magazine wrote that, while the Laser 128 was incompatible with some hardware, its expansion slot and parallel port let it use other products that were incompatible with the IIc. inCider called the computer "amazingly Apple-compatible", estimating 95% compatibility. Programs that successfully ran on the Laser 128 included F-15 Strike Eagle, Fantavision, WordPerfect, and The Hitchhiker's Guide to the Galaxy, and the magazine wrote that it was easy to install $25 upgraded ROM chips if necessary to improve compatibility. A+ similarly found that the computer was compatible with 28 of 30 popular Apple II programs, while only about half worked with the Franklin Ace. BYTE wrote that expansion cards worked properly, but the magazine found "mixed results" with software compatibility, stating that "graphics programs I tested revealed flaws in the Laser 128's compatibility with both the Apple IIc and II+". The Laser 128's popularity ensured that most major software companies tested their software on the Laser as well as on Apple hardware. Licensing BASIC greatly reduced the amount of code that had to be reimplemented. Applesoft BASIC constitutes the largest and most complex part of an Apple II's ROM contents. Microsoft made most of its money by retaining the rights to the software that it sold to others. Like IBM with PC DOS, Apple did not have an exclusive license for the Applesoft dialect of BASIC, and VTech was free to license it. Much Apple software depends on various machine code routines that are a part of BASIC in ROM. Reception InfoWorld in May 1986 stated that "we can see why" Apple opposed the Laser 128's importation to the United States. It stated that other than the keyboard feel, the computer's external features (the expansion slot, numeric keypad, and Centronics port) improved on the IIc. Given the high degree of compatibility and a price less than half that of the IIc, the magazine concluded that the Laser 128 "is a real bargain". Writing that "it's cheap and it works", inCider in December 1986 stated that the Laser 128 "[deserved] a look from anyone considering a Commodore. Or, to be blunt, anyone considering an Apple IIc". The magazine also disliked the keyboard's feel and called the computer "homely", but concluded that "The Laser is a remarkably compatible, competent performer. The Apple market isn't known for hardware bargains, but it has one now". BYTE in January 1987 preferred the Laser 128's keyboard, including the keypad and cursor keys' locations, to that of the Apple IIc and approved of the documentation's quality. 
Despite describing the software incompatibility issues as "disappointing", the magazine concluded that its "technical issues are relatively minor", and that its low price made the computer "perfect for someone looking for a second computer or an inexpensive first computer that runs the largest pool of software available today". inCider in November 1988 stated that the Laser 128EX/2 "has everything you can possibly put into an 8-bit Apple II ... in terms of standard equipment, it's more than a match for the IIc Plus". The Apple product was faster (4 MHz vs 3.6 MHz), and the $126 difference in price between the two computers was much smaller than the IIc's more than $300 premium over the Laser 128, but the 128EX/2's memory was more easily expandable, important to AppleWorks users. The magazine concluded that while the "128EX/2 is a slick machine, the most fully loaded II compatible you can buy", the 5 1/4-inch version of the EX/2—or the older EX for those who did not need a 3 1/2-inch drive—"may be bargain hunters' best bet". References External links Laser 128 6502-based home computers Apple II clones VTech
530421
https://en.wikipedia.org/wiki/Online%20poker
Online poker
Online poker is the game of poker played over the Internet. It has been partly responsible for a huge increase in the number of poker players worldwide. Christiansen Capital Advisors stated online poker revenues grew from $82.7 million in 2001 to $2.4 billion in 2005, while a survey carried out by DrKW and Global Betting and Gaming Consultants asserted online poker revenues in 2004 were at $1.4 billion. In a testimony before the United States Senate regarding Internet Gaming, Grant Eve, a Certified Public Accountant representing the US Accounting Firm Joseph Eve, Certified Public Accountants, estimated that one in every four dollars gambled is gambled online. Overview Traditional (or "brick and mortar", B&M, live, land-based) venues for playing poker, such as casinos and poker rooms, may be intimidating for novice players and are often located in geographically disparate locations. Also, brick and mortar casinos are reluctant to promote poker because it is difficult for them to profit from it. Though the rake, or time charge, of traditional casinos is often high, the opportunity costs of running a poker room are even higher. Brick and mortar casinos often make much more money by removing poker rooms and adding more slot machines. For example, figures from the Gaming Accounting Firm Joseph Eve estimate that poker accounts for 1% of brick and mortar casino revenues. Online venues, by contrast, are dramatically cheaper because they have much smaller overhead costs. For example, adding another table does not take up valuable space like it would for a brick and mortar casino. Online poker rooms also allow the players to play for low stakes (as low as 1¢/2¢) and often offer poker freeroll tournaments (where there is no entry fee), attracting beginners and/or less wealthy clientele. Online venues may be more vulnerable to certain types of fraud, especially collusion between players. However, they have collusion detection abilities that do not exist in brick and mortar casinos. For example, online poker room security employees can look at the hand history of the cards previously played by any player on the site, making patterns of behavior easier to detect than in a casino where colluding players can simply fold their hands without anyone ever knowing the strength of their holding. Online poker rooms also check players' IP addresses in order to prevent players at the same household or at known open proxy servers from playing on the same tables. Digital device fingerprinting also allows poker sites to recognize and block players who create new accounts in attempts to circumvent prior account bans, restrictions and closures. History Free poker online was played as early as the late 1990s in the form of IRC poker. Planet Poker was the first online card room to offer real money games in 1998. The first real money poker game was dealt on January 1, 1998. Author Mike Caro became the "face" of Planet Poker in October 1999. The major online poker sites offer varying features to entice new players. One common feature is to offer tournaments called satellites by which the winners gain entry to real-life poker tournaments. It was through one such tournament on PokerStars that Chris Moneymaker won his entry to the 2003 World Series of Poker. He went on to win the main event, causing shock in the poker world, and beginning the poker boom. The 2004 World Series featured three times as many players as in 2003. At least four players in the WSOP final table won their entry through an online cardroom. 
Like Moneymaker, 2004 winner Greg Raymer also won his entry at the PokerStars online cardroom. In October 2004, Sportingbet, at the time the world's largest publicly traded online gaming company (SBT.L), announced the acquisition of ParadisePoker.com, one of the online poker industry's first and largest cardrooms. The $340 million acquisition marked the first time an online card room was owned by a public company. Since then, several other card room parent companies have gone public. In June 2005, PartyGaming, the parent company of the then-largest online cardroom, PartyPoker, went public on the London Stock Exchange, achieving an initial public offering market value in excess of $8 billion. At the time of the IPO, ninety-two percent of Party Gaming's income came from poker operations. In early 2006, PartyGaming moved to acquire EmpirePoker.com from Empire Online. Later in the year, bwin, an Austrian-based online gambling company, acquired PokerRoom.com. Other poker rooms such as PokerStars that were rumored to be exploring initial public offerings have postponed them. As of March 2008, there are fewer than forty stand-alone cardrooms and poker networks with detectable levels of traffic. There are however more than 600 independent doorways or 'skins' into the group of network sites. As of January 2009, the majority of online poker traffic occurs on just a few major networks, among them PokerStars, Full Tilt Poker and the iPoker Network. As of February 2010, there are approximately 545 online poker websites. Within the 545 active sites, about two dozen are stand-alone sites (down from 40 in March 2008), while the remaining sites are called “skins” and operate on 21 different shared networks, the largest network being iPoker which has dozens of skins operating on its network. Of all the online poker rooms PokerStars.com is deemed the world's largest poker site by number of players on site at any one time. By May 2012 PokerStars.com had increased their market share to more than 56%. The year 2011 is known as the infamous year of Black Friday, when the U.S Department of Justice seized the domain names of PokerStars, Full Tilt & Absolute Poker, effectively freezing the bankrolls of their player base. Full Tilt was accused by the DoJ of acting as a Ponzi scheme and scamming players out of $300 million. On the other hand, PokerStars paid $1 billion in fines immediately. In 2014, PokerStars became the largest publicly traded company in the industry of poker when businessman David Baazov initiated a takeover bid costing $4.9 billion. The COVID-19 pandemic has resulted in a massive increase in online poker traffic. The pandemic is believed to have directed both professional and recreational players who normally prefer live poker to online platforms due to the indefinite closure of most casinos and other live gaming venues worldwide, with even many unlicensed venues shutting down. In addition, the sudden dearth of live entertainment options due to the widespread disruption of the sports and entertainment schedules around the world is believed to have resulted in more than the usual number of casual players turning to online poker as an alternative. Many operators reported traffic of double or more the previous volume, depending on the time of day. Legality From a legal perspective, online poker may differ in some ways from online casino gambling. However, many of the same issues do apply. For a discussion of the legality of online gambling in general, see online gambling. 
Online poker is legal and regulated in many countries including several nations in and around the Caribbean Sea, and most notably the United Kingdom. United States In the United States, the North Dakota House of Representatives passed a bill in February 2005 to legalize and regulate online poker and online poker card room operators in the state. The legislation required that online poker operations would have to physically locate their entire operations in the state. Testifying before the state Senate Judiciary committee, Nigel Payne, CEO of Sportingbet and owner of Paradise Poker, pledged to relocate to the state if the bill became law. The measure, however, was defeated by the State Senate in March 2005 after the U.S. Department of Justice sent a letter to North Dakota attorney general Wayne Stenehjem stating that online gaming "may" be illegal, and that the pending legislation "might" violate the federal Wire Act. However, many legal experts dispute the DOJ's claim. In response to this and other claims by the DOJ regarding the legality of online poker, many of the major online poker sites stopped advertising their "dot-com" sites in American media. Instead, they created "dot-net" sites that are virtually identical but offer no real money wagering. The sites advertise as poker schools or ways to learn the game for free, and feature words to the effect of "this is not a gambling website." On October 13, 2006, President Bush officially signed into law the SAFE Port Act, a bill aimed at enhancing security at U.S. ports. Attached to the Safe Port Act was a provision known as the Unlawful Internet Gambling Enforcement Act of 2006 (UIGEA). According to the UIGEA, "unlawful internet gambling" means to place, receive, or otherwise knowingly transmit a bet or wager by means of the internet where such bet is unlawful under any law in the State in which the bet is initiated, received, or otherwise made. Thus, the UIGEA prohibits online gambling sites from performing transactions with American financial institutions. As a result of the bill, several large publicly traded poker gaming sites such as PartyPoker, PacificPoker and bwin closed down their US-facing operations. The UIGEA has had a devastating effect on the stock value of these companies. Some poker sites, such as PokerStars, Full Tilt Poker, Absolute Poker, continued to operate and remained open to US players. Following passage of UIGEA, former U.S. Senator Al D'Amato joined the Poker Players Alliance (PPA). Part of the PPA's mission is to protect and to advocate for the right of poker players to play online. D'Amato's responsibilities include Congressional lobbying. In April 2008, the PPA claimed over 1,000,000 members. Other grassroots organizations, including the Safe and Secure Internet Gambling Initiative, have formed in opposition to UIGEA, to promote the freedom of individuals to gamble online with the proper safeguards to protect consumers and ensure the integrity of financial transactions. On November 27, 2009, Department of the Treasury Secretary Timothy F. Geithner and Federal Reserve Chairman Ben S. Bernanke announced a six-month delay, until June 1, 2010, for required compliance with the Unlawful Internet Gambling Enforcement Act of 2006 (UIGEA). The move blocks regulations to implement the legislation which requires the financial services sector to comply with ambiguous and burdensome rules in an attempt to prevent unlawful Internet gambling transactions. On July 28, 2010, the House Financial Services Committee passed H.R. 
2267 by a vote of 41–22–1. The bill would legalize and regulate online poker in the United States. In September 2010, the Washington State Supreme Court upheld a law making playing poker online a felony. On April 15, 2011, in U. S. v. Scheinberg et al. (10 Cr. 336), the Federal Bureau of Investigation temporarily shut down three major poker .com websites of Full Tilt Poker, Poker Stars, and Absolute Poker, and seized several of their bank accounts. A grand jury charged 11 defendants, including the founders of the poker sites, with bank fraud, money laundering, and violating gambling laws. The prosecutors claim the individuals tricked or influenced U.S. banks into receiving profits from online gambling, an act that violated UIGEA. The same day, former Senator D'Amato released a comment on behalf of the PPA. He asserts that, "Online poker is not a crime and should not be treated as such." D'Amato made no comment on the specific charges raised but promised a response once the "full facts become available." He responded in the Washington Post on April 22. The actions by the Department of Justice were also criticized by gaming law experts, including I. Nelson Rose. On September 20, 2011, in response to guidance requested by the states of Illinois and New York regarding the sale of lottery tickets online, the Department of Justice issued a memorandum opinion stating that the Wire Act does not prohibit lottery sales over the internet because it deals solely with wagering on sporting contests. While this opinion does not address online poker specifically, the reasoning employed interprets the Wire Act in such a way that its provisions don't apply to the game of poker. On August 21, 2012, a federal judge in New York ruled that poker is not gambling under federal law because it is primarily a game of skill, not chance. The ruling resulted in the dismissal of a federal criminal indictment against a man convicted of conspiring to operate an illegal underground poker club. The judge relied in his decision largely on findings by a defense expert who analyzed Internet poker games. On April 30, 2013, Nevada became the first U.S. state to allow persons physically located within the state and at least 21 years of age to play poker online for money legally. In late October, Delaware launched its regulated online gambling market. Controlled by the Delaware Lottery, the state offers online casino games in addition to online poker. On February 25, 2014, Nevada Governor Brian Sandoval and Delaware Governor Jack Markell signed the first interstate poker compact, an agreement that will allow online poker players from Nevada to play for real money against players located in Delaware. The compact is limited to online poker only, as that is the only game currently permitted under Nevada law. Should more states enter into the agreement, something that is provided for under the terms of the compact, more games could be offered. Following an agreement between Nevada, Delaware, and New Jersey governments to allow player pooling between all three states, a three-state online poker compact went live on May 1, 2018. Australia In Australia the Interactive Gambling Act was signed into law in 2001. The act makes it illegal for online poker providers to operate or advertise their services in Australia. The intention of the act was to entirely prohibit online poker, but the act itself only forbids operators based in Australia from providing their service. 
It did not prohibit citizens from accessing the online poker services of providers that were based overseas. The Interactive Gambling Amendment Bill was passed in 2017 in response to the failings of the 2001 Interactive Gambling Act. This provided a significant improvement towards ensuring consumer protection and responsible gaming in Australian citizens. This latest bill successfully forced the major poker companies to stop offering their services to Australian citizens. Although there are certain provisions in the law which allow licensed establishments to provide online poker services, there is no agency set up to issue any of such required licenses. How online poker rooms profit Typically, online poker rooms generate the bulk of their revenue via four methods. First, there is the rake. Similar to the vig paid to a bookie, the rake is a fee paid to the house for hosting the game. Rake is collected from most real money ring game pots. The rake is normally calculated as a percentage of the pot based on a sliding scale and capped at some maximum fee. Each online poker room determines its own rake structure. Since the expenses for running an online poker table are smaller than those for running a live poker table, rake in most online poker rooms is much smaller than its brick and mortar counterpart. Second, hands played in pre-scheduled multi-table and impromptu sit-and-go tournaments are not raked, but rather an entry fee around five to ten percent of the tournament buy-in is added to the entry cost of the tournament. These two are usually specified in the tournament details as, e.g., $20+$2 ($20 represents the buy-in that goes into the prize pool and $2 represents the entry fee, de facto rake). Unlike real casino tournaments, online tournaments do not deduct dealer tips and other expenses from the prize pool. Third, some online poker sites also offer side games like blackjack, roulettes, or side bets on poker hands where the player plays against "the house" for real money. The odds are in the house's favor in these games, thus producing a profit for the house. Some sites go as far as getting affiliated with online casinos, or even integrating them into the poker room software. Fourth, like almost all institutions that hold money, online poker sites invest the money that players deposit. Regulations in most jurisdictions exist in an effort to limit the sort of risks sites can take with their clients' money. However, since the sites do not have to pay interest on players' bankrolls even low-risk investments can be a significant source of revenue. Integrity and fairness Randomness of the shuffle Many critics question whether the operators of such games - especially those located in jurisdictions separate from most of their players - might be engaging in fraud themselves. Internet discussion forums are rife with allegations of non-random card dealing, possibly to favour house-employed players or "bots" (poker-playing software disguised as a human opponent), or to give multiple players good hands thus increasing the bets and the rake, or simply to prevent new players from losing so quickly that they become discouraged. However, despite anecdotal evidence to support such claims, others argue that the rake is sufficiently large that such abuses would be unnecessary and foolish. 
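In this context, a "fair" shuffle simply means that every ordering of the 52-card deck is equally likely and unpredictable to outsiders. The short sketch below is only a minimal illustration of that idea in Python, using the standard Fisher–Yates algorithm driven by a cryptographically secure random number generator; it is not any site's actual implementation, which would typically be audited and hardware-seeded.

import secrets  # CSPRNG-backed randomness from the Python standard library

RANKS = "23456789TJQKA"
SUITS = "cdhs"

def fresh_deck():
    # Build the 52-card deck as rank+suit strings, e.g. "As", "Td".
    return [r + s for r in RANKS for s in SUITS]

def secure_shuffle(deck):
    # In-place Fisher-Yates shuffle: every permutation is equally likely,
    # provided each index below is drawn uniformly at random.
    for i in range(len(deck) - 1, 0, -1):
        j = secrets.randbelow(i + 1)  # uniform integer in 0..i
        deck[i], deck[j] = deck[j], deck[i]
    return deck

deck = secure_shuffle(fresh_deck())
print("Hole cards:", deck[:2], "Flop:", deck[2:5])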
Attempts at manipulative dealing could face a risk of third-party detection due to increasingly sophisticated tracking software that could be used to detect any number of unusual patterns, though such analyses are not generally available in the public domain. Many players claim to see "bad beats", with large hands pitted against one another, far more often than in live games. However, this could be caused by the higher number of hands per hour at online cardrooms. Since online players get to see more hands, their likelihood of seeing more improbable bad beats or randomly large pots is similarly increased. Many online poker sites are certified by major auditing firms like PricewaterhouseCoopers, which review the fairness of the random number generator, shuffle, and payouts. Insider cheating Insider cheating can occur when a person with trusted access to the system (e.g. an employee of the poker room) uses their position to play poker themselves with an unfair advantage. This could be done without the knowledge of the site managers. Perhaps the first known major case came to light in October 2007, when Absolute Poker acknowledged that its integrity had been breached by an employee, who had been able to play at high stakes while viewing his opponents' hidden "hole" cards. The cheating was first brought to light by the efforts of players, whose saved histories of play showed the employee was playing as only someone who could see their opponents' cards could. In 2008, UltimateBet became embroiled in a similar scandal, with former employees accused of using a software backdoor to see opponents' cards. UltimateBet confirmed the allegations on May 29. The Kahnawake Gaming Commission announced sanctions against UltimateBet as a result. Collusion More mundane cheating involves collusion between players, or the use of multiple accounts by a single player. Collusion is not limited to online play but can occur in any poker game with three or more players. Most poker rooms claim to actively scan for such activity. For example, in 2007, PokerStars disqualified TheV0id, the winner of the main event of the World Championship of Online Poker, for breaching their terms of service. Differences from conventional poker Online poker and conventional poker have several differences. One difference is that players do not sit near each other, removing the ability to observe others' reactions and body language. Instead, online poker players focus on opponents' betting patterns, reaction times, speed of play, use of check boxes/auto plays, opponents' fold/flop percentages, chat box, waiting for the big blind, beginners' tells, and other behavior tells that are not physical in nature. Since poker requires adaptability, successful online players learn to master the new frontiers of their surroundings. Another less obvious difference is the rate of play. In brick and mortar cardrooms, the dealer has to collect, shuffle, and deal the cards after every hand. Due to this and other delays common in offline casinos, the average rate of play is around thirty hands per hour. Online casinos do not have these delays. Dealing and shuffling are instantaneous, there are no delays relating to counting chips (for a split pot), and on average the play is faster due to "auto-action" buttons (where the player selects their action before their turn). It is not uncommon for an online poker table to average ninety to one hundred hands per hour. 
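The practical effect of this difference in pace on a player's bottom line is simple arithmetic, sketched below with an assumed, purely illustrative win rate of $10 per 100 hands (the same figure used in the example later in this section); the table counts and hands-per-hour values are those quoted in the text.

WIN_RATE_PER_HAND = 10.0 / 100  # assumed: $10 won per 100 hands of low-limit play

def hourly_rate(tables, hands_per_hour_per_table):
    # Expected hourly earnings: tables x hands per hour x win rate per hand.
    return tables * hands_per_hour_per_table * WIN_RATE_PER_HAND

live = hourly_rate(tables=1, hands_per_hour_per_table=30)    # one live table
online = hourly_rate(tables=4, hands_per_hour_per_table=60)  # four online tables
print(f"Live, one table: ${live:.2f}/hour")        # $3.00/hour
print(f"Online, four tables: ${online:.2f}/hour")  # $24.00/hour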
Online poker is cheaper to play than conventional poker. While the rake structures of online poker sites might not differ from those in brick and mortar operations, most of the other incidental expenses entailed by playing in a live room do not exist in online poker. An online poker player can play at home and incur no transportation costs to and from the poker room. Provided the player already has a computer and an Internet connection, there are no up-front equipment costs to get started. There are also considerable incidental expenses at a live poker table. Besides the rake, tipping the dealers, chip runners, servers, and other casino employees is expected, putting a further drain on a player's profits. Also, while an online player can enter and leave tables almost as they please, once seated at a live table a player must remain there until they wish to stop playing or else go back to the bottom of the waiting list. Food and beverages at casinos are expensive even compared to other hospitality establishments in the same city, let alone at home, and casino managers have little incentive to provide complimentary food or drink for poker players. In brick and mortar casinos, the only real way a player can increase their earnings is to increase their limit, likely encountering better opponents in the process. In the online world, players have another option: play more tables. Unlike a traditional casino where it is physically impossible to play at more than one table at a time, most online poker rooms permit this. Depending on the site and the player's ability to make speedy decisions, a player might play several tables at the same time, viewing them each in a separate window on the computer display. For example, an average profit around $10 per 100 hands at a low-limit game is generally considered to be good play. In a casino, this would earn a player under $4 an hour. After dealer tips, the "winning" player would probably barely break even before any other incidental expenses. In an online poker room, a player with the same win rate playing a relatively easy pace of four tables at once at a relatively sluggish 60 hands per hour each earns about $24/hour on average. The main restriction limiting the number of tables a player can play is the need to make consistently good decisions within the allotted time at every table, but some online players can effectively play up to eight or more tables at once. This can not only increase winnings but can also help to keep a player's income reasonably stable, since instead of staking their entire bankroll on one higher limit table they are splitting their bankroll, wins and losses amongst many lower limit tables, probably also encountering somewhat less skilled opponents in the process. Another important difference results from the fact that some online poker rooms offer online poker schools that teach the basics and significantly speed up the learning curve for novices. Many online poker rooms also provide free money play so that players may practice these skills in various poker games and limits without the risk of losing real money, and generally offer the hand history of played hands for analysis and discussion using a poker hand converter. People who previously had no way to learn and improve because they had no one to play with now have the ability to learn the game much quicker and gain experience from free-money play. The limits associated with online poker range down to far lower levels than the table limits at a traditional casino. 
The marginal cost of opening each online table is so minuscule that on some gambling sites players can find limits as low as $.01–$.02. By comparison, at most brick and mortar establishments the lowest limits are often $1–$2. Few (if any) online poker sites allow action to be taken "in the dark", while this is usually allowed, and used by players, in real gaming houses. It is also not uncommon for online poker sites not to allow a player the option of showing their hand before folding if they are giving up the pot to the last remaining bettor. This practice is, however, typically allowed in casinos. Currency issues One issue exclusive to online poker is the fact that players come from around the world and deal in a variety of currencies. This is not an issue in live poker where everyone present can be expected to carry the local currency. Most online poker sites operate games exclusively in U.S. dollars, even if they do not accept players based in the United States. There are two methods by which poker sites can cater to players who do not deal with U.S. dollars on a regular basis. The first method is to hold players' funds in their native currencies and convert them only when players enter and leave games. The main benefit of this method for players is to ensure that bankrolls are not subject to exchange rate fluctuations against their local currencies while they are not playing. Also, most sites that use this method usually apply the same exchange rate when a player cashes out of a game as when they bought in, ensuring that players do not expend significant sums simply by entering and leaving games. The other method is to require players to convert their funds when depositing them. However, some sites that use this policy do accept payments in a variety of currencies and convert funds at a lower premium compared to what banks and credit card companies would charge. Others only accept payment in U.S. dollars. One benefit of this method is that a player who continually "tops up" their chip stack to a constant level (some poker rooms have an optional feature that can perform this function automatically) does not have to worry about rounding issues when topping up with a nominal sum – these could add up over time. Players may also make use of e-wallets, virtual wallets that allow players to store their funds online in the currency of their choice. Using crypto-only poker platforms, such as SWC Poker, allows users to deposit and withdraw funds from poker platforms without worrying about further currency conversion and identity checks. Many online poker sites, particularly those that serve the United States, began adopting cryptocurrencies in 2013 as a means of bypassing the UIGEA. The majority of these poker rooms accept deposits in Bitcoin and then convert them to U.S. dollars, performing this process in reverse when paying out winnings. There also exist cryptocurrency-only operators who denominate their games in Bitcoin or fractions of a bitcoin, avoiding fiat currencies entirely. Poker tools Various software applications are available for online play. Such tools include hand database programs that save, sort, and recall all hand histories played online. Scanning the active tables for known players and displaying previous statistics from hands with those players next to their name (known as a heads-up display or HUD) is a common feature of these programs and is allowed by most sites. Other programs include hand re-players and odds, equity or variance calculators. 
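As a small illustration of the kind of computation such an odds calculator performs, the sketch below counts the exact chance of completing a nine-out flush draw seen on the flop (47 cards remain unseen after two hole cards and three board cards). The function name is made up for this example and is not taken from any particular tool.

from fractions import Fraction

def draw_completion_odds(outs, unseen=47, cards_to_come=2):
    # Probability that at least one of `outs` helpful cards appears among
    # the next `cards_to_come` cards dealt from `unseen` unknown cards.
    miss = Fraction(1)
    for i in range(cards_to_come):
        miss *= Fraction(unseen - outs - i, unseen - i)
    return 1 - miss

p = draw_completion_odds(outs=9)  # flush draw on the flop
print(f"Chance of making the flush by the river: {float(p):.1%}")  # about 35.0%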
Some software goes as far as to provide you with quizzes, or scan your previously played hands and flag likely mistakes. Bonuses Many online poker sites offer incentives to players, especially new depositors, in the form of bonuses. Usually, the bonuses are paid out incrementally as certain amounts are raked by the player. For example, a site may offer a player who deposits $100 a bonus of $50 that awards $5 every time the player rakes $25. To earn the full $50 bonus sum, the player would have to rake $250 in total. Many online cardrooms also have VIP programs to reward regular players. Poker rooms often offer additional bonuses for players who wish to top-up their accounts. These are known as reload bonuses. Many online rooms also offer rakeback, and some offer poker propping. See the online casino article for more on general information on bonuses. See also Computer poker players Poker companies Notes External links Poker Online games Poker
9066499
https://en.wikipedia.org/wiki/VPSKeys
VPSKeys
VPSKeys is a freeware input method editor developed and distributed by the Vietnamese Professionals Society (VPS). One of the first input method editors for Vietnamese, it allows users to add accent marks to Vietnamese text on computers running Microsoft Windows. The first version of VPSKeys, supporting Windows 3.1, was released in 1993. The most recent version is 4.3, released in October 2007. Features VPSKeys supports the Telex, VISCII, VNI, and VIQR input methods, as well as a number of character encodings. One of its unique features is a "hook/tilde dictionary", which provides spelling suggestions for distinguishing words written with the hỏi (hook) or ngã (tilde) tones. This feature is helpful for speakers of dialects in which these two tones have merged. VPS character encoding The "VPS" character encoding for writing Vietnamese replaces several control characters, including several C0 control characters, with letters, while including the ASCII graphical characters unmodified, a similar approach to VSCII-1 (TCVN1) and VISCII. Trojan incident In March 2010, Google and McAfee announced on their security blogs that they believed hackers had compromised the VPS website and replaced the program with a trojan. The trojan, which McAfee has code-named W32/VulcanBot, creates a botnet that could be used to launch distributed denial of service attacks on websites critical of the Vietnamese government's plan to mine bauxite in the country's Central Highlands. McAfee suspects that the authors of the trojan have ties to the Vietnamese government. However, Nguyễn Tử Quảng of Bách Khoa Internet Security (Bkis) called McAfee's accusation "somewhat premature". The Vietnamese Ministry of Foreign Affairs issued a statement calling Google's and McAfee's comments "groundless". VPS discovered a breach on their website on January 22, 2010, and restored the clean software at that time, but did not publicize the incident widely because they did not realize the serious nature of the matter. References External links Vietnamese Professionals Society Download VpsKeys 4.3 Vietnamese character input Windows-only freeware
28499702
https://en.wikipedia.org/wiki/List%20of%20information%20technology%20initialisms
List of information technology initialisms
The table below lists information technology initialisms and acronyms in common and current usage. These acronyms are used to discuss LAN, internet, WAN, routing and switching protocols, and their applicable organizations. The table contains only current, common, non-proprietary initialisms that are specific to information technology. Most of these initialisms appear in IT career certification exams such as CompTIA A+. See also List of computing and IT abbreviations References Information technology Information technology acronyms
24678542
https://en.wikipedia.org/wiki/Plex%20Systems
Plex Systems
Plex Systems, Inc. is a software company based in Troy, Michigan. The company develops and markets the Plex Manufacturing Cloud, a software as a service (SaaS) or cloud computing ERP for manufacturing. Overview Plex Systems began as an internal IT project at an automotive parts manufacturer, MSP Industries Corporation, in 1989. The company was formed as Plexus Systems LLC in 1995 by Robert Beatty, providing client/server manufacturing software. The company began offering its software via the software as a service (SaaS) or cloud computing model when Plexus Online was launched in 2001. In 2006 Apax Partners acquired a majority interest in the company. In 2009, the company changed its name to Plex Systems and renamed its flagship product Plex Online. In June 2012, the company announced its acquisition by Francisco Partners from a group of shareholders that included Apax Partners. Also in June 2012, Accel Partners invested $30 million in Plex. In June 2014, Plex secured $50 million in additional funding led by T. Rowe Price, which joined existing investors Francisco Partners and Accel Partners; the investment was intended to support expanded product development, as well as investments in marketing and sales. In June 2021, Plex was acquired by Rockwell Automation for $2.22 billion in cash. Aberdeen Group suggested in its "Aberdeen AXIS: ERP in Manufacturing 2009" report that Plex Systems was among the top four performing ERP vendors. Plex was the only ERP software solution provider placed entirely within the "Champion" performance category, just ahead of SAP AG. (There is evidence of Plex acting as a sponsor for Aberdeen Group, so this report may be biased; however, other vendors evaluated in the same report are also sponsors of Aberdeen Group.) Historically, the company has not released detailed financial information, citing its status as a privately held corporation. However, in May 2012, the company reported a revenue increase of 30.6% in the first quarter ending March 31, compared to a year earlier. Recurring revenue increased by 30.5 percent, representing the 19th consecutive quarter of growth. Plex is known as the first provider of a complete SaaS ERP solution for manufacturing companies. Several IT software bloggers have written about Plex's ability to provide a wide scope of critical features for manufacturers in a SaaS model, where larger ERP vendors had not succeeded at the time. The Plex Manufacturing Cloud The Plex Manufacturing Cloud is a software as a service (SaaS) or cloud application ERP that manages the manufacturing process and supports the functions of production, inventory, shipping, supply-chain management, quality, accounting, sales, and human resource departments, in addition to the traditional ERP roles of finance/accounting, procurement, human capital management, etc. Plex is targeted towards manufacturing industries with rigorous traceability, quality and food safety requirements, including automotive, aerospace, food & beverage, and life sciences or medical manufacturing. The system is accessed using a web browser, making its functions available from anywhere with an Internet connection. The software is designed to provide managers and engineers with real-time visibility into production data. While Plex Systems calls the SaaS solution "ERP", the software also includes the following integrated functions: Enterprise resource planning (ERP): Manufacturing execution system (MES) or manufacturing operations management (MOM) Quality management systems (QMS).
It helps maintain compliance with quality standards including ISO 9000 and ISO 14000, QS-9000, TS-16949, AS-9100, etc. Customer relationship management (CRM) Supply chain management software (SCMS): Software as a service (SaaS) or cloud application The Plex Manufacturing Cloud is built on a multi-tenant architecture. Software as a service (SaaS) (also referred to as cloud application) is an application delivery model in which the user accesses software over the Internet, from anywhere, at any time. The physical location and ownership/maintenance burden of the system that actually serves the software is outside the responsibility and concern of the end users. Some IT professionals have expressed concern about moving ERP to a SaaS model. At the same time many companies have successfully performed such deployments with Plex Systems and other providers. SaaS applications are deployed atop the platform layer of the cloud computing stack. These applications tend to be sold as a subscription, shifting the burden of the software cost across the useful life of the software, and tend to be accounted as an operating expense (OpEx). This is in contrast to traditional methods that require upfront payment or financing and tend to be accounted as a capital expense (CapEx). See also List of ERP software packages List of ERP vendors References 1995 establishments in Michigan American companies established in 1995 Cloud computing providers Companies based in Troy, Michigan ERP software companies Production and manufacturing software Software companies based in Michigan Software companies established in 1995 Software companies of the United States
18936632
https://en.wikipedia.org/wiki/Victrix
Victrix
Victrix is a genus of moths in the family Noctuidae described by Otto Staudinger in 1879. It may be synonymous with the genus Moureia. Species Subgenus Victrix Victrix karsiana Staudinger, 1879 Armenia, north-eastern Turkey, Asia Minor Victrix gracilis (Wagner, 1931) Turkey Victrix agenjoi (Fernández, 1931) Spain Victrix artaxias Varga & Ronkay, 1989 Armenia Victrix pinkeri Hacker & Lödl, 1989 Victrix marmorata (Warren, 1914) Qinghai Subgenus Rasihia Victrix acronictoides Han & Kononenko, 2017 Yunnan Victrix boursini (Draudt, 1936) Turkey Victrix chloroxantha (Boursin, 1957) Afghanistan Victrix commixta (Warren, 1909) northern Afghanistan Victrix confucii (Alphéraky, 1892) Tibet Victrix conspersa (Christoph, 1893) Turkmenistan Victrix diadela (Hampson, 1908) western Turkestan Victrix duelduelica (Osthelder, 1932) Turkey Victrix gracilior (Draudt, 1950) Victrix hackeri Varga & Ronkay, 1991 Victrix illustris Varga & Ronkay, 1991 Afghanistan Victrix klapperichi Hacker, 2001 Victrix lichenodes Boursin, 1969 Afghanistan Victrix macrosema (Boursin, 1957) northern Iran Victrix marginelota (Joannis, 1888) Syria, Transcaspia Victrix nanata (Draudt, 1950) Yunnan Victrix octogesima (Boursin, 1960) Afghanistan Victrix precisa (Warren, 1909) Morocco, Algeria Victrix sassanica Wiltshire, 1961 Iran Victrix superior (Draudt, 1950) Yunnan Victrix tabora (Staudinger, [1892]) Syria, Turkey Victrix tristis (Rungs, 1945) western Sahara Subgenus Chytobrya Victrix albida (Draudt, 1950) Sichuan Victrix bryophiloides (Draudt, 1950) Yunnan Victrix fraudatrix (Draudt, 1950) Sichuan, Yunnan Victrix perlopsis (Draudt, 1950) Sichuan Subgenus Poliobrya Victrix patula (Püngeler, 1906) eastern Turkestan, Xinjiang Victrix umovii (Eversmann, 1846) Sweden, Finland, Estonia, Latvia, Lithuania, Poland, Ukraine, Moldova, western Kazakhstan, Urals, south-western Siberia Victrix svetlanae Koshkin & Pekarsky, 2020 south-eastern Siberia Victrix fabiani Varga & Ronkay, 1989 Mongolia Victrix frigidalis Varga & Ronkay, 1991 Victrix akbet Volynkin, Titov & Cernila, 2019 north-easter Kazakhstan Subgenus Micromima Matov, Fibiger & Ronkay, 2009 Victrix bogdoana Matov, Fibiger & Ronkay, 2009 Victrix bioculalis (Caradja, 1934) Mongolia, northern China Victrix sinensis Han, Kononenko & Behounek, 2011 Fujian, Guangdong, Zhejiang, Shaanxi Victrix tripuncta (Draudt, 1950) Shanxi References Acronictinae
69195464
https://en.wikipedia.org/wiki/National%20Initiative%20for%20Cybersecurity%20Careers%20and%20Studies
National Initiative for Cybersecurity Careers and Studies
National Initiative for Cybersecurity Careers and Studies (NICCS) is an online training initiative and portal built in accordance with the National Initiative for Cybersecurity Education framework. It is a federal cybersecurity training subcomponent operated and maintained by the Cybersecurity and Infrastructure Security Agency. History The initiative was launched by Janet Napolitano, then Secretary of Homeland Security, on February 21, 2013. The primary objective of the initiative is to develop and train the next generation of American cyber professionals by involving academia and the private sector. Federal Virtual Training Environment NICCS hosts the Federal Virtual Training Environment, a completely free online cybersecurity training system for federal and state government employees. It contains more than 800 hours of training materials on ethical hacking and surveillance, risk management, and malware analysis. See also Cybersecurity and Infrastructure Security Agency National Cyber Security Division National Initiative for Cybersecurity Education References Initiatives in the United States Computer network security
4676670
https://en.wikipedia.org/wiki/SuperPaint%20%28Macintosh%29
SuperPaint (Macintosh)
SuperPaint is a graphics program capable of both bitmap painting and vector drawing. SuperPaint was one of the first programs of its kind, combining the features of MacPaint and MacDraw whilst adding many new features of its own. It was originally written by William Snider, published by Silicon Beach Software (which was acquired by Aldus Corporation in 1990), and released in 1986 for the Apple Macintosh. William Snider wrote and designed the program from his house on an Apple Lisa in Pascal. It was the only program that outsold Silicon Beach's Dark Castle games, but SuperPaint was much more lucrative for the company, representing about 70% of the revenue. The program and packaging was also localised into Japanese. As it requires Classic, SuperPaint is unsupported as of Mac OS X version 10.5, but can still be used with the assistance of Mac OS emulators. History Version 1.0, released 1986, has a fixed position user interface with palettes arranged on the left and bottom edges of the screen. Includes LaserBits 300dpi editing mode and the ability to print in colour despite only being able to display in black & white. 1.1, released 1988, included the SuperConvert app to convert to/from LaserBits; was bundled with Microsoft Word 4.0 for Macintosh in 1990. 2.0, released 1989, introduced many new features including: AutoTrace, SuperBits (formerly LaserBits), freehand Bézier tool, multi-page documents, rich text in text blocks, rotation and transformations, plug-ins, a multi-palette user interface, custom tools in the paint palette. 3.0, released 1991, was a major revision that added many extra features, most notably colour support, but also image enhancement functions and texture fills; hot keys were revamped to simplify the interface; tear-off palettes. 3.5, released 1993, brought support for System 7, copy brush tool, several other new drawing tools including some that are pressure sensitive, expanded importing capability including still frames from QuickTime. This was the final version of the app. Later versions were published by Aldus after their 1990 acquisition of Silicon Beach Software. The application continued to be sold by Adobe after their 1994 takeover of Aldus. Uses Since Artist Richard Bolam used images drawn using Aldus SuperPaint in the 1990s as part of his "Bolam at 50" exhibition in 2014. References External links A screenshot from SuperPaint version 1 How to create a SuperPaint Menu Command plug-in (includes source code) How to create a SuperPaint Interactive Paint Tool plug-in (includes source code) Graphics software Raster graphics editors Macintosh-only software Classic Mac OS software Macintosh graphics software Discontinued software Aldus software 1986 software
1265916
https://en.wikipedia.org/wiki/Postfix%20%28software%29
Postfix (software)
Postfix is a free and open-source mail transfer agent (MTA) that routes and delivers electronic mail. It is released under the IBM Public License 1.0, which is a free software license. Alternatively, starting with version 3.2.5, it is available under the Eclipse Public License 2.0 at the user's option. Originally written in 1997 by Wietse Venema at the IBM Thomas J. Watson Research Center in New York, and first released in December 1998, Postfix continues to be actively developed by its creator and other contributors. The software is also known by its former names VMailer and IBM Secure Mailer. According to a March 2021 study performed by E-Soft, Inc., approximately 32% of the publicly reachable mail servers on the Internet ran Postfix, making it the second most popular mail server behind Exim. Typical deployment As an SMTP server, Postfix implements a first layer of defense against spambots and malware. Administrators can combine Postfix with other software that provides spam/virus filtering (e.g., Amavisd-new), message-store access (e.g., Dovecot), or complex SMTP-level access policies (e.g., postfwd, milter-regex, policyd-weight). As an SMTP client, Postfix implements a high-performance parallelized mail-delivery engine. Postfix is often combined with mailing-list software (such as Mailman). Operating systems Postfix runs (or has run) on AIX, BSD, HP-UX, Linux, macOS, Solaris and, generally speaking, on every Unix-like operating system that ships with a C compiler and delivers a standard POSIX development environment. It is the default MTA for the macOS, NetBSD, RedHat/CentOS and Ubuntu operating systems. Architecture Postfix consists of a combination of server programs that run in the background, and client programs that are invoked by user programs or by system administrators. The Postfix core consists of several dozen server programs that run in the background, each handling one specific aspect of email delivery. Examples are the SMTP server, the scheduler, the address rewriter, and the local delivery server. For damage-control purposes, most server programs run with fixed reduced privileges, and terminate voluntarily after processing a limited number of requests. To conserve system resources, most server programs terminate when they become idle. Client programs run outside the Postfix core. They interact with Postfix server programs through mail delivery instructions in the user's ~/.forward file, and through small "gate" programs to submit mail or to request queue status information. Other programs provide administrative support to start or stop Postfix, query status information, manipulate the queue, or to examine or update its configuration files. In diagrams of the Postfix architecture, yellow ellipses represent individual Postfix daemons, each serving exactly one purpose; this split into many small programs is considered one of the reasons why Postfix is secure and stable. Blue boxes represent so-called lookup tables; a lookup table consists of two columns (key and value) containing information used for access control, e-mail routing, and so on. Orange boxes are either mail queues or files; in either case, e-mails are stored on persistent media (e.g., a hard disk). White clouds stand for the points at which e-mails enter or leave Postfix; for example, smtpd receives mail from other mail servers or users, whereas smtp relays mail to other MTAs. Implementation The Postfix implementation uses safe subsets of the C language and of the POSIX system API.
These subsets are buried under an abstraction layer that contains about 50% of all Postfix source code, and that provides the foundation on which all Postfix programs are built. For example, the "vstring" primitive makes Postfix code resistant to buffer overflow attacks, and the "safe open" primitive makes Postfix code resistant to race condition attacks on systems that implement the POSIX file system API. This abstraction layer does not affect the attack resistance of non-Postfix code, such as code in system libraries or in third-party libraries. Robustness Conceptually, Postfix manages pipelines of processes that pass the responsibility for message delivery and error notification from one process to the next. All message and notification "state" information is persisted in the file system. The processes in a pipeline operate mostly without centralized control; this relative autonomy simplifies error recovery. When a process fails before completing its part of a file or protocol transaction, its predecessor in the pipeline backs off and retries the request later, and its successor in the pipeline discards unfinished work. Many Postfix daemons can simply "die" when they run into a problem; they are automatically restarted when the next service request arrives. This approach makes Postfix highly resilient, as long as the operating system or hardware do not fail catastrophically. Performance A single Postfix instance has been clocked at ~300 message deliveries/second across the Internet, running on commodity hardware (a vintage-2003 Dell 1850 system with battery-backed MegaRAID controller and two SCSI disks). This delivery rate is an order of magnitude below the "intrinsic" limit of 2500 message deliveries/second that was achieved with the mail queue on a RAM disk while delivering to the "discard" transport (with a dual-core Opteron system in 2007). Mail systems such as Postfix and Qmail achieve high performance by delivering mail in parallel sessions. With mail systems such as Sendmail and Exim that make one connection at a time, high performance can be achieved by submitting limited batches of mail in parallel, so that each batch is delivered by a different process. Postfix and Qmail require parallel submission into different MTA instances once they reach their intrinsic performance limit, or the performance limits of the hardware or operating system. The delivery rates cited above are largely theoretical. With bulk mail delivery, the true delivery rate is primarily determined by the receiver's mail receiving policies and by the sender's reputation. Base configuration The main.cf file stores site-specific Postfix configuration parameters, while master.cf defines daemon processes. The Postfix Basic Configuration tutorial covers the core settings that each site needs to consider, and the Postfix Standard Configuration Examples document discusses configuration settings for a few common environments. The Postfix Address Rewriting document covers address rewriting and mail routing. The full documentation collection is available from the Postfix Documentation page. More complex Postfix implementations may include integration with other applications such as SpamAssassin, support for multiple virtual domain names, and the use of databases such as MySQL to control complex configurations.
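As a rough illustration of the base configuration described above, the following main.cf fragment is a hedged sketch for a hypothetical example.com site: the parameter names (myhostname, mydomain, myorigin, mydestination, mynetworks, relayhost, smtpd_relay_restrictions) are real Postfix settings, but the values shown are placeholders and do not constitute a recommended or complete configuration.

# /etc/postfix/main.cf -- minimal illustrative fragment, not a complete setup
myhostname = mail.example.com
mydomain = example.com
myorigin = $mydomain
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
mynetworks = 127.0.0.0/8 [::1]/128
relayhost =
# accept mail for the domains above, but refuse to relay for strangers
smtpd_relay_restrictions = permit_mynetworks, permit_sasl_authenticated, defer_unauth_destination

After editing main.cf, the changes are picked up with the postfix reload command.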
Release history See also List of mail servers Comparison of mail servers Email filtering References Further reading External links Postfix "how to" with configuration examples and explanation Message transfer agents Free email server software IBM software Unix network-related software 1997 software Email server software for Linux
16476804
https://en.wikipedia.org/wiki/2357%20Phereclos
2357 Phereclos
2357 Phereclos is a large Jupiter trojan from the Trojan camp, approximately 95 kilometers in diameter. It was discovered on 1 January 1981, by American astronomer Edward Bowell at the Anderson Mesa Station near Flagstaff, Arizona, in the United States. The dark and possibly spherical D-type asteroid is among the 30 largest Jupiter trojans and has a rotation period of 14.4 hours. It was named after the shipbuilder Phereclos from Greek mythology. Orbit and classification Phereclos is a dark Jovian asteroid orbiting in the trailing Trojan camp at Jupiter's L5 Lagrangian point, 60° behind its orbit in a 1:1 resonance (see Trojans in astronomy). This Jupiter trojan is also a non-family asteroid of the Jovian background population. It orbits the Sun at a distance of 5.0–5.4 AU once every 11 years and 11 months (4,344 days; semi-major axis of 5.21 AU). Its orbit has an eccentricity of 0.05 and an inclination of 3° with respect to the ecliptic. The asteroid was first observed at Lowell Observatory in September 1929. The body's observation arc begins at Lowell one month later with a precovery taken in October 1929, or more than 51 years prior to its official discovery observation at Anderson Mesa. Physical characteristics Phereclos is a dark D-type asteroid, according to the Tholen classification, the SDSS-based taxonomy and the survey conducted by Pan-STARRS. Rotation period In July 2010, a rotational lightcurve of Phereclos was obtained from photometric observations by Stefano Mottola using the 1.2-meter telescope at Calar Alto Observatory in Spain. Lightcurve analysis gave a rotation period of 14.394 hours with a low brightness variation of 0.09 magnitude, indicative of a nearly spherical shape. Between 2010 and 2017, photometric follow-up observations by Robert Stephens at the Center for Solar System Studies, California, gave several concurring periods of 7.16 (half-period), 14.345 and 14.49 hours. Diameter and albedo According to the surveys carried out by the Infrared Astronomical Satellite IRAS, the Japanese Akari satellite and the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, Phereclos measures between 94.62 and 98.45 kilometers in diameter and its surface has an albedo between 0.049 and 0.0521. The Collaborative Asteroid Lightcurve Link adopts the results obtained by IRAS, that is, an albedo of 0.0521 and a diameter of 94.90 kilometers based on an absolute magnitude of 8.94. Naming This minor planet was named from Greek mythology after the skilled craftsman and shipbuilder Phereclos (Phereclus; Phereklos), who constructed the ship that Paris used to kidnap Helen. During the Trojan War, he was killed by the Greek hero Meriones. The official naming citation was published by the Minor Planet Center on 1 August 1981. Notes References External links Asteroid Lightcurve Database (LCDB), query form (info) Dictionary of Minor Planet Names, Google books Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center 002357 Discoveries by Edward L. G. Bowell Minor planets named from Greek mythology Named minor planets 002357 19810101
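The orbital period and adopted diameter quoted above can be sanity-checked with two standard relations; the short Python sketch below is an illustrative back-of-the-envelope check using the article's own figures, not part of the cited surveys.

import math

a = 5.21                          # semi-major axis in AU
period_years = a ** 1.5           # Kepler's third law for a heliocentric orbit
print(period_years, period_years * 365.25)   # ~11.9 years, ~4344 days

H, albedo = 8.94, 0.0521          # absolute magnitude and geometric albedo (IRAS)
diameter_km = 1329 / math.sqrt(albedo) * 10 ** (-H / 5)
print(diameter_km)                # ~94.9 km, matching the adopted diameter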
6336
https://en.wikipedia.org/wiki/Chorded%20keyboard
Chorded keyboard
A keyset or chorded keyboard (also called a chorded keyset, chord keyboard or chording keyboard) is a computer input device that allows the user to enter characters or commands formed by pressing several keys together, like playing a "chord" on a piano. The large number of combinations available from a small number of keys allows text or commands to be entered with one hand, leaving the other hand free. A secondary advantage is that it can be built into a device (such as a pocket-sized computer or a bicycle handlebar) that is too small to contain a normal-sized keyboard. A chorded keyboard minus the board, typically designed to be used while held in the hand, is called a keyer. Douglas Engelbart introduced the chorded keyset as a computer interface in 1968 at what is often called "The Mother of All Demos". Principles of operation Each key is mapped to a number and then can be mapped to a corresponding letter or command. By pressing two or more keys together the user can generate many combinations. In Engelbart's original mapping, he used five keys: 1, 2, 4, 8, 16. The keys were mapped as follows: a = 1, b = 2, c = 3, d = 4, and so on. If the user pressed keys 1 + 2 = 3 simultaneously, and then released the keys, the letter "c" appeared. Unlike pressing a chord on a piano, the chord is recognized only after all the keys or mouse buttons are released. Since Engelbart introduced the keyset, several different designs have been developed based on similar concepts. As a crude example, each finger might control one key which corresponds to one bit in a byte, so that using seven keys and seven fingers, one could enter any character in the ASCII set—if the user could remember the binary codes. Due to the small number of keys required, chording is easily adapted from a desktop to mobile environment. Practical devices generally use simpler chords for common characters (e.g., Baudot), or may have ways to make it easier to remember the chords (e.g., Microwriter), but the same principles apply. These portable devices first became popular with the wearable computer movement in the 1980s. Thad Starner from Georgia Institute of Technology and others published numerous studies showing that two-handed chorded text entry was faster and yielded fewer errors than on a QWERTY keyboard. Currently stenotype machines hold the record for fastest word entry. Many stenotype users can reach 300 words per minute. However, stenographers typically train for three years before reaching professional levels of speed and accuracy. History The earliest known chord keyboard was part of the "five-needle" telegraph operator station, designed by Wheatstone and Cooke in 1836, in which any two of the five needles could point left or right to indicate letters on a grid. It was designed to be used by untrained operators (who would determine which keys to press by looking at the grid), and was not used where trained telegraph operators were available. The first widespread use of a chord keyboard was in the stenotype machine used by court reporters, which was invented in 1868 and is still in use. The output of the stenotype was originally a phonetic code that had to be transcribed later (usually by the same operator who produced the original output), rather than arbitrary text—automatic conversion software is now commonplace. In 1874, the five-bit Baudot telegraph code and a matching 5-key chord keyboard was designed to be used with the operator forming the codes manually. 
The code is optimized for speed and low wear: chords were chosen so that the most common characters used the simplest chords. But telegraph operators were already using typewriters with QWERTY keyboards to "copy" received messages, and at the time it made more sense to build a typewriter that could generate the codes automatically, rather than making them learn to use a new input device. Some early keypunch machines used a keyboard with 12 labeled keys to punch the correct holes in paper cards. The numbers 0 through 9 were represented by one punch; 26 letters were represented by combinations of two punches, and symbols were represented by combinations of two or three punches. Braille (a writing system for the blind) uses either 6 or 8 tactile 'points' from which all letters and numbers are formed. When Louis Braille invented it, it was produced with a needle holing successively all needed points in a cardboard sheet. In 1892, Frank Haven Hall, superintendent of the Illinois Institute for the Education of the Blind, created the Hall Braille Writer, which was like a typewriter with 6 keys, one for each dot in a braille cell. The Perkins Brailler, first manufactured in 1951, uses a 6-key chord keyboard (plus a spacebar) to produce braille output, and has been very successful as a mass market affordable product. Braille, like Baudot, uses a number symbol and a shift symbol, which may be repeated for shift lock, to fit numbers and upper case into the 63 codes that 6 bits offer. After World War II, with the arrival of electronics for reading chords and looking in tables of "codes", the postal sorting offices started to research chordic solutions to be able to employ people other than trained and expensive typists. In 1954, an important concept was discovered: chordic production is easier to master when the production is done at the release of the keys instead of when they are pressed. Researchers at IBM investigated chord keyboards for both typewriters and computer data entry as early as 1959, with the idea that it might be faster than touch-typing if some chords were used to enter whole words or parts of words. A 1975 design by IBM Fellow Nat Rochester had 14 keys that were dimpled on the edges as well as the top, so one finger could press two adjacent keys for additional combinations. Their results were inconclusive, but research continued until at least 1978. Doug Engelbart began experimenting with keysets to use with the mouse in the mid 1960s. In a famous 1968 demonstration, Engelbart introduced a computer human interface that included the QWERTY keyboard, a three button mouse, and a five key keyset. Engelbart used the keyset with his left hand and the mouse with his right to type text and enter commands. The mouse buttons marked selections and confirmed or aborted commands. Users in Engelbart's Augmentation Research Center at SRI became proficient with the mouse and keyset. In the 1970s the funding Engelbart's group received from the Advanced Research Projects Agency (ARPA) was cut and many key members of Engelbart's team went to work for Xerox PARC where they continued to experiment with the mouse and keyset. Keychord sets were used at Xerox PARC in the early 1980s, along with mice, GUIs, on the Xerox Star and Alto workstations. A one button version of the mouse was incorporated into the Apple Macintosh but Steve Jobs decided against incorporating the chorded keyset. 
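The chord-to-character mapping Engelbart used, described earlier, can be made concrete with a small sketch; the key names and the fallback behavior below are illustrative assumptions rather than his actual firmware, but the bit values and the 1 + 2 = 'c' example follow the text, and the last line shows the chord counts implied by five and six keys.

# Illustrative decoder for an Engelbart-style five-key keyset (a sketch only).
KEY_VALUES = {"k1": 1, "k2": 2, "k3": 4, "k4": 8, "k5": 16}

def decode_chord(pressed_keys):
    # The chord is read out once all keys are released; its value is the sum
    # of the pressed keys' bit values, mapped to a letter (1 = 'a', 2 = 'b', ...).
    code = sum(KEY_VALUES[k] for k in pressed_keys)
    if 1 <= code <= 26:
        return chr(ord("a") + code - 1)
    return None  # codes 27-31 could be given to punctuation or commands

print(decode_chord({"k1", "k2"}))   # 1 + 2 = 3 -> 'c', as in the example above
print(2 ** 5 - 1, 2 ** 6 - 1)       # 31 chords from five keys, 63 from six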
In the early 1980s, Philips Research labs at Redhill, Surrey did a brief study into small, cheap keyboards for entering text on a telephone. One solution used a grid of hexagonal keys with symbols inscribed into dimples in the keys that were either in the center of a key, across the boundary of two keys, or at the joining of three keys. Pressing down on one of the dimples would cause either one, two or three of the hexagonal buttons to be depressed at the same time, forming a chord that would be unique to that symbol. With this arrangement, a nine button keyboard with three rows of three hexagonal buttons could be fitted onto a telephone and could produce up to 33 different symbols. By choosing widely separated keys, one could employ one dimple as a 'shift' key to allow both letters and numbers to be produced. With eleven keys in a 3/4/4 arrangement, 43 symbols could be arranged, allowing for lowercase text, numbers and a modest number of punctuation symbols to be represented along with a 'shift' function for accessing uppercase letters. While this had the advantage of being usable by untrained users via 'hunt and peck' typing and requiring one less key switch than a conventional 12 button keypad, it had the disadvantage that some symbols required three times as much force to depress them as others, which made it hard to achieve any speed with the device. That solution is still alive and proposed by Fastap and Unitap among others, and a commercial phone was produced and promoted in Canada during 2006. Standards Historically, the Baudot and braille keyboards were standardized to some extent, but they are unable to replicate the full character set of a modern keyboard. Braille comes closest, as it has been extended to eight bits. The only proposed modern standard, GKOS (or Global Keyboard Open Standard), can support most characters and functions found on a computer keyboard but has had little commercial development. There is, however, a GKOS keyboard application available for iPhone since May 8, 2010, for Android since October 3, 2010 and for MeeGo Harmattan since October 27, 2011. Stenography Stenotype machines (sometimes used by court reporters) use a chording keyboard to represent sounds: on the standard keyboard, the 'U' represents the sound (and word) 'you', and the three-key trigraph 'K' 'A' 'T' represents the sound and word 'cat'. The stenotype keyboard is explicitly ordered: 'K', on the left, is the starting sound. 'S' and 'T', which are common starting sounds and also common ending sounds, are available on both sides of the keyboard, so 'TAT' is a 3-key chord using both T keys. Open-source designs Four open-source keyer/keyset designs are available: The pickey, a PS/2 device based on the PIC microcontroller; the spiffchorder, a USB device based on the Atmel AVR family of microcontrollers; the FeatherChorder, a BLE chorder based on the Adafruit Feather, an all-in-one board incorporating an Arduino-compatible microcontroller; and the GKOS keypad driver for Linux as well as the Gkos library for the Atmel/Arduino open-source board. Plover is a free, open-source, cross-platform program intended to bring real-time stenographic technology not just to stenographers, but also to hobbyists using anything from professional stenotype machines to low-cost NKRO gaming keyboards. It is available for Linux, Microsoft Windows, and Apple macOS. Joy2chord is a chorded keyboard driver for Linux. With a configuration file, any joystick or gamepad can be turned into a chorded keyboard.
This design philosophy was decided on to lower the cost of building devices, and in turn lower the entry barrier to becoming familiar with chorded keyboards. Macro keys and multiple modes are also easily implemented with a user space driver. Commercial devices One minimal chordic keyboard example is Edgar Matias' Half-Qwerty keyboard, described in a patent circa 1992, which produces the letters of the missing half when the user simultaneously presses the space bar along with the mirror key. INTERCHI '93 published a study by Matias, MacKenzie and Buxton showing that people who have already learned to touch-type can quickly recover 50 to 70% of their two-handed typing speed. The loss contributes to the speed discussion above. It is implemented on two popular mobile phones, each provided with software disambiguation, which allows users to avoid using the space-bar. "Multiambic" keyers for use with wearable computers were invented in Canada in the 1970s. Multiambic keyers are similar to chording keyboards but without the board, in that the keys are grouped in a cluster for being handheld, rather than for sitting on a flat surface. Chording keyboards are also used as portable, two-handed input devices for the visually impaired (either combined with a refreshable braille display or vocal synthesis). Such keyboards use a minimum of seven keys, where each key corresponds to an individual braille point, except one key which is used as a spacebar. In some applications, the spacebar is used to produce additional chords which enable the user to issue editing commands, such as moving the cursor or deleting words. Note that the number of points used in braille computing is not 6, but 8, as this allows the user, among other things, to distinguish between small and capital letters, as well as identify the position of the cursor. As a result, most newer chorded keyboards for braille input include at least nine keys. Touch screen chordic keyboards are available to smartphone users as an optional way of entering text. As the number of keys is low, the button areas can be made bigger and easier to hit on the small screen. The most common letters do not necessarily require chording, as is the case with the GKOS keyboard optimised layouts (Android app), where the twelve most frequent characters only require single keys. Historical The WriteHander, a 12-key chord keyboard from NewO Company, appeared in 1978 issues of ROM Magazine, an early microcomputer applications magazine. Another early commercial model was the six-button Microwriter, designed by Cy Endfield and Chris Rainey, and first sold in 1980. Microwriting is the system of chord keying and is based on a set of mnemonics. It was designed only for right-handed use. In 1982 the Octima 8-key chord keyboard was presented by Ergoplic Kebords Ltd, an Israeli startup founded by an Israeli researcher with extensive experience in man-machine interface design. The keyboard had 8 keys, one for each finger, and 3 additional keys that enabled the production of numbers, punctuation and control functions. The keyboard was fully compatible with the IBM PC & AT keyboards and had an Apple IIe version as well. Its key combinations were based on a mnemonic system that enabled fast and easy touch-typing learning. Within a few hours the user could achieve a typing speed similar to handwriting speed. The unique design also provided relief from hand stress (carpal tunnel syndrome) and allowed longer typing sessions than traditional keyboards.
It was multi-lingual supporting English, German, French and Hebrew. The BAT is a 7-key hand-sized device from Infogrip, and has been sold since 1985. It provides one key for each finger and three for the thumb. It is proposed for the hand which does not hold the mouse, in an exact continuation of Engelbart's vision. See also BAT keyboard FrogPad Keyer Microwriter Palantype Stenotype Velotype syllable-chord keyboard References Bardini, Thierry, Bootstrapping: Douglas Engelbart, Coevolution, and the Origins of Personal Computing (2000), Chapters 2 & 3, , Engelbart and English, "A Research Center for Augmenting Human Intellect", AFIPS Conf. Proc., Vol 33, 1968 Fall Joint Computer Conference, p395-410 Lockhead and Klemmer, An Evaluation of an 8-Key Word-Writing Typewriter, IBM Research Report RC-180, IBM Research Center, Yorktown Heights, NY, Nov 1959. Rochester, Bequaert, and Sharp, "The Chord Keyboard", IEEE Computer, December 1978, p57-63 Seibel, "Data Entry Devices and Procedures", in Human Engineering Guide to Equipment Design, Van Cott and Kinkade (Eds), 1963 Computer keyboard types Physical ergonomics
2670485
https://en.wikipedia.org/wiki/Matthias%20Felleisen
Matthias Felleisen
Matthias Felleisen is a German-American computer science professor and author. He grew up in Germany and immigrated to the US when he was 21 years old. He received his PhD from Indiana University under the direction of Daniel P. Friedman. After serving as professor for 14 years in the Computer Science Department of Rice University, Felleisen moved to the Khoury College of Computer Sciences at Northeastern University in Boston, Massachusetts. There he currently serves as a Trustee Professor. Felleisen's interests include programming languages and their software tools, program design, the Design Recipe, and software contracts, among others. In the 1990s, Felleisen launched PLT and TeachScheme! (later ProgramByDesign, eventually giving rise to the Bootstrap project) with the goal of teaching program-design principles to beginners and exploring the use of Scheme to produce large systems. As part of this effort, he authored How to Design Programs (MIT Press, 2001) with Findler, Flatt, and Krishnamurthi. For his dissertation Felleisen developed a novel form of operational semantics for higher-order functional languages with imperative extensions (state, control). Part I of "Semantics Engineering with PLT Redex" is derived from his dissertation. Its most well-known application is a proof of type safety, worked out with his PhD student Andrew Wright. Control delimiters, the basis of delimited continuations, were introduced by Felleisen in 1988. They have since been used in many domains, particularly in defining new control operators; see Queinnec for a survey. A-normal form (ANF), an intermediate representation of programs in functional compilers, was introduced by Sabry and Felleisen in 1992 as a simpler alternative to continuation-passing style (CPS). An implementation in the CAML compiler demonstrated its practical usefulness and popularized the idea. With Findler, Felleisen developed the notion of higher-order contracts. With such contracts, programmers can express assertions about the behavior of first-class functions, objects, classes and modules. Felleisen's work on gradual typing was a direct continuation of his work on these contracts; see below. In support of the TeachScheme! project, Felleisen and his team of Findler, Flatt, and Krishnamurthi designed and implemented the Racket programming language (née PLT Scheme). The idea was to create a programming language with which it would be easy to quickly build pedagogic languages for novice students, that is, a programmable programming language. Flatt remains the lead architect of the Racket effort to this day. The Racket programming language has played a key role in the recent development of gradual typing. In 2006, Felleisen and his PhD student Sam Tobin-Hochstadt started the Typed Racket project with the goal of allowing developers to migrate code from an untyped programming language to the same syntax enriched with a sound type system. The Typed Racket language was the first to fully implement and support the idea of "gradually typing" a code base and remains under active development. Felleisen gave keynote addresses at the 2011 Technical Symposium on Computer Science Education, the 2010 International Conference on Functional Programming, the 2004 European Conference on Object-Oriented Programming and the 2001 Symposium on Principles of Programming Languages, as well as several other conferences and workshops on computer science. In 2006, he was inducted as a fellow of the Association for Computing Machinery.
In 2009, he received the Karl V. Karlstrom Outstanding Educator Award from the ACM. In 2010, he received the SIGCSE Award for Outstanding Contribution to Computer Science Education from the ACM. In 2012, he received the ACM SIGPLAN Programming Languages Achievement Award for "significant and lasting contribution to the field of programming languages" including small-step operational semantics for control and state, mixin classes and mixin modules, a fully abstract semantics for Sequential PCF, web programming techniques, higher-order contracts with blame, and static typing for dynamic languages. Books Felleisen is co-author of: Realm Of Racket (No Starch Press, 2013) Semantics Engineering with PLT Redex (MIT Press, 2010) How to Design Programs (MIT Press, 2001) A Little Java, A Few Patterns (MIT Press, 1998) The Little MLer (MIT Press, 1998) The Little Schemer (MIT Press, 4th Ed., 1996) The Seasoned Schemer (MIT Press, 1996) References External links Matthias at Northeastern University Khoury College of Computer Sciences at Northeastern University Year of birth missing (living people) Living people American instructional writers Programming language researchers Lisp (programming language) people Northeastern University faculty Fellows of the Association for Computing Machinery Rice University faculty Indiana University alumni Computer science educators
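To make the A-normal form idea mentioned above concrete, here is a minimal hand-worked sketch (the functions f, g and h are invented placeholders, and the code is illustrative rather than taken from Sabry and Felleisen's paper): in ANF, every intermediate result is named, so each call receives only trivial arguments.

def f(v): return v * 2     # placeholder functions for illustration only
def g(v): return v + 1
def h(v): return v - 1
x, y = 3, 5

# Direct style: nested, compound arguments.
direct = f(g(x) + h(y))

# A-normal form: each intermediate computation is bound to a name first.
t1 = g(x)
t2 = h(y)
t3 = t1 + t2
anf = f(t3)

assert direct == anf       # the transformation preserves the result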
195113
https://en.wikipedia.org/wiki/Digital%20divide
Digital divide
The digital divide refers to the gap between those who benefit from the Digital Age and those who do not. People without access to the Internet and other information and communication technologies (ICTs) are put at a socio-economic disadvantage, as they are unable or less able to obtain digital information, shop online, participate democratically, or learn and offer skills. This resulted in programs to give computers and related services to people without access. Since the 1990s, potent global movements, including a series of intergovernmental summit meetings, were conducted to "close the digital divide". Since then, this movement formulated solutions in public policy, technology design, finance and management that would allow all connected citizens to benefit equitably as a global digital economy spreads into the far corners of the world population. Though originally coined to refer merely to the matter of access—who is connected to the Internet and is not—the term digital divide has evolved to focus on the division between those who benefit from information and communications technologies and those who do not. Thus the aim of "closing the digital divide" now refers to efforts to provide meaningful access to Internet infrastructures, applications and services. The matter of closing the digital divide nowadays includes the matter of how emergent technologies such as artificial intelligence (so-called artificial intelligence for development or AI4D), robotics, and the Internet of Things (IoT) can benefit societies. As it has become clear that the Internet can harm as well as help citizens, the focus of closing the digital divide had focused on the matter of how to generate "net benefit" (optimal help minimal harm) as a result of the impact of a spreading digital economy. The divide between differing countries or regions of the world is referred to as the global digital divide, examining this technological gap between developing and developed countries on an international scale. The divide within countries (such as the digital divide in the United States) may refer to inequalities between individuals, households, businesses, or geographic areas, usually at different socioeconomic levels or other demographic categories. Aspects of the digital divide There are manifold definitions of the digital divide, all with slightly different emphasis, which is evidenced by related concepts like digital inclusion, digital participation, digital skills, media literacy, and digital accessibility. A common approach, adopted by leaders in the field like Jan van Dijk, consists in defining the digital divide by the problem it aims to solve: based on different answers to the questions of who, with which kinds of characteristics, connects how and why to what, there are hundreds of alternatives ways to define the digital divide. "The new consensus recognizes that the key question is not how to connect people to a specific network through a specific device, but how to extend the expected gains from new ICTs." In short, the desired impact and "the end justifies the definition" of the digital divide. 
Some actors, like the US-based National Digital Inclusion Alliance, draw conclusions based on their particular answers to these questions, and defined that for them, it implies: 1) affordable, robust broadband Internet service; 2) Internet-enabled devices that meet the needs of the user; 3) access to digital literacy training; 4) quality technical support; 5) applications and online content designed to enable and encourage self-sufficiency, participation and collaboration. Infrastructure The infrastructure by which individuals, households, businesses, and communities connect to the Internet address the physical mediums that people use to connect to the Internet such as desktop computers, laptops, basic mobile phones or smartphones, iPods or other MP3 players, gaming consoles such as Xbox or PlayStation, electronic book readers, and tablets such as iPads. Traditionally, the nature of the divide has been measured in terms of the existing numbers of subscriptions and digital devices. Given the increasing number of such devices, some have concluded that the digital divide among individuals has increasingly been closing as the result of a natural and almost automatic process. Others point to persistent lower levels of connectivity among women, racial and ethnic minorities, people with lower incomes, rural residents, and less educated people as evidence that addressing inequalities in access to and use of the medium will require much more than the passing of time. Recent studies have measured the digital divide not in terms of technological devices, but in terms of the existing bandwidth per individual (in kbit/s per capita). As shown in the Figure on the side, the digital divide in kbit/s is not monotonically decreasing but re-opens up with each new innovation. For example, "the massive diffusion of narrow-band Internet and mobile phones during the late 1990s" increased digital inequality, as well as "the initial introduction of broadband DSL and cable modems during 2003–2004 increased levels of inequality". This is because a new kind of connectivity is never introduced instantaneously and uniformly to society as a whole at once, but diffuses slowly through social networks. As shown by the Figure, during the mid-2000s, communication capacity was more unequally distributed than during the late 1980s, when only fixed-line phones existed. The most recent increase in digital equality stems from the massive diffusion of the latest digital innovations (i.e. fixed and mobile broadband infrastructures, e.g. 3G and fiber optics FTTH). Measurement methodologies of the digital divide, and more specifically an Integrated Iterative Approach General Framework (Integrated Contextual Iterative Approach – ICI) and the digital divide modeling theory under measurement model DDG (Digital Divide Gap) are used to analyze the gap existing between developed and developing countries, and the gap among the 27 members-states of the European Union. The bit as the unifying variable Instead of tracking various kinds of digital divides among fixed and mobile phones, narrow- and broadband Internet, digital TV, etc., it has recently been suggested to simply measure the amount of kbit/s per actor. This approach has shown that the digital divide in kbit/s per capita is actually widening in relative terms: "While the average inhabitant of the developed world counted with some 40 kbit/s more than the average member of the information society in developing countries in 2001, this gap grew to over 3 Mbit/s per capita in 2010." 
The upper graph of the Figure on the side shows that the divide between developed and developing countries has been diminishing when measured in terms of subscriptions per capita. In 2001, fixed-line telecommunication penetration reached 70% of society in developed OECD countries and 10% of the developing world. This resulted in a ratio of 7 to 1 (divide in relative terms) or a difference of 60% (divide in absolute terms). During the next decade, fixed-line penetration stayed almost constant in OECD countries (at 70%), while the rest of the world started a catch-up, closing the divide to a ratio of 3.5 to 1. The lower graph shows the divide not in terms of ICT devices, but in terms of kbit/s per inhabitant. While the average member of developed countries counted with 29 kbit/s more than a person in developing countries in 2001, this difference got multiplied by a factor of one thousand (to a difference of 2900 kbit/s). In relative terms, the fixed-line capacity divide was even worse during the introduction of broadband Internet at the middle of the first decade of the 2000s, when the OECD counted with 20 times more capacity per capita than the rest of the world. This shows the importance of measuring the divide in terms of kbit/s, and not merely to count devices. The International Telecommunications Union concludes that "the bit becomes a unifying variable enabling comparisons and aggregations across different kinds of communication technologies". Skills and digital literacy However, research shows that the digital divide is more than just an access issue and cannot be alleviated merely by providing the necessary equipment. There are at least three factors at play: information accessibility, information utilization, and information receptiveness. More than just accessibility, individuals need to know how to make use of the information and communication tools once they exist within a community. Information professionals have the ability to help bridge the gap by providing reference and information services to help individuals learn and utilize the technologies to which they do have access, regardless of the economic status of the individual seeking help. Location Internet connectivity can be utilized at a variety of locations such as homes, offices, schools, libraries, public spaces, Internet cafes and others. There are also varying levels of connectivity in rural, suburban, and urban areas. In 2017, the Wireless Broadband Alliance published the white paper The Urban Unconnected, which highlighted that in the eight countries with the world's highest GNP about 1.75 billion people lived without an Internet connection and one third of them resided in the major urban centers. Delhi (5.3 millions, 9% of the total population), San Paulo (4.3 millions, 36%), New York (1.6 mln, 19%), and Moscow (2.1 mln, 17%) registered the highest percentages of citizens that weren't provided of any type of Internet access. Globally speaking, only about half of the population have access to the internet leaving 3.7 billion people without internet. A majority of these people are from developing countries with a large portion of them being women. One of the leading factors of this is that globally different governments have different policies relating to issues such as privacy, data governance, speech freedoms as well as many other factors. This makes it challenging for technology companies to create an environment for users that are from certain countries due to restrictions put in place in the region. 
This disproportionately impacts different regions of the world, with Europe having the highest percentage of its population online and Africa the lowest. From 2010 to 2014, Europe went from 67% to 75%, while over the same period Africa went from 10% to 19%. This also highlights how the growth rates for each region do not progress evenly in the development of infrastructure. Even where a region or country has access to the internet, connections are not always equivalent in quality. Network speeds play a large role in the quality and experience a user takes away from using the internet. Large cities and towns often have better access to high-speed internet, while rural areas can have very limited or no service. This can also lock a household into a single service provider, because it may be the only carrier that offers service in the area. This applies to regions with developed networks, such as the United States, but also to developing countries. However, developing countries' existing networks often exacerbate this issue even more, creating very large areas that have virtually no coverage. In such cases a person has few options, since the issue is mainly one of infrastructure: they can either wait for a carrier to build out the area or move to an area with a connection. Satellite internet technologies such as Starlink are becoming more common, but they are still not easily available to people in the regions that need them most. The gap in internet speeds depends heavily on where a person lives and can be the difference between a good experience and no usable experience at all. Depending on location, a connection may be virtually unusable solely because the network provider has limited infrastructure in the area, which underlines how important location is. Downloading 5 GB of data would take approximately 8 minutes in Taiwan, while the same download would take 1 day, 6 hours, 1 minute, and 40 seconds in Yemen. Although a large portion of the world's population still lacks internet access, infrastructure continues to improve and the percentage of people able to access the internet is steadily increasing globally. Applications Common Sense Media, a nonprofit group based in San Francisco, surveyed almost 1,400 parents and reported in 2011 that 47 percent of families with incomes of more than $75,000 had downloaded apps for their children, while only 14 percent of families earning less than $30,000 had done so. Reasons and correlating variables The gap in a digital divide may exist for a number of reasons. Obtaining access to ICTs and using them actively has been linked to a number of demographic and socio-economic characteristics: among them income, education, race, gender, geographic location (urban-rural), age, skills, awareness, and political, cultural and psychological attitudes. Multiple regression analysis across countries has shown that income levels and educational attainment provide the most powerful explanatory variables for ICT access and usage.
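The download times quoted above imply very different effective speeds. The sketch below works out rough figures, assuming a decimal 5 GB file (5,000,000,000 bytes), so the numbers are order-of-magnitude estimates rather than measured speeds.

# Rough throughput implied by the download times quoted above (illustrative;
# assumes a decimal 5 GB file, i.e. 5e9 bytes = 4e10 bits).
bits = 5e9 * 8
taiwan_seconds = 8 * 60
yemen_seconds = 1 * 86400 + 6 * 3600 + 1 * 60 + 40
print(bits / taiwan_seconds / 1e6)   # ~83 Mbit/s
print(bits / yemen_seconds / 1e6)    # ~0.37 Mbit/s, a gap of roughly 200x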
Evidence was found that Caucasians are much more likely than non-Caucasians to own a computer as well as have access to the Internet in their homes. As for geographic location, people living in urban centers have more access and show more usage of computer services than those in rural areas. Gender was previously thought to provide an explanation for the digital divide, with many thinking ICT were male-gendered, but controlled statistical analysis has shown that income, education and employment act as confounding variables and that women with the same level of income, education and employment actually embrace ICT more than men (see Women and ICT4D). However, each nation has its own set of causes for the digital divide. For example, the digital divide in Germany is unique because it is not largely due to differences in the quality of infrastructure. One telling fact is that "as income rises so does Internet use ...", strongly suggesting that the digital divide persists at least in part due to income disparities. Most commonly, a digital divide stems from poverty and the economic barriers that limit resources and prevent people from obtaining or otherwise using newer technologies. In research, while each explanation is examined, others must be controlled to eliminate interaction effects or mediating variables, but these explanations are meant to stand as general trends, not direct causes. Each component can be looked at from different angles, which leads to a myriad of ways to look at (or define) the digital divide. For example, measurements for the intensity of usage, such as incidence and frequency, vary by study. Some report usage as access to the Internet and ICTs while others report usage as having previously connected to the Internet. Some studies focus on specific technologies, others on a combination (such as Infostate, proposed by Orbicom-UNESCO, the Digital Opportunity Index, or ITU's ICT Development Index). Economic gap in the United States During the mid-1990s, the US Department of Commerce, National Telecommunications & Information Administration (NTIA) began publishing reports about the Internet and access to and usage of the resource. The first of three reports is titled "Falling Through the Net: A Survey of the 'Have Nots' in Rural and Urban America" (1995), the second is "Falling Through the Net II: New Data on the Digital Divide" (1998), and the final report is "Falling Through the Net: Defining the Digital Divide" (1999). The NTIA's final report attempted to clearly define the term digital divide: "the digital divide—the divide between those with access to new technologies and those without—is now one of America's leading economic and civil rights issues. This report will help clarify which Americans are falling further behind so that we can take concrete steps to redress this gap." Since the introduction of the NTIA reports, much of the early, relevant literature began to reference the NTIA's digital divide definition. The digital divide is commonly defined as the gap between the "haves" and "have-nots." The economic gap is particularly evident among older generations. According to a Pew Research Center survey of U.S. adults conducted from January 25 to February 8, 2021, the digital lives of Americans with high and low incomes differ considerably. The proportion of Americans that use home internet or cell phones, however, remained constant between 2019 and 2021. A quarter of those with yearly earnings under $30,000 (24%) say they do not own a smartphone.
Four out of every ten low-income people (43%) do not have home internet access or a computer. Furthermore, the majority of lower-income Americans do not own a tablet device. On the other hand, each of these technologies is practically universal among people earning $100,000 or more per year. Americans with larger family incomes are also more likely to own a variety of internet-connected products. Wi-Fi at home, a smartphone, a computer, and a tablet are all used by around six out of ten households making $100,000 or more per year, compared to 23 percent of lower-income households. Racial gap Although many groups in society are affected by a lack of access to computers or the Internet, communities of color are specifically observed to be negatively affected by the digital divide. This is evident when observing home Internet access among different races and ethnicities. 81% of Whites and 83% of Asians have home Internet access, compared to 70% of Hispanic people, 68% of Black people, 72% of American Indian/Alaska Natives, and 68% of Native Hawaiian/Pacific Islanders. Although income is a factor in home Internet access disparities, racial and ethnic inequalities persist even within lower-income groups. 58% of low-income Whites are reported to have home Internet access, in comparison to 51% of Hispanics and 50% of Blacks. This information comes from a report titled "Digital Denied: The Impact of Systemic Racial Discrimination on Home-Internet Adoption", published by the DC-based public interest group Free Press. The report concludes that structural barriers and discrimination that perpetuate bias against people of different races and ethnicities contribute to the digital divide. The report also concludes that those who do not have Internet access still have a high demand for it, and that a reduction in the price of home Internet access would allow for more equitable participation and improve Internet adoption by marginalized groups. Digital censorship and algorithmic bias are observed to be present in the racial divide. Hate-speech rules, as well as hate-speech algorithms on online platforms such as Facebook, have favored white males and those belonging to elite groups over marginalized groups in society, such as women and people of color. In internal documents collected in a project conducted by ProPublica, Facebook's guidelines for distinguishing hate speech and recognizing protected groups included slides that identified three groups, each one containing either female drivers, black children, or white men. When the question of which subset group is protected was presented, the correct answer was white men. Minority group language is negatively impacted by automated tools of hate detection due to the human bias that ultimately decides what is considered hate speech and what is not. Online platforms have also been observed to tolerate hateful content directed at people of color while restricting content from people of color. Aboriginal memes on a Facebook page were posted with racially abusive content and comments depicting Aboriginal people as inferior. While the contents on the page were removed by the originators after an investigation conducted by the Australian Communications and Media Authority, Facebook did not delete the page and allowed it to remain under the classification of controversial humor. 
However, a post by an African American woman describing her discomfort at being the only person of color in a small-town restaurant was met with racist and hateful messages. When she reported the online abuse to Facebook, her account was suspended for three days for posting the screenshots, while those responsible for the racist comments she received were not suspended. Shared experiences between people of color can be at risk of being silenced under the removal policies of online platforms. Disability gap Inequities in access to information technologies are present among individuals living with a disability in comparison to those who are not living with a disability. According to the Pew Research Center, 54% of households that include a person with a disability have home Internet access, compared to 81% of households that do not include a person with a disability. The type of disability an individual has, such as quadriplegia or a disability affecting the hands, can prevent a person from interacting with computer and smartphone screens. However, there is also a lack of access to technology and home Internet access among those who have cognitive or auditory disabilities. There is a concern about whether the increase in the use of information technologies will increase equality by offering opportunities for individuals living with disabilities, or whether it will only add to present inequalities and lead to individuals living with disabilities being left behind in society. Issues such as the perception of disabilities in society, federal and state government policy, corporate policy, mainstream computing technologies, and real-time online communication have been found to contribute to the impact of the digital divide on individuals with disabilities. People with disabilities are also the targets of online abuse. Online disability hate crimes increased by 33% across the UK between 2016–17 and 2017–18, according to a report published by Leonard Cheshire, a health and welfare charity. Accounts of online hate abuse towards people with disabilities were shared during an incident in 2019 when model Katie Price's son was the target of online abuse attributed to his disability. In response to the abuse, Katie Price launched a campaign to ensure that Britain's MPs held those guilty of perpetrating online abuse towards people with disabilities accountable. Online abuse towards individuals with disabilities is a factor that can discourage people from engaging online, which could prevent them from learning information that could improve their lives. Many individuals living with disabilities face online abuse in the form of accusations of benefit fraud and "faking" their disability for financial gain, which in some cases leads to unnecessary investigations. Gender gap Due to the rapidly declining price of connectivity and hardware, skills deficits have eclipsed barriers of access as the primary contributor to the gender digital divide. Studies show that women are less likely to know how to leverage devices and Internet access to their full potential, even when they do use digital technologies. In rural India, for example, a study found that the majority of women who owned mobile phones only knew how to answer calls. They could not dial numbers or read messages without assistance from their husbands, due to a lack of literacy and numeracy skills. 
A survey of 3,000 respondents across 25 countries found that adolescent boys with mobile phones used them for a wider range of activities, such as playing games and accessing financial services online. Adolescent girls in the same study tended to use just the basic functionalities of their phone, such as making calls and using the calculator. Similar trends can be seen even in areas where Internet access is near-universal. A survey of women in nine cities around the world revealed that although 97% of women were using social media, only 48% of them were expanding their networks, and only 21% of Internet-connected women had searched online for information related to health, legal rights or transport. In some cities, less than one quarter of connected women had used the Internet to look for a job. Studies show that despite strong performance in computer and information literacy (CIL), girls do not have confidence in their ICT abilities. According to the International Computer and Information Literacy Study (ICILS) assessment girls' self-efficacy scores (their perceived as opposed to their actual abilities) for advanced ICT tasks were lower than boys'. A paper published by J. Cooper from Princeton University points out that learning technology is designed to be receptive to men instead of women. The reasoning for this is that most software engineers and programmers are men, and they communicate their learning software in a way that would match the reception of their recipient. The association of computers in education is normally correlated with the male gender, and this has an impact on the education of computers and technology among women, although it is important to mention that there are plenty of learning software that are designed to help women and girls learn technology. Overall, the study presents the problem of various perspectives in society that are a result of gendered socialization patterns that believe that computers are a part of the male experience since computers have traditionally presented as a toy for boys when they are children. This divide is followed as children grow older and young girls are not encouraged as much to pursue degrees in IT and computer science. In 1990, the percentage of women in computing jobs was 36%, however in 2016, this number had fallen to 25%. This can be seen in the underrepresentation of women in IT hubs such as Silicon Valley. There has also been the presence of algorithmic bias that has been shown in machine learning algorithms that are implemented by major companies. In 2015, Amazon had to abandon a recruiting algorithm that showed a difference between ratings that candidates received for software developer jobs as well as other technical jobs. As a result, it was revealed that Amazon's machine algorithm was biased against women and favored male resumes over female resumes. This was due to the fact that Amazon's computer models were trained to vet patterns in resumes over a 10-year period. During this ten-year period, the majority of the resumes belong to male individuals, which is a reflection of male dominance across the tech industry. Age gap Older adults, those ages 60 and up, face various barriers that contribute to their lack of access to information and communication technologies (ICTs). Many adults are "digital immigrants" who have not had lifelong exposure to digital media and have had to adapt to incorporating it in their lives. 
A study in 2005 found that only 26% of people aged 65 and over were Internet users, compared to 67% in the 50–64 age group and 80% in the 30–49 age group. This "grey divide" can be due to factors such as concern over security, motivation and self-efficacy, decline of memory or spatial orientation, cost, or lack of support. The aforementioned variables of race, disability, gender, and sexual orientation also add to the barriers for older adults. Many older adults may have physical or mental disabilities that render them homebound and financially insecure. They may be unable to afford Internet access or lack transportation to use computers in public spaces, the benefits of which would be enhancing their health and reducing their social isolation and depression. Homebound older adults would benefit from Internet use by using it to access health information, use telehealth resources, shop and bank online, and stay connected with friends or family using email or social networks. Those in more privileged socio-economic positions and with a higher level of education are more likely to have Internet access than those older adults living in poverty. Lack of access to the Internet inhibits "capital-enhancing activities" such as accessing government assistance, job opportunities, or investments. The results of the U.S. Federal Communications Commission's 2009 National Consumer Broadband Service Capability Survey show that older women are less likely to use the Internet, especially for capital-enhancing activities, than their male counterparts. However, a reverse divide is also happening, as poor and disadvantaged children and teenagers spend more time using digital devices for entertainment and less time interacting with people face-to-face compared to children and teenagers in well-off families. Social cognitive theory provides a possible explanation for an age gap in the digital divide because it suggests that self-efficacy beliefs are influenced by involvement in a task. Successful involvement increases self-efficacy while failure lowers it, which is why older individuals who have less access to computers and the Internet tend to have much lower self-efficacy with computers. This in turn expands the digital divide, because without access to computers and the Internet older individuals have fewer opportunities to find success with computer-related activities. One way to decrease the age gap in the digital divide is to provide training for elderly individuals on using different digital devices. Training programs would give older individuals a foot in the door in an increasingly digital age, which would ultimately increase their confidence in using digital devices. In the United States, the gap has shrunk since 2005, with only 27% of people ages 65 and older still not using the internet. In Europe, however, 51% of individuals over the age of 50 do not use the internet. If non-use of the internet is framed as the problem, the elderly are a significant group of concern. Despite this, product developers do not cater to the needs of the elderly, who may have physical disabilities like a visual impairment that hampers their ability to read small text on a screen or on keyboard keys. Simple adjustments that product designers could make would drastically improve the inclusivity of digital devices for elderly individuals, thus decreasing the age gap in the digital divide. 
For countries like China, which is projected to become an "aged society" (a country with 14% of its population over the age of 65), there is a spotlight on decreasing the age gap and creating more inclusivity for the elderly in the digital age. JD, an e-commerce company in China, is working to decrease the divide through its partnership with ZTE on a 5G smartphone for the elderly. In 2021 they released a phone equipped with services that are handy for the elderly and children alike, such as remote assistance capabilities, synchronized photo sharing, and fast medical consultation. The remote assistance is particularly useful for elderly individuals who might need one of their adult children to manage their phones from a separate location. JD believes that its 5G services help connect the elderly to their families and the digital world. According to its research, 70% of elderly consumers believe children are indispensable in the care process and 68% want to spend more time with their children; JD's remote services are intended to make that connection easier. During the COVID-19 pandemic in 2020, JD connected elderly consumers with online shopping platforms through training programs on how to use digital devices: downloading apps, scanning QR codes, lining up early for a hospital appointment, and using mobile payments. The main idea of the training is to give the elderly a foothold in the digital world and help them build confidence using new technologies. This should eventually increase the self-efficacy of elderly individuals and decrease the age gap in the digital divide. Cisco Systems and Independent Age have published a report that outlines a number of solutions for decreasing the age gap in the digital divide. These ideas include: Creating age-appropriate designs - Simple, uncomplicated devices are a good way to cater design to the elderly. The older demographic does not necessarily want all of the other accessories that come with digital devices, so it is important to have a design that is simple and will attract older buyers. Emphasizing the need for technology - It can be easy for elderly individuals to think that technology is meant for young people; however, it is important to convince the older demographic that technology can improve their well-being. Advertising relevant topics like telemedicine is one way to emphasize the need for the elderly to begin using technology. Relieving anxieties - Older individuals may want to use new technologies but could be apprehensive about breaking them or not using them correctly. In this instance it can be helpful for caregivers and family members to encourage older individuals to use new digital devices and be supportive when the devices are introduced. Historical background The ethical roots of the matter of closing the digital divide can be found in the notion of the "social contract", in which Jean-Jacques Rousseau advocated that governments should intervene to ensure that any society's economic benefits are fairly and meaningfully distributed. Amid the Industrial Revolution in Great Britain, Rousseau's idea helped to justify poor laws that created a safety net for those who were harmed by new forms of production. Later, when telegraph and postal systems evolved, many used Rousseau's ideas to argue for full access to those services, even if it meant subsidizing hard-to-serve citizens. 
Thus, "universal services" referred to innovations in regulation and taxation that would allow phone services such as AT&T in the United States serve hard to serve rural users. In 1996, as telecommunications companies merged with Internet companies, the Federal Communications Commission adopted Telecommunications Services Act of 1996 to consider regulatory strategies and taxation policies to close the digital divide. Though the term "digital divide" was coined among consumer groups that sought to tax and regulate Information and communications technology (ICT) companies to close digital divide, the topic soon moved onto a global stage. The focus was the World Trade Organization which passed a Telecommunications Services Act, which resisted regulation of ICT companies so that they would be required to serve hard to serve individuals and communities. In an effort to assuage anti-globalization forces, the WTO hosted an event in 1999 in Seattle, USA, called “Financial Solutions to Digital Divide," co-organized by Craig Warren Smith of Digital Divide Institute and Bill Gates Sr. the chairman of the Bill and Melinda Gates Foundation. This event, attended by CEOs of Internet companies, UN Agencies, Prime Ministers, leading international foundations and leading academic institutions was the catalyst for a full scale global movement to close digital divide, which quickly spread virally to all sectors of the global economy. Facebook divide The Facebook divide, a concept derived from the "digital divide", is the phenomenon with regard to access to, use of, and impact of Facebook on society. It was coined at the International Conference on Management Practices for the New Economy (ICMAPRANE-17) on February 10–11, 2017. Additional concepts of Facebook Native and Facebook Immigrants were suggested at the conference. Facebook divide, Facebook native, Facebook immigrants, and Facebook left-behind are concepts for social and business management research. Facebook immigrants utilize Facebook for their accumulation of both bonding and bridging social capital. Facebook natives, Facebook immigrants, and Facebook left-behind induced the situation of Facebook inequality. In February 2018, the Facebook Divide Index was introduced at the ICMAPRANE conference in Noida, India, to illustrate the Facebook divide phenomenon. Overcoming the divide An individual must be able to connect to achieve enhancement of social and cultural capital as well as achieve mass economic gains in productivity. Therefore, access is a necessary (but not sufficient) condition for overcoming the digital divide. Access to ICT meets significant challenges that stem from income restrictions. The borderline between ICT as a necessity good and ICT as a luxury good is roughly around the "magical number" of US$10 per person per month, or US$120 per year, which means that people consider ICT expenditure of US$120 per year as a basic necessity. Since more than 40% of the world population lives on less than US$2 per day, and around 20% live on less than US$1 per day (or less than US$365 per year), these income segments would have to spend one third of their income on ICT (120/365 = 33%). The global average of ICT spending is at a mere 3% of income. Potential solutions include driving down the costs of ICT, which includes low-cost technologies and shared access through Telecentres. 
Furthermore, even though individuals might be capable of accessing the Internet, many are thwarted by barriers to entry, such as a lack of means to infrastructure or the inability to comprehend the information that the Internet provides. Lack of adequate infrastructure and lack of knowledge are two major obstacles that impede mass connectivity. These barriers limit individuals' capabilities in what they can do and what they can achieve in accessing technology. Some individuals can connect, but they do not have the knowledge to use what information ICTs and Internet technologies provide them. This leads to a focus on capabilities and skills, as well as awareness to move from mere access to effective usage of ICT. The United Nations is aiming to raise awareness of the divide by way of the World Information Society Day which has taken place yearly since May 17, 2006. It also set up the Information and Communications Technology (ICT) Task Force in November 2001. Later UN initiatives in this area are the World Summit on the Information Society, which was set up in 2003, and the Internet Governance Forum, set up in 2006. In the year 2000, the United Nations Volunteers (UNV) programme launched its Online Volunteering service, which uses ICT as a vehicle for and in support of volunteering. It constitutes an example of a volunteering initiative that effectively contributes to bridge the digital divide. ICT-enabled volunteering has a clear added value for development. If more people collaborate online with more development institutions and initiatives, this will imply an increase in person-hours dedicated to development cooperation at essentially no additional cost. This is the most visible effect of online volunteering for human development. Social media websites serve as both manifestations of and means by which to combat the digital divide. The former describes phenomena such as the divided users' demographics that make up sites such as Facebook, WordPress and Instagram. Each of these sites hosts thriving communities that engage with otherwise marginalized populations. An example of this is the large online community devoted to Afrofuturism, a discourse that critiques dominant structures of power by merging themes of science fiction and blackness. Social media brings together minds that may not otherwise meet, allowing for the free exchange of ideas and empowerment of marginalized discourses. Libraries Attempts to bridge the digital divide include a program developed in Durban, South Africa where deficient access to technology and a lack of documented cultural heritage has motivated the creation of an "online indigenous digital library as part of public library services". This project has the potential to narrow the digital divide by not only giving the people of the Durban area access to this digital resource, but also by incorporating the community members into the process of creating it. To address the divide The Gates Foundation started the Gates Library Initiative which provides training assistance and guidance in libraries. In nations where poverty compounds effects of the digital divide, programs are emerging to counter those trends. In Kenya, lack of funding, language, and technology illiteracy contributed to an overall lack of computer skills and educational advancement. This slowly began to change when foreign investment began. In the early 2000s, the Carnegie Foundation funded a revitalization project through the Kenya National Library Service. 
Those resources enabled public libraries to provide information and communication technologies to their patrons. In 2012, public libraries in the Busia and Kiberia communities introduced technology resources to supplement curriculum for primary schools. By 2013, the program expanded into ten schools. Effective use Community informatics (CI) provides a somewhat different approach to addressing the digital divide by focusing on issues of "use" rather than simply "access". CI is concerned with ensuring the opportunity not only for ICT access at the community level but also, according to Michael Gurstein, that the means for the "effective use" of ICTs for community betterment and empowerment are available. Gurstein has also extended the discussion of the digital divide to include issues around access to and the use of "open data" and coined the term "data divide" to refer to this issue area. Implications Social capital Once an individual is connected, Internet connectivity and ICTs can enhance his or her future social and cultural capital. Social capital is acquired through repeated interactions with other individuals or groups of individuals. Connecting to the Internet creates another set of means by which to achieve repeated interactions. ICTs and Internet connectivity enable repeated interactions through access to social networks, chat rooms, and gaming sites. Once an individual has access to connectivity, obtains infrastructure by which to connect, and can understand and use the information that ICTs and connectivity provide, that individual is capable of becoming a "digital citizen." Economic disparity In the United States, the research provided by Sungard Availability Services notes a direct correlation between a company's access to technological advancements and its overall success in bolstering the economy. The study, which includes over 2,000 IT executives and staff officers, indicates that 69 percent of employees feel they do not have access to sufficient technology to make their jobs easier, while 63 percent of them believe the lack of technological mechanisms hinders their ability to develop new work skills. Additional analysis provides more evidence to show how the digital divide also affects the economy in places all over the world. A BCG report suggests that in countries like Sweden, Switzerland, and the U.K., the digital connection among communities is made easier, allowing for their populations to obtain a much larger share of the economies via digital business. In fact, in these places, populations hold shares approximately 2.5 percentage points higher. During a meeting with the United Nations a Bangladesh representative expressed his concern that poor and undeveloped countries would be left behind due to a lack of funds to bridge the digital gap. Education The digital divide also impacts children's ability to learn and grow in low-income school districts. Without Internet access, students are unable to cultivate necessary tech skills to understand today's dynamic economy. The need for the internet starts while children are in school – necessary for matters such as school portal access, homework submission, and assignment research. Federal Communication Commission's Broadband Task Force created a report showing that about 70% of teachers give students homework that demand access to broadband. 
Even more, approximately 65% of young scholars use the Internet at home to complete assignments as well as connect with teachers and other students via discussion boards and shared files.  A recent study indicates that practically 50% of students say that they are unable to finish their homework due to an inability to either connect to the Internet or in some cases, find a computer. This has led to a new revelation: 42% of students say they received a lower grade because of this disadvantage. Finally, according to research conducted by the Center for American Progress, "if the United States were able to close the educational achievement gaps between native-born white children and black and Hispanic children, the U.S. economy would be 5.8 percent—or nearly $2.3 trillion—larger in 2050". In a reverse of this idea, well-off families, especially the tech-savvy parents in Silicon Valley, carefully limit their own children's screen time. The children of wealthy families attend play-based preschool programs that emphasize social interaction instead of time spent in front of computers or other digital devices, and they pay to send their children to schools that limit screen time. American families that cannot afford high-quality childcare options are more likely to use tablet computers filled with apps for children as a cheap replacement for a babysitter, and their government-run schools encourage screen time during school. Demographic differences Furthermore, according to the 2012 Pew Report "Digital Differences," a mere 62% of households who make less than $30,000 a year use the Internet, while 90% of those making between $50,000 and $75,000 had access.   Studies also show that only 51% of Hispanics and 49% of African Americans have high-speed Internet at home. This is compared to the 66% of Caucasians that too have high-speed Internet in their households. Overall, 10% of all Americans do not have access to high-speed Internet, an equivalent of almost 34 million people. Supplemented reports from The Guardian demonstrate the global effects of limiting technological developments in poorer nations, rather than simply the effects in the United States. Their study shows that rapid digital expansion excludes those who find themselves in the lower class. 60% of the world's population, almost 4 billion people, have no access to the Internet and are thus left worse off. Criticisms Knowledge divide Since gender, age, racial, income, and educational digital divides have lessened compared to the past, some researchers suggest that the digital divide is shifting from a gap in access and connectivity to ICTs to a knowledge divide. A knowledge divide concerning technology presents the possibility that the gap has moved beyond the access and having the resources to connect to ICTs to interpreting and understanding information presented once connected. Second-level digital divide The second-level digital divide, also referred to as the production gap, describes the gap that separates the consumers of content on the Internet from the producers of content. As the technological digital divide is decreasing between those with access to the Internet and those without, the meaning of the term digital divide is evolving. Previously, digital divide research has focused on accessibility to the Internet and Internet consumption. 
However, with more and more of the population gaining access to the Internet, researchers are examining how people use the Internet to create content and what impact socioeconomics are having on user behavior. New applications have made it possible for anyone with a computer and an Internet connection to be a creator of content, yet the majority of user-generated content available widely on the Internet, like public blogs, is created by a small portion of the Internet-using population. Web 2.0 technologies like Facebook, YouTube, Twitter, and Blogs enable users to participate online and create content without having to understand how the technology actually works, leading to an ever-increasing digital divide between those who have the skills and understanding to interact more fully with the technology and those who are passive consumers of it. Many are only nominal content creators through the use of Web 2.0, posting photos and status updates on Facebook, but not truly interacting with the technology. Some of the reasons for this production gap include material factors like the type of Internet connection one has and the frequency of access to the Internet. The more frequently a person has access to the Internet and the faster the connection, the more opportunities they have to gain the technology skills and the more time they have to be creative. Other reasons include cultural factors often associated with class and socioeconomic status. Users of lower socioeconomic status are less likely to participate in content creation due to disadvantages in education and lack of the necessary free time for the work involved in blog or web site creation and maintenance. Additionally, there is evidence to support the existence of the second-level digital divide at the K-12 level based on how educators' use technology for instruction. Schools' economic factors have been found to explain variation in how teachers use technology to promote higher-order thinking skills. Global digital divide The global digital divide describes global disparities, primarily between developed and developing countries, in regards to access to computing and information resources such as the Internet and the opportunities derived from such access. As with a smaller unit of analysis, this gap describes an inequality that exists, referencing a global scale. The Internet is expanding very quickly, and not all countries—especially developing countries—can keep up with the constant changes. The term "digital divide" does not necessarily mean that someone does not have technology; it could mean that there is simply a difference in technology. These differences can refer to, for example, high-quality computers, fast Internet, technical assistance, or telephone services. The difference between all of these is also considered a gap. There is a large inequality worldwide in terms of the distribution of installed telecommunication bandwidth. In 2014 only three countries (China, US, Japan) host 50% of the globally installed bandwidth potential (see pie-chart Figure on the right). This concentration is not new, as historically only ten countries have hosted 70–75% of the global telecommunication capacity (see Figure). The U.S. lost its global leadership in terms of installed bandwidth in 2011, being replaced by China, which hosts more than twice as much national bandwidth potential in 2014 (29% versus 13% of the global total). 
See also Achievement gap Civic opportunity gap Computer technology for developing areas Digital divide by country Digital divide in Canada Digital divide in China Digital divide in South Africa Digital divide in Thailand Digital rights Digital Society Day (October 17 in India) Global Internet usage Government by algorithm Information society International communication Internet geography Internet governance List of countries by Internet connection speeds Light-weight Linux distribution Literacy National broadband plans from around the world NetDay Net neutrality Rural Internet Groups devoted to digital divide issues Center for Digital Inclusion Digital Textbook a South Korean Project that intends to distribute tablet notebooks to elementary school students. Inveneo TechChange United Nations Information and Communication Technologies Task Force Sources References Bibliography Azam, M. (2007). "Working together toward the inclusive digital world". Digital Opportunity Forum. Unpublished manuscript. Retrieved July 17, 2009, from https://digitalthousend.com/ Borland, J. (April 13, 1998). "Move Over Megamalls, Cyberspace Is the Great Retailing Equalizer". Knight Ridder/Tribune Business News. Brynjolfsson, Erik and Michael D. Smith (2000). "The great equalizer? Consumer choice behavior at Internet shopbots". Sloan Working Paper 4208–01. eBusiness@MIT Working Paper 137. July 2000. Sloan School of Management, Massachusetts Institute of Technology, Cambridge, Massachusetts. James, J. (2004). Information Technology and Development: A new paradigm for delivering the Internet to rural areas in developing countries. New York, NY: Routledge. (print). (e-book). Southwell, B. G. (2013). Social networks and popular understanding of science and health: sharing disparities. Baltimore, MD: Johns Hopkins University Press. (book). World Summit on the Information Society (WSIS), 2005. "What's the state of ICT access around the world?" Retrieved July 17, 2009. World Summit on the Information Society (WSIS), 2008. "ICTs in Africa: Digital Divide to Digital Opportunity". Retrieved July 17, 2009. Further reading "Falling Through the Net: Defining the Digital Divide" (PDF), NTIS, U.S. Department of Commerce, July 1999. DiMaggio, P. & Hargittai, E. (2001). "From the 'Digital Divide' to 'Digital Inequality': Studying Internet Use as Penetration Increases", Working Paper No. 15, Center for Arts and Cultural Policy Studies, Woodrow Wilson School, Princeton University. Retrieved May 31, 2009. Foulger, D. (2001). "Seven bridges over the global digital divide". IAMCR & ICA Symposium on Digital Divide, November 2001. Retrieved July 17, 2009. Council of Economic Advisors (2015). Mapping the Digital Divide. "A Nation Online: Entering the Broadband Age", NTIS, U.S. Department of Commerce, September 2004. Rumiany, D. (2007). "Reducing the Global Digital Divide in Sub-Saharan Africa". Posted on Global Envision with permission from Development Gateway, January 8, 2007. Retrieved July 17, 2009. "Telecom use at the Bottom of the Pyramid 2 (use of telecom services and ICTs in emerging Asia)", LIRNEasia, 2007. "Telecom use at the Bottom of the Pyramid 3 (Mobile2.0 applications, migrant workers in emerging Asia)", LIRNEasia, 2008–09. "São Paulo Special: Bridging Brazil's digital divide", Digital Planet, BBC World Service, October 2, 2008. Graham, M. (2009). "Global Placemark Intensity: The Digital Divide Within Web 2.0 Data", Floatingsheep Blog. Yfantis, V. (2017). "Disadvantaged Populations And Technology In Music". 
A book about the digital divide in the music industry. External links E-inclusion, an initiative of the European Commission to ensure that "no one is left behind" in enjoying the benefits of Information and Communication Technologies (ICT). eEurope – An information society for all, a political initiative of the European Union. Digital Inclusion Network, an online exchange on topics related to the digital divide and digital inclusion, E-Democracy.org. "The Digital Divide Within Education Caused by the Internet", Benjamin Todd, Acadia University, Nova Scotia, Canada, Undergraduate Research Journal for the Human Sciences, Volume 11 (2012). Statistics from the International Telecommunication Union (ITU) Mobile Phones and Access is an animated video produced by TechChange and USAID which explores issues of access related to global mobile phone usage. Divide Technology development Economic geography Cultural globalization Global inequality Rural economics Social inequality
20800638
https://en.wikipedia.org/wiki/BRP-PACU
BRP-PACU
BRP-PACU is a dual-channel FFT audio analysis tool. It is designed to be used with a calibrated omnidirectional microphone to configure any sound system with appropriate equalization and delay. It compares the output of the system to its input to obtain the system's transfer function. These data allow one to perform final equalization using just the input/output of the DSP or any other device used for equalization. Theoretical basis This software program uses a transfer function measurement method to compare the output of an (unprocessed) loudspeaker system and room combination to the input signal, which is usually filtered pseudorandom noise. Because the sound has a propagation time from the exit point of the transducer to the measurement device, a delay must be inserted in the reference signal to compensate. This delay is automatically found by the software to aid in practical system measurement. Supported platforms Currently the only supported platforms are Linux and Mac OS X because it relies on POSIX Threads. It is also written using floating-point processing, making support for most embedded Linux devices difficult. Features Four capture buffers, with auto-save (in case of crash) and save-as ability Averages buffers to a separate buffer and flips it for analysis Automatic delay calculation Impulse response capturing Uses JACK to route and manage audio paths Pink noise generation tool to eliminate the need for an external pink noise source Licensing and availability The software is licensed under the GPL-2.0-or-later. It is available from SourceForge as C code. Future development Ubuntu and Debian packages A virtual machine for usage under other operating systems such as Microsoft Windows The ability to create and load user interface options Phase response for the transfer function References External links Free science software Free audio software Free software programmed in C Audio software with JACK support
17904353
https://en.wikipedia.org/wiki/Ficus%20maxima
Ficus maxima
Ficus maxima is a fig tree which is native to Mexico, Central America, the Caribbean and South America south to Paraguay. Figs belong to the family Moraceae. The specific epithet maxima was coined by Scottish botanist Philip Miller in 1768; Miller's name was applied to this species in the Flora of Jamaica, but it was later determined that Miller's description was actually of the species now known as Ficus aurea. To avoid confusion, Cornelis Berg proposed that the name should be conserved for this species. Berg's proposal was accepted in 2005. Individuals may reach heights of . Like all figs it has an obligate mutualism with fig wasps; F. maxima is only pollinated by the fig wasp Tetrapus americanus, and T. americanus only reproduces in its flowers. F. maxima fruit and leaves are important food resources for a variety of birds and mammals. It is used in a number of herbal medicines across its range. Description Ficus maxima is a tree which ranges from tall. Leaves vary in shape from long and narrow to more oval, and range from 6–24 (cm) (2–9 in) long and from wide. F. maxima is monoecious; each tree bears functional male and female flowers. The figs are borne singly and are in diameter (sometimes up to 2.5 cm [1 in]). Taxonomy With about 750 species, Ficus (Moraceae) is one of the largest angiosperm genera. (Frodin ranked it as the 31st largest.) Ficus maxima is classified in subgenus Pharmacosycea, section Pharmacosycea, subsection Petenenses. Although recent work suggests that subgenus Pharmacosycea is polyphyletic, section Pharmacosycea appears to be monophyletic and is a sister group to the rest of the genus Ficus. In 1768, Scottish botanist Philip Miller described Ficus maxima, citing Linnaeus' Hortus Cliffortianus (1738) and Hans Sloane's Catalogus plantarum quæ in insula Jamaica (1696). Sloane's illustration of this plant (published in his 1725 A voyage to the islands Madera, Barbados, Nieves, S. Christophers and Jamaica) depicted it with figs borne singly, a characteristic of the Ficus subgenus Pharmacosycea. A closer examination of Sloane's description led Cornelis Berg to conclude that the illustration depicted a member of the subgenus Urostigma, almost certainly F. aurea, and that the illustration of singly borne figs was probably artistic license. Berg located the plant collection upon which Sloane's illustration was based and concluded that Miller's F. maxima was, in fact, F. aurea. In 1806 the name Ficus radula was applied to material belonging to this species. The description, based on material collected in Venezuela by German naturalist Alexander von Humboldt and French botanist Aimé Bonpland, was published in Carl Ludwig Willdenow's fourth edition of Linnaeus' Species Plantarum. This is the oldest description that can unequivocally be applied to this species. In 1847 Danish botanist Frederik Michael Liebmann applied the name Pharmacosycea glaucescens to Mexican material belonging to this species. (It was transferred to the genus Ficus by Dutch botanist Friedrich Anton Wilhelm Miquel in 1867.) In 1849 the name Ficus suffocans was applied to Jamaican material belonging to this species in August Grisebach's Flora of the British West Indian Islands. In their 1914 Flora of Jamaica, William Fawcett and Alfred Barton Rendle linked Sloane's illustration to F. suffocans. Gordon DeWolf agreed with their conclusion and used the name F. maxima for that species in the 1960 Flora of Panama, supplanting F. radula and F. glaucescens. 
Since this use has become widespread, Berg proposed that the name Ficus maxima be conserved in the way DeWolf had used it, with a new type (Krukoff's 1934 collection from Amazonas, Brazil). This proposal was accepted by the nomenclatural committee in 2005. Common names Ficus maxima ranges from the northern Caribbean to southern South America, in countries where English, Spanish, Portuguese and a variety of indigenous languages are spoken. Across this range, it is known by a variety of common names. Reproduction Figs have an obligate mutualism with fig wasps (Agaonidae); figs are only pollinated by fig wasps, and fig wasps are only able to reproduce in fig flowers. Generally, each fig species depends on a single species of fig wasp for pollination, and each species of fig wasp can only reproduce in the flowers of a single species of fig tree. Ficus maxima is pollinated by Tetrapus americanus, although recent work suggests that the species known as T. americanus is a cryptic species complex of at least two species, which are not sister taxa. Figs have complicated inflorescences called syconia. Flowers are entirely contained within an enclosed structure. Their only connection with the outside is through a small pore called the ostiole. Monoecious figs like F. maxima have both male and female flowers within the syconium. Female flowers mature first. Once mature, they produce a volatile chemical attractant which is recognised by female wasps belonging to the species Tetrapus americanus. Female wasps of this species are about long and are capable of producing about 190 offspring. Female fig wasps arrive carrying pollen from their natal tree and squeeze their way through the ostiole into the interior of the syconium. The syconium bears 500–600 female flowers arranged in multiple layers - those that are closer to the outer wall of the fig have short pedicels and long styles, while those that are located closer to the interior of the chamber have long pedicels and short styles. Female wasps generally lay their eggs in the short-styled flowers, while longer-styled flowers are more likely to be pollinated. The eggs hatch and the larvae parasitise the flowers in which they were laid. Pollinated flowers which have not been parasitised give rise to seeds. Male wasps mature and emerge before the females. They mate with the females, which have not yet emerged from their galls. Males cut exit holes in the outer wall of the syconium, through which the females exit the fig. The male flowers mature around the same time as the female wasps emerge and shed their pollen on the newly emerged females; like about one third of figs, F. maxima is passively pollinated. The newly emerged female wasps leave through the exit holes the males have cut and fly off to find a syconium in which to lay their eggs. The figs then ripen. The ripe figs are eaten by a variety of mammals and birds, which disperse the seeds. Distribution Ficus maxima ranges from Paraguay and Bolivia in the south to Mexico in the north, where it is widespread and common. It is found in fourteen states across the southern and central portion of the country. It occurs in tropical deciduous forest, tropical semi-evergreen forest, tropical evergreen forest, oak forest and in aquatic or subaquatic habitats. It is found throughout Central America - in Guatemala, Belize, Honduras, Nicaragua, El Salvador, Costa Rica and Panama. It is present in Cuba and Jamaica in the Greater Antilles, and Trinidad and Tobago in the southern Caribbean. 
In South America it ranges through Colombia, Venezuela, Guyana, Suriname, French Guiana, Ecuador, Peru, Bolivia, Paraguay and in the Brazilian states of Amapá, Amazonas, Mato Grosso, Minas Gerais, Pará. Ecology Figs are sometimes considered to be potential keystone species for communities of fruit-eating animals; their asynchronous fruiting patterns may cause them to be important fruit sources when other food sources are scarce. At Tinigua National Park in Colombia Ficus maxima was an important fruit producer during periods of fruit scarcity in one of three years. This led Colombian ecologist Pablo Stevens to consider it a possible keystone species, but he decided against including it in his final list of potential keystone species at the site. Ficus maxima fruit are consumed by birds and mammals. These animals act as seed dispersers when the defaecate or regurgitate intact seeds, or when they drop fruit below the parent tree. In Panama, F. maxima fruit were reported to have relatively high levels of protein and low levels of water-soluble carbohydrates in a study of Ficus fruit consumed by bats. Black howler monkeys in Belize consume fruit and young and mature leaves of F. maxima. In southern Veracruz, Mexico, F. maxima was the third most important food source for a studied population of Mexican howler monkeys; they consumed young leaves, mature leaves, mature fruit and petioles. Venezuelan red howlers were observed feeding F. maxima fruit in Colombia. The interaction between figs and fig wasps is especially well-known (see section on reproduction, above). In addition to their pollinators, Ficus species are exploited by a group of non-pollinating chalcidoid wasps whose larvae develop in its figs. Both pollinating and non-pollinating wasps serve as hosts for parasitoid wasps. In addition to T. americanus, F. maxima figs from Brazil were found to contain non-pollinating wasps belonging to the genus Critogaster, mites, ants, beetles, and dipteran and lepidopteran larvae. Norwegian biologist Frode Ødegaard recorded a total of 78 phytophagous (plant-eating) insect species on a single F. maxima tree in Panamanian dry forest—59 wood eating insects, 12 which fed on green plant parts, and 7 flower visitors. It supported the fourth most specialised phytophagous insect fauna and the second largest wood-feeding insect fauna among the 24 tree species sampled. Uses Ficus maxima is used by the Lacandon Maya to treat snakebite. Leaves are moistened by chewing and applied to the bite. In the provinces of Loja and Zamora-Chinchipe in Ecuador, a leaf infusion is used to treat internal inflammations. The Paya of Honduras use the species for firewood, and to treat gingivitis. The Tacana of Bolivia use the latex to treat intestinal parasites, as do people in Guatemala's Petén Department. In Brazil it is used as an anthelmintic, antirheumatic, anti-anaemic and antipyretic. The latex is also used to bind limestone soils to produce cal, an adobe cement. Gaspar Diaz M. and colleagues isolated four methoxyflavones from F. maxima leaves. David Lentz and colleagues observed antimicrobial activity in Ficus maxima extracts. References External links Ficus maxima Mill. Trees, Shrubs, and Palms of Panama, Smithsonian Tropical Research Institute Center for Tropical Forest Science. maxima Trees of the Caribbean Trees of Central America Trees of Mexico Trees of South America Trees of Guatemala Trees of Peru Plants described in 1768 Taxa named by Philip Miller
5516020
https://en.wikipedia.org/wiki/Criticism%20of%20Java
Criticism of Java
The Java programming language and Java software platform have been criticized for design choices including the implementation of generics, forced object-oriented programming, the handling of unsigned numbers, the implementation of floating-point arithmetic, and a history of security vulnerabilities in the primary Java VM implementation, HotSpot. Software written in Java, especially its early versions, has been criticized for its performance compared to software written in other programming languages. Developers have also remarked that differences in various Java implementations must be taken into account when writing complex Java programs that must work with all of them. Language syntax and semantics Generics When generics were added to Java 5.0, there was already a large framework of classes (many of which were already deprecated), so generics were implemented using type erasure to allow for migration compatibility and re-use of these existing classes. This limited the features that could be provided, compared to other languages. Because generics are implemented using type erasure, the actual type of a type parameter E is unavailable at run time. Thus, the following operations are not possible in Java:

public class MyClass<E> {
    public static void myMethod(Object item) {
        if (item instanceof E) {  // Compiler error
            ...
        }
        E item2 = new E();        // Compiler error
        E[] iArray = new E[10];   // Compiler error
    }
}

Noun-orientedness By design, Java encourages programmers to think of a solution in terms of nouns (classes) interacting with each other, and to think of verbs (methods) as operations that can be performed on or by that noun. Steve Yegge argues that this causes an unnecessary restriction on language expressiveness because a class can have multiple functions that operate on it, but a function is bound to a class and can never operate on multiple types. Many other multi-paradigm languages support functions as a top-level construct. When combined with other features such as function overloading (one verb, multiple nouns) and generic functions (one verb, a family of nouns with certain properties), the programmer can decide whether to solve a specific problem in terms of nouns or verbs. Java version 8 introduced some functional programming features. Hidden relationship between code and hardware In 2008 the United States Department of Defense's Software Technology Support Center published an article in the "Journal of Defense Software Engineering" discussing the unsuitability of Java as the first language taught. Disadvantages were that students "had no feeling for the relationship between the source program and what the hardware would actually do" and the impossibility "to develop a sense of the run-time cost of what is written because it is extremely hard to know what any method call will eventually execute". In 2005 Joel Spolsky criticized Java as an overfocused part of universities' curricula in his essay The Perils of JavaSchools. Others, like Ned Batchelder, disagree with Spolsky for criticizing the parts of the language that he found difficult to understand, claiming that Spolsky's commentary was more of a 'subjective rant'. Unsigned integer types Java lacks native unsigned integer types. Unsigned data is often generated from programs written in C, and the lack of these types prevents direct data interchange between C and Java. Unsigned large numbers are also used in a number of numeric processing fields, including cryptography, which can make Java more inconvenient to use for these tasks. 
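A minimal illustrative sketch of the kind of conversion code this typically entails (the class name is hypothetical; the masking idiom and the JDK 8 static helpers shown are the usual workarounds rather than language-level unsigned types):

public class UnsignedExamples {
    public static void main(String[] args) {
        byte b = (byte) 0xFF;               // bit pattern 0xFF; reads back as -1, a signed byte
        int unsignedByte = b & 0xFF;        // 255: widen to int and mask off the sign extension

        int i = 0xFFFFFFFF;                 // -1 as a signed int
        long unsignedInt = i & 0xFFFFFFFFL; // 4294967295: widen to long and mask

        // There is no wider primitive that can hold a 64-bit unsigned value;
        // the JDK 8 static helpers reinterpret the bits of a signed long instead.
        long v = -1L;                                         // same bits as the unsigned value 2^64 - 1
        System.out.println(Long.toUnsignedString(v));         // 18446744073709551615
        System.out.println(Long.divideUnsigned(v, 1000L));    // unsigned division
        System.out.println(Long.compareUnsigned(v, 1L) > 0);  // true: larger when treated as unsigned
    }
}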
Although it is possible to get around this problem using conversion code and larger data types, it makes using Java cumbersome for handling unsigned data. While a 32-bit signed integer may be used to hold a 16-bit unsigned value losslessly, and a 64-bit signed integer a 32-bit unsigned integer, there is no larger type to hold a 64-bit unsigned integer. In all cases, the memory consumed may double, and typically any logic relying on two's complement overflow must be rewritten. If the emulation is abstracted into methods, function calls become necessary for many operations which are native in some other languages. Alternatively, it is possible to use Java's signed integers to emulate unsigned integers of the same size, but this requires detailed knowledge of bitwise operations. Some support for unsigned integer types was provided in JDK 8, but not for unsigned bytes and with no support in the Java language. Operator overloading Java has been criticized for not supporting user-defined operators. Operator overloading improves readability, so its absence can make Java code less readable, especially for classes representing mathematical objects, such as complex numbers and matrices. Java has only one non-numerical use of an operator: + for string concatenation. But this is implemented by the compiler, which generates code to create StringBuilder instances – it is impossible to create user-defined operator overloads. Compound value types Java lacks compound value types, such as structs in C, bundles of data that are manipulated directly instead of indirectly via references. Value types can sometimes be faster and smaller than classes with references. For example, Java's HashMap is implemented as an array of references to HashMap.Entry objects, which in turn contain references to key and value objects. Looking something up requires inefficient double dereferencing. If Entry were a value type, the array could store key-value pairs directly, eliminating the first indirection, increasing locality of reference and reducing memory use and heap fragmentation. Further, if Java supported generic primitive types, keys and values could be stored in the array directly, removing both levels of indirection. Large arrays Java has been criticized for not supporting arrays of 2³¹ (about 2.1 billion) or more elements. This is a limitation of the language; the Java Language Specification, Section 10.4, states that: Arrays must be indexed by int values... An attempt to access an array component with a long index value results in a compile-time error. Supporting large arrays would also require changes to the JVM. This limitation manifests itself in areas such as collections being limited to 2 billion elements and the inability to memory-map continuous file segments larger than 2 GB. Java also lacks true multidimensional arrays (contiguously allocated single blocks of memory accessed by a single indirection), which limits performance for scientific and technical computing. There is no efficient way to initialize arrays in Java. When declaring an array, the JVM compiles it to bytecodes with instructions that set its elements one by one at run time. Because Java methods cannot be bigger than 64 KB, arrays of even modest size with values assigned directly in the code will produce the message "Error: code too large" on compilation. Integration of primitives and arrays Arrays and primitives are somewhat special and need to be treated differently from classes. 
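As an illustrative sketch of what this special treatment means in practice (the ArraySums class below is hypothetical; the overload-per-primitive pattern mirrors what java.util.Arrays does for methods such as sort and binarySearch), a utility that cannot be expressed once generically over primitives has to be duplicated for each element type:

public final class ArraySums {
    // A generic sum(T[]) would accept only reference types such as Integer[];
    // it cannot accept int[], long[] or double[], so each primitive array type
    // needs its own hand-written overload.
    public static long sum(int[] values) {
        long total = 0;
        for (int v : values) total += v;
        return total;
    }

    public static long sum(long[] values) {
        long total = 0;
        for (long v : values) total += v;
        return total;
    }

    public static double sum(double[] values) {
        double total = 0.0;
        for (double v : values) total += v;
        return total;
    }

    // ...and likewise for byte[], short[], float[] and char[] if they must be supported.
}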
This has been criticized because it requires many variants of functions when creating general-purpose libraries. Parallelism Per Brinch Hansen argued in 1999 that Java's implementation of parallelism in general, and monitors in particular, does not provide the guarantees and enforcements required for secure and reliable parallel programming. While a programmer can establish design and coding conventions, the compiler can make no attempt to enforce them, so the programmer may unwittingly write insecure or unreliable code. Serialization Java provides a mechanism called object serialization, where an object can be represented as a sequence of bytes that includes its data fields, together with type information about itself and its fields. After an object is serialized, it can later be deserialized; that is, the type information and bytes that represent its data can be used to recreate the object in memory. This raises serious theoretical and practical security risks. Floating point arithmetic Although Java's floating point arithmetic is largely based on IEEE 754 (Standard for Binary Floating-Point Arithmetic), some mandated standard features are not supported even when using the strictfp modifier, such as Exception Flags and Directed Roundings. The extended precision types defined by IEEE 754 (and supported by many processors) are not supported by Java. Performance Before 2000, when the HotSpot VM was implemented in Java 1.3, there were many criticisms of its performance. Java has been demonstrated to run at a speed comparable with optimized native code, and modern JVM implementations are regularly benchmarked as one of the fastest language platforms available – typically no more than three times slower than C and C++. Performance has improved substantially since early versions. Performance of JIT compilers relative to native compilers has been shown to be quite similar in some optimized tests. Java bytecode can either be interpreted at run time by a virtual machine, or be compiled at load time or run time into native code which runs directly on the computer's hardware. Interpretation is slower than native execution, but compilation at load time or run time has an initial performance penalty. Modern JVM implementations all use the compilation approach, so after the initial startup time the performance is similar to native code. Game designer and programmer John D. Carmack concluded in 2005 about Java on cell-phones: "The biggest problem is that Java is really slow. On a pure cpu / memory / display / communications level, most modern cell phones should be considerably better gaming platforms than a Game Boy Advance. With Java, on most phones you are left with about the CPU power of an original 4.77 mhz (sic) IBM PC, and lousy control over everything." Security The Java platform provides a security architecture which is designed to allow the user to run untrusted bytecode in a "sandboxed" manner to protect against malicious or poorly written software. This "sandboxing" feature is intended to protect the user by restricting access to platform features and APIs which could be exploited by malware, such as accessing the local filesystem or network, or running arbitrary commands. In 2010, there was a significant rise in malicious software targeting security flaws in the sandboxing mechanisms used by Java implementations, including Oracle's. These flaws allow untrusted code to bypass the sandbox restrictions, exposing the user to attacks.
Flaws were fixed by security updates, but were still exploited on machines without the updates. Critics have suggested that users do not update their Java installations because they don't know they have them, or how to update them. Many organisations restrict software installation by users, but are slow to deploy updates. Oracle has been criticized for not promptly providing updates for known security bugs. When Oracle finally released a patch for widely-exploited flaws in Java 7, it removed Java 6 from users' machines, despite it being widely used by enterprise applications that Oracle had stated were not impacted by the flaws. In 2007, a research team led by Marco Pistoia exposed another important flaw of the Java security model, based on stack inspection. When a security-sensitive resource is accessed, the security manager triggers code that walks the call stack, to verify that the codebase of each method on it has authority to access the resource. This is done to prevent confused deputy attacks, which take place every time a legitimate, more privileged program is tricked by another into misusing its authority. The confused-deputy problem is a specific type of privilege escalation. Pistoia observed that when a security-sensitive resource is accessed, the code responsible for acquiring the resource may no longer be on the stack. For example, a method executed in the past may have modified the value of an object field that determines which resource to use. That method call may no longer be on the stack when it is inspected. Some permissions are implicitly equivalent to Java's AllPermission. These include the permission to change the current security manager (and replace it with one that could potentially bypass the stack inspection), the permission to instantiate and use a custom class loader (which could choose to associate AllPermission to a malicious class upon loading it), and the permission to create a custom permission (which could declare itself as powerful as AllPermission via its implies method). These issues are documented in Pistoia's two books on Java Security: Java 2 Network Security (Second Edition) and Enterprise Java Security. Parallel installations Before Java 7, it was normal for the installer not to detect or remove older Java installations. It was quite common on a Windows computer to see multiple installations of Java 6 on the same computer, varying only by minor revision. Multiple installations are permitted and can be used by programs that rely on specific versions. This has the effect that new Java installations can provide new language features and bug fixes, but they do not correct security vulnerabilities, because malicious programs can use the older versions. Java 7 updated older versions of itself, but not Java 6 or earlier. Automatic updates As of 2014, common third-party tools (such as Adobe Flash and Adobe Reader) have been the subject of scrutiny for security vulnerabilities. Adobe and others have moved to automatic updates on Windows. These don't need any user action, and assure that security issues are promptly resolved with minimal effort by users or administrators. As of 2015, Java 8 still requires users to update Java themselves. But on Windows only those with administrator privileges can update software. The Windows Java updater frequently triggers a disruptive User Account Control elevation prompt: whatever users choose, they still get the same "Java needs to be updated" message. 
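To make the stack-inspection model described above concrete, here is a minimal sketch using the SecurityManager-era AccessController API (deprecated for removal in recent Java releases); it is illustrative only and assumes no particular security policy is installed:

import java.security.AccessController;
import java.security.PrivilegedAction;

public class StackInspectionSketch {
    // With a security manager installed, reading a system property triggers a
    // permission check that walks every frame currently on the call stack.
    static String readUserHome() {
        return System.getProperty("user.home");
    }

    // doPrivileged truncates the stack walk at this frame, so callers above it
    // are not inspected - the shortcut whose careless use Pistoia's analysis warns about.
    static String readUserHomePrivileged() {
        return AccessController.doPrivileged(
                (PrivilegedAction<String>) () -> System.getProperty("user.home"));
    }

    public static void main(String[] args) {
        System.out.println(readUserHome());
        System.out.println(readUserHomePrivileged());
    }
}

When a privileged block like this is reached through a chain of less-trusted callers, the truncated walk is precisely what can enable a confused-deputy situation if the privileged code does not validate what it is being asked to do.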
See also Comparison of Java and C++ Comparison of Java and C# Comparison of the Java and .NET platforms Java performance Write once, run anywhere Notes External links Free But Shackled - The Java Trap, an essay by Richard Stallman of the free software movement (dated April 12, 2004) Computer Science Education: Where Are the Software Engineers of Tomorrow? (dated January 8, 2008) What are Bad features of Java? Java (programming language) Java
54452801
https://en.wikipedia.org/wiki/Quantum%20supremacy
Quantum supremacy
In quantum computing, quantum supremacy or quantum advantage is the goal of demonstrating that a programmable quantum device can solve a problem that no classical computer can solve in any feasible amount of time (irrespective of the usefulness of the problem). Conceptually, quantum supremacy involves both the engineering task of building a powerful quantum computer and the computational-complexity-theoretic task of finding a problem that can be solved by that quantum computer and has a superpolynomial speedup over the best known or possible classical algorithm for that task. The term was coined by John Preskill in 2012, but the concept of a quantum computational advantage, specifically for simulating quantum systems, dates back to Yuri Manin's (1980) and Richard Feynman's (1981) proposals of quantum computing. Examples of proposals to demonstrate quantum supremacy include the boson sampling proposal of Aaronson and Arkhipov, D-Wave's specialized frustrated cluster loop problems, and sampling the output of random quantum circuits. A notable property of quantum supremacy is that it can be feasibly achieved by near-term quantum computers, since it does not require a quantum computer to perform any useful task or use high-quality quantum error correction, both of which are long-term goals. Consequently, researchers view quantum supremacy as primarily a scientific goal, with relatively little immediate bearing on the future commercial viability of quantum computing. Because this goal, of building a quantum computer that can perform a task that no other existing computer feasibly can, can become more difficult if classical computers or simulation algorithms improve, quantum supremacy may be temporarily or repeatedly achieved, placing claims of achieving quantum supremacy under significant scrutiny. Background Quantum supremacy in the 20th century In 1936, Alan Turing published his paper, “On Computable Numbers”, in response to the 1900 Hilbert Problems. Turing's paper described what he called a “universal computing machine”, which later became known as a Turing machine. In 1980, Paul Benioff utilized Turing's paper to propose the theoretical feasibility of Quantum Computing. His paper, “The Computer as a Physical System: A Microscopic Quantum Mechanical Hamiltonian Model of Computers as Represented by Turing Machines“, was the first to demonstrate that it is possible to show the reversible nature of quantum computing as long as the energy dissipated is arbitrarily small. In 1981, Richard Feynman showed that quantum mechanics could not be simulated on classical devices. During a lecture, he delivered the famous quote, “Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy.” Soon after this, David Deutsch produced a description for a quantum Turing machine and designed an algorithm created to run on a quantum computer. In 1994, further progress toward quantum supremacy was made when Peter Shor formulated Shor's algorithm, streamlining a method for factoring integers in polynomial time. Later on in 1995, Christopher Monroe and David Wineland published their paper, “Demonstration of a Fundamental Quantum Logic Gate”, marking the first demonstration of a quantum logic gate, specifically the two-bit "controlled-NOT". 
In 1996, Lov Grover put into motion an interest in fabricating a quantum computer after publishing his algorithm, Grover's Algorithm, in his paper, “A fast quantum mechanical algorithm for database search”. In 1998, Jonathan A. Jones and Michele Mosca published “Implementation of a Quantum Algorithm to Solve Deutsch's Problem on a Nuclear Magnetic Resonance Quantum Computer”, marking the first demonstration of a quantum algorithm. Progress in the 21st century Vast progress toward quantum supremacy was made in the 2000s from the first 5-qubit Nuclear Magnetic Resonance computer (2000), the demonstration of Shor's algorithm (2001), and the implementation of Deutsch's algorithm in a clustered quantum computer (2007). In 2011, D-Wave Systems of Burnaby in British Columbia became the first company to sell a quantum computer commercially. In 2012, physicist Nanyang Xu achieved a milestone by using an improved adiabatic factoring algorithm to factor 143. However, the methods used by Xu were met with objections. Not long after this accomplishment, Google purchased its first quantum computer. Google had announced plans to demonstrate quantum supremacy before the end of 2017 with an array of 49 superconducting qubits. In early January 2018, Intel announced a similar hardware program. In October 2017, IBM demonstrated the simulation of 56 qubits on a classical supercomputer, thereby increasing the computational power needed to establish quantum supremacy. In November 2018, Google announced a partnership with NASA that would “analyze results from quantum circuits run on Google quantum processors, and... provide comparisons with classical simulation to both support Google in validating its hardware and establish a baseline for quantum supremacy.” Theoretical work published in 2018 suggests that quantum supremacy should be possible with a "two-dimensional lattice of 7×7 qubits and around 40 clock cycles" if error rates can be pushed low enough. On June 18, 2019, Quanta Magazine suggested that quantum supremacy could happen in 2019, according to Neven's law. On September 20, 2019, the Financial Times reported that "Google claims to have reached quantum supremacy with an array of 54 qubits out of which 53 were functional, which were used to perform a series of operations in 200 seconds that would take a supercomputer about 10,000 years to complete". On October 23, Google officially confirmed the claims. IBM responded by suggesting some of the claims are excessive and suggested that it could take 2.5 days instead of 10,000 years, listing techniques that a classical supercomputer may use to maximize computing speed. IBM's response is relevant as the most powerful supercomputer at the time, Summit, was made by IBM. In December 2020, a group based in the University of Science and Technology of China (USTC) led by Jian-Wei Pan reached quantum supremacy by implementing Gaussian boson sampling on 76 photons with their photonic quantum computer Jiuzhang. The paper states that to generate the number of samples the quantum computer generates in 20 seconds, a classical supercomputer would require 600 million years of computation. In October 2021, teams from USTC again reported quantum advantage by building two quantum computers, Jiuzhang 2.0 and Zuchongzhi. The light-based Jiuzhang 2.0 implemented Gaussian boson sampling to detect 113 photons from a 144-mode optical interferometer, a difference of 37 photons and 10 orders of magnitude in sampling rate over the previous Jiuzhang.
Zuchongzhi is a programmable superconducting quantum computer that needs to be kept at extremely low temperatures to work efficiently and uses random circuit sampling to obtain 56 qubits from a tunable coupling architecture of 66 transmons — an improvement over Google's Sycamore 2019 achievement by 3 qubits, meaning a greater computational cost of classical simulation of 2–3 orders of magnitude. A third study reported that Zuchongzhi 2.1 completed a sampling task that "is about 6 orders of magnitude more difficult than that of Sycamore" "in the classic simulation". Computational complexity Complexity arguments concern how the amount of some resource needed to solve a problem (generally time or memory) scales with the size of the input. In this setting, a problem consists of an inputted problem instance (a binary string) and returned solution (corresponding output string), while resources refers to designated elementary operations, memory usage, or communication. A collection of local operations allows for the computer to generate the output string. A circuit model and its corresponding operations are useful in describing both classical and quantum problems; the classical circuit model consists of basic operations such as AND gates, OR gates, and NOT gates while the quantum model consists of classical circuits and the application of unitary operations. Unlike the finite set of classical gates, there are an infinite amount of quantum gates due to the continuous nature of unitary operations. In both classical and quantum cases, complexity swells with increasing problem size. As an extension of classical computational complexity theory, quantum complexity theory considers what a theoretical universal quantum computer could accomplish without accounting for the difficulty of building a physical quantum computer or dealing with decoherence and noise. Since quantum information is a generalization of classical information, quantum computers can simulate any classical algorithm. Quantum complexity classes are sets of problems that share a common quantum computational model, with each model containing specified resource constraints. Circuit models are useful in describing quantum complexity classes. The most useful quantum complexity class is BQP (bounded-error quantum polynomial time), the class of decision problems that can be solved in polynomial time by a universal quantum computer. Questions about BQP still remain, such as the connection between BQP and the polynomial-time hierarchy, whether or not BQP contains NP-complete problems, and the exact lower and upper bounds of the BQP class. Not only would answers to these questions reveal the nature of BQP, but they would also answer difficult classical complexity theory questions. One strategy for better understanding BQP is by defining related classes, ordering them into a conventional class hierarchy, and then looking for properties that are revealed by their relation to BQP. There are several other quantum complexity classes, such as QMA (quantum Merlin Arthur) and QIP (quantum interactive polynomial time). The difficulty of proving what cannot be done with classical computing is a common problem in definitively demonstrating quantum supremacy. Contrary to decision problems that require yes or no answers, sampling problems ask for samples from probability distributions. 
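As a toy illustration of why classically sampling the output distribution of a quantum circuit is costly, the sketch below (a hypothetical brute-force simulator, not any research group's actual code) stores all 2^n amplitudes explicitly, so both memory and the cost of applying each gate grow exponentially with the number of qubits:

import java.util.Random;

public class StateVectorSketch {
    final int n;            // number of qubits
    final double[] re, im;  // 2^n complex amplitudes, split into real and imaginary parts

    StateVectorSketch(int n) {
        this.n = n;
        re = new double[1 << n];
        im = new double[1 << n];
        re[0] = 1.0;        // start in the all-zeros basis state
    }

    // Apply a Hadamard gate to qubit q by combining every pair of amplitudes
    // whose indices differ only in bit q.
    void hadamard(int q) {
        double s = 1.0 / Math.sqrt(2.0);
        int bit = 1 << q;
        for (int i = 0; i < re.length; i++) {
            if ((i & bit) == 0) {
                int j = i | bit;
                double ar = re[i], ai = im[i], br = re[j], bi = im[j];
                re[i] = s * (ar + br); im[i] = s * (ai + bi);
                re[j] = s * (ar - br); im[j] = s * (ai - bi);
            }
        }
    }

    // Draw one sample (a basis-state index) from the output distribution.
    int sample(Random rng) {
        double r = rng.nextDouble(), acc = 0.0;
        for (int i = 0; i < re.length; i++) {
            acc += re[i] * re[i] + im[i] * im[i];
            if (r < acc) return i;
        }
        return re.length - 1;
    }

    public static void main(String[] args) {
        StateVectorSketch sv = new StateVectorSketch(20);  // 2^20 amplitudes, about 16 MB
        for (int q = 0; q < 20; q++) sv.hadamard(q);
        System.out.println(Integer.toBinaryString(sv.sample(new Random())));
    }
}

At 20 qubits the state vector already occupies about 16 MB; at the 53 to 56 qubits used in the supremacy experiments it would require well over a hundred petabytes, which is why the classical comparisons discussed below rely on far more sophisticated methods than this.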
If there is a classical algorithm that can efficiently sample from the output of an arbitrary quantum circuit, the polynomial hierarchy would collapse to the third level, which is generally considered to be very unlikely. Boson sampling is a more specific proposal, the classical hardness of which depends upon the intractability of calculating the permanent of a large matrix with complex entries, which is a #P-complete problem. The arguments used to reach this conclusion have been extended to IQP Sampling, where only the conjecture that the average- and worst-case complexities of the problem are the same is needed, as well as to Random Circuit Sampling, which is the task replicated by the Google and UTSC research groups. Proposed experiments The following are proposals for demonstrating quantum computational supremacy using current technology, often called NISQ devices. Such proposals include (1) a well-defined computational problem, (2) a quantum algorithm to solve this problem, (3) a comparison best-case classical algorithm to solve the problem, and (4) a complexity-theoretic argument that, under a reasonable assumption, no classical algorithm can perform significantly better than current algorithms (so the quantum algorithm still provides a superpolynomial speedup). Shor's algorithm for factoring integers This algorithm finds the prime factorization of an n-bit integer in time whereas the best known classical algorithm requires time and the best upper bound for the complexity of this problem is . It can also provide a speedup for any problem that reduces to integer factoring, including the membership problem for matrix groups over fields of odd order. This algorithm is important both practically and historically for quantum computing. It was the first polynomial-time quantum algorithm proposed for a real-world problem that is believed to be hard for classical computers. Namely, it gives a superpolynomial speedup under the reasonable assumption that RSA, today's most common encryption protocol, is secure. Factoring has some benefit over other supremacy proposals because factoring can be checked quickly with a classical computer just by multiplying integers, even for large instances where factoring algorithms are intractably slow. However, implementing Shor's algorithm for large numbers is infeasible with current technology, so it is not being pursued as a strategy for demonstrating supremacy. Boson sampling This computing paradigm based upon sending identical photons through a linear-optical network can solve certain sampling and search problems that, assuming a few complexity-theoretical conjectures (that calculating the permanent of Gaussian matrices is #P-Hard and that the polynomial hierarchy does not collapse) are intractable for classical computers. However, it has been shown that boson sampling in a system with large enough loss and noise can be simulated efficiently. The largest experimental implementation of boson sampling to date had 6 modes so could handle up to 6 photons at a time. The best proposed classical algorithm for simulating boson sampling runs in time for a system with n photons and m output modes. BosonSampling is an open-source implementation in R. The algorithm leads to an estimate of 50 photons required to demonstrate quantum supremacy with boson sampling. 
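To give a feel for the quantity at the heart of this argument, the sketch below evaluates a matrix permanent using Ryser's exponential-time formula. It is a simplified illustration for small real matrices (the experiments concern permanents of submatrices of complex Gaussian matrices), not a simulation of boson sampling itself:

public class PermanentSketch {
    // Ryser's formula: perm(A) = (-1)^n * sum over non-empty column subsets S of
    // (-1)^|S| * product over rows i of (sum of A[i][j] for j in S).
    static double permanent(double[][] a) {
        int n = a.length;
        double total = 0.0;
        for (int s = 1; s < (1 << n); s++) {      // each s encodes a column subset
            double prod = 1.0;
            for (int i = 0; i < n; i++) {
                double rowSum = 0.0;
                for (int j = 0; j < n; j++) {
                    if ((s & (1 << j)) != 0) rowSum += a[i][j];
                }
                prod *= rowSum;
            }
            total += (Integer.bitCount(s) % 2 == 0 ? 1 : -1) * prod;
        }
        return (n % 2 == 0 ? 1 : -1) * total;
    }

    public static void main(String[] args) {
        double[][] a = { {1, 2}, {3, 4} };
        System.out.println(permanent(a));  // 1*4 + 2*3 = 10
    }
}

Even this comparatively fast exact method examines all 2^n - 1 column subsets, so the work grows exponentially with the number of photons, which is why estimates of the scale needed to defeat classical simulation sit around 50 photons.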
Sampling the output distribution of random quantum circuits The best known algorithm for simulating an arbitrary random quantum circuit requires an amount of time that scales exponentially with the number of qubits, leading one group to estimate that around 50 qubits could be enough to demonstrate quantum supremacy. Bouland, Fefferman, Nirkhe and Vazirani gave, in 2018, theoretical evidence that efficiently simulating a random quantum circuit would require a collapse of the computational polynomial hierarchy. Google had announced its intention to demonstrate quantum supremacy by the end of 2017 by constructing and running a 49-qubit chip that would be able to sample distributions inaccessible to any current classical computers in a reasonable amount of time. The largest universal quantum circuit simulator running on classical supercomputers at the time was able to simulate 48 qubits. But for particular kinds of circuits, larger quantum circuit simulations with 56 qubits are possible. This may require increasing the number of qubits to demonstrate quantum supremacy. On October 23, 2019, Google published the results of this quantum supremacy experiment in the Nature article, “Quantum Supremacy Using a Programmable Superconducting Processor” in which they developed a new 53-qubit processor, named “Sycamore”, that is capable of fast, high-fidelity quantum logic gates, in order to perform the benchmark testing. Google claims that their machine performed the target computation in 200 seconds, and estimated that their classical algorithm would take 10,000 years in the world's fastest supercomputer to solve the same problem. IBM disputed this claim, saying that an improved classical algorithm should be able to solve that problem in two and a half days on that same supercomputer. Criticisms Susceptibility to error Quantum computers are much more susceptible to errors than classical computers due to decoherence and noise. The threshold theorem states that a noisy quantum computer can use quantum error-correcting codes to simulate a noiseless quantum computer assuming the error introduced in each computer cycle is less than some number. Numerical simulations suggest that that number may be as high as 3%. However, it is not yet definitively known how the resources needed for error correction will scale with the number of qubits. Skeptics point to the unknown behavior of noise in scaled-up quantum systems as a potential roadblock for successfully implementing quantum computing and demonstrating quantum supremacy. Criticism of the name Some researchers have suggested that the term 'quantum supremacy' should not be used, arguing that the word "supremacy" evokes distasteful comparisons to the racist belief of white supremacy. A controversial Nature commentary signed by thirteen researchers asserts that the alternative phrase 'quantum advantage' should be used instead. John Preskill, the professor of theoretical physics at the California Institute of Technology who coined the term, has since clarified that the term was proposed to explicitly describe the moment that a quantum computer gains the ability to perform a task that a classical computer never could. He further explained that he specifically rejected the term 'quantum advantage' as it did not fully encapsulate the meaning of his new term: the word 'advantage' would imply that a computer with quantum supremacy would have a slight edge over a classical computer while the word 'supremacy' better conveys complete ascendancy over any classical computer. 
In December 2020, Nature's Philip Ball wrote that the term 'quantum advantage' has "largely replaced" the term 'quantum supremacy'. See also Gottesman–Knill theorem List of quantum processors Sycamore processor Jiuzhang (quantum computer) References Quantum computing Computational complexity theory
59225659
https://en.wikipedia.org/wiki/LiveQuartz
LiveQuartz
LiveQuartz is a basic graphic editor developed for macOS by Romain Piveteau. Each document is in a single window with layers and filters on both sides; tools are displayed at the top and document settings at the bottom in the status bar. LiveQuartz features layers-based image editing, non-destructive filters and selection, painting and retouching tools. LiveQuartz was one of the first raster image editors built on top of Core Image to be made public. In May 2005, when the first beta of iMage (the original name of LiveQuartz) was released, its distinction was that it was the first graphic editor to use two new Mac OS X Tiger frameworks: Core Image and Core Data. LiveQuartz was also, back in early 2005, the first Mac OS X image editing app to use a single-window user interface without "palettes". Features Uses technologies like Cocoa (API), Quartz (graphics layer), Core Data and Core Image. Uses layers-based editing and non-destructive filters (filters can be merged onto their layer when using certain tools or when doing certain actions like cutting a selection, etc.). Selection and retouch tools. LiveQuartz provides unlimited undos per document (since the app was opened). Integrates with macOS and applications such as Apple Photos. Support for drag and drop and standard image formats (JPEG, PNG, TIFF, HEIF). Pictures can be imported in a number of different ways: They can be dragged and dropped from the Finder or other applications. They can be opened from the "File" menu. They can be shared from the Apple Photos app. They can be taken with an iSight camera from within the app, imported from a scanner or a connected camera, or captured with an iOS device signed in to the same iCloud account as the Mac using Camera Continuity in macOS Mojave. Support for other macOS features such as multi-touch gestures, versions, auto save, and full screen mode. Tools Supported image file formats Version history (in decreasing date order) LiveQuartz See also Comparison of raster graphics editors References External links Rhapsoft website LiveQuartz Mac App Store page LiveQuartz Lite (with in-app subscriptions and purchases) Mac App Store page Ars Technica about LiveQuartz 1.0 MacWorld about LiveQuartz 1.8 Raster graphics editors MacOS graphics software MacOS-only software
2031045
https://en.wikipedia.org/wiki/Hardware%20acceleration
Hardware acceleration
Hardware acceleration is the use of computer hardware designed to perform specific functions more efficiently when compared to software running on a general-purpose central processing unit (CPU). Any transformation of data that can be calculated in software running on a generic CPU can also be calculated in custom-made hardware, or in some mix of both. To perform computing tasks more quickly (or better in some other way), generally one can invest time and money in improving the software, improving the hardware, or both. There are various approaches with advantages and disadvantages in terms of decreased latency, increased throughput and reduced energy consumption. Typical advantages of focusing on software may include more rapid development, lower non-recurring engineering costs, heightened portability, and ease of updating features or patching bugs, at the cost of overhead to compute general operations. Advantages of focusing on hardware may include speedup, reduced power consumption, lower latency, increased parallelism and bandwidth, and better utilization of area and functional components available on an integrated circuit; at the cost of lower ability to update designs once etched onto silicon and higher costs of functional verification, and times to market. In the hierarchy of digital computing systems ranging from general-purpose processors to fully customized hardware, there is a tradeoff between flexibility and efficiency, with efficiency increasing by orders of magnitude when any given application is implemented higher up that hierarchy. This hierarchy includes general-purpose processors such as CPUs, more specialized processors such as GPUs, fixed-function implemented on field-programmable gate arrays (FPGAs), and fixed-function implemented on application-specific integrated circuits (ASICs). Hardware acceleration is advantageous for performance, and practical when the functions are fixed so updates are not as needed as in software solutions. With the advent of reprogrammable logic devices such as FPGAs, the restriction of hardware acceleration to fully fixed algorithms has eased since 2010, allowing hardware acceleration to be applied to problem domains requiring modification to algorithms and processing control flow. The disadvantage however, is that in many open source projects, it requires proprietary libraries that not all vendors are keen to distribute or expose, making it difficult to integrate in such projects. Overview Integrated circuits can be created to perform arbitrary operations on analog and digital signals. Most often in computing, signals are digital and can be interpreted as binary number data. Computer hardware and software operate on information in binary representation to perform computing; this is accomplished by calculating boolean functions on the bits of input and outputting the result to some output device downstream for storage or further processing. Computational equivalence of hardware and software Because all Turing machines can run any computable function, it is always possible to design custom hardware that performs the same function as a given piece of software. Conversely, software can be always used to emulate the function of a given piece of hardware. Custom hardware may offer higher performance per watt for the same functions that can be specified in software. 
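A small software-side illustration of this equivalence: the same boolean function can be written as an explicit loop or delegated to a library call that the JIT compiler may replace with a single dedicated machine instruction on processors that provide one. This is a hedged sketch; whether the hardware path is actually taken depends on the JVM and the CPU:

public class PopcountSketch {
    // Software implementation: examine the 32 bits one at a time.
    static int popcountSoftware(int x) {
        int count = 0;
        for (int i = 0; i < 32; i++) {
            count += (x >>> i) & 1;
        }
        return count;
    }

    public static void main(String[] args) {
        int x = 0b1011_0110;
        System.out.println(popcountSoftware(x));  // 5
        System.out.println(Integer.bitCount(x));  // 5; HotSpot can compile this to a single population-count instruction
    }
}

Both paths compute the identical function; they differ only in how much dedicated silicon versus how many generic instructions the work consumes.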
Hardware description languages (HDLs) such as Verilog and VHDL can model the same semantics as software and synthesize the design into a netlist that can be programmed to an FPGA or composed into the logic gates of an ASIC. Stored-program computers The vast majority of software-based computing occurs on machines implementing the von Neumann architecture, collectively known as stored-program computers. Computer programs are stored as data and executed by processors. Such processors must fetch and decode instructions, as well as load data operands from memory (as part of the instruction cycle), to execute the instructions constituting the software program. Relying on a common cache for code and data leads to the "von Neumann bottleneck", a fundamental limitation on the throughput of software on processors implementing the von Neumann architecture. Even in the modified Harvard architecture, where instructions and data have separate caches in the memory hierarchy, there is overhead to decoding instruction opcodes and multiplexing available execution units on a microprocessor or microcontroller, leading to low circuit utilization. Modern processors that provide simultaneous multithreading exploit under-utilization of available processor functional units and instruction level parallelism between different hardware threads. Hardware execution units Hardware execution units do not in general rely on the von Neumann or modified Harvard architectures and do not need to perform the instruction fetch and decode steps of an instruction cycle and incur those stages' overhead. If the needed calculations are specified in a register transfer level (RTL) hardware design, the time and circuit area costs that would be incurred by instruction fetch and decoding stages can be reclaimed and put to other uses. This reclamation saves time, power and circuit area in computation. The reclaimed resources can be used for increased parallel computation, other functions, communication or memory, as well as increased input/output capabilities. This comes at the cost of general-purpose utility. Emerging hardware architectures Greater RTL customization of hardware designs allows emerging architectures such as in-memory computing, transport triggered architectures (TTA) and networks-on-chip (NoC) to further benefit from increased locality of data to execution context, thereby reducing computing and communication latency between modules and functional units. Custom hardware is limited in parallel processing capability only by the area and logic blocks available on the integrated circuit die. Therefore, hardware is much more free to offer massive parallelism than software on general-purpose processors, offering a possibility of implementing the parallel random-access machine (PRAM) model. It is common to build multicore and manycore processing units out of microprocessor IP core schematics on a single FPGA or ASIC. Similarly, specialized functional units can be composed in parallel as in digital signal processing without being embedded in a processor IP core. Therefore, hardware acceleration is often employed for repetitive, fixed tasks involving little conditional branching, especially on large amounts of data. This is the model exploited by Nvidia's CUDA line of GPUs.
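A minimal sketch of this pattern as seen from application code: a bulk checksum over a buffer is handed to a library routine that the runtime is free to back with vectorized or fixed-function hardware support where it exists (no particular JVM or CPU behaviour is assumed here):

import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class Crc32Sketch {
    public static void main(String[] args) {
        byte[] payload = "hardware acceleration".getBytes(StandardCharsets.UTF_8);
        CRC32 crc = new CRC32();
        crc.update(payload);  // one bulk call over the data, a shape that suits offloading
        System.out.printf("CRC32 = %08x%n", crc.getValue());
    }
}

The repetitive, branch-poor, data-heavy character of checksumming is exactly the profile described above as a good candidate for acceleration.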
Implementation metrics As device mobility has increased, new metrics have been developed that measure the relative performance of specific acceleration protocols, considering the characteristics such as physical hardware dimensions, power consumption and operations throughput. These can be summarized into three categories: task efficiency, implementation efficiency, and flexibility. Appropriate metrics consider the area of the hardware along with both the corresponding operations throughput and energy consumed. Applications Examples of hardware acceleration include bit blit acceleration functionality in graphics processing units (GPUs), use of memristors for accelerating neural networks and regular expression hardware acceleration for spam control in the server industry, intended to prevent regular expression denial of service (ReDoS) attacks. The hardware that performs the acceleration may be part of a general-purpose CPU, or a separate unit called a hardware accelerator, though they are usually referred with a more specific term, such as 3D accelerator, or cryptographic accelerator. Traditionally, processors were sequential (instructions are executed one by one), and were designed to run general purpose algorithms controlled by instruction fetch (for example moving temporary results to and from a register file). Hardware accelerators improve the execution of a specific algorithm by allowing greater concurrency, having specific datapaths for their temporary variables, and reducing the overhead of instruction control in the fetch-decode-execute cycle. Modern processors are multi-core and often feature parallel "single-instruction; multiple data" (SIMD) units. Even so, hardware acceleration still yields benefits. Hardware acceleration is suitable for any computation-intensive algorithm which is executed frequently in a task or program. Depending upon the granularity, hardware acceleration can vary from a small functional unit, to a large functional block (like motion estimation in MPEG-2). Hardware acceleration units by application See also Coprocessor DirectX Video Acceleration (DXVA) Direct memory access (DMA) High-level synthesis C to HDL Flow to HDL Soft microprocessor Flynn's taxonomy of parallel computer architectures Single instruction, multiple data (SIMD) Single instruction, multiple threads (SIMT) Multiple instructions, multiple data (MIMD) Computer for operations with functions References External links Application-specific integrated circuits Central processing unit Computer optimization Gate arrays Graphics hardware Articles with example C code
35766070
https://en.wikipedia.org/wiki/Univention%20Corporate%20Server
Univention Corporate Server
Univention Corporate Server (UCS) is a server operating system derived from Debian with an integrated management system for the central and cross-platform administration of servers, services, clients, desktops and users as well as virtualized computers operated in UCS. In addition to the operation of local, virtual instances, UCS can also be operated in cloud environments. Via the integration of the open source software Samba 4, Univention also supports the functions provided in many companies by Microsoft Active Directory for the administration of computers operated with Microsoft Windows. UCS-based components and UCS-certified, third-party products can be installed via the Univention App Center. UCS provides all App Center applications with a runtime environment and services for the operation including a central, consistent management of the apps. Docker containers can also be run on UCS systems and several of the apps available in the App Center are Docker-based. Univention is a member of the Open Source Business Alliance and supports the creation of the Open Source Business Alliance open source software stacks. History The impulse for the development of UCS, which began in 2002, was the lack of a standardised Linux server operating system offering companies and organisations an alternative to Microsoft's domain concept with the proprietary directory service Active Directory. Comparable Linux solutions (e.g., from SUSE and Red Hat) did not offer an integrated, cross-system user and computer management system, with the result that corresponding solutions had to be configured and maintained individually. Important early driving forces for the development of UCS were initially the Oldenburgische Landesbank and the department of the Bremen Senator for Education and Science, until the product was ready for market launch at the end of 2004. Since then, in addition to new versions, a number of software solutions based on the main product UCS have also been launched. UCS is predominantly employed in the German-speaking world by companies and public organisations from a wide range of sectors and fields, among others by the regional government authority of the federal state Brandenburg. In 2005, Univention began to market UCS also in other German-speaking countries. Today, UCS is used in many European countries and also outside of Europe, for example, in Australia, Nigeria and the USA where Univention established a subsidiary in 2013. Licenses and editions UCS is open-source software; the proprietary developments of Univention GmbH included in UCS were published under the GNU GPL until Version 2.3. With the launch of Version 2.4, the company switched to GNU AGPL. There are also a range of software appliances based on UCS (e.g., in the groupware, desktop and IT service management fields). Since 21 April 2015 UCS is freely available to companies in form of the UCS Core Edition, which replaced the previous "free for personal use" license. This Core Edition is a fully featured version and differs from the fee-based edition only in terms of product liability and support. Structure and components Univention Corporate Server is based on the Debian Linux distribution. There are numerous open source applications integrated in UCS, for example Samba, the authentication service Kerberos, the virtualization software KVM, and Nagios for the monitoring of servers. 
The core and important unique selling point of UCS is the central administration tool "Univention Management Console", which allows the cross-system and cross-location management of IT infrastructures. UCS uses the directory service OpenLDAP to save data for identity and system management. The administration tools are operated via the web-based applications and command-line interfaces. Thanks to the integrated administration service UCS Virtual Machine Manager (UVMM), the administration tools also allow the central administration of virtualized servers and clients, hard drives, CDROM and DVD images as well as the physical systems on which they are operated. The manufacturer goes to great lengths to guarantee possibilities for the integration of UCS in existing IT environments via the use of open standards and supplied connectors. In this way, the integrated tool Active Directory Connection allows the bidirectional synchronisation of the Microsoft directory service Active Directory and the directory service used in UCS, OpenLDAP. In addition, UCS offers various interfaces for manufacturers of application software enabling them to integrate their applications in the UCS management system. Since UCS 3.1, UCS provides with "Univention App Center" an own graphic management component for the installation and deinstallation of UCS components and UCS-certified third-party appliances. The Univention App Center includes, beside Univention solutions, for example, the Open Source groupware solutions Kopano, Open-Xchange, the document management system agorum core, the slack alternative rocket.chat and the dropbox alternatives ownCloud and Nextcloud, or the collaboration solutions ONLYOFFICE and Collabora. References External links Univention on Github Debian-based distributions Enterprise Linux distributions Virtualization-related software for Linux X86-64 Linux distributions Software using the GNU AGPL license Linux distributions
19459800
https://en.wikipedia.org/wiki/.%D0%B1%D0%B3
.бг
The domain name (romanized as .bg; abbreviation of , tr. Bălgarija) is an internationalized country code top-level domain (IDN ccTLD) for Bulgaria. The ASCII DNS name of the domain would be , according to rules of the Internationalizing Domain Names in Applications procedures. It has previously been rejected by ICANN twice, due to its visual similarity to Brazil's .br, but in 2014 an ICANN panel determined that .бг is not confusingly similar to ISO 3166-1 country codes. The panel compared the two Cyrillic characters in several fonts to the Latin br, bt, bs, BT and BF. History On 24 October 2007, UNINET, a Bulgarian association announced the intent to submit an application for creation of the .бг domain. On 23 June 2008, the government of Bulgaria officially announced its intent to operate the domain in a letter from Plamen Vatchkov, chairman of the Bulgarian State Agency for Information Technology and Communication, to Paul Twomey, president and CEO of ICANN, after several months of discussions within the Internet Society – Bulgaria involving senior government ministers. On 18 May 2010 ICANN rejected the proposed domain on the grounds of visual similarities with the Brazilian ccTLD domain . In June 2010 the Minister of Transport, Information Technology and Communications Alexander Tsvetkov confirmed in a radio interview that Bulgaria would file a second request for the same domain. Disapproval of the domain would mean lower interest and usage of Bulgaria's alternative IDNs than currently expected. On 10 January 2011 the Minister of Transport, Information Technology and Communications organized a round-table between all interested parties, where all agreed to continue with the application for .бг A poll has shown that .бгр has the second highest support, after .бг. In March 2011 ICANN rejected .бг a second time. Bulgarian authorities have gone ahead discussing other Cyrillic domains. In 2014 ICANN's Extended Process Similarity Review Panel (EPSRP) approved .бг. On 5 March 2016, the .бг domain was added to the Root Zone. On 25 June 2016 the ICANN board delegated the domain to Imena.BG Plc. As of 12 July 2016, the first Cyrillic domain http://имена.бг is accessible online. See also Proposed top-level domain .bg - Bulgaria's Latin top-level domain .қаз .мкд .рф .срб .укр References Internet in Bulgaria Б
48600
https://en.wikipedia.org/wiki/Sendmail
Sendmail
Sendmail is a general purpose internetwork email routing facility that supports many kinds of mail-transfer and delivery methods, including the Simple Mail Transfer Protocol (SMTP) used for email transport over the Internet. A descendant of the delivermail program written by Eric Allman, Sendmail is a well-known project of the free and open source software and Unix communities. It has spread both as free software and proprietary software. Overview Allman had written the original ARPANET delivermail which shipped in 1979 with 4.0 and 4.1 BSD. He wrote Sendmail as a derivative of delivermail in the early 1980s at UC Berkeley. It shipped with BSD 4.1c in 1983, the first BSD version that included TCP/IP protocols. In 1996, approximately 80% of the publicly reachable mail-servers on the Internet ran Sendmail. More recent surveys have suggested a decline, with 3.64% of mail servers in March 2021 detected as running Sendmail in a study performed by E-Soft, Inc. A previous survey (December 2007 or earlier) reported 24% of mail servers running Sendmail according to a study performed by Mail Radar. Allman designed Sendmail to incorporate great flexibility, but it can be daunting to configure for novices. Standard configuration packages delivered with the source code distribution require the use of the M4 macro language which hides much of the configuration complexity. The configuration defines the site-local mail delivery options and their access parameters, the mechanism of forwarding mail to remote sites, as well as many application tuning parameters. Sendmail supports a variety of mail transfer protocols, including SMTP, DECnet's Mail-11, HylaFax, QuickPage and UUCP. Additionally, Sendmail v8.12 introduced support for milters - external mail filtering programs that can participate in each step of the SMTP conversation. Acquisition by Proofpoint, Inc. Sendmail, Inc was acquired by Proofpoint, Inc. This announcement was released on 1 October 2013. Security Sendmail originated in the early days of the Internet, an era when considerations of security did not play a primary role in the development of network software. Early versions of Sendmail suffered from a number of security vulnerabilities that have been corrected over the years. Sendmail itself incorporated a certain amount of privilege separation in order to avoid exposure to security issues. , current versions of Sendmail, like other modern MTAs, incorporate a number of security improvements and optional features that can be configured to improve security and help prevent abuse. History of vulnerabilities Sendmail vulnerabilities in CERT advisories and alerts: The UNIX-HATERS Handbook dedicated an entire chapter to perceived problems and weaknesses of sendmail. Implementation As of sendmail release 8.12.0 the default implementation of sendmail runs as the Unix user smmsp — the sendmail message submission program. See also List of mail servers Comparison of mail servers Mail delivery agent Mail user agent msmtp Internet messaging platform Morris worm MeTA1 Notes References — This is the Sendmail "bible" containing 1308 pages about Sendmail. It is also known as "The Bat Book", because of the picture on its cover. The 1st Edition was published in November 1993. — A companion to sendmail, 3rd Edition, this book documents the improvements in V8.13 in parallel with its release. — presented at the USENIX Annual Technical Conference External links Sendmail, Inc. Sendmail sources SMTPfeed, SMTP Fast Exploding External Deliverer for Sendmail. Daniel J. 
Bernstein, Internet SMTP server survey, October 2001 Mike Brodbelt, A brief history of mail Message transfer agents Free email server software Free software programmed in C Companies based in Emeryville, California Email server software for Linux 1983 software
15945619
https://en.wikipedia.org/wiki/Computer-assisted%20surgery
Computer-assisted surgery
Computer-assisted surgery (CAS) represents a surgical concept and set of methods, that use computer technology for surgical planning, and for guiding or performing surgical interventions. CAS is also known as computer-aided surgery, computer-assisted intervention, image-guided surgery, digital surgery and surgical navigation, but these are terms that are more or less synonymous with CAS. CAS has been a leading factor in the development of robotic surgery. General principles Creating a virtual image of the patient The most important component for CAS is the development of an accurate model of the patient. This can be conducted through a number of medical imaging technologies including CT, MRI, x-rays, ultrasound plus many more. For the generation of this model, the anatomical region to be operated has to be scanned and uploaded into the computer system. It is possible to employ a number of scanning methods, with the datasets combined through data fusion techniques. The final objective is the creation of a 3D dataset that reproduces the exact geometrical situation of the normal and pathological tissues and structures of that region. Of the available scanning methods, the CT is preferred, because MRI data sets are known to have volumetric deformations that may lead to inaccuracies. An example data set can include the collection of data compiled with 180 CT slices, that are 1 mm apart, each having 512 by 512 pixels. The contrasts of the 3D dataset (with its tens of millions of pixels) provide the detail of soft vs hard tissue structures, and thus allow a computer to differentiate, and visually separate for a human, the different tissues and structures. The image data taken from a patient will often include intentional landmark features, in order to be able to later realign the virtual dataset against the actual patient during surgery. See patient registration. Image analysis and processing Image analysis involves the manipulation of the patients 3D model to extract relevant information from the data. Using the differing contrast levels of the different tissues within the imagery, as examples, a model can be changed to show just hard structures such as bone, or view the flow of arteries and veins through the brain. Diagnostic, preoperative planning, surgical simulation Using specialized software the gathered dataset can be rendered as a virtual 3D model of the patient, this model can be easily manipulated by a surgeon to provide views from any angle and at any depth within the volume. Thus the surgeon can better assess the case and establish a more accurate diagnostic. Furthermore, the surgical intervention will be planned and simulated virtually, before actual surgery takes place (computer-aided surgical simulation [CASS]). Using dedicated software, the surgical robot will be programmed to carry out the planned actions during the actual surgical intervention. Surgical navigation In computer-assisted surgery, the actual intervention is defined as surgical navigation. Using the surgical navigation system the surgeon uses special instruments, which are tracked by the navigation system. The position of a tracked instrument in relation to the patient's anatomy is shown on images of the patient, as the surgeon moves the instrument. The surgeon thus uses the system to 'navigate' the location of an instrument. 
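As a concrete illustration of the image-analysis step described above (the discussion of navigation continues below), the following toy sketch thresholds a CT-like voxel volume so that only hard structures such as bone are retained for rendering; the dimensions and the cutoff value are illustrative assumptions, not parameters of any particular scanner or navigation product:

public class BoneSegmentationSketch {
    public static void main(String[] args) {
        int slices = 180, rows = 512, cols = 512;          // roughly the example data set above (about 140 MB of heap)
        short[][][] volume = new short[slices][rows][cols];
        // ... in practice the volume would be filled from the scanner's image slices ...

        short boneThreshold = 400;                         // hypothetical intensity cutoff for bone
        boolean[][][] bone = new boolean[slices][rows][cols];
        long boneVoxels = 0;
        for (int s = 0; s < slices; s++)
            for (int r = 0; r < rows; r++)
                for (int c = 0; c < cols; c++)
                    if (volume[s][r][c] >= boneThreshold) {
                        bone[s][r][c] = true;
                        boneVoxels++;
                    }
        System.out.println("voxels classified as bone: " + boneVoxels);
    }
}

Real systems use far more elaborate segmentation, but the principle is the same: intensity contrast in the scanned volume is what lets software separate hard from soft tissue for planning and navigation.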
The feedback the system provides of the instrument location is particularly useful in situations where the surgeon cannot actually see the tip of the instrument, such as in minimally invasive surgeries. Robotic surgery Robotic surgery is a term used for correlated actions of a surgeon and a surgical robot (that has been programmed to carry out certain actions during the preoperative planning procedure). A surgical robot is a mechanical device (generally looking like a robotic arm) that is computer-controlled. Robotic surgery can be divided into three types, depending on the degree of surgeon interaction during the procedure: supervisory-controlled, telesurgical, and shared-control. In a supervisory-controlled system, the procedure is executed solely by the robot, which will perform the pre-programmed actions. A telesurgical system, also known as remote surgery, requires the surgeon to manipulate the robotic arms during the procedure rather than allowing the robotic arms to work from a predetermined program. With shared-control systems, the surgeon carries out the procedure with the use of a robot that offers steady-hand manipulations of the instrument. In most robots, the working mode can be chosen for each separate intervention, depending on the surgical complexity and the particularities of the case. Applications Computer-assisted surgery is the beginning of a revolution in surgery. It already makes a great difference in high-precision surgical domains, but it is also used in standard surgical procedures. Computer-assisted neurosurgery Telemanipulators have been used for the first time in neurosurgery, in the 1980s. This allowed a greater development in brain microsurgery (compensating surgeon’s physiological tremor by 10-fold), increased accuracy and precision of the intervention. It also opened a new gate to minimally invasive brain surgery, furthermore reducing the risk of post-surgical morbidity by avoiding accidental damage to adjacent centers. Computer-assisted neurosurgery also includes spinal procedures using navigation and robotics systems. Current navigation systems available include Medtronic Stealth, BrainLab, 7D Surgical, and Stryker; current robotics systems available include Mazor Renaissance, MazorX, Globus Excelsius GPS, and Brainlab Cirq. Computer-assisted oral and maxillofacial surgery Bone segment navigation is the modern surgical approach in orthognathic surgery (correction of the anomalies of the jaws and skull), in temporo-mandibular joint (TMJ) surgery, or in the reconstruction of the mid-face and orbit. It is also used in implantology where the available bone can be seen and the position, angulation and depth of the implants can be simulated before the surgery. During the operation surgeon is guided visually and by sound alerts. IGI (Image Guided Implantology) is one of the navigation systems which uses this technology. Guided Implantology New therapeutic concepts as guided surgery are being developed and applied in the placement of dental implants. The prosthetic rehabilitation is also planned and performed parallel to the surgical procedures. The planning steps are at the foreground and carried out in a cooperation of the surgeon, the dentist and the dental technician. Edentulous patients, either one or both jaws, benefit as the time of treatment is reduced. Regarding the edentulous patients, conventional denture support is often compromised due to moderate bone atrophy, even if the dentures are constructed based on correct anatomic morphology. 
Using cone beam computed tomography, the patient and the existing prosthesis are being scanned. Furthermore, the prosthesis alone is also scanned. Glass pearls of defined diameter are placed in the prosthesis and used as reference points for the upcoming planning. The resulting data is processed and the position of the implants determined. The surgeon, using special developed software, plans the implants based on prosthetic concepts considering the anatomic morphology. After the planning of the surgical part is completed, a CAD/CAM surgical guide for dental placement is constructed. The mucosal-supported surgical splint ensures the exact placement of the implants in the patient. Parallel to this step, the new implant supported prosthesis is constructed. The dental technician, using the data resulting from the previous scans, manufactures a model representing the situation after the implant placement. The prosthetic compounds, abutments, are already prefabricated. The length and the inclination can be chosen. The abutments are connected to the model at a position in consideration of the prosthetic situation. The exact position of the abutments is registered. The dental technician can now manufacture the prosthesis. The fit of the surgical splint is clinically proved. After that, the splint is attached using a three-point support pin system. Prior to the attachment, irrigation with a chemical disinfectant is advised. The pins are driven through defined sheaths from the vestibular to the oral side of the jaw. Ligaments anatomy should be considered, and if necessary decompensation can be achieved with minimal surgical interventions. The proper fit of the template is crucial and should be maintained throughout the whole treatment. Regardless of the mucosal resilience, a correct and stable attachment is achieved through the bone fixation. The access to the jaw can now only be achieved through the sleeves embedded in the surgical template. Using specific burs through the sleeves the mucosa is removed. Every bur used, carries a sleeve compatible to the sleeves in the template, which ensures that the final position is achieved but no further progress in the alveolar ridge can take place. Further procedure is very similar to the traditional implant placement. The pilot hole is drilled and then expanded. With the aid of the splint, the implants are finally placed. After that, the splint can be removed. With the aid of a registration template, the abutments can be attached and connected to the implants at the defined position. No less than a pair of abutments should be connected simultaneously to avoid any discrepancy. An important advantage of this technique is the parallel positioning of the abutments. A radiological control is necessary to verify the correct placement and connection of implant and abutment. In a further step, abutments are covered by gold cone caps, which represent the secondary crowns. Where necessary, the transition of the gold cone caps to the mucosa can be isolated with rubber dam rings. The new prosthesis corresponds to a conventional total prosthesis but the basis contains cavities so that the secondary crowns can be incorporated. The prosthesis is controlled at the terminal position and corrected if needed. The cavities are filled with a self-curing cement and the prosthesis is placed in the terminal position. After the self-curing process, the gold caps are definitely cemented in the prosthesis cavities and the prosthesis can now be detached. 
Excess cement may be removed, and some corrections, such as polishing or underfilling around the secondary crowns, may be necessary. The new prosthesis is fitted using a construction of telescopic double-cone crowns. At the end position, the prosthesis seats onto the abutments to ensure an adequate hold. At the same sitting, the patient receives the implants and the prosthesis. An interim prosthesis is not necessary. The extent of the surgery is kept to a minimum. Due to the application of the splint, a reflection of soft tissues is not needed. The patient experiences less bleeding, swelling and discomfort. Complications such as injury to neighbouring structures are also avoided. Using 3D imaging during the planning phase, communication between the surgeon, dentist and dental technician is greatly facilitated, and any problems can easily be detected and eliminated. Each specialist accompanies the whole treatment, and interaction between them is possible throughout. As the end result is already planned and all surgical intervention is carried out according to the initial plan, the possibility of any deviation is kept to a minimum. Given the effectiveness of the initial planning, the whole treatment duration is shorter than with other treatment procedures. Computer-assisted ENT surgery Image-guided surgery and CAS in ENT commonly consist of navigating preoperative image data such as CT or cone beam CT to assist with locating or avoiding anatomically important regions such as the optic nerve or the opening to the frontal sinuses. For use in middle-ear surgery there has been some application of robotic surgery due to the requirement for high-precision actions. Computer-assisted orthopedic surgery (CAOS) The application of robotic surgery is widespread in orthopedics, especially in routine interventions like total hip replacement or pedicle screw insertion during spinal fusion. It is also useful in pre-planning and guiding the correct anatomical position of displaced bone fragments in fractures, allowing good fixation by osteosynthesis, especially for malrotated bones. Early CAOS systems include the HipNav, OrthoPilot, and Praxim. Recently, mini-optical navigation tools called Intellijoint HIP have been developed for hip arthroplasty procedures. Computer-assisted visceral surgery With the advent of computer-assisted surgery, great progress has been made in general surgery towards minimally invasive approaches. Laparoscopy in abdominal and gynecologic surgery is one of the beneficiaries, allowing surgical robots to perform routine operations such as cholecystectomies or even hysterectomies. In cardiac surgery, shared-control systems can perform mitral valve replacement or ventricular pacing through small thoracotomies. In urology, surgical robots have contributed to laparoscopic approaches for pyeloplasty, nephrectomy and prostatic interventions. Computer-assisted cardiac interventions Applications include atrial fibrillation and cardiac resynchronization therapy. Pre-operative MRI or CT is used to plan the procedure. Pre-operative images, models or planning information can be registered to intra-operative fluoroscopic images to guide procedures. Computer-assisted radiosurgery Radiosurgery is also incorporating advanced robotic systems. CyberKnife is one such system, with a lightweight linear accelerator mounted on a robotic arm. It is guided towards tumor processes, using the skeletal structures as a reference system (Stereotactic Radiosurgery System). 
During the procedure, real-time X-ray imaging is used to accurately position the device before the radiation beam is delivered. The robot can compensate for respiratory motion of the tumor in real time. Advantages CAS starts with the premise of a much better visualization of the operative field, allowing a more accurate preoperative diagnosis and well-defined surgical planning in a preoperative virtual environment. This way, the surgeon can easily assess most of the surgical difficulties and risks and have a clear idea about how to optimize the surgical approach and decrease surgical morbidity. During the operation, computer guidance improves the geometrical accuracy of the surgical gestures and also reduces the redundancy of the surgeon's actions. This significantly improves ergonomics in the operating theatre, decreases the risk of surgical errors, reduces the operating time and improves the surgical outcome. Disadvantages There are several disadvantages of computer-assisted surgery. Many systems cost millions of dollars, making them a large investment even for big hospitals. Some people believe that improvements in technology, such as haptic feedback, increased processor speeds, and more complex and capable software, will increase the cost of these systems. Another disadvantage is the size of the systems. These systems have relatively large footprints, which is an important drawback in today's already crowded operating rooms. It may be difficult for both the surgical team and the robot to fit into the operating room. See also Advanced Simulation Library is a hardware-accelerated multiphysics simulation software References External links
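The real-time respiratory compensation mentioned above amounts, in general terms, to predicting where the target will be a short time ahead so that the beam can be repositioned despite system latency. Below is a deliberately simplified, hypothetical sketch of such a predictor that linearly extrapolates a breathing surrogate signal; it is not the method used by CyberKnife or any other commercial system.

```python
from collections import deque

class MotionPredictor:
    """Toy latency compensator: fits a straight line to the most recent
    surrogate samples (e.g. chest-marker position in mm) and extrapolates
    it 'latency' seconds into the future."""
    def __init__(self, window=10, dt=0.05, latency=0.15):
        self.dt = dt                  # sampling period in seconds
        self.latency = latency        # system delay to compensate for
        self.samples = deque(maxlen=window)

    def update(self, position_mm):
        self.samples.append(position_mm)
        n = len(self.samples)
        if n < 2:
            return position_mm        # not enough history yet
        # Least-squares slope of position vs. time over the window.
        times = [i * self.dt for i in range(n)]
        t_mean = sum(times) / n
        p_mean = sum(self.samples) / n
        num = sum((t - t_mean) * (p - p_mean)
                  for t, p in zip(times, self.samples))
        den = sum((t - t_mean) ** 2 for t in times)
        velocity = num / den
        # Extrapolate ahead by the known latency.
        return position_mm + velocity * self.latency

predictor = MotionPredictor()
for measured in [0.0, 1.2, 2.3, 3.1, 3.6, 3.8]:   # hypothetical breathing trace
    print(round(predictor.update(measured), 2))
```

Practical systems typically also maintain a correlation model linking the external surrogate to the internal target position, refreshed by periodic imaging. 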
54583862
https://en.wikipedia.org/wiki/Department%20of%20Home%20Affairs%20%28Australia%29
Department of Home Affairs (Australia)
The Department of Home Affairs is the Australian Government interior ministry with responsibilities for national security, law enforcement, emergency management, border control, immigration, refugees, citizenship, transport security and multicultural affairs. The portfolio also includes federal agencies such as the Australian Federal Police, Australian Border Force and the Australian Security Intelligence Organisation. The Home Affairs portfolio reports to the Minister for Home Affairs, Karen Andrews, and is led by the Secretary of the Department of Home Affairs, Mike Pezzullo. The Department was officially established on 20 December 2017, building on the former Department of Immigration and Border Protection and bringing policy responsibilities and agencies from the Attorney-General's Department, Department of Infrastructure and Regional Development, Department of the Prime Minister and Cabinet, and Department of Social Services. The Department of Home Affairs is seen as the Australian version of the United Kingdom's Home Office or the United States Department of Homeland Security. History One of the seven inaugural Australian Public Service departments at the federation of Australia was the Department of Home Affairs (1901–16) with wide-ranging responsibilities for public works, elections, census, the public service, pensions, and inter-state relations. This department was followed by the Department of Home and Territories (1916–1928), the Department of Home Affairs (1928–32), the Department of the Interior (1932–39), the Department of the Interior (1939–72), the Department of Home Affairs (1977–80), and the Department of Home Affairs and Environment (1980–84). Prior to the formation of the current Department of Home Affairs, the Attorney-General's Department had responsibility for national security, law enforcement, emergency management as well as border protection alongside the various forms of the Department of Immigration and Citizenship. The proposed establishment of the Department of Home Affairs was announced by Prime Minister Malcolm Turnbull on 18 July 2017 to be headed by Immigration Minister Peter Dutton as the designated Minister for Home Affairs to bring together all national security, border control and law enforcement agencies of the government. The Department was officially stood up on the 20 December 2017 through an Administrative Arrangements Order. The Department combines the national security, law enforcement and emergency management functions of the Attorney-General's Department, the transport security functions of the Department of Infrastructure and Regional Development, the counterterrorism and cybersecurity functions of the Department of the Prime Minister and Cabinet, the multicultural affairs functions of the Department of Social Services, and the entirety of the Department of Immigration and Border Protection. Ministers The ministers of the Home Affairs portfolio were announced on 19 December 2017 by Prime Minister Malcolm Turnbull including a Minister for Law Enforcement and Cybersecurity held by Angus Taylor and a Minister for Citizenship and Multicultural Affairs held by Alan Tudge. 
With the 2018 Liberal Party of Australia leadership spills resulting in a change of Prime Minister, Scott Morrison separated the concurrently held Minister for Immigration and Border Protection office from Peter Dutton, who was also Minister for Home Affairs, and renamed the immigration post the Minister for Immigration, Citizenship and Multicultural Affairs, as a position in the Outer Ministry. The immigration portfolio was elevated back to the cabinet in October 2021. The following are the ministers of the portfolio: Minister for Home Affairs: Karen Andrews Minister for Emergency Management and National Recovery and Resilience: Bridget McKenzie Minister for Immigration, Citizenship, Migrant Services and Multicultural Affairs: Alex Hawke Assistant Minister for Customs, Community Safety and Multicultural Affairs: Jason Wood Portfolio responsibilities The Department is responsible for the following functions: National security policy and operations, including - Countering terrorism policy and coordination Countering foreign interference Countering violent extremism programs Law enforcement policy and operations Immigration and migration, including - border security entry, stay and departure arrangements for non-citizens customs and border control other than quarantine and inspection Multicultural affairs Transport security Cyber policy co-ordination Protective services at Commonwealth establishments and diplomatic and consular premises in Australia Critical infrastructure protection co-ordination Commonwealth emergency management Natural disaster relief, recovery and mitigation policy and financial assistance including payments to the States and Territories and the Australian Government Disaster Recovery Payment Departmental functions Counter-Terrorism The Commonwealth Counter-Terrorism Coordinator and the Centre for Counter-Terrorism Coordination within the Department of Home Affairs (formerly within the Department of the Prime Minister and Cabinet) provide strategic advice and support to the Minister for Home Affairs and the Prime Minister on all aspects of counterterrorism and countering violent extremism policy and co-ordination across government. The Office was created after recommendations from the Review of Australia's Counter-Terrorism Machinery in 2015 in response to the 2014 Sydney hostage crisis. The Commonwealth Counter-Terrorism Coordinator also serves as the Co-Chair or Chair of the Australian and New Zealand Counter-Terrorism Committee and the Joint Counter-Terrorism Board, with the Centre for Counter-Terrorism Coordination providing secretariat support to the Australian Counter-Terrorism Centre and the Australian and New Zealand Counter-Terrorism Committee. Along with the Deputy Counter-Terrorism Coordinator, the Centre for Counter-Terrorism Coordination is also composed of the Counter-Terrorism Operational Coordination and Evaluation Branch, the Counter-Terrorism Strategic Coordination Branch, the Counter-Terrorism Capability Branch, and the Home Affairs Counter-Terrorism Policy Branch. Cyber Security The National Cyber Security Adviser and the Cyber Security Policy Division within the Department of Home Affairs (formerly within the Department of the Prime Minister and Cabinet) are responsible for cyber security policy and the implementation of the Australian Government Cyber Security Strategy. 
The National Cyber Coordinator also ensures effective partnerships between Commonwealth, state and territory governments, the private sector, non-governmental organisations, the research community and the international partners. The National Cyber Coordinator also works closely with the Australian Cyber Security Centre and the Australian Ambassador for Cyber Issues. CERT Australia is the national computer emergency response team responsible for cybersecurity responses and providing cyber security advice and support to critical infrastructure and other systems of national interest. CERT Australia works closely with other Australian Government agencies, international CERTs, and the private sector. It is also a key element in the Australian Cyber Security Centre, sharing information and working closely with ASIO, the Australian Federal Police, the Australian Signals Directorate, the Defence Intelligence Organisation and the Australian Criminal Intelligence Commission. Aviation and Maritime Security The Aviation and Maritime Security Division (formerly the Office of Transport Security within the Department of Infrastructure and Regional Development) is led by the Executive Director of Transport Security and is responsible for aviation security, air cargo security, maritime security, and various transport security operations. Transnational Serious and Organised Crime The Commonwealth Transnational Serious and Organised Crime Coordinator is responsible for policy development and strategic coordination of the disruption of transnational serious organised crime across the Australian Government including the Australian Federal Police, Australian Border Force, Australian Criminal Intelligence Commission, Australian Transaction Reports and Analysis Centre, and state and territory law enforcement agencies. The Coordinator is held concurrently by an Australian Federal Police Deputy Commissioner. Counter Child Exploitation The Australian Centre to Counter Child Exploitation is a whole-of-government initiative within the Australian Federal Police responsible to the Commonwealth Transnational Serious and Organised Crime Coordinator to investigate, disrupt and prosecute child exploitation and online child abuse crimes. Counter Foreign Interference The National Counter Foreign Interference Coordinator is responsible for policy development and strategic coordination of countering foreign interference and counter-espionage to protect the integrity of Australian national security and interests. The Coordinator is responsible for interagency and intergovernmental strategy and coordination to counter coercive, clandestine or deceptive activities undertaken on behalf of foreign powers. Accordingly, the Coordinator acts as an intergovernmental focal point for the Australian Federal Police, the Australian Security Intelligence Organisation, the Department of Foreign Affairs and Trade, the Attorney-General's Department, and elements of the Department of Defence such as the Defence Security and Vetting Service and Australian Defence Force Investigative Service. Critical Infrastructure The Australian Government Critical Infrastructure Centre (CIC) is responsible for whole-of-government co-ordination of critical infrastructure protection and national security risk assessments and advice. 
It was established on 23 January 2017, originally within the Attorney-General's Department, and brings together expertise and capability from across the Australian Government, functioning in close consultation with state and territory governments, regulators, and the private sector. The Centre also supports the Foreign Investment Review Board and brings together staff from across governmental authorities including from the Australian Treasury, the Department of Infrastructure and Regional Development, and the Department of the Environment and Energy. Crisis Coordination The Australian Government Crisis Coordination Centre (CCC) is an all-hazards co-ordination facility, which operates on a 24/7 basis, and supports the Australian Government Crisis Committee (AGCC) and the National Crisis Committee (NCC). The CCC provides whole-of-government all-hazards monitoring and situational awareness for domestic and international events and coordinates Australian Government responses to major domestic incidents. The Crisis Coordination Centre is managed by the Crisis Management Branch of Emergency Management Australia, which was within the Attorney-General's Department before its transfer. Departmental Executive Secretary of Home Affairs Deputy Secretary (Executive) Deputy Secretary (Policy) Deputy Secretary (Corporate and Enabling) / Chief Operating Officer Deputy Secretary (Intelligence and Capability) Deputy Secretary (Immigration and Citizenship Services) Deputy Secretary (Infrastructure, Transport Security and Customs) / Deputy Comptroller-General of Customs Commonwealth Counter-Terrorism Coordinator National Cyber Security Adviser Commonwealth Transnational Serious and Organised Crime Coordinator National Counter Foreign Interference Coordinator Commissioner of the Australian Border Force / Comptroller-General of Customs Commissioner of the Australian Federal Police Director-General of Security Chief Executive Officer of the Australian Criminal Intelligence Commission Chief Executive Officer of the Australian Transaction Reports and Analysis Centre Portfolio agencies Australian Security Intelligence Organisation Australian Federal Police Australian Criminal Intelligence Commission Australian Transaction Reports and Analysis Centre Australian Institute of Criminology Australian Border Force (including the Maritime Border Command, the National Border Targeting Centre and Operation Sovereign Borders) See also Department of Home Affairs (1901–16) Department of Home and Territories (1916–1928) Department of Home Affairs (1928–32) Department of the Interior (1932–39) Department of the Interior (1939–72) Department of Home Affairs (1977–80) Department of Home Affairs and Environment (1980–84) References 2017 establishments in Australia Government departments of Australia Australian intelligence agencies Lists of Australian government agencies Government agencies established in 2017 Law enforcement in Australia Australian criminal law Crime in Australia Public policy in Australia Federal law enforcement agencies of Australia Terrorism in Australia Australia
11992837
https://en.wikipedia.org/wiki/Fujitsu%20Glovia%20Inc.
Fujitsu Glovia Inc.
CrescentOne is a supplier of enterprise resource planning (ERP) software for discrete manufacturing and a wholly owned subsidiary of Fujitsu Limited. The company is best known for its GLOVIA G2 software. Primary markets for GLOVIA G2 are OEMs and Tier 1, 2 and 3 manufacturers in a variety of industries, including aerospace and defense, automotive, electronics, capital equipment, make-to-order (MTO), engineer-to-order (ETO) and high-volume manufacturing. History Founded in 1970 as Xerox Computer Services, Glovia International became a wholly owned subsidiary of Fog Software Inc. in 2021. Its GLOVIA G2 manufacturing ERP software was first launched in 1990 as Xerox Chess, and the next-generation version was released as GLOVIA G2 in 2010. Glovia Services Inc., an El Segundo, California-based software solutions provider, was a wholly owned subsidiary of Glovia International, Inc., and provided SaaS web-based ERP software. In 2015, Glovia International, Inc. changed its name to Fujitsu Glovia, Inc. In 2021, its name changed to CrescentOne Inc. GLOVIA G2 Key areas of the GLOVIA G2 manufacturing ERP solution include product management, manufacturing, financials, customer management, supplier management, project management and business intelligence. Software as a service (SaaS) applications are accessed over the Internet. The company also contributed to auditing and costing the computer systems for the 2008 Summer Olympics. References ERP Vendors Expand Offerings, Make In-roads Into SCM Market Small Business Technology Magazine SAP's mixed-up confusion Hosted ERP Done Right First SaaS Solution for the Business Process Management Global 100 Chart SMB Logistics: Small is the New Big FUJITSU GLOVIA, Inc. Profile External links GLOVIA G2 Web Page Fujitsu Home Page ERP software companies Fujitsu
70061
https://en.wikipedia.org/wiki/First%20Fleet
First Fleet
The First Fleet was a fleet of 11 ships that brought the first European and African settlers to Australia. It was made up of two Royal Navy vessels, three store ships and six convict transports. On 13 May 1787 the fleet, under the command of Captain Arthur Phillip, with over 1,400 people (convicts, marines, sailors, civil officers and free settlers) aboard, left Portsmouth, England, and took a journey of more than 24,000 kilometres (15,000 mi) and over 250 days to eventually arrive in Botany Bay, New South Wales, where a penal colony would become the first European settlement in Australia. History Lord Sandwich, together with the President of the Royal Society, Sir Joseph Banks, the eminent scientist who had accompanied Lieutenant James Cook on his 1770 voyage, was advocating establishment of a British colony in Botany Bay, New South Wales. Banks accepted an offer of assistance from the American Loyalist James Matra in July 1783. Under Banks's guidance, he rapidly produced "A Proposal for Establishing a Settlement in New South Wales" (24 August 1783), with a fully developed set of reasons for a colony composed of American Loyalists, Chinese and South Sea Islanders (but not convicts). The decision to establish a colony in Australia was made by Thomas Townshend, Lord Sydney, Secretary of State for the Home Office. It was taken for two reasons: the ending of transportation of criminals to North America following the American Revolution, and the need for a base in the Pacific to counter French expansion. In September 1786, Captain Arthur Phillip was appointed Commodore of the fleet, which came to be known as the First Fleet and which was to transport the convicts and soldiers to establish a colony at Botany Bay. Upon arrival there, Phillip was to assume the powers of Captain General and Governor in Chief of the new colony. A subsidiary colony was to be founded on Norfolk Island, as recommended by Sir John Call and Sir George Young, to take advantage for naval purposes of that island's native flax (harakeke) and timber. The cost to Britain of outfitting and despatching the Fleet was £84,000 (about £9.6 million, or $19.6 million as of 2015). Ships Royal Naval escort On 25 October 1786 the 20-gun HMS Sirius, lying in the dock at Deptford, was commissioned, and the command given to Phillip. The armed tender HMS Supply, under the command of Lieutenant Henry Lidgbird Ball, was also commissioned to join the expedition. On 15 December, Captain John Hunter was assigned as second captain to Sirius to command in the absence of Phillip, whose presence, it was to be supposed, would be requisite at all times wherever the seat of government in that country might be fixed. HMS Sirius Sirius was Phillip's flagship for the fleet. She had been converted from the merchantman Berwick, built in 1780 for the Baltic trade. She was a 520-ton, sixth-rate vessel, originally armed with ten guns, four six-pounders and six carronades; Phillip had ten more guns placed aboard. HMS Supply Supply was designed in 1759 by shipwright Thomas Slade as a yard craft for the ferrying of naval supplies. Measuring 170 tons, she had two masts and was fitted with four small 3-pounder cannons and six swivel guns. Her armament was substantially increased in 1786 with the addition of four 12-pounder carronades. Convict transports Food and supply transports Ropes, crockery, agricultural equipment and a miscellany of other stores were needed. 
Items transported included tools, agricultural implements, seeds, spirits, medical supplies, bandages, surgical instruments, handcuffs, leg irons and a prefabricated wooden frame for the colony's first Government House. The party had to rely on its own provisions to survive until it could make use of local materials, assuming suitable supplies existed, and grow its own food and raise livestock. Golden Grove The reverend Richard Johnson, chaplain for the colony, travelled on the Golden Grove with his wife and servants. Legacy Scale models of all the ships are on display at the Museum of Sydney. The models were built by ship makers Lynne and Laurie Hadley, after researching the original plans, drawings and British archives. The replicas of Supply, Charlotte, Scarborough, Friendship, Prince of Wales, Lady Penrhyn, Borrowdale, Alexander, Sirius, Fishburn and Golden Grove are made from Western Red or Syrian Cedar. Nine Sydney harbour ferries built in the mid-1980s are named after First Fleet vessels. The unused names are Lady Penrhyn and Prince of Wales. People The majority of the people travelling with the fleet were convicts, all having been tried and convicted in Great Britain, almost all of them in England. Many are known to have come to England from other parts of Great Britain and, especially, from Ireland; at least 14 are known to have come from the British colonies in North America; 12 are identified as black (born in Britain, Africa, the West Indies, North America, India or a European country or its colony). The convicts had committed a variety of crimes, including theft, perjury, fraud, assault, robbery, for which they had variously been sentenced to death, which was then commuted to penal transportation for 7 years, 14 years, or the term of their natural life. Four companies of marines volunteered for service in the colony, these marines made up the New South Wales Marine Corps, under the command of Major Robert Ross, a detachment on board every convict transport. The families of marines also made the voyage. A number of people on the First Fleet kept diaries and journals of their experiences, including the surgeons, sailors, officers, soldiers, and ordinary seamen. There are at least eleven known manuscript Journals of the First Fleet in existence as well as some letters. The exact number of people directly associated with the First Fleet will likely never be established, as accounts of the event vary slightly. A total of 1,420 people have been identified as embarking on the First Fleet in 1787, and 1,373 are believed to have landed at Sydney Cove in January 1788. In her biographical dictionary of the First Fleet, Mollie Gillen gives the following statistics: While the names of all crew members of Sirius and Supply are known, the six transports and three store ships may have carried as many as 110 more seamen than have been identified – no complete musters have survived for these ships. The total number of persons embarking on the First Fleet would, therefore, be approximately 1,530 with about 1,483 reaching Sydney Cove. According to the first census of 1788 as reported by Governor Phillip to Lord Sydney, the non-indigenous population of the colony was 1,030 and the colony also consisted of 7 horses, 29 sheep, 74 swine, 6 rabbits, and 7 cattle. The following statistics were provided by Governor Phillip: The chief surgeon for the First Fleet, John White, reported a total of 48 deaths and 28 births during the voyage. 
The deaths during the voyage included one marine, one marine's wife, one marine's child, 36 male convicts, four female convicts, and five children of convicts. Notable members of First Fleet Officials Captain Arthur Phillip, R.N., Governor of New South Wales Major Robert Ross, Lieutenant Governor and commander of the marines Captain David Collins, Judge Advocate Augustus Alt, Surveyor John White, Principal Surgeon William Balmain, assistant Surgeon Richard Johnson, chaplain Soldiers Lieutenant George Johnston Captain Watkin Tench Lieutenant William Dawes Lieutenant Ralph Clark Sailors Captain John Hunter, commander of HMS Sirius Lieutenant Henry Lidgbird Ball, commander of HMS Supply Lieutenant William Bradley, 1st lieutenant of HMS Sirius Lieutenant Philip Gidley King, commandant of Norfolk Island Arthur Bowes Smyth, ship's surgeon on Lady Penrhyn Lieutenant John Shortland, Agent for Transports John Shortland, son of above, 2nd mate of HMS Sirius Convicts Thomas Barrett, first person executed in the colony Mary Bryant, who with her husband, children and six other convicts escaped the colony and eventually returned to England John Caesar, bushranger Henry Kable, businessman James Martin, part of the escape with Mary Bryant, wrote an autobiography James Ruse, farmer, one of the few in the colony at its establishment Robert Sidaway, baker, opened the first theatre in Sydney James Squire, brewer Voyage Preparing the fleet In September 1786 Captain Arthur Phillip was chosen to lead the expedition to establish a colony in New South Wales. On 15 December, Captain John Hunter was appointed Phillip's second. By now Sirius had been nominated as flagship, with Hunter holding command. The armed tender Supply, under the command of Lieutenant Henry Lidgbird Ball, had also joined the fleet. With Phillip in London awaiting Royal Assent for the bill of management of the colony, the loading and provisioning of the transports was carried out by Lieutenant John Shortland, the agent for transports. On 16 March 1787, the fleet began to assemble at its appointed rendezvous, the Mother Bank, Isle of Wight. It comprised His Majesty's frigate Sirius and the armed tender Supply; three store-ships, Golden Grove, Fishburn and Borrowdale, for carrying provisions and stores for two years; and lastly, six transports: Scarborough and Lady Penrhyn, from Portsmouth; Friendship and Charlotte, from Plymouth; Prince of Wales and Alexander, from Woolwich. On 9 May Captain Phillip arrived in Portsmouth, the next day coming aboard the ships and giving orders to prepare the fleet for departure. Leaving Portsmouth Phillip first tried to get the fleet to sail on 10 May, but owing to a dispute over pay, the sailors of the Fishburn refused to leave until it was resolved. The fleet finally left Portsmouth, England on 13 May 1787. The journey began with fine weather, and thus the convicts were allowed on deck. The Fleet was accompanied by an armed frigate until it left English waters. On 20 May 1787, one convict on Scarborough reported a planned mutiny; those allegedly involved were flogged and two were transferred to Prince of Wales. In general, however, most accounts of the voyage agree that the convicts were well behaved. On 3 June 1787, the fleet anchored at Santa Cruz at Tenerife. Here, fresh water, vegetables and meat were brought on board. Phillip and the chief officers were entertained by the local governor, while one convict tried unsuccessfully to escape. 
On 10 June they set sail to cross the Atlantic to Rio de Janeiro, taking advantage of favourable trade winds and ocean currents. The weather became increasingly hot and humid as the Fleet sailed through the tropics. Vermin, such as rats, and parasites such as bedbugs, lice, cockroaches and fleas, tormented the convicts, officers and marines. Bilges became foul and the smell, especially below the closed hatches, was over-powering. While Phillip gave orders that the bilge-water was to be pumped out daily and the bilges cleaned, these orders were not followed on Alexander and a number of convicts fell sick and died. Tropical rainstorms meant that the convicts could not exercise on deck as they had no change of clothes and no method of drying wet clothing. Consequently, they were kept below in the foul, cramped holds. On the female transports, promiscuity between the convicts, the crew and marines was rampant, despite punishments for some of the men involved. In the doldrums, Phillip was forced to ration the water to three pints a day. The Fleet reached Rio de Janeiro on 5 August and stayed for a month. The ships were cleaned and water taken on board, repairs were made, and Phillip ordered large quantities of food. The women convicts' clothing had become infested with lice and was burnt. As additional clothing for the female convicts had not arrived before the Fleet left England, the women were issued with new clothes made from rice sacks. While the convicts remained below deck, the officers explored the city and were entertained by its inhabitants. A convict and a marine were punished for passing forged quarter-dollars made from old buckles and pewter spoons. The Fleet left Rio de Janeiro on 4 September to run before the westerlies to the Table Bay in southern Africa, which it reached on 13 October. This was the last port of call, so the main task was to stock up on plants, seeds and livestock for their arrival in Australia. The livestock taken on board from Cape Town destined for the new colony included two bulls, seven cows, one stallion, three mares, 44 sheep, 32 pigs, four goats and "a very large quantity of poultry of every kind". Women convicts on Friendship were moved to other transports to make room for livestock purchased there. The convicts were provided with fresh beef and mutton, bread and vegetables, to build up their strength for the journey and maintain their health. The Dutch colony of Cape Town was the last outpost of European settlement which the fleet members would see for years, perhaps for the rest of their lives. "Before them stretched the awesome, lonely void of the Indian and Southern Oceans, and beyond that lay nothing they could imagine." Assisted by the gales in the "Roaring Forties" latitudes below the 40th parallel, the heavily laden transports surged through the violent seas. In the last two months of the voyage, the Fleet faced challenging conditions, spending some days becalmed and on others covering significant distances; Friendship travelled 166 miles one day, while a seaman was blown from Prince of Wales at night and drowned. Water was rationed as supplies ran low, and the supply of other goods including wine ran out altogether on some vessels. Van Diemen's Land was sighted from Friendship on 4 January 1788. A freak storm struck as they began to head north around the island, damaging the sails and masts of some of the ships. On 25 November, Phillip had transferred to Supply. 
With Alexander, Friendship and Scarborough, the fastest ships in the Fleet, which were carrying most of the male convicts, Supply hastened ahead to prepare for the arrival of the rest. Phillip intended to select a suitable location, find good water, clear the ground, and perhaps even have some huts and other structures built before the others arrived. This was a planned move, discussed by the Home Office and the Admiralty prior to the Fleet's departure. However, this "flying squadron" reached Botany Bay only hours before the rest of the Fleet, so no preparatory work was possible. Supply reached Botany Bay on 18 January 1788; the three fastest transports in the advance group arrived on 19 January; slower ships, including Sirius, arrived on 20 January. This was one of the world's greatest sea voyages – eleven vessels carrying about 1,487 people and stores had travelled for 252 days for more than 15,000 miles (24,000 km) without losing a ship. Forty-eight people died on the journey, a death rate of just over three per cent. Arrival in Australia It was soon realised that Botany Bay did not live up to the glowing account that the explorer Captain James Cook had provided. The bay was open and unprotected, the water was too shallow to allow the ships to anchor close to the shore, fresh water was scarce, and the soil was poor. First contact was made with the local indigenous people, the Eora, who seemed curious but suspicious of the newcomers. The area was studded with enormously strong trees. When the convicts tried to cut them down, their tools broke and the tree trunks had to be blasted out of the ground with gunpowder. The primitive huts built for the officers and officials quickly collapsed in rainstorms. The marines had a habit of getting drunk and not guarding the convicts properly, whilst their commander, Major Robert Ross, drove Phillip to despair with his arrogant and lazy attitude. Crucially, Phillip worried that his fledgling colony was exposed to attack from those described as "Aborigines" or from foreign powers. Although his initial instructions were to establish the colony at Botany Bay, he was authorised to establish the colony elsewhere if necessary. On 21 January, Phillip and a party which included John Hunter, departed the Bay in three small boats to explore other bays to the north. Phillip discovered that Port Jackson, about 12 kilometres to the north, was an excellent site for a colony with sheltered anchorages, fresh water and fertile soil. Cook had seen and named the harbour, but had not entered it. Phillip's impressions of the harbour were recorded in a letter he sent to England later: "the finest harbour in the world, in which a thousand sail of the line may ride in the most perfect security ...". The party returned to Botany Bay on 23 January. On the morning of 24 January, the party was startled when two French ships, the Astrolabe and the Boussole, were seen just outside Botany Bay. This was a scientific expedition led by Jean-François de La Pérouse. The French had expected to find a thriving colony where they could repair ships and restock supplies, not a newly arrived fleet of convicts considerably more poorly provisioned than themselves. There was some cordial contact between the French and British officers, but Phillip and La Pérouse never met. The French ships remained until 10 March before setting sail on their return voyage. They were not seen again and were later discovered to have been shipwrecked off the coast of Vanikoro in the present-day Solomon Islands. 
On 26 January 1788, the Fleet weighed anchor and sailed to Port Jackson. The site selected for the anchorage had deep water close to the shore, was sheltered, and had a small stream flowing into it. Phillip named it Sydney Cove, after Lord Sydney, the British Home Secretary. This date is celebrated as Australia Day, marking the beginning of British settlement. The British flag was planted and formal possession taken. This was done by Phillip and some officers and marines from Supply, with the remainder of Supplys crew and the convicts observing from on board ship. The remaining ships of the Fleet did not arrive at Sydney Cove until later that day. Writer and art critic Robert Hughes popularized the idea in his 1986 book The Fatal Shore that an orgy occurred upon the unloading of the convicts, though more modern historians regard this as untrue, since the first reference to any such indiscretions is as recent as 1963. First contact The First Fleet encountered Indigenous Australians when they landed at Botany Bay. The Cadigal people of the Botany Bay area witnessed the Fleet arrive and six days later the two ships of French explorer La Pérouse, the Astrolabe and the Boussole, sailed into the bay. When the Fleet moved to Sydney Cove seeking better conditions for establishing the colony, they encountered the Eora people, including the Bidjigal clan. A number of the First Fleet journals record encounters with Aboriginal people. Although the official policy of the British Government was to establish friendly relations with Aboriginal people, and Arthur Phillip ordered that the Aboriginal people should be well treated, it was not long before conflict began. The colonists did not sign treaties with the original inhabitants of the land. Between 1790 and 1810, Pemulwuy of the Bidjigal clan led the local people in a series of attacks against the colonists. After January 1788 The ships of the First Fleet mostly did not remain in the colony. Some returned to England, while others left for other ports. Some remained at the service of the Governor of the colony for some months: some of these were sent to Norfolk Island where a second penal colony was established. 1788 15 February – HMS Supply sails for Norfolk Island carrying a small party to establish a settlement. 5/6 May – Charlotte, Lady Penrhyn and Scarborough set sail for China. 14 July – Borrowdale, Alexander, Friendship and Prince of Wales set sail to return to England. 2 October – Golden Grove sets sail for Norfolk Island with a party of convicts, returning to Port Jackson 10 November, while HMS Sirius sails for Cape of Good Hope for supplies. 19 November – Fishburn and Golden Grove set sail for England. This means that only HMS Supply now remains in Sydney cove. 1789 23 December – carrying stores for the colony strikes an iceberg and is forced back to the Cape. It never reaches the colony in New South Wales. 1790: 19 March – HMS Sirius is wrecked off Norfolk Island. 17 April – HMS Supply sent to Batavia, Dutch East Indies, for emergency food supplies. 3 June – Lady Juliana, the first of six vessels of the Second Fleet, arrives in Sydney cove. The remaining five vessels of the Second Fleet arrive in the ensuing weeks. 19 September – HMS Supply returns to Sydney having chartered the Dutch vessel Waaksamheyd to accompany it carrying stores. 
Legacy Last survivors On Sat 26 January 1842 The Sydney Gazette and New South Wales Advertiser reported "The Government has ordered a pension of one shilling per diem to be paid to the survivors of those who came by the first vessel into the Colony. The number of these really 'old hands' is now reduced to three, of whom, two are now in the Benevolent Asylum, and the other is a fine hale old fellow, who can do a day's work with more spirit than many of the young fellows lately arrived in the Colony." The names of the three recipients were not given, and is academic as the notice turned out to be false, not having been authorised by the Governor. There were at least 25 persons still living who had arrived with the First Fleet, including several children born on the voyage. A number of these contacted the authorities to arrange their pension and all received a similar reply to the following received by John McCarty on 14 Mar 1842 "I am directed by His Excellency the Governor to inform you, that the paragraph which appeared in the Sydney Gazette relative to an allowance to the persons of the first expedition to New South Wales was not authorised by His Excellency nor has he any knowledge of such an allowance as that alluded to". E. Deas Thomson, Colonial Secretary. Following is a list of persons known to be living at the time the pension notice was published, in order of their date of death. At this time New South Wales included the whole Eastern seaboard of present day Australia except for Van Diemen's Land which was declared a separate colony in 1825 and achieved self governing status in 1855-6. This list does not include marines or convicts who returned to England after completing their term in NSW and who may have lived past January 1842. Rachel Earley: (or Hirley), convict per Friendship and Prince of Wales died 27 April 1842 at Kangaroo Point, VDL (said to be aged 75). Roger Twyfield: convict per Friendship died 30 April 1842 at Windsor, aged 98 (NSW reg as Twifield). Thomas Chipp: marine private per Friendship died 3 July 1842, buried Parramatta, aged 81 (NSW Reg age 93). Anthony Rope: convict per Alexander died 20 Apr 1843 at Castlereagh NSW, aged 84 (NSW Reg age 89). William Hubbard: Hubbard was convicted in the Kingston Assizes in Surrey, England, on 24 March 1784 for theft. He was transported to Australia on Scarborough in the First Fleet. He married Mary Goulding on 19 December 1790 in Rose Hill. In 1803 he received a land grant of 70 acres at Mulgrave Place. He died on 18 May 1843 at the Sydney Benevolent Asylum. His age was given as 76 when he was buried at Christ Church St. Lawrence, Sydney on 22 May 1843. Thomas Jones: convict per Alexander died Oct 1843 in NSW, aged 87. John Griffiths: marine private per Friendship who died 5 May 1844 at Hobart, aged 86. Benjamin Cusely: marine private per Friendship died 20 Jun 1845 at Windsor/Wilberforce, aged 86 (said to be 98). Henry Kable: convict per Friendship died 16 Mar 1846 at Windsor, aged 84. John McCarty: McCarty was a marine private who sailed on Friendship. McCarty claimed to have been born in Killarney, County Kerry, Ireland, circa Christmas 1745. He first served in the colony of New South Wales, then at Norfolk Island where he took up a land grant of 60 acres (Lot 71). He married first fleet convict Ann Beardsley on Norfolk Island in November 1791 after his marine discharge a month earlier. 
In 1808, at the impending closure of the Norfolk Island settlement, he resettled in Van Diemen's Land later taking a land grant (80 acres at Herdsman's Cove Melville) in lieu of the one forfeited on Norfolk Island. The last few years of his life were spent at the home of Mr. William H. Budd, at the Kinlochewe Inn near Donnybrook, Victoria. McCarty was buried on local land 24 July 1846, six months past his 100 birthday, although this is very likely an exaggerated age. John Alexander Herbert: convict per Scarborough died 19 Nov 1846 at Westbury Van Diemen's Land, aged 79. Robert Nunn: convict per Scarborough died 20 Nov 1846 at Richmond, aged 86. John Howard: convict per Scarborough died 1 Jan 1847 at Sydney Benevolent Asylum, aged 94. John Limeburner: The South Australian Register reported, in an article dated Wednesday 3 November 1847: "John Limeburner, the oldest colonist in Sydney, died in September last, at the advanced age of 104 years. He helped to pitch the first tent in Sydney, and remembered the first display of the British flag there, which was hoisted on a swamp oak-tree, then growing on a spot now occupied as the Water-Police Court. He was the last of those called the 'first-fleeters' (arrivals by the first convict ships) and, notwithstanding his great age, retained his faculties to the last." John Limeburner was a convict on Charlotte. He was convicted on 9 July 1785 at New Sarum, Wiltshire of theft of a waistcoat, a shirt and stockings. He married Elizabeth Ireland in 1790 at Rosehill and together they establish a 50-acre farm at Prospect. He died at Ashfield 4 September 1847 and is buried at St John's, Ashfield, death reg. as Linburner aged 104. John Jones: Jones was a marine private on the First Fleet and sailed on Alexander. He is listed in the N.S.W. 1828 Census as aged 82 and living at the Sydney Benevolent Asylum. He is said to have died at the Benevolent Asylum in 1848. Jane/Jenny Rose: (nee Jones), child of convict Elizabeth Evans per Lady Penrhyn died 29 Aug 1849 at Wollongong, aged 71. Samuel King: King was a scribbler (a worker in a scribbling mill) before he became a marine. He was a marine with the First Fleet on board . He shipped to Norfolk Island on Golden Grove in September 1788, where he lived with Mary Rolt, a convict who arrived with the First Fleet on Prince of Wales. He received a grant of 60 acres (Lot No. 13) at Cascade Stream in 1791. Mary Rolt returned to England on Britannia in October 1796. King was resettled in Van Diemen's Land, boarding City of Edinburgh on 3 September 1808, and landed in Hobart on 3 October. He married Elizabeth Thackery on 28 January 1810. He died on 21 October 1849 at 86 years of age and was buried in the Wesleyan cemetery at Lawitta Road, Back River. Mary Stevens: (nee Phillips), convict per Charlotte and Prince of Wales died 22 Jan 1850 at Longford Van Diemen's Land, aged 81. John Small: Convicted 14 March 1785 at the Devon Lent Assizes held at Exeter for Robbery King's Highway. Sentenced to hang, reprieved to 7 years' transportation. Arrived on Charlotte in First Fleet 1788. Certificate of freedom 1792. Land Grant 1794, 30 acre "Small's Farm" at Eastern Farms (Ryde). Married October 1788 Mary Parker also a First Fleet convict who arrived on Lady Penrhyn. John Small died on 2 October 1850 at age of 90 years. Edward Smith: aka Beckford, convict per Scarborough died 2 Jun 1851 at Balmain, aged 92. Ann Forbes: (m.Huxley), convict per Prince of Wales died 29 Dec 1851, Lower Portland NSW, aged 83. Henry Kable Jnr: aka Holmes, b. 
1786 in Norwich Castle prison, son of convict Susannah Holmes per Friendship and Charlotte, died 13 May 1852 at Picton, New South Wales aged 66. Lydia Munro: (m.Goodwin) per Prince of Wales died 29 Jun 1856 at Hobart, reg as Letitia Goodwin, aged 85. Elizabeth Thackery: Elizabeth "Betty" King (née Thackery) was tried and convicted of theft on 4 May 1786 at Manchester Quarter Sessions, and sentenced to seven years' transportation. She sailed on Friendship, but was transferred to Charlotte at the Cape of Good Hope. She was shipped to Norfolk Island on in 1790 and lived there with James Dodding. In August 1800 she bought 10 acres of land from Samuel King at Cascade Stream. Elizabeth and James were relocated to Van Diemen's Land in December 1807 but parted company sometime afterwards. On 28 January 1810 Elizabeth married "First Fleeter" Private Samuel King (above) and lived with him until his death in 1849. Betty King died in New Norfolk, Tasmania on 7 August 1856, aged 89 years. She is buried in the churchyard of the Methodist Chapel, Lawitta Road, Back River, next to her husband, and the marked grave bears a First Fleet plaque. John Harmsworth: marine's child b.1788 per Prince of Wales died 21 Jul 1860 at Clarence Plains Tasmania, aged 73 years. Smallpox Historians have disagreed over whether those aboard the First Fleet were responsible for introducing smallpox to Australia's indigenous population, and if so, whether this was the consequence of deliberate action. In 1914, J. H. L. Cumpston, director of the Australian Quarantine Service put forward the hypothesis that smallpox arrived in Australia with First Fleet. Some researchers have argued that any such release may have been a deliberate attempt to decimate the indigenous population. Hypothetical scenarios for such an action might have included: an act of revenge by an aggrieved individual, a response to attacks by indigenous people, or part of an orchestrated assault by the New South Wales Marine Corps, intended to clear the path for colonial expansion. Seth Carus, a former Deputy Director of the National Defense University in the United States wrote in 2015 that there was a "strong circumstantial case supporting the theory that someone deliberately introduced smallpox in the Aboriginal population." Other historians have disputed the idea that there was a deliberate release of smallpox virus and/or suggest that it arrived with visitors to Australia other than the First Fleet. It has been suggested that live smallpox virus may have been introduced accidentally when Aboriginal people came into contact with variolous matter brought by the First Fleet for use in anti-smallpox inoculations. In 2002, historian Judy Campbell offered a further theory, that smallpox had arrived in Australia through contact with fishermen from Makassar in Indonesia, where smallpox was endemic. In 2011, Macknight stated: "The overwhelming probability must be that it [smallpox] was introduced, like the later epidemics, by [Indonesian] trepangers ... and spread across the continent to arrive in Sydney quite independently of the new settlement there." There is a fourth theory, that the 1789 epidemic was not smallpox but chickenpox – to which indigenous Australians also had no inherited resistance – that happened to be affecting, or was carried by, members of the First Fleet. This theory has also been disputed. 
Commemoration Garden After Ray Collins, a stonemason, completed years of research into the First Fleet, he sought approval from about nine councils to construct a commemorative garden in recognition of these immigrants. Liverpool Plains Shire Council was ultimately the only council to accept his offer to supply the materials and construct the garden free of charge. The site chosen was a disused caravan park on the banks of Quirindi Creek at Wallabadah, New South Wales. In September 2002 Collins commenced work on the project. Additional support was later provided by Neil McGarry in the form of some signs and the council contributed $28,000 for pathways and fencing. Collins hand-chiselled the names of all those who came to Australia on the eleven ships in 1788 on stone tablets along the garden pathways. The stories of those who arrived on the ships, their life, and first encounters with the Australian country are presented throughout the garden. On 26 January 2005, the First Fleet Garden was opened as the major memorial to the First Fleet immigrants. Previously the only other specific memorial to the First Fleeters was an obelisk at Brighton-Le-Sands, New South Wales. The surrounding area has a barbecue, tables, and amenities. See also Australian frontier wars Convicts in Australia Convict women in Australia European exploration of Australia History of Australia (1788–1850) History of Indigenous Australians Journals of the First Fleet Penal transportation Prehistory of Australia Second Fleet (Australia) Terra nullius Third Fleet (Australia) References Citations Bibliography Further reading Fiction James Talbot, The Thief Fleet, 2012, Colleen McCullough, Morgan's Run, Timberlake Wertenbaker, Our Country's Good, Thomas Keneally, The Playmaker, William Stuart Long, The Exiles, (hardcover, 1984) (paperback, 1979) (mass market paperback, 1981) William Stuart Long, The Settlers, (hardcover, 1980) (paperback, 1980) (mass market paperback, 1982) William Stuart Long, The Traitors, (hardcover, 1984) (mass market paperback, 1981) D. Manning Richards, Destiny in Sydney: An epic novel of convicts, Aborigines, and Chinese embroiled in the birth of Sydney, Australia, Marcus Clarke, For the Term of his Natural Life. Melbourne, 1874 External links Complete list of the convicts of the First Fleet Searchable database of First Fleet convicts The First Fleet – State Library of NSW State Library of NSW – First Fleet Re-enactment Company records, 1978–1990: Presented by Trish and Wally Franklin State Library of NSW – First Fleet Re-enactment Voyage 1987–1988 The First Fleet (1788) and The Re-enactment Fleet (1988) Some Untold History – Dr Wally Franklin and Dr Trish Franklin: An address to celebrate the 229th Anniversary of the sailing of the First Fleet from Portsmouth on 13th May 1787 Project Gutenberg Australia: The First Fleet Convict Records Convict Transportation Registers Database (Online) University of Queensland. Accessed 9 February 2015. "St. John's First Fleeters" in Michaela Ann Cameron (ed.), The St. John's Cemetery Project, (2018): an edited collection of biographies and profiles on the 50+ First Fleeters buried in Australia's oldest surviving European cemetery: St John's Cemetery, Parramatta 1788 in Australia History of immigration to Australia Convictism in Australia History of New South Wales Maritime history of Australia
25473664
https://en.wikipedia.org/wiki/WorkingPoint
WorkingPoint
WorkingPoint is a web-based application providing a suite of small business management tools. It is designed to offer a single point-of-access for all business management needs while offering a user-friendly interface. WorkingPoint’s functionalities include double-entry bookkeeping, contact management, inventory management, invoicing and bill & expense management. Company WorkingPoint, formerly Netbooks Inc, is a privately held corporation based in San Francisco, CA. The company is backed by CMEA Capital, also based in San Francisco. WorkingPoint has about ten employees and is led by CEO Tate Holt and Chairman Tom Proulx. Proulx is a co-founder of Intuit and an original author of that company’s Quicken personal finance software. The company was founded in 2007 under its original name Netbooks by co-creator Ridgely Evers. Evers set out to design a product that was more user-friendly than Intuit’s Quickbooks, which he also co-created. In mid-2009 the company officially rebranded itself and its flagship product “WorkingPoint”. The purpose of the re-branding was to disassociate the company from the product category of small laptops also known as netbooks. Social Media Presence WorkingPoint maintains a daily blog geared toward small business owners and managers. Each week the blog is updated with 3 WorkingPoint product feature or “how-to” posts, 2 subscriber company profiles, and 2 small business coaching posts. The company also maintains a Twitter page and a Facebook page. Product Description (Free Version) WorkingPoint allows businesses to invoice up to five customers (repeatedly) and provides account access for up to two individual users free of charge. Online Invoicing WorkingPoint allows users to create customized quotes and invoices online. The invoices can be used to bill customers via email or hardcopy post. WorkingPoint compiles the info from these invoices so users can track customer payments, inventory costs, shipping charges, accounts receivable and sales taxes. Users can also manage customer overpayments, provide customer loyalty discounts, and view a customer invoice history. Bill & Expense Management Users can track their bills and expenses by entering info into the WorkingPoint interface. WorkingPoint compiles this info so users can track categorized expenses, accounts paid, accounts payable, and vendor purchase history. The interface also allows users to add to their inventory while entering billing info. Double-Entry Bookkeeping WorkingPoint automatically records entries under the double-entry bookkeeping system (also known as debits and credits) when the user completes invoicing and expense forms. Users can view transactions in general ledger format and perform closing entries if necessary. This functionality is designed for users who do not have an accounting background. Business Contact Management WorkingPoint provides an interface for users to manage their customer and vendor contact info. The software automatically tracks the user’s relationship with contacts, so users can track a contact’s sales and purchase history. Contacts can be imported and exported via numerous email clients including Microsoft Outlook, Yahoo! Mail, Google Gmail, and Mac Address Book. Inventory Management The software automatically adjusts inventory quantities after every purchase and sale. Users can track their current inventory quantity, average cost of inventory on-hand, cost of goods sold (COGS) and top-selling products. Users can also make manual adjustments to inventory when necessary. 
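To make the double-entry convention described above concrete, here is a minimal, hypothetical sketch of how an invoice and its later payment could be posted as balanced debit and credit entries. It illustrates the bookkeeping principle only; it is not WorkingPoint's data model or API.

```python
from dataclasses import dataclass, field

@dataclass
class Ledger:
    """A tiny general ledger: every transaction posts equal debits and credits."""
    entries: list = field(default_factory=list)

    def post(self, description, debits, credits):
        # debits and credits are lists of (account, amount) pairs.
        if round(sum(a for _, a in debits), 2) != round(sum(a for _, a in credits), 2):
            raise ValueError("transaction is not balanced")
        self.entries.append((description, debits, credits))

    def balance(self, account):
        total = 0.0
        for _, debits, credits in self.entries:
            total += sum(a for acc, a in debits if acc == account)
            total -= sum(a for acc, a in credits if acc == account)
        return total

ledger = Ledger()
# A $500 customer invoice: accounts receivable is debited, revenue credited.
ledger.post("Invoice #1001",
            debits=[("Accounts Receivable", 500.00)],
            credits=[("Sales Revenue", 500.00)])
# The customer later pays: cash is debited, the receivable credited.
ledger.post("Payment for #1001",
            debits=[("Cash", 500.00)],
            credits=[("Accounts Receivable", 500.00)])

print(ledger.balance("Accounts Receivable"))  # 0.0 once the invoice is paid
print(ledger.balance("Cash"))                 # 500.0
```

The balancing check in post() captures the essence of double-entry bookkeeping: every transaction leaves total debits equal to total credits, so account balances and the resulting reports stay internally consistent. 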
Financial Reporting Users can view a balance sheet, income statement, or cash flow statement pertaining to their business. The software automatically manages accruals to produce the balance sheet and income statement. Users can choose a date range from which to draw any of these reports. Financial reports can be converted to PDF format or exported (with formulas intact) to OpenOffice or Microsoft Excel. Cash Management WorkingPoint enables users to monitor cash balances in their bank accounts. The software automatically tracks cash inflows and outflows when users manage their accounts payable and accounts receivable. Business Dashboard The Business Dashboard graphically displays key real-time business data. Users can customize the Dashboard to display data of their choosing. Online Company Profile Users can create an online company profile in order to have a presence on the Internet and as a basis for participation in WorkingPoint’s small business community features. Public profiles are featured in the WorkingPoint Company Directory and can be viewed externally using the URL format: https://businessname.workingpoint.com. Product Description (Premium Version) The premium version of WorkingPoint costs $10 per month. It includes all of the functionality of the free version but allows unlimited invoicing and account access. It also offers the following functions: 1099 Tax Reporting, invoice payment collection via PayPal, Email Marketing via VerticalResponse, and the Premium Reports & Accounting Package. 1099 Tax Reporting Users can identify qualifying companies and individuals for IRS Form 1099 or IRS Form 1096 reporting. WorkingPoint automatically tracks payments made to these companies and individuals. Users can then generate 1099 reports for distribution. Premium Reports & Accounting Package This includes a Daily Operating Report providing users with sales and cash flow information, customizable accounts categorization, and cash flow statements using the indirect method of reporting. Invoice Payment Collection via PayPal Users can collect payment on their invoices via PayPal. Email Marketing via VerticalResponse The WorkingPoint premium package includes 500 email credits with the email marketing firm VerticalResponse. References http://www.workingpoint.com http://smallbiztechnology.com/archive/2009/08/is-quickbooks-too-complicated.html External links Workingpoint WorkingPoint Blog CMEA Capital Companies based in San Francisco Software companies based in the San Francisco Bay Area American companies established in 2007 Web applications Software companies established in 2007 Software companies of the United States
36465707
https://en.wikipedia.org/wiki/Audio%20Random%20Access
Audio Random Access
Audio Random Access (commonly abbreviated to ARA) is an extension for audio plug-in interfaces, such as AU, VST and RTAS, allowing them to exchange a greater amount of audio information with digital audio workstation (DAW) software. It was developed in a collaboration between Celemony Software and PreSonus. Functionality ARA increases the amount of communication possible between DAW software and a plug-in, allowing them to exchange important information, such as audio data, tempo, pitch, and rhythm, for an entire song, rather than just at the moment of playback. This increased amount of information exchange, and availability of data from other points in time, removes the need for audio material to be transferred to & from the plug-in, allowing that plug-in to be used as a more closely integrated part of the DAW's overall interface. History ARA was developed as a joint effort between Celemony Software and PreSonus, driven by the desire to increase the level of integration between Celemony's Melodyne plug-in and the DAWs using it. It was first published in October 2011 and released as part of PreSonus' Studio One DAW (version 2) and Melodyne (Editor, Assistant and Essential versions 1.3). Version 2 of ARA was announced during NAMM in January 2018, introducing new features such as the simultaneous editing of multiple tracks, transfer of chord track information, and undo synchronization with the DAW. DAWs which use ARA version 2 are not automatically backwards compatible with plug-ins using version 1. The first DAWs to support ARA version 2 were Logic Pro X (version 10.4, released in January 2018) and Studio One (version 4, released in May 2018). ARA implementation To allow software manufacturers to support ARA, a Software Development Kit has been published by Celemony. Current software products which support ARA include the following. Digital audio workstations Audio plug-ins See also Celemony Software PreSonus Studio One (software) Audio plug-in References External links Celemony's Tech Talk video ARA in Studio One ARA in Sonar X3 ARA 2 in Apple Logic Pro X 10.4 Auto-Align Post 2 Music software plugin architectures
77234
https://en.wikipedia.org/wiki/Acoetes
Acoetes
Acoetes was the name of four men in Greek and Roman mythology. Acoetes, a fisherman who helped the god Bacchus. Acoetes, father to the Trojan priest Laocoön, who warned about the Trojan Horse. As the brother of Anchises, he was therefore the son of King Capys of Dardania and Themiste, daughter of King Ilus of Troad. Acoetes, an aged man who was the former squire of Evander in Arcadia, before the latter emigrated to Italy. Acoetes, a soldier in the army of the Seven against Thebes. When this army fought the Thebans for the first time on the plain, a fierce battle took place at the gates of the city. During these fights Agreus, from Calydon, cut off the arm of the Theban Phegeus. The severed limb fell to the ground while the hand still held the sword. Acoetes, who came forward, was so terrified of that arm that he struck it with his own sword. See also Notes References Gaius Julius Hyginus, Fabulae from The Myths of Hyginus translated and edited by Mary Grant. University of Kansas Publications in Humanistic Studies. Online version at the Topos Text Project. Publius Papinius Statius, The Thebaid translated by John Henry Mozley. Loeb Classical Library Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1928. Online version at the Topos Text Project. Publius Papinius Statius, The Thebaid. Vol I–II. John Henry Mozley. London: William Heinemann; New York: G.P. Putnam's Sons. 1928. Latin text available at the Perseus Digital Library. Publius Vergilius Maro, Aeneid. Theodore C. Williams. trans. Boston. Houghton Mifflin Co. 1910. Online version at the Perseus Digital Library. Publius Vergilius Maro, Bucolics, Aeneid, and Georgics. J. B. Greenough. Boston. Ginn & Co. 1900. Latin text available at the Perseus Digital Library. Trojans Characters in the Aeneid Characters in Greek mythology Characters in Roman mythology
31328702
https://en.wikipedia.org/wiki/QSR%20International
QSR International
QSR International is a qualitative research software developer based in Melbourne, Australia, with offices in the United Kingdom and the United States. QSR International is the developer of the qualitative data analysis (QDA) software products NVivo, NVivo Server, Interpris and XSight. These are designed to help qualitative researchers organize and analyze non-numerical or unstructured data. Qualitative research is used to gain insight into people's attitudes, behaviours, value systems, concerns, motivations, aspirations, culture or lifestyles. It is used to inform business decisions, policy formation, communication and research. Focus groups, in-depth interviews, content analysis and semiotics are among the many formal approaches that are used, but qualitative research also involves the analysis of any unstructured material, including customer feedback surveys, reports or media clips. History NUD*IST, which stands for 'Non-numerical Unstructured Data Indexing, Searching and Theorizing', was first developed by Tom Richards at La Trobe University in Melbourne in 1981 as software to support social research by Lyn Richards. Tom and Lyn Richards went on to form QSR International in 1995. QSR released the first version of NVivo, designed to guide researchers from questions to answers, in 1999. John Owen became CEO of QSR International in 2001. The company released the first edition of XSight, enabling researchers to work through information and get to analysis faster, in 2004. In 2005, Professor Lyn Richards' book 'Handling Qualitative Data: A practical guide' was first published. In 2006, QSR International became a Microsoft Gold Partner and released NVivo 7 and XSight 2.0. XSight 2.0 was incorporated into the syllabuses of Australia's Victoria University and Malaysia's Sunway University College. In the same year, XSight was also incorporated into the syllabus of the University of Southampton's new MSc Marketing Analytics degree. In 2007 QSR partnered with Hulinks to deliver NVivo 7 in Japanese - the first Japanese-language qualitative research software. In the same year, Dr. Patricia Bazeley's book, Qualitative Data Analysis with NVivo, was first published. QSR released NVivo 8 in March 2008 and in August 2008 released Simplified Chinese and Spanish versions of NVivo 8. A Japanese version of NVivo 8 followed in November of the same year. French- and German-language versions of NVivo 8 were released in April 2009. In 2009 the second edition of Professor Lyn Richards' text, 'Handling Qualitative Data: A practical guide', was published. In 2010 QSR released NVivo 9 and NVivo Server 9. In 2011 QSR released an update to NVivo 9 featuring five new languages: Spanish, French, German, Portuguese and Simplified Chinese. A separate Japanese-language version of NVivo 9 was made available too. The release also included new support for Framework analysis, a research methodology developed by the National Centre for Social Research (NatCen). In 2012 QSR released NVivo 10, giving users the ability to capture and analyze web and social media data. In 2013 NVivo 10 was released in French, German, Portuguese, Spanish and Simplified Chinese. A separate Japanese version was also released. In September 2013 QSR International announced a partnership with SurveyMonkey. In March 2014 QSR International announced a partnership with TranscribeMe. In March 2014 QSR International released NVivo for Mac Beta, the first true Mac application for qualitative data analysis.
In June 2014 QSR International released NVivo for Mac commercially. In September 2014 an update for NVivo for Mac was released. This gave users the ability to capture and import web pages using NCapture, run matrix coding queries and use text-to-speech options. In September 2014 QSR International released NVivo 10 for Windows Service Pack 6, making it easier to work with content from Evernote and providing improved functionality for project recovery. Kerri Lee Sinclair became CEO in September 2015. In September 2015, QSR International released new software for qualitative data analysis: NVivo 11 for Windows, in three editions, along with new releases of NVivo for Mac and NVivo for Teams. In March 2016 Chris Astle became CEO of QSR International. In September 2017, QSR International released a new product, Interpris, which lets users sort and analyse qualitative survey data. In March 2018, QSR International released NVivo 12 (Windows) and NVivo 12 (Mac), offering new mixed methods support. Scale of operation QSR International is headquartered in Melbourne, Australia and has offices in the United Kingdom and the United States. The company also has a global network of resellers and trainers. Products NVivo is a qualitative data analysis (QDA) software package that was first released in 1999. NVivo allows users to import, sort and analyze data such as web pages and social media, audio files, spreadsheets, databases, digital photos, documents, PDFs, bibliographical data, rich text and plain text documents. Users can interchange data with applications like Microsoft Excel, Microsoft Word, IBM SPSS Statistics, EndNote, Microsoft OneNote, SurveyMonkey and Evernote. NVivo is multi-lingual and can be used in English, French, German, Japanese, Chinese, Portuguese and Spanish. Users can order transcripts of media files from within NVivo 10, 11 and 12 (Windows) using TranscribeMe in English, Chinese, French, German, Japanese, Portuguese and Spanish. NVivo for Windows also allows users to perform text analysis and create visualizations of their data. The software is certified for use with Microsoft's Windows 7, Windows 8, Windows 8.1 and Windows 10 operating systems. NVivo for Mac was released in June 2014. It can be used to analyze interviews, focus group discussions, web pages, social media data, observations and literature reviews, and enables researchers to work with content from documents, PDFs, audio and video. NVivo for Teams (NVivo Server) is designed to let users analyze and manage NVivo projects centrally so teams can work together in the same project at the same time. XSight software was released in 2006 and supported until January 2014. It was software for commercial market researchers or those undertaking short-term qualitative research projects. NVivo 12 (Windows) software offers equivalent functionality with greater flexibility, and enables researchers to work with more data types including PDFs, surveys, images, video, audio, web and social media content. Interpris is software intended to help users import, sort and analyse survey data from SurveyMonkey, a Microsoft Excel workbook, or a .CSV file. Evolution of qualitative data analysis software The first generation of computer-assisted qualitative data analysis (QDA) software emerged in the mid to late 1980s and involved basic word processors and databases with a focus on data management.
The programs were designed to help qualitative researchers manage unstructured information like open-ended surveys, focus groups, field notes or interviews. Second generation QDA software introduced functions for coding text and for manipulating, searching and reporting on the coded text. The approach of employing software tools for qualitative analysis was initially developed in the social sciences arena but is now used in an extensive range of other disciplines. The third generation of QDA software goes beyond manipulating, searching and reporting on coded text. It assists actual analysis of the data by providing tools to help researchers examine relationships in the data and assist in the development of theories and in testing hypotheses. Some software supports rich text, diagrams and the incorporation of images, movies and other multimedia data. Other programs have tools that enable the exchange of data and analyses between researchers working together collaboratively. See also NVivo XSight References External links Software companies of Australia QDA software Companies established in 1995 Reporting software Companies based in Melbourne
3110
https://en.wikipedia.org/wiki/Andrew%20S.%20Tanenbaum
Andrew S. Tanenbaum
Andrew Stuart Tanenbaum (born March 16, 1944), sometimes referred to by the handle ast, is an American-Dutch computer scientist and professor emeritus of computer science at the Vrije Universiteit Amsterdam in the Netherlands. He is best known as the author of MINIX, a free Unix-like operating system for teaching purposes, and for his computer science textbooks, regarded as standard texts in the field. He regards his teaching job as his most important work. Since 2004 he has operated Electoral-vote.com, a website dedicated to analysis of polling data in federal elections in the United States. Biography Tanenbaum was born in New York City and grew up in suburban White Plains, New York. He is Jewish. His paternal grandfather was born in Khorostkiv in the Austro-Hungarian empire. He received his Bachelor of Science degree in physics from MIT in 1965 and his Ph.D. degree in astrophysics from the University of California, Berkeley in 1971. Tanenbaum also served as a lobbyist for the Sierra Club. He moved to the Netherlands to live with his wife, who is Dutch, but he retains his United States citizenship. He teaches courses about Computer Organization and Operating Systems and supervises the work of Ph.D. candidates at the VU University Amsterdam. On 9 July 2014, he announced his retirement. Teaching Books Tanenbaum is well recognized for his textbooks on computer science. They include: Structured Computer Organization (1976) Computer Networks, co-authored with David J. Wetherall and Nickolas Feamster (1981) Operating Systems: Design and Implementation, co-authored with Albert Woodhull (1987) Modern Operating Systems (1992) Distributed Operating Systems (1994) Distributed Systems: Principles and Paradigms, co-authored with Maarten van Steen (2001) His book, Operating Systems: Design and Implementation and MINIX were Linus Torvalds' inspiration for the Linux kernel. In his autobiography Just for Fun, Torvalds describes it as "the book that launched me to new heights". His books have been translated into many languages including Arabic, Basque, Bulgarian, Chinese, Dutch, French, German, Greek, Hebrew, Hungarian, Italian, Japanese, Korean, Macedonian, Mexican Spanish, Persian, Polish, Portuguese, Romanian, Russian, Serbian, and Spanish. They have appeared in over 175 editions and are used at universities around the world. Doctoral students Tanenbaum has had a number of Ph.D. students who themselves have gone on to become widely known computer science researchers. These include: Henri Bal, professor at the Vrije Universiteit in Amsterdam Frans Kaashoek, professor at MIT Sape Mullender, researcher at Bell Labs Robbert van Renesse, professor at Cornell University Leendert van Doorn, distinguished engineer at the Microsoft Corporation Werner Vogels, Chief Technology Officer at Amazon.com Dean of the Advanced School for Computing and Imaging In the early 1990s, the Dutch government began setting up a number of thematically oriented research schools that spanned multiple universities. These schools were intended to bring professors and Ph.D. students from different Dutch (and later, foreign) universities together to help them cooperate and enhance their research. Tanenbaum was one of the cofounders and first Dean of the Advanced School for Computing and Imaging (ASCI). This school initially consisted of nearly 200 faculty members and Ph.D. students from the Vrije Universiteit, University of Amsterdam, Delft University of Technology, and Leiden University. 
They worked especially on problems in advanced computer systems such as parallel computing and image analysis and processing. Tanenbaum remained dean for 12 years, until 2005, when he was awarded an Academy Professorship by the Royal Netherlands Academy of Arts and Sciences, at which time he became a full-time research professor. ASCI has since grown to include researchers from nearly a dozen universities in The Netherlands, Belgium, and France. ASCI offers Ph.D.-level courses, has an annual conference, and runs various workshops every year. Projects Amsterdam Compiler Kit The Amsterdam Compiler Kit is a toolkit for producing portable compilers. It was started sometime before 1981, and Andrew Tanenbaum was the architect from the start until version 5.5. MINIX In 1987, Tanenbaum wrote a clone of UNIX, called MINIX (MINi-unIX), for the IBM PC. It was targeted at students and others who wanted to learn how an operating system worked. To that end, he wrote a book that listed the source code in an appendix and described it in detail in the text. The source code itself was available on a set of floppy disks. Within three months, a Usenet newsgroup, comp.os.minix, had sprung up with over 40,000 subscribers discussing and improving the system. One of these subscribers was a Finnish student named Linus Torvalds, who began adding new features to MINIX and tailoring it to his own needs. On October 5, 1991, Torvalds announced his own (POSIX-like) kernel, called Linux, which originally used the MINIX file system but is not based on MINIX code. Although MINIX and Linux have diverged, MINIX continues to be developed, now as a production system as well as an educational one. The focus is on building a highly modular, reliable, and secure operating system. The system is based on a microkernel, with only 5000 lines of code running in kernel mode. The rest of the operating system runs as a number of independent processes in user mode, including processes for the file system, process manager, and each device driver. The system continuously monitors each of these processes, and when a failure is detected, it is often capable of automatically replacing the failed process without a reboot, without disturbing running programs, and without the user even noticing. MINIX 3, as the current version is called, is available for free under the BSD license. Research projects Tanenbaum has also been involved in numerous other research projects in the areas of operating systems, distributed systems, and ubiquitous computing, often as supervisor of Ph.D. students or a postdoctoral researcher. These projects include: Amoeba Globe Mansion Orca Paramecium RFID Guardian Turtle F2F Electoral-vote.com In 2004, Tanenbaum created Electoral-vote.com, a web site analyzing opinion polls for the 2004 U.S. Presidential Election, using them to project the outcome in the Electoral College. He stated that he created the site as an American who "knows first hand what the world thinks of America and it is not a pretty picture at the moment. I want people to think of America as the land of freedom and democracy, not the land of arrogance and blind revenge. I want to be proud of America again." The site provided a color-coded map, updated each day with projections for each state's electoral votes. Through most of the campaign period Tanenbaum kept his identity secret, referring to himself as "the Votemaster" and acknowledging only that he personally preferred John Kerry.
He revealed his identity on November 1, 2004, the day before the election, mentioning that he supported the Democrats and stating his reasons and qualifications for running the website. Through the site he also covered the 2006 midterm elections, correctly predicting the winner of all 33 Senate races that year. For the 2008 elections, he got every state right except for Indiana, which he said McCain would win by 2% (Obama won by 1%), and Missouri, which he said was too close to call (McCain won by 0.1%). He correctly predicted all the winners in the Senate except for Minnesota, where he predicted a 1% win by Norm Coleman over Al Franken. After seven months of legal battles and recounts, Franken won by 312 votes (0.01%). In 2010, he correctly projected 35 out of 37 Senate races in the midterm elections on the website. The exceptions were Colorado and Nevada. Electoral-vote.com incorrectly predicted Hillary Clinton would win the 2016 United States presidential election. The website incorrectly predicted Clinton would win Wisconsin, Michigan, Pennsylvania, North Carolina, and Florida. Electoral-vote.com did not predict a winner for Nevada, which Clinton went on to win. The website predicted the winners of the remaining 44 states and the District of Columbia correctly. Tanenbaum–Torvalds debate The Tanenbaum–Torvalds debate was a famous debate between Tanenbaum and Linus Torvalds, conducted on Usenet in 1992, regarding kernel design. Awards Fellow of the ACM Fellow of the IEEE Member of the Royal Netherlands Academy of Arts and Sciences (1994) Eurosys Lifetime Achievement Award, 2015 Honorary doctorate from Petru Maior University, Targu Mures, Romania, 2011 Winner of the TAA McGuffey award for classic textbooks for Modern Operating Systems, 2010 Coauthor of the Best Paper Award at the LADC Conference, 2009 Winner of a 2.5 million euro European Research Council Advanced Grant, 2008 USENIX Flame Award 2008 for his many contributions to systems design and to openness both in discussion and in source Honorary doctorate from Polytechnic University of Bucharest, Romania Coauthor of the Best Paper Award at the Real-Time and Network Systems Conf., 2008 Winner of the 2007 IEEE James H. Mulligan, Jr. Education Medal Coauthor of the Best Paper Award at the USENIX LISA Conf., 2006 Coauthor of the Best Paper for High Impact at the IEEE Percom Conf., 2006 Academy Professor, 2004 Winner of the 2005 PPAP Award for best education on computer science software Winner of the 2003 TAA McGuffey award for classic textbooks for Computer Networks Winner of the 2002 TAA Texty Award for new textbooks Winner of the 1997 ACM SIGCSE for contributions to computer science education Winner of the 1994 ACM Karl V. Karlstrom Outstanding Educator Award Coauthor of the 1984 ACM SOSP Distinguished Paper Award Honorary doctorates On May 12, 2008, Tanenbaum received an honorary doctorate from Universitatea Politehnica din București. The award was given in the academic senate chamber, after which Tanenbaum gave a lecture on his vision of the future of the computer field. The degree was given in recognition of Tanenbaum's career work, which includes about 150 published papers, 18 books (which have been translated into over 20 languages), and the creation of a large body of open-source software, including the Amsterdam Compiler Kit, Amoeba, Globe, and MINIX.
On October 7, 2011, Universitatea Petru Maior din Târgu Mureș (Petru Maior University of Târgu Mureș) granted Tanenbaum the title of Doctor Honoris Causa (honorary doctorate) for his work in the field of computer science and his achievements in education. The award honored his devotion to teaching and research. At the ceremony, the Chancellor, the Rector, the Dean of the Faculty of Sciences and Letters, and others all spoke about Tanenbaum and his work. The pro-rector then read the 'laudatio,' summarizing Tanenbaum's achievements. These include his work developing MINIX (the inspiration for Linux), the RFID Guardian, his work on Globe, Amoeba, and other systems, and his many books on computer science, which have been translated into many languages, including Romanian, and which are used at Petru Maior University. Keynote talks Tanenbaum has been a keynote speaker at numerous conferences, most recently the RIOT Summit 2020 Online Event, Sept. 14, 2020. FrOSCon 2015 Sankt Augustin, Germany, Aug. 22, 2015 BSDCan 2015 Ottawa, Canada, June 12, 2015 HAXPO 2015 Amsterdam May 28, 2015 Codemotion 2015 Rome Italy, March 28, 2015 SIREN 2010 Veldhoven, The Netherlands, Nov. 2, 2010 FOSDEM Brussels, Belgium, Feb 7, 2010 NSCNE '09 Changsha, China, Nov. 5, 2009 E-Democracy 2009 Conference Athens, Greece, Sept. 25, 2009 Free and Open Source Conference Sankt Augustin, Germany, August 23, 2008 XV Semana Informática of the Instituto Superior Técnico, Lisbon, Portugal, March 13, 2008 NLUUG 25 year anniversary conference, Amsterdam, November 7, 2007 linux.conf.au in Sydney, Australia, January 17, 2007 Academic IT Festival in Cracow, Poland, February 23, 2006 (2nd edition) ACM Symposium on Operating System Principles, Brighton, England, October 24, 2005 References External links Minix Article in Free Software Magazine contains an interview with Andrew Tanenbaum The MINIX 3 Operating System MINIX Official Website 1944 births American political writers American male non-fiction writers American technology writers Computer systems researchers American computer scientists Fellows of the Association for Computing Machinery Fellow Members of the IEEE Free software programmers Kernel programmers Living people MIT Department of Physics alumni Members of the Royal Netherlands Academy of Arts and Sciences MINIX Scientists from New York City University of California, Berkeley alumni Vrije Universiteit Amsterdam faculty Information technology in the Netherlands Computer science educators Jewish American writers European Research Council grantees 21st-century American Jews
68188163
https://en.wikipedia.org/wiki/Marina%20Umaschi%20Bers
Marina Umaschi Bers
Marina Umaschi Bers is a professor at Tufts University known for her work on computational thinking, technology, and tools for children to learn computer programming. She has brought robotics into the classroom through her work on ScratchJr and Blocks to Robots, which includes robotics kits designed for young children. Education Umaschi Bers went to Buenos Aires University in Argentina and received her undergraduate degree in Social Communications (1993). In 1994, she earned a Master’s degree in Educational Media and Technology from Boston University; she also has an M.S. from the Massachusetts Institute of Technology. In 2001, she earned a Ph.D. from the MIT Media Laboratory, where she worked on the design of computational tools. From 2005 until 2011, she worked at Boston Children's Hospital, and in 2007 she accepted a position at Tufts University, where she was promoted to professor in 2013. In 2018 she was named the chair of the Eliot-Pearson Dept. of Child Development. Bers co-founded KinderLab Robotics in 2013, and has worked with WGBH-TV and PBS on content for children's broadcasting. Research and work Bers’ research centers on the potential of technology to foster the development of children. Her early work examined storytelling and language in children, robotics in early childhood education, and the development of values in virtual environments. In 2012 she developed the TangibleK robotics program to teach young children about the world of technology. Bers developed the ScratchJr programming language collaboratively with Mitch Resnick, Paula Bonta, and Brian Silverman. ScratchJr targets children from ages 5 to 7, and is an offshoot of Scratch, which is used to teach computer programming to children from 8 to 16. Bers also works to train childhood educators on the use of technology in the classroom and develops curriculum that can be used to teach programming and computational thinking. She developed the KIBO robot kit, a robot that young children can program with wooden blocks and that serves as a tool to teach children computer programming. As of 2021, she has more than 150 publications and an h-index of 48. Bers' work has been covered by media outlets worldwide, including venues such as the New York Times, NPR, CNBC, CBS News, the Wall Street Journal, and The Economist. Her book, Coding as a Playground, has been reviewed by the Association for the Advancement of Computing in Education and Medium. During the pandemic, Bers talked with the Boston Globe about how children may learn during the isolation introduced by the pandemic. Selected publications Awards and honors In 2005, Bers received a Presidential Early Career Award for Scientists and Engineers (PECASE). She also received a National Science Foundation (NSF) Young Investigator's Career Award and the American Educational Research Association (AERA) Jan Hawkins Award for Early Career Contributions. In 2015, Bers was chosen as one of the recipients of the Boston Business Journal’s Women to Watch in Science and Technology awards, and in 2016, Bers received the Outstanding Faculty Contribution to Graduate Student Studies award at Tufts University. References External links , January 20, 2015 2020 interview with Bers Tufts University faculty University of Buenos Aires alumni Boston University alumni Massachusetts Institute of Technology alumni Women computer scientists Educational researchers Linguists Living people 21st-century women
3887969
https://en.wikipedia.org/wiki/Harry%20J
Harry J
Harry Zephaniah Johnson (6 July 1945 – 3 April 2013), known by the stage name Harry J, was a Jamaican reggae record producer. Biography Born in Westmoreland Parish, Jamaica, Johnson started to play music with the Virtues as a bass player before moving into management of the group. When the band split up he worked as an insurance salesman. He first appeared as a record producer in 1968, when he launched his own record label, "Harry J", by releasing The Beltones' local hit "No More Heartaches", one of the earliest reggae songs to be recorded. His agreement with Coxsone Dodd allowed him to use Studio One's facilities, where he produced the hit "Cuss Cuss" with singer Lloyd Robinson, which became one of the most covered riddims in Jamaica. Johnson also released music under a subsidiary label, Jaywax. In October 1969, he found success in the UK with "The Liquidator" (number 9 in the UK Singles Chart), recorded with his session band, The Harry J All Stars (it was also a hit in 1980, reaching number 42). This single became one of the anthems of the emerging skinhead youth subculture, and was released in the UK, together with other instrumental hits, on a compilation album of the same name through his own "Harry J" subdivision of Trojan Records. At the beginning of the 1970s he enjoyed another big success with the vocal duo Bob and Marcia with the song "Young, Gifted and Black". His productions also included Jamaican hits with DJs like Winston Blake or Scotty, among others, and many dub versions. Harry J Studio Johnson is mainly known for his Harry J Studio, where Bob Marley & The Wailers recorded some of their albums in the 1970s. The studio was also a 'must stop' hangout of many British and other musicians, including the Rolling Stones, The Who, and Grace Jones. In addition, Chris Blackwell, founder of Island Records, could be found hanging out in the sound room prior to moving to England in the early 1970s. In 1972, Harry Johnson sold his record shop and set up his own recording studio, "Harry J", at 10 Roosevelt Avenue, Uptown Kingston, where he employed Sid Bucknor and later Sylvan Morris as resident recording engineer. Harry J Studio soon became one of the most famous Jamaican studios after having recorded several Bob Marley & The Wailers albums from 1973 to 1976, before the Tuff Gong era, such as Rastaman Vibration and Catch A Fire. Johnson's deal with Island Records led him to record artists such as Burning Spear and The Heptones. Throughout the 1970s and the 1980s, assisted by former Studio One sound engineer Sylvan Morris, he also recorded Ken Boothe, Augustus Pablo, The Cables and the American pop singer Johnny Nash, and produced albums by Zap Pow and Sheila Hylton. In 2000, after seven years of inactivity, Stephen Stewart, who had worked in the early years alongside Sylvan Morris, refurbished, re-equipped and reopened Harry J Studio. Since then, under Stewart's management, the studio has seen the return of Burning Spear, Toots, Shaggy, Sly & Robbie, and newer projects by Shakira, Papa Sam/Kirk Franklyn, Luciano and Sizzla. The studio appeared in the film Rockers. Personal life Johnson died on 3 April 2013 after a long battle with diabetes. He was 67.
Discography Harry J Allstars Harry J Allstars – The Liquidator – 1969 – Harry J/Trojan Harry J Allstars – Liquidator: The Best Of The Harry J Allstars – 2003 – Trojan Harry J Allstars – Dubbing At Harry J's 1972–1975 – Jamaican Recordings Compilations Various Artists – Reggae Movement – 1970 – Harry J/Trojan Various Artists – What Am I To Do – 1970 – Harry J/Trojan Various Artists – Reggay Roots – 1977 – Harry J Various Artists – Computer – 1985 – Sunset Various Artists – The Return Of the Liquidator: 30 Skinhead Classics 1968–1970 – 1989 – Trojan – 2 CD As a producer Sylvan Morris & Harry J – Cultural Dub – 1978 – Harry J Sylvan Morris – Jah Jah Dub – Roosevelt The Heptones – Book of Rules – 1973 – Jaywax The Heptones – Cool Rasta – 1976 – Trojan Leslie Butler – Ja-Gan – 1975 – Trojan Zap Pow – Revolution – 1976 – Trojan Lloyd Willis – Gits Plays Bob Marley's Greatest Hits – 1977 – Harry J The Melodians – Sweet Sensation – 1977 – Harry J Sheila Hylton – "Breakfast in Bed" – 1977 – Harry J Dennis Brown – So Long Rastafari – 1979 – Harry J See also List of Jamaican record producers References 1945 births 2013 deaths Deaths from diabetes Jamaican guitarists Male guitarists Jamaican people of Scottish descent Jamaican record producers Jamaican reggae musicians People from Westmoreland Parish Trojan Records artists
768322
https://en.wikipedia.org/wiki/Los%20Angeles%20Memorial%20Sports%20Arena
Los Angeles Memorial Sports Arena
The Los Angeles Memorial Sports Arena was a multi-purpose arena at Exposition Park, in the University Park neighborhood of Los Angeles. It was located next to the Los Angeles Memorial Coliseum and just south of the campus of the University of Southern California, which managed and operated both venues under a master lease agreement with the Los Angeles Memorial Coliseum Commission. The arena was demolished in 2016 and replaced with Banc of California Stadium, home of Major League Soccer's Los Angeles FC which opened in 2018. History The arena was opened by Vice President Richard Nixon on July 4, 1959 and its first event followed four days later, a bantamweight title fight between José Becerra and Alphonse Halimi on July 8. It became a companion facility to the adjacent Los Angeles Memorial Coliseum. The venue was the home court of the Los Angeles Lakers of the NBA from October 1960 to December 1967, the Los Angeles Clippers also of the NBA from 1984 to 1999, and the home ice of the Los Angeles Kings of the NHL from October to December 1967 during their inaugural 1967–68 season. It was the home for college basketball for the USC Trojans from 1959 to 2006 and the UCLA Bruins from 1959 to 1965 and again as a temporary home in the 2011–2012 season. It also hosted the Los Angeles Aztecs of the NASL who played one season of indoor soccer there (1980–81), the Los Angeles Blades of the Western Hockey League from 1961 to 1967, the Los Angeles Sharks of the WHA from 1972 to 1974, the Los Angeles Cobras of the AFL in 1988, and the original Los Angeles Stars of the ABA from 1968 to 1970. The arena played host to the top indoor track athletics meet on the West Coast, the annual Los Angeles Invitational track meet (frequently called the "Sunkist Invitational", with title sponsorship by Sunkist Growers, Incorporated), from 1960 until the event's demise in 2004. The arena hosted the 1960 Democratic National Convention, the 1968 and 1972 NCAA Men's Basketball Final Four, the 1992 NCAA Women's Basketball Final Four, the 1963 NBA All-Star Game, and the boxing competitions during the 1984 Summer Olympics. In addition to hosting the final portion of WrestleMania 2 in 1986, the Los Angeles Memorial Sports Arena also hosted WrestleMania VII in 1991 as well as other WWE events. The arena hosted When Worlds Collide, a 1994 joint card between the Mexican lucha libre promotion Asistencia Asesoría y Administración (AAA) and WCW (which normally called the Great Western Forum home until they, too, moved to Staples Center) that is credited with introducing the lucha style to English-speaking audiences in the U.S. NBC's renewed version of American Gladiators and the 1999–2001 syndicated show Battle Dome were filmed from the arena. After then-Clippers owner Donald Sterling turned down an agreement to re-locate the franchise permanently to Anaheim's Arrowhead Pond (now Honda Center) in 1996, the Coliseum Commission had discussions to build an on-site replacement for the Sports Arena. Plans included a seating capacity of 18,000 for basketball, 84 luxury suites, and an on-site practice facility for the Clippers. However, as a new Downtown Los Angeles sports and entertainment arena was being planned and eventually built (Staples Center) two miles north along Figueroa Street, the Coliseum Commission scuttled plans for a Sports Arena replacement, and as a result, the Clippers became one of the original tenants at the new downtown arena. 
There were also similar plans years earlier, in 1989, as Sterling had discussions with then-Los Angeles mayor Tom Bradley and then-Coliseum Commission president (and eventual Bradley mayoral successor) Richard Riordan about a Sports Arena replacement; Sterling threatened to leave the Sports Arena and move elsewhere in the Los Angeles region if plans did not come together. After the Trojans departed to the new Galen Center in 2006, the arena assumed a lower profile. The arena still continued to hold high school basketball championships, as well as concerts and conventions. The UCLA men's basketball team played a majority of their home games at the Sports Arena during the 2011–12 season while Pauley Pavilion underwent renovation. 2000s The Los Angeles Memorial Coliseum Commission embarked on a seismic retrofit, designed to bring the Sports Arena up to 21st century seismic standards. Bentley Management Group was hired as the project manager for the Seismic Bracing Remodel. In order to reinforce the existing structure, a series of steel braced frames were connected to the existing concrete structural system at both the arena and loge levels of the building. To provide a solid footing for these steel frames, portions of the arena floor had to be excavated, then reinforced to provide extra strength. Once the steel frames were fitted and incorporated into the existing structure between existing support columns, concrete was then re-poured into the area. The original crown of the arena, one of its most distinguishing characteristics, was the countless small ceramic tiles, each measuring no more than a square inch in width. A multitude of the crown's tiles were loosening and many others were discolored. In order to remedy this, a new crown was designed, this time using individual sections of EIFS (Exterior insulation finishing system), which offered the decided advantages of better durability, easier maintenance and improved thermal characteristics. A foundation surface was applied directly over the existing tiles, in order to seal the crown and give the new surface something to adhere to. Once the structural work was finished, the walls, ceilings, doors, floors and other areas involved in the modification had to be put back together. Throughout the entire project, the Los Angeles Memorial Sports Arena remained open for business. The result was a brand-new crown around the exterior of the building, as well as a new terrazzo floor on the concourse level. During an open session meeting on July 17, 2013, the Coliseum Commission authorized the amendment to the existing USC-Coliseum Commission Lease for the operation of the Los Angeles Memorial Coliseum and the Los Angeles Memorial Sports Arena. On July 25, 2013, the Coliseum Commission and USC executed this new long-term master lease agreement. It became effective on July 29, 2013, and the Commission transferred day-to-day management and financial responsibilities for the Coliseum and Sports Arena to USC. This included the rehiring by USC, on a fixed term basis, of the Coliseum/Sports Arena employees who had been working for the commission the previous day. For most of the former Coliseum Commission employees, the fixed term of their employment would be short-lived, ending 10 months later on May 30, 2014. Closure and replacement The Sports Arena was demolished in order to replace it with a more in-demand facility — a soccer-specific stadium that would house an MLS team. 
On May 18, 2015, Los Angeles Football Club announced its intention to build a privately funded 22,000-seat soccer-specific stadium at the site for $250 million. The stadium would be completed by 2018. From March 15 to 19, 2016, Bruce Springsteen performed a series of three sold-out concerts, the last events held in the arena. When he introduced his song "Wrecking Ball" during the last concert, he opened by saying "We gotta play this one for the old building... We're gonna miss this place, it's a great place to play rock 'n' roll." The arena closed after the last concert. Demolition began in September 2016 for the new stadium development. After a groundbreaking for the new stadium, the arena was demolished between August and October 2016. Banc of California Stadium now stands in the old Sports Arena footprint. The arena The arena underwent major renovations to bring it up to 21st-century seismic standards and was well maintained. There were four fully equipped team rooms, two smaller rooms for officials, and two private dressing rooms for individual performers. There were two additional meeting rooms on site which could be used for administrative or hospitality functions. The floor area afforded the largest standing floor capacity of any arena in the area, and the building's expansive floor-level and concourse-level footprint allowed the installation of any needed display, food or other programming requirements. There was an enormous load-in ramp at the west side of the arena with a wide entry. Print, radio and television media were served on each side of the arena by the installation of portable facilities of any kind. Five permanent TV locations were sited on the concourse level. In addition, a catwalk was suspended from the ceiling and circled the arena for cameras or spotlights. Spectators could reach the arena-level seating area either by a circulatory ramp on the southwest side of the building or by a stairway located next to the north doors. There were also escalators located at the southwest and northeast sides of the building. The Sports Arena was the first NBA arena to feature a rotating billboard at courtside, which also acted as the scorer's table. Rotating billboards eventually became standard at NBA arenas, remaining so until the mid-2000s, when LED billboard/scorer's tables were introduced. Spectator amenities included a full-service main ticket office, a secondary box office and two portable booths, six permanent concession stands, and a first-aid station. A club and restaurant were located on the arena level of the facility. A number of operational improvements had been made to enhance accessibility for the handicapped, including the installation of 14 additional handicapped parking stalls, handrails on both sides of the pedestrian ramp leading to the floor-level seating, handicapped-accessible drinking fountains, an Assistive Listening System to aid the hearing impaired, conversion of restroom facilities, dressing rooms and bathroom fixtures for the handicapped, and increased informational signage. Event presentation was augmented by a four-sided overhead scoreboard with several auxiliary boards. Seating capacity The arena seated up to 16,740 for boxing/wrestling, and 14,546 for hockey. There were 12,389 fixed upper-level, theatre-type seats, and floor-level seating which could be configured by sport.
The seating capacity for basketball changed over the years. Concerts Pink Floyd performed five shows at the Memorial Sports Arena during their Wish You Were Here tour, April 23–27, 1975. They would open The Wall Tour at the same venue on February 7–13, 1980, and would perform three more nights in November 1987 on the A Momentary Lapse of Reason Tour. U2 performed five shows at the Memorial Sports Arena during The Joshua Tree Tour on April 17, 18, 20, 21 and 22, 1987. Michael Jackson performed six sold-out shows at the Memorial Sports Arena during his Bad World Tour, on November 13, 1988 and January 16–18 and 26–27, 1989. Madonna performed five sold-out shows at the Memorial Sports Arena during her Blond Ambition World Tour on May 11–13 and 15–16, 1990. The Grateful Dead performed at the Sports Arena on December 8–10 in 1993, and December 15–16 and 18–19 in 1994. Bruce Springsteen was a popular act at the arena, having played there 35 times between 1980 and 2016. Springsteen humorously referred to the arena as "the dump that jumps" due to its age, poor infrastructure, and its lack of VIP suites, a feature that Springsteen criticized in other arenas. Daft Punk performed a sold-out show at the Sports Arena on July 21, 2007. Other than Coachella in 2006, this was the only LA-area show of the Alive 2006/2007 tour. ABS-CBN's Philippine variety show ASAP held an out-of-town show, "ASAP Live in LA", at the arena on October 11, 2014. Major events The heavyweight championship fight scenes between the Rocky Balboa and Apollo Creed characters in the 1976 best picture winner Rocky and its first sequel, Rocky II, were filmed at the arena as a stand-in for the Spectrum in Philadelphia. The arena was featured in "Angels on Ice", a two-part episode of the second season of Charlie's Angels, in 1977. The arena hosted the Los Angeles portion of WrestleMania 2 on April 7, 1986, and was the host of WrestleMania VII on March 24, 1991. The arena was the location for a memorial ceremony honoring Gerardo Hernandez, the Transportation Security Administration officer who was killed in the 2013 Los Angeles International Airport shooting. Portions of the 1966 science fiction film Fantastic Voyage were filmed in the interior corridors and parking areas of the arena. The arena appears as the exterior and foyer of the euthanasia center in the 1973 film Soylent Green. The 1960 Democratic National Convention was held there from July 11 to July 15, 1960. Bernie Sanders hosted a campaign rally at the arena on August 10, 2015, that was attended by over 27,500 people. The Fugitive episode "Decision in the Ring" features a climax that takes place in the arena.
See also Los Angeles Memorial Coliseum Los Angeles Pop Festival References External links Los Angeles Sports Council Defunct sports venues in California Demolished sports venues in California Demolished buildings and structures in Los Angeles Los Angeles Clippers venues Los Angeles Kings arenas Los Angeles Lakers venues UCLA Bruins basketball venues USC Trojans basketball venues Venues of the 1984 Summer Olympics American Basketball Association venues Defunct arena football venues Exposition Park (Los Angeles) Legends Football League venues World Hockey Association venues Former National Basketball Association venues Defunct National Hockey League venues Defunct athletics (track and field) venues in the United States Defunct college basketball venues in the United States s Defunct indoor soccer venues in the United States Olympic boxing venues Sports venues completed in 1959 Sports venues demolished in 2016 North American Soccer League (1968–1984) indoor venues NCAA Division I Men's Basketball Tournament Final Four venues Athletics (track and field) venues in Los Angeles Basketball venues in Los Angeles Boxing venues in Los Angeles Indoor arenas in Los Angeles Indoor ice hockey venues in Los Angeles Indoor track and field venues in California Tennis venues in Los Angeles Wrestling venues in Los Angeles
9122953
https://en.wikipedia.org/wiki/NonVisual%20Desktop%20Access
NonVisual Desktop Access
NonVisual Desktop Access (NVDA) is a free and open-source, portable screen reader for Microsoft Windows. The project was started by Michael Curran in 2006. NVDA is programmed in Python. It currently works exclusively with accessibility APIs such as UI Automation, Microsoft Active Accessibility, IAccessible2 and the Java Access Bridge, rather than using specialized video drivers to "intercept" and interpret visual information. It is licensed under the GNU General Public License version 2. History Concerned by the high cost of commercial screen readers, Michael Curran began writing a Python-based screen reader in April 2006, with Microsoft SAPI as its speech engine. It supported Microsoft Windows 2000 onwards and provided screen reading capabilities such as basic support for some third-party software and web browsing. Towards the end of 2006, Curran named his project Nonvisual Desktop Access (NVDA) and released version 0.5 the following year. Throughout 2008 and 2009, several versions of 0.6 appeared, featuring enhanced web browsing, support for more programs, braille display output, and improved support for more languages. To manage continued development of NVDA, Curran, along with James Teh, founded NV Access in 2007. NVDA's features and popularity continued to grow. Support for 64-bit versions of Windows arrived in 2009, and program stability improved in 2010. Major code restructuring to support third-party modules, coupled with basic support for Windows 8, became available in 2011. Throughout 2012, NVDA gained improved support for Windows 8, the ability to perform automatic updates, an add-ons manager to manage third-party add-ons, improved support for entering East Asian text, and touchscreen support, the first of its kind among third-party screen readers for Windows. NVDA gained support for Microsoft PowerPoint in 2013 and was updated in 2014 to support PowerPoint 2013; NVDA also added enhanced WAI-ARIA support that same year. Also in 2013, NV Access introduced a restructured method of reviewing screen text, and introduced a facility to manage profiles for applications, as well as improving access to Microsoft Office and other office suites in 2014. Accessibility of mathematical formulas can be an issue for blind and visually impaired persons. In 2015, NVDA gained support for MathML through MathPlayer, along with improved support for Mintty, for the Skype desktop client and for charts in Microsoft Excel; the ability to lower background audio was introduced in 2016. Also in 2015, NVDA became one of the first screen readers to support Windows 10 and added support for Microsoft Edge in an experimental capacity. In 2021, NVDA was the second-most popular screen reader in use throughout the world in a survey by WebAIM, having been the most popular in their 2019 survey. In 2013 Michael Curran and James Teh presented a talk on NVDA at TEDx Brisbane. It is especially popular in developing countries, as being free to download and use makes it accessible to many blind and visually impaired people who would otherwise not have access to the internet. In 2020 NVDA was featured in the University of Queensland's Contact magazine. NVDA can be used with steganography-based software to provide a textual description of pictures. Features and accessibility API support NVDA uses eSpeak as its integrated speech synthesizer. It also supports the Microsoft Speech Platform synthesizer, ETI Eloquence, and SAPI synthesizers.
Output to braille displays is supported officially from Version 0.6p3 onward. Besides general Windows functionality, NVDA works with software such as Microsoft Office applications, WordPad, Notepad, Windows Media Player, and web browsers such as Mozilla Firefox, Google Chrome, Internet Explorer, and Microsoft Edge. It supports most email clients, such as Outlook, Mozilla Thunderbird, and Outlook Express. NVDA also works with most functions of Microsoft Word, Microsoft PowerPoint and Microsoft Excel. The free office suites LibreOffice and OpenOffice.org are supported by way of the Java Access Bridge package. Since early 2009, NVDA has supported the WAI-ARIA standard for Accessible Rich Internet Applications, to facilitate better accessibility of web applications for blind users. In 2021 the screen reader user survey by WebAIM found NVDA to be the second-most popular screen reader worldwide, having previously assumed the number one position in their 2019 survey; 30.7% of survey participants used it as a primary screen reader, while 58.8% of participants used it often. Screen readers can be used to test the accessibility of software and websites. NVDA is the primary screen reader of choice among accessibility practitioners. Technical features NVDA is organized into various subsystems, including the core loop, add-ons manager, app modules, event handler and input and output handlers, along with modules to support accessibility APIs such as Microsoft Active Accessibility. NVDA also features graphical user interfaces of its own, powered by wxPython, such as preference dialogs and setup and update management dialogs. NVDA uses objects to represent elements in an application, such as menu bars, status bars and various foreground windows. Information about an object, such as its name, value and screen coordinates, is gathered by NVDA through accessibility APIs exposed by the object, such as UIA (User Interface Automation). The gathered information is passed through various subsystems, such as the speech handler, and presented to the user in speech, in braille and via an on-screen window. NVDA also provides facilities to handle events such as key presses, name changes, and an application gaining or losing focus. NVDA provides facilities to examine an application's object hierarchy and implement ways to enhance the accessibility of a program. It provides dedicated commands to move through the object hierarchy within an application, as well as an interactive Python console to manipulate focus, monitor objects for events, and test code for improving the accessibility of an application before it is packaged in an app module. Development model From 2006 to 2013, NVDA's source code was managed via Bazaar, with NV Access switching to Git in 2013, citing development progress with Bazaar. The developers also took the opportunity to modify the release schedule so that releases happen at regular intervals, to prevent delays in official releases and to make the release time frame predictable. In addition to official releases, nightly snapshot builds are also available for testing. Similar to the release process for the Linux kernel, NVDA snapshots are available in beta and alpha branches, with special topic branches created from time to time. NV Access describes the beta branch as a chance for users to gain early access to new features, the alpha branch as bleeding-edge code for possible inclusion in the upcoming release, and topic branches as serving the development of a major feature or preparation for an official release (rc branch).
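The object and app-module concepts described under Technical features above can be pictured with a deliberately simplified sketch. The following Python snippet is a hypothetical illustration only; it is not NVDA source code and does not use NVDA's real APIs, and all class and function names are invented for the purpose of the example.

# Hypothetical sketch of the object/app-module idea described above; not NVDA code.
class AccessibleObject:
    """A UI element as exposed by an accessibility API: name, role, value."""
    def __init__(self, name, role, value=""):
        self.name, self.role, self.value = name, role, value

class AppModule:
    """Per-application hook that can customize how focused objects are reported."""
    def on_gain_focus(self, obj):
        parts = [obj.name, obj.role] + ([obj.value] if obj.value else [])
        return ", ".join(parts)

class SpeechHandler:
    """Output subsystem; a real screen reader would call a speech synthesizer here."""
    def speak(self, text):
        print(f"[speech] {text}")

def handle_focus_event(obj, app_module, speech):
    """Event handler: gather information from the object and pass it to output."""
    speech.speak(app_module.on_gain_focus(obj))

# Example: keyboard focus lands on an "OK" button in some application.
handle_focus_event(AccessibleObject("OK", "button"), AppModule(), SpeechHandler())
# prints: [speech] OK, button

A real screen reader, of course, obtains such objects from accessibility APIs like UI Automation or IAccessible2 rather than constructing them directly, and routes the same information to braille output as well as to speech.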
Some third-party developers also maintain specific branches, including language-specific versions of NVDA and branches that offer a public preview of a feature under active development. The current lead developers are Michael "Mick" Curran and Reef Turner, with code and translation contributions from users and other developers around the world. References External links 2006 software Free screen readers Free software programmed in Python Screen readers Software that uses wxWidgets Windows-only free software Free speech synthesis software
33536762
https://en.wikipedia.org/wiki/Nokia%20Asha%20303
Nokia Asha 303
The Nokia Asha 303 is a QWERTY messenger phone powered by Nokia's Series 40 operating system. It was announced at Nokia World 2011 in London along with three other Asha phones - the Nokia Asha 200, 201 and 300. The 303 is considered to be the flagship of the Asha family. Its main features are the QWERTY keyboard and capacitive touchscreen, the pentaband 3G radio, SIP VoIP over 3G and Wi-Fi, and the ability to play Angry Birds, none of which had been seen before on a Series 40 phone. The Nokia Asha 303 is available in a number of languages depending on which territory it is marketed for. Models sold in South Asia support at least eight languages: English, Hindi, Gujarati, Marathi, Tamil, Kannada, Telugu and Malayalam. History and availability The Nokia Asha 303 was announced at Nokia World 2011 in London. It was slated to become available shortly afterwards in China, Eurasia, Europe, India, Latin America, the Middle East and Southeast Asian markets, at a price of €115 subject to taxes and subsidies. Hardware Processors The Nokia Asha 303 is powered by the same 1 GHz ARM11 processor found in Symbian Belle phones such as the Nokia 500, 600 and 700, but lacks the dedicated Broadcom GPU, which is not supported by the Nokia Series 40 operating system. The system also has 128 MB of low-power single-channel RAM (Mobile DDR). Screen and input The Nokia Asha 303 has a 2.6-inch (66 mm) transmissive LCD capacitive touchscreen (1 point) with a resolution of 320 × 240 pixels (QVGA, 154 ppi). In contrast with the Nokia C3-00, the screen of the Asha 303 is taller than it is wide (portrait). According to Nokia it is capable of displaying up to 262 thousand colors. The device also has a backlit 4-row keyboard with regional variants available (QWERTY, AZERTY, etc.). The back camera has an extended depth of field feature (no mechanical zoom), no flash, and a 4× digital zoom for both video and stills. The back camera has a 3.2-megapixel sensor (2048 x 1536 px), an f/2.8 aperture and a 50 cm to infinity focus range. It is capable of video recording at up to 640 x 480 px at 15 fps with mono sound. Buttons On the front of the device, above the 4-row keyboard, there are the answer/call key, the messaging key which brings up an onscreen menu (instant messaging and e-mail), the music key which also brings up an onscreen menu (last song/rewind, play/pause, next song/fast forward) and the end call/close application key. On the right side of the device there are the volume rocker and the lock/unlock button. A long press on the space bar brings up the wireless network menu. Audio and output The Nokia Asha 303 has one microphone and a loudspeaker, which is situated on the back of the device below the anodized aluminum battery cover. On the top, there is a 3.5 mm AV connector which simultaneously provides stereo audio output and microphone input. Between the 3.5 mm AV connector and the 2 mm charging connector, there is a High-Speed USB 2.0 Micro-AB connector provided for data synchronization and battery charging, with support for USB On-The-Go 1.3 (the ability to act as a USB host) using a Nokia Adapter Cable for USB OTG CA-157 (not included upon purchase). The built-in Bluetooth v2.1 +EDR (Enhanced Data Rate) supports stereo audio output with the A2DP profile. Built-in car hands-free kits are also supported with the HFP profile. File transfer is supported (FTP) along with the OPP profile for sending/receiving objects. It is possible to remotely control the device with the AVRCP profile.
It supports wireless earpieces and headphones through the HSP profile. The DUN profile, which permits access to the Internet from a laptop by dialing up on a mobile phone wirelessly (tethering), and the PAN profile for networking over Bluetooth are also supported. The device also functions as an FM receiver, allowing one to listen to FM radio using headphones connected to the 3.5 mm jack as an antenna. Battery and SIM The battery life of the BP-3L (1300 mAh) as claimed by Nokia is from 7 to 8 hours of talk time, from 30 to 35 days of standby and 47 hours of music playback, depending on actual usage. The SIM card is located under the battery, which can be accessed by removing the back panel of the device. The microSDHC card socket is also located under the back cover (but not under the battery). No tool is necessary to remove the back panel. Storage The phone has 150 MB of available non-removable storage. Additional storage is available via a microSDHC card socket, which is certified to support up to 32 GB of additional storage. Software The Nokia Asha 303 is powered by the Nokia Series 40 operating system with service pack 1 for touchscreen devices and comes with a variety of applications: Web: Nokia (proxy) Browser for Series 40 Conversations: Nokia Messaging Service 3.2 (instant messaging and e-mail) and SMS, MMS Social: Facebook, Twitter, Flickr, Orkut and Instagram Media: Camera, Photos, Music player, Nokia Music Store (in selected markets), Flash Lite 3.0 (for YouTube video), Video player Personal Information Management: Calendar, Detailed contact information Utilities: VoIP, Notes, Calculator, To-do list, Alarm clock, Voice recorder, Stopwatch Games: Angry Birds Lite (first level only, additional levels can be purchased on the Nokia Store) The Home screen is customizable and allows the user to add, among other things, favorite contacts, Twitter/Facebook feeds, application shortcuts, IM/e-mail notifications and calendar alerts. The phone also integrates a basic form of finger-gesture navigation (marketed by Nokia as Swype), as first seen on the Nokia N9, to navigate the user interface. For example, on the homescreen, the user has to swipe their finger from the left side of the bezel surrounding the screen to the opposite side to bring up the application drawer. A swipe from the right side will bring up the calendar application; this can be configured to fit the user's preferences. The device comes with Nokia Maps for Series 40 and makes use of the cellular network for positioning, as there is no GPS in the phone. Nokia Maps for Series 40 phones does not provide voice-guided navigation and only allows basic routes (<10 km) to be planned. The software provides step-by-step instructions, allows the user to see the route on a map, and can search for nearby points of interest. Depending on where the phone was purchased, regional maps (Europe, South America, etc.) are preloaded and, as such, an active internet connection to download map data is not required. See also List of Nokia products Comparison of smartphones References External links http://www.nokia.com/nokia-asha-smarter-mobile-phones http://europe.nokia.com/find-products/devices/nokia-asha-303/specifications http://www.developer.nokia.com/Devices/Device_specifications/303 https://www.webcitation.org/6B7hfLoMa?url=http://www.developer.nokia.com/Community/Wiki/VoIP_support_in_Nokia_devices#Support_in_Series_40_devices Smartphones Asha 303
1335094
https://en.wikipedia.org/wiki/Random%20password%20generator
Random password generator
A random password generator is a software program or hardware device that takes input from a random or pseudo-random number generator and automatically generates a password. Random passwords can be generated manually, using simple sources of randomness such as dice or coins, or they can be generated using a computer. While there are many examples of "random" password generator programs available on the Internet, generating randomness can be tricky, and many programs do not generate random characters in a way that ensures strong security. A common recommendation is to use open source security tools where possible, since they allow independent checks on the quality of the methods used. Note that simply generating a password at random does not ensure the password is a strong password, because it is possible, although highly unlikely, to generate an easily guessed or cracked password. In fact, there is no need at all for a password to have been produced by a perfectly random process: it just needs to be sufficiently difficult to guess. A password generator can be part of a password manager. When a password policy enforces complex rules, it can be easier to use a password generator based on that set of rules than to manually create passwords. Long strings of random characters are difficult for most people to memorize. Mnemonic hashes, which reversibly convert random strings into more memorable passwords, can substantially improve the ease of memorization. As the hash can be processed by a computer to recover the original random string, it has at least as much information content as the original string. Similar techniques are used in memory sport. The naive approach Here are two code samples that a programmer who is not familiar with the limitations of the random number generators in standard programming libraries might implement: C

#include <time.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Length of the password */
    unsigned short int length = 8;

    /* Seed number for rand() */
    srand((unsigned int) time(0));

    /* ASCII characters 33 to 126 */
    while (length--) {
        putchar(rand() % 94 + 33);
    }

    printf("\n");
    return EXIT_SUCCESS;
}

In this case, the standard C function rand, which is a pseudo-random number generator, is initially seeded using the C function time, but later iterations use rand instead. According to the ANSI C standard, time returns a value of type time_t, which is implementation-defined, but most commonly a 32-bit integer containing the current number of seconds since January 1, 1970 (see: Unix time). There are about 31 million seconds in a year, so an attacker who knows the year (a simple matter in situations where frequent password changes are mandated by password policy) and the process ID that the password was generated with faces a relatively small number, by cryptographic standards, of choices to test. If the attacker knows more accurately when the password was generated, he faces an even smaller number of candidates to test – a serious flaw in this implementation. In situations where the attacker can obtain an encrypted version of the password, such testing can be performed rapidly enough so that a few million trial passwords can be checked in a matter of seconds. See: password cracking. The function rand presents another problem. All pseudo-random number generators have an internal memory or state. The size of that state determines the maximum number of different values it can produce: an n-bit state can produce at most 2^n different values.
On many systems rand has a 31 or 32-bit state, which is already a significant security limitation. Microsoft documentation does not describe the internal state of the Visual C++ implementation of the C standard library rand, but it has only 32767 possible outputs (15 bits) per call. Microsoft recommends a different, more secure function, rand_s, be used instead. The output of rand_s is cryptographically secure, according to Microsoft, and it does not use the seed loaded by the srand function. However, its programming interface differs from rand. PHP

function pass_gen(int $length = 8): string
{
    $pass = array();
    for ($i = 0; $i < $length; $i++) {
        $pass[] = chr(mt_rand(32, 126));
    }
    return implode($pass);
}

In the second case, the PHP function microtime is used, which returns the current Unix timestamp with microseconds. This increases the number of possibilities, but someone with a good guess of when the password was generated, for example the date an employee started work, still has a reasonably small search space. Also, some operating systems do not provide time to microsecond resolution, sharply reducing the number of choices. Finally, the rand function usually uses the underlying C rand function, and may have a small state space, depending on how it is implemented. An alternative random number generator, mt_rand, which is based on the Mersenne Twister pseudorandom number generator, is available in PHP, but it also has a 32-bit state. There are proposals for adding strong random number generation to PHP. Stronger methods A variety of methods exist for generating strong, cryptographically secure random passwords. On Unix platforms /dev/random and /dev/urandom are commonly used, either programmatically or in conjunction with a program such as makepasswd. Windows programmers can use the Cryptographic Application Programming Interface function CryptGenRandom. The Java programming language includes a class called SecureRandom. Another possibility is to derive randomness by measuring some external phenomenon, such as timing user keyboard input. Many computer systems already have an application (typically named "apg") to implement FIPS 181. FIPS 181 (Automated Password Generator) describes a standard process for converting random bits (from a hardware random number generator) into somewhat pronounceable "words" suitable for a passphrase. However, in 1994 an attack on the FIPS 181 algorithm was discovered, such that an attacker can expect, on average, to break into 1% of accounts that have passwords based on the algorithm, after searching just 1.6 million passwords. This is due to the non-uniformity in the distribution of passwords generated, which can be addressed by using longer passwords or by modifying the algorithm. Bash Here is a code sample that uses /dev/urandom to generate a password with a simple Bash function.
This function takes password length as a parameter, or uses 16 by default:

function mkpw() {
    LC_ALL=C tr -dc '[:graph:]' < /dev/urandom | head -c ${1:-16}
    echo
}

Java Here is a code sample (adapted from the class PasswordGenerator) that uses SecureRandom to generate a 10 hexadecimal character password:

char[] symbols = {'0', '1', '2', '3', '4', '5', '6', '7', '8', '9',
                  'a', 'b', 'c', 'd', 'e', 'f'};
int length = 10;
Random random = SecureRandom.getInstanceStrong(); // as of JDK 8, this returns a SecureRandom implementation known to be strong
StringBuilder sb = new StringBuilder(length);
for (int i = 0; i < length; i++) {
    int randomIndex = random.nextInt(symbols.length);
    sb.append(symbols[randomIndex]);
}
String password = sb.toString();

JavaScript This example uses the Web Crypto API to generate cryptographically secure random numbers with uniform distribution.

function secureRandomInt(max) {
  let num = 0;
  const min = 2 ** 32 % max; // for eliminating bias
  const rand = new Uint32Array(1);
  do {
    num = crypto.getRandomValues(rand)[0];
  } while (num < min);
  return num % max;
}

function generate(length = 12) {
  const uppercase = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ';
  const lowercase = 'abcdefghijklmnopqrstuvwxyz';
  const numbers = '0123456789';
  const symbols = '!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~';
  const all = uppercase + lowercase + numbers + symbols;
  let password = '';
  for (let i = 0; i < length; i++) {
    const randomIndex = secureRandomInt(all.length);
    password += all[randomIndex];
  }
  return password;
}

Perl This example uses the Crypt::Random::Source module to find a source of strong random numbers (which is platform dependent).

use Crypt::Random::Source qw(get_strong);

while (length($out) < 15) {
    my $a = get_strong(1);
    $a =~ s/[^[:graph:]]//g;
    $out .= $a;
}
print $out;

Python The language Python includes a SystemRandom class that obtains cryptographic grade random bits from /dev/urandom on a Unix-like system, including Linux and macOS, while on Windows it uses CryptGenRandom. Here is a simple Python script that demonstrates the use of this class:

#!/usr/bin/env python3
import random, string

myrg = random.SystemRandom()
length = 10
alphabet = string.ascii_letters + string.digits  # a-z A-Z 0-9
password = "".join(myrg.choice(alphabet) for _ in range(length))
print(password)

PHP A PHP program can open and read from /dev/urandom, if available, or invoke the Microsoft utilities. A third option, if OpenSSL is available, is to employ the function openssl_random_pseudo_bytes. Mechanical methods Yet another method is to use physical devices such as dice to generate the randomness. One simple way to do this uses a 6 by 6 table of characters. The first die roll selects a row in the table and the second a column. So, for example, a roll of 2 followed by a roll of 4 would select the letter "j" from the fractionation table below. To generate upper/lower case characters or some symbols a coin flip can be used, heads capital, tails lower case. If a digit was selected in the dice rolls, a heads coin flip might select the symbol above it on a standard keyboard, such as the '$' above the '4' instead of '4'.

        1   2   3   4   5   6
   1    a   b   c   d   e   f
   2    g   h   i   j   k   l
   3    m   n   o   p   q   r
   4    s   t   u   v   w   x
   5    y   z   0   1   2   3
   6    4   5   6   7   8   9

Type and strength of password generated Random password generators normally output a string of symbols of specified length.
These can be individual characters from some character set, syllables designed to form pronounceable passwords, or words from some word list to form a passphrase. The program can be customized to ensure the resulting password complies with the local password policy, say by always producing a mix of letters, numbers and special characters. Such policies typically reduce strength slightly below the formula that follows, because symbols are no longer independently produced. The password strength of a random password against a particular attack (brute-force search) can be calculated by computing the information entropy of the random process that produced it. If each symbol in the password is produced independently and with uniform probability, the entropy in bits is given by the formula H = L × log2(N), where N is the number of possible symbols and L is the number of symbols in the password. The function log2 is the base-2 logarithm. H is typically measured in bits.

Entropy per symbol for different symbol sets:
  Symbol set                                        Symbol count N   Entropy per symbol H
  Arabic numerals (0–9) (e.g. PIN)                  10               3.32 bits
  Hexadecimal numerals (0–9, A–F) (e.g. WEP key)    16               4.00 bits
  Case insensitive Latin alphabet (a–z or A–Z)      26               4.70 bits
  Case insensitive alphanumeric (a–z or A–Z, 0–9)   36               5.17 bits
  Case sensitive Latin alphabet (a–z, A–Z)          52               5.70 bits
  Case sensitive alphanumeric (a–z, A–Z, 0–9)       62               5.95 bits
  All ASCII printable characters                    94               6.55 bits
  Diceware word list                                7776             12.9 bits

For example, a 12-character password drawn uniformly from all 94 printable ASCII characters has about 12 × 6.55 ≈ 79 bits of entropy. Any password generator is limited by the state space of the pseudo-random number generator used, if it is based on one. Thus a password generated using a 32-bit generator is limited to 32 bits of entropy, regardless of the number of characters the password contains. Note, however, that a different type of attack might succeed against a password evaluated as 'very strong' by the above calculation. Password generator programs and websites A large number of password generator programs and websites are available on the Internet. Their quality varies and can be hard to assess if there is no clear description of the source of randomness that is used and if source code is not provided to allow claims to be checked. Furthermore, and probably most importantly, transmitting candidate passwords over the Internet raises obvious security concerns, particularly if the connection to the password generation site's program is not properly secured or if the site is compromised in some way. Without a secure channel, it is not possible to prevent eavesdropping, especially over public networks such as the Internet. A possible solution to this issue is to generate the password using a client-side programming language such as JavaScript. The advantage of this approach is that the generated password stays in the client computer and is not transmitted to or from an external server. See also Cryptographically secure pseudorandom number generator Diceware Hardware random number generator Key size Password length parameter Password manager References External links Cryptographically Secure Random number on Windows without using CryptoAPI from MSDN RFC 4086 on Randomness Recommendations for Security (Replaces earlier RFC 1750.) Password authentication Applications of randomness Cryptographic algorithms
13999263
https://en.wikipedia.org/wiki/Packet%20injection
Packet injection
Packet injection (also known as forging packets or spoofing packets) in computer networking, is the process of interfering with an established network connection by means of constructing packets to appear as if they are part of the normal communication stream. The packet injection process allows an unknown third party to disrupt or intercept packets from the consenting parties that are communicating, which can lead to degradation or blockage of users' ability to utilize certain network services or protocols. Packet injection is commonly used in man-in-the-middle attacks and denial-of-service attacks. Capabilities By utilizing raw sockets, NDIS function calls, or direct access to a network adapter kernel mode driver, arbitrary packets can be constructed and injected into a computer network. These arbitrary packets can be constructed from any type of packet protocol (ICMP, TCP, UDP, and others) since there is full control over the packet header while the packet is being assembled. General procedure Create a raw socket Create an Ethernet header in memory Create an IP header in memory Create a TCP header or UDP header in memory Create the injected data in memory Assemble (concatenate) the headers and data together to form an injection packet Compute the correct IP and TCP or UDP packet checksums Send the packet to the raw socket Uses Packet injection has been used for: Disrupting certain services (file sharing or HTTP) by Internet service providers and wireless access points Compromising wireless access points and circumventing their security Exploiting certain functionality in online games Determining the presence of internet censorship Allows for custom packet designers to test their custom packets by directly placing them onto a computer network Simulation of specific network traffic and scenarios Testing of network firewalls and intrusion detection systems Computer network auditing and troubleshooting computer network related issues Detecting packet injection Through the process of running a packet analyzer or packet sniffer on both network service access points trying to establish communication, the results can be compared. If point A has no record of sending certain packets that show up in the log at point B, and vice versa, then the packet log inconsistencies show that those packets have been forged and injected by an intermediary access point. Usually TCP resets are sent to both access points to disrupt communication. Software lorcon, part of Airpwn KisMAC pcap Winsock CommView for WiFi Packet Generator Scapy Preinstalled software on Kali Linux (BackTrack was the predecessor) NetHunter (Kali Linux for Android) HexInject See also Packet capture Packet generation model Raw socket Packet crafting Packet sniffer External links Packet Injection using raw sockets References Packets (information technology)
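As a concrete illustration of the general procedure above, the following sketch uses Scapy (listed under Software) to build and inject a single TCP packet. The destination address, ports and payload are placeholder values for a test network; Scapy fills in the checksums when the packet is serialized, and sending raw packets normally requires administrator or root privileges and should only be done on networks you control.

# Sketch of the general procedure using Scapy: build the headers and payload
# in memory, then hand the assembled packet to a raw socket.
from scapy.all import IP, TCP, Raw, send

packet = (
    IP(dst="192.0.2.10")                 # network-layer header (test address)
    / TCP(dport=80, sport=40000,         # transport-layer header
          flags="S", seq=1000)
    / Raw(load=b"example payload")       # injected data
)

# Checksums are computed automatically on serialization, and send() writes
# the finished packet to a raw socket.
send(packet, verbose=False)

send() injects at layer 3; crafting the Ethernet header as well (the first header in the procedure) would use Scapy's Ether() layer together with sendp() instead.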
10914360
https://en.wikipedia.org/wiki/Privacy%20in%20file%20sharing%20networks
Privacy in file sharing networks
Peer-to-peer file sharing (P2P) systems like Gnutella, KaZaA, and eDonkey/eMule have become extremely popular in recent years, with an estimated user population in the millions. An academic research paper analyzed the Gnutella and eMule protocols and found weaknesses in them; many of the issues found in these networks are fundamental and probably common to other P2P networks. Users of file sharing networks, such as eMule and Gnutella, are subject to monitoring of their activity. Clients may be tracked by IP address, DNS name, the software version they use, the files they share, the queries they initiate, and the queries they answer. Clients may also share their private files on the network without notice due to inappropriate settings. Much is known about the network structure, routing schemes, performance load and fault tolerance of P2P systems in general. Perhaps surprisingly, the eMule protocol does not provide much privacy to its users, even though it is a P2P protocol that is supposed to be decentralized. The Gnutella and eMule protocols The eMule protocol eMule is one of the clients which implements the eDonkey network. The eMule protocol consists of more than 75 types of messages. When an eMule client connects to the network, it first gets a list of known eMule servers, which can be obtained from the Internet. Despite the fact that there are millions of eMule clients, there are only a small number of servers. The client connects to a server with a TCP connection that stays open as long as the client is connected to the network. Upon connecting, the client sends a list of its shared files to the server. From this list the server builds a database of the files that reside on this client. The server also returns a list of other known servers. The server returns an ID to the client, which is a unique client identifier within the system. The server can only generate query replies to clients which are directly connected to it. The download is done by dividing the file into parts and asking each client for a part. The Gnutella protocol Gnutella protocol v0.4 In Gnutella protocol V0.4 all the nodes are identical, and every node may choose to connect to every other. The Gnutella protocol consists of five message types: query, for file search. Query messages use a flooding mechanism, i.e. each node that receives a query forwards it on all of its adjacent graph node links. A node that receives a query and has the appropriate file replies with a query hit message. A hop count field in the header limits the message lifetime. Ping and pong messages are used for detecting new nodes that can be linked to. The actual file download is performed by opening a TCP connection and using the HTTP GET mechanism. Gnutella protocol v0.6 Gnutella protocol V0.6 includes several modifications: A node has one of two operational modes: "leaf node" or "ultrapeer". Initially each node starts in leaf node mode, in which it can only connect to ultrapeers. The leaf nodes send queries to an ultrapeer; the ultrapeer forwards the query and waits for the replies. When a node has enough bandwidth and uptime, it may become an ultrapeer. Ultrapeers periodically send their leaf nodes a request to send a list of the files they share. If a query arrives with a search string that matches one of the files in the leaves, the ultrapeer replies, pointing to the specific leaf.
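The flooding mechanism and hop count limit described above can be illustrated with a toy simulation. The overlay topology and TTL value below are invented for the example; real Gnutella messages also carry a GUID so that nodes can drop duplicates, which the 'seen' set stands in for here.

# Toy simulation of Gnutella-style query flooding (protocol v0.4).
from collections import deque

NEIGHBOURS = {            # adjacency list of an imaginary overlay network
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def flood_query(origin, ttl=3):
    """Flood a query; each forward decrements TTL and increments the hop count."""
    seen = {origin}
    # The originator sends the message with a hop count of 0.
    queue = deque((n, ttl, 0) for n in NEIGHBOURS[origin])
    while queue:
        node, msg_ttl, msg_hops = queue.popleft()
        if node in seen:
            continue                      # duplicate of a query already handled
        seen.add(node)
        # A receiver that sees hops == 0 knows the sending neighbour
        # originated the query.
        print(f"{node} received query: ttl={msg_ttl}, hops={msg_hops}")
        if msg_ttl > 1:
            queue.extend((n, msg_ttl - 1, msg_hops + 1) for n in NEIGHBOURS[node])

flood_query("A")

The hop count each receiver observes is also what makes the tracking described in the next section possible: a message arriving with a hop count of zero must have been originated by the neighbour that sent it.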
Tracking initiators and responders In version 0.4 of the Gnutella protocol, an ultrapeer which receives a message from a leaf node (a message with hop count zero) knows for sure that the message originated from that leaf node. In version 0.6 of the protocol, if an ultrapeer receives a message from an ultrapeer with hop count zero, then it knows that the message originated from that ultrapeer or from one of its leaves (the average number of leaf nodes connected to an ultrapeer is 200). Tracking a single node Many Gnutella clients have an HTTP monitor feature. This feature allows any node that sends an empty HTTP request to receive information about the node in response. Research shows that a simple crawler connected to the Gnutella network can get, from an initial entry point, a list of IP addresses which are connected to that entry point. Then the crawler can continue to inquire for other IP addresses. An academic research team performed the following experiment: at NYU, a regular Gnucleus software client was connected to the Gnutella network as a leaf node, with a distinctive listening TCP port of 44121. At the Hebrew University, Jerusalem, Israel, a crawler ran looking for a client listening on port 44121. In less than 15 minutes the crawler found the IP address of the Gnucleus client at NYU with the unique port. IP address harvesting If a user has connected to the Gnutella network within, say, the last 24 hours, that user's IP address can be easily harvested by hackers, since the HTTP monitoring feature can collect about 300,000 unique addresses within 10 hours. Tracking nodes by GUID creation A globally unique identifier (GUID) is a 16-byte field in the Gnutella message header which uniquely identifies every Gnutella message. The protocol does not specify how to generate the GUID. Gnucleus on Windows uses the Ethernet MAC address as the GUID's 6 lower bytes. Therefore, Windows clients reveal their MAC address when sending queries. In the JTella 0.7 client software the GUID is created using Java's random number generator without initialization. Therefore, in each session, the client creates a sequence of queries with the same repeating IDs. Over time, a correlation between the user's queries can be found. Collecting miscellaneous information on users The monitoring facility of Gnutella reveals an abundance of valuable information on its users. It is possible to collect information about the software vendor and version that clients use. Other statistical information about the client is available as well: capacity, uptime, local files etc. In Gnutella V0.6, information about client software can be collected even if the client does not support HTTP monitoring: the information is found in the first two messages of the connection handshake. Tracking users by partial information Some Gnutella users have a small look-alike set, which makes it easier to track them from this very partial information. Tracking users by queries An academic research team performed the following experiment: the team ran five Gnutella nodes as ultrapeers (in order to listen to other nodes' queries), and was able to reveal about 6% of the queries. Usage of hash functions SHA-1 hashes refer to the SHA-1 of files, not of search strings. Half of the search queries are strings and half of them are the output of a hash function (SHA-1) applied to the string.
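The privacy value of hashing search strings is limited, as the dictionary attack described in the next paragraph shows. The sketch below illustrates the idea; the word list and the exact encoding of the hashed query are assumptions made for the example, not details of any particular client.

# Why hashed search strings leak: a monitoring ultrapeer can precompute the
# SHA-1 of common search terms and look observed hashes up in that dictionary.
import hashlib

common_terms = ["madonna", "star wars", "linux iso", "harry potter"]

# Precompute the dictionary: hash -> original search string.
dictionary = {
    hashlib.sha1(term.encode("utf-8")).hexdigest(): term
    for term in common_terms
}

def reveal(observed_hash):
    """Return the original query if the hash matches a precomputed entry."""
    return dictionary.get(observed_hash, "<unknown query>")

# A hashed query observed on the wire is exposed by a simple lookup.
observed = hashlib.sha1(b"star wars").hexdigest()
print(reveal(observed))   # -> star wars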
Although the use of a hash function is intended to improve privacy, academic research showed that the query content can be exposed easily by a dictionary attack: collaborating ultrapeers can gradually collect common search strings, calculate their hash values and store them in a dictionary. When a hashed query arrives, each collaborating ultrapeer can check for matches in the dictionary and expose the original string accordingly. Measures A common countermeasure is concealing a user's IP address when downloading or uploading content by using anonymous networks, such as I2P - The Anonymous Network. There is also data encryption and the use of indirect connections (mix networks) to exchange data between peers. Thus all traffic is anonymized and encrypted. Anonymity and safety come at the price of much lower speeds, and because these are internal networks there is currently still less content. This may change as more users join. See also Gnutella2, a reworked network based on Gnutella Bitzi, an open content file catalog integrated with some Gnutella clients Torrent poisoning References Further reading A Quantitative Analysis of the Gnutella Network Traffic - Zeinalipour-Yazti, Folias - 2002 Crawling Gnutella: Lessons Learned - Deschenes, Weber, Davison - 2004 Security Aspects of Napster and Gnutella Steven M. Bellovin 2001 Firewalls and Internet Security: Repelling the Wily Hacker, Second Edition Daswani, Neil; Garcia-Molina, Hector. Query-Flood DoS Attacks in Gnutella eMule Protocol Specification by Danny Bickson and Yoram Kulbak from HUJI. External links eMule project Official website eMule on SourceForge (SourceForge) Contains archives of past versions of eMule List of allowed eMule-Mods File sharing File sharing networks Gnutella Internet privacy
235110
https://en.wikipedia.org/wiki/Multilayer%20switch
Multilayer switch
A multilayer switch (MLS) is a computer networking device that switches on OSI layer 2 like an ordinary network switch and provides extra functions on higher OSI layers. The MLS was invented by engineers at Digital Equipment Corporation. Switching technologies are crucial to network design, as they allow traffic to be sent only where it is needed in most cases, using fast, hardware-based methods. Switching uses different kinds of network switches. A standard switch is known as a layer 2 switch and is commonly found in nearly any LAN. Layer 3 or layer 4 switches require advanced technology (see managed switch) and are more expensive, and thus are usually only found in larger LANs or in special network environments. Multilayer switch Multi-layer switching combines layer 2, 3 and 4 switching technologies and provides high-speed scalability with low latency. Multi-layer switching can move traffic at wire speed and also provide layer 3 routing. There is no performance difference between forwarding at different layers because the routing and switching are all hardware-based: routing decisions are made by a specialized ASIC with the help of content-addressable memory. Multi-layer switching can make routing and switching decisions based on the following: the MAC address in a data link frame; the protocol field in the data link frame; the IP address in the network layer header; the protocol field in the network layer header; and port numbers in the transport layer header. MLSs implement QoS in hardware. A multilayer switch can prioritize packets by the 6-bit differentiated services code point (DSCP). These 6 bits were originally used for type of service. The following four mappings are normally available in an MLS: from OSI layer 2, 3 or 4 to IP DSCP (for IP packets) or IEEE 802.1p; from IEEE 802.1p to IP DSCP; from IP DSCP to IEEE 802.1p; and from VLAN IEEE 802.1p to port egress queue. MLSs are also able to route IP traffic between VLANs like a common router. The routing is normally as quick as switching (at wire speed). Layer-2 switching Layer-2 switching uses the MAC address of the host's network interface cards (NICs) to decide where to forward frames. Layer 2 switching is hardware-based, which means switches use application-specific integrated circuits (ASICs) to build and maintain the forwarding information base and to perform packet forwarding at wire speed. One way to think of a layer-2 switch is as a multiport bridge. Layer-2 switching is highly efficient because there is no modification to the frame required. Encapsulation of the packet changes only when the data packet passes through dissimilar media (such as from Ethernet to FDDI). Layer-2 switching is used for workgroup connectivity and network segmentation (breaking up collision domains). This allows a flatter network design with more network segments than traditional networks joined by repeater hubs and routers. Layer-2 switches have the same limitations as bridges. Bridges break up collision domains, but the network remains one large broadcast domain, which can cause performance issues and limits the size of a network. Broadcasts and multicasts, along with the slow convergence of spanning tree, can cause major problems as the network grows. Because of these problems, layer-2 switches cannot completely replace routers. Bridges are good if a network is designed by the 80/20 rule: users spend 80 percent of their time on their local segment. Layer-3 switching A layer-3 switch can perform some or all of the functions normally performed by a router.
Most network switches, however, are limited to supporting a single type of physical network, typically Ethernet, whereas a router may support different kinds of physical networks on different ports. Layer-3 switching is solely based on (destination) IP address stored in the header of IP datagram (layer-4 switching may use other information in the header). The difference between a layer-3 switch and a router is the way the device is making the routing decision. Traditionally, routers use microprocessors to make forwarding decisions in software, while the switch performs only hardware-based packet switching (by specialized ASIC with the help of content-addressable memory). However, many routers now also have advanced hardware functions to assist with forwarding. The main advantage of layer-3 switches is the potential for lower network latency as a packet can be routed without making extra network hops to a router. For example, connecting two distinct segments (e.g. VLANs) with a router to a standard layer-2 switch requires passing the frame to the switch (first L2 hop), then to the router (second L2 hop) where the packet inside the frame is routed (L3 hop) and then passed back to the switch (third L2 hop). A layer-3 switch accomplishes the same task without the need for a router (and therefore additional hops) by making the routing decision itself, i.e. the packet is routed to another subnet and switched to the destination network port simultaneously. Because many layer-3 switches offer the same functionality as traditional routers they can be used as cheaper, lower latency replacements in some networks. Layer 3 switches can perform the following actions that can also be performed by routers: determine paths based on logical addressing check and recompute layer-3 header checksums examine and update time to live (TTL) field process and respond to any option information update Simple Network Management Protocol (SNMP) managers with Management Information Base (MIB) information The benefits of layer 3 switching include the following: fast hardware-based packet forwarding with low latency lower per-port cost compared to pure routers flow accounting Quality of service (QoS) IEEE has developed hierarchical terminology that is useful in describing forwarding and switching processes. Network devices without the capability to forward packets between subnetworks are called end systems (ESs, singular ES), whereas network devices with these capabilities are called intermediate systems (ISs). ISs are further divided into those that communicate only within their routing domain (intradomain IS) and those that communicate both within and between routing domains (interdomains IS). A routing domain is generally considered as portion of an internetwork under common administrative authority and is regulated by a particular set of administrative guidelines. Routing domains are also called autonomous systems. A common layer-3 capability is an awareness of IP multicast through IGMP snooping. With this awareness, a layer-3 switch can increase efficiency by delivering the traffic of a multicast group only to ports where the attached device has signaled that it wants to listen to that group. Layer-3 switches typically support IP routing between VLANs configured on the switch. Some layer-3 switches support the routing protocols that routers use to exchange information about routes between networks. 
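The layer-3 forwarding decision described above amounts to a longest-prefix-match lookup against a forwarding table. The toy sketch below shows the logic only; the prefixes, next hops and port names are invented, and a real multilayer switch performs this lookup in hardware (ASICs and content-addressable memory) rather than in software.

# Toy longest-prefix-match lookup illustrating a layer-3 forwarding decision.
import ipaddress

FORWARDING_TABLE = [
    (ipaddress.ip_network("10.1.0.0/16"), "vlan10-gateway"),
    (ipaddress.ip_network("10.1.20.0/24"), "port-7"),       # more specific route
    (ipaddress.ip_network("0.0.0.0/0"), "uplink-router"),   # default route
]

def next_hop(destination: str) -> str:
    """Pick the matching entry with the longest prefix (most specific route)."""
    dst = ipaddress.ip_address(destination)
    matches = [(net.prefixlen, hop) for net, hop in FORWARDING_TABLE if dst in net]
    return max(matches)[1]          # longest prefix wins

print(next_hop("10.1.20.5"))   # -> port-7 (the /24 beats the /16)
print(next_hop("192.0.2.1"))   # -> uplink-router (default route)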
Layer 4 switching Layer 4 switching means hardware-based layer 3 switching technology that can also consider the type of network traffic (for example, distinguishing between UDP and TCP). Layer 4 switching provides additional datagram inspection by reading the port numbers found in the transport layer header to make routing decisions (i.e. the ports used by HTTP, FTP and VoIP). These port numbers are found in RFC 1700 and reference the upper-layer protocol, program, or application. Using layer-4 switching, the network administrator can configure a layer-4 switch to prioritize data traffic by application. Layer-4 information can also be used to help make routing decisions. For example, extended access lists can filter packets based on layer-4 port numbers. Another example is accounting information gathered by open standards using sFlow. A layer-4 switch can use information in the transport-layer protocols to make forwarding decisions. Principally this refers to an ability to use source and destination port numbers in TCP and UDP communications to allow, block and prioritize communications. Layer 4–7 switch, web switch, or content switch Some switches can use packet information up to OSI layer 7; these may be called layer 4–7 switches, content switches, web switches or application switches. Content switches are typically used for load balancing among groups of servers. Load balancing can be performed on HTTP, HTTPS, VPN, or any TCP/IP traffic using a specific port. Load balancing often involves destination network address translation so that the client of the load-balanced service is not fully aware of which server is handling its requests. Some layer 4–7 switches can perform network address translation (NAT) at wire speed. Content switches can often be used to perform standard operations such as SSL encryption and decryption to reduce the load on the servers receiving the traffic, or to centralize the management of digital certificates. Layer 7 switching is a technology used in a content delivery network. Some applications require that repeated requests from a client are directed at the same application server. Since the client isn't generally aware of which server it spoke to earlier, content switches define a notion of stickiness. For example, requests from the same source IP address are directed to the same application server each time (a small sketch of source-address stickiness appears at the end of this section). Stickiness can also be based on SSL IDs, and some content switches can use cookies to provide this functionality. Layer 4 load balancer A layer-4 load-balancing router operates on the transport layer and makes decisions on where to send the packets. Modern load balancing routers can use different rules to make decisions on where to route traffic. This can be based on least load, or fastest response times, or simply balancing requests out to multiple destinations providing the same services. This is also a redundancy method, so if one machine is not up, the router will not send traffic to it. The router may also have NAT capability with port and transaction awareness and perform a form of port translation for sending incoming packets to one or more machines that are hidden behind a single IP address. Layer 7 Layer-7 switches may distribute the load based on uniform resource locators (URLs), or by using some installation-specific technique to recognize application-level transactions. A layer-7 switch may include a web cache and participate in a content delivery network (CDN).
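Source-address stickiness of the kind described above can be sketched as a hash of the client address onto a fixed server pool, so the balancer needs no per-client state. The backend names below are hypothetical.

# Sketch of source-address "stickiness" for a layer-4 load balancer: the same
# client IP always maps to the same backend, while different clients spread
# across the pool.
import hashlib

BACKENDS = ["app-server-1", "app-server-2", "app-server-3"]

def pick_backend(source_ip: str) -> str:
    """Deterministically map a client address onto one backend."""
    digest = hashlib.sha256(source_ip.encode()).digest()
    slot = int.from_bytes(digest[:4], "big") % len(BACKENDS)
    return BACKENDS[slot]

print(pick_backend("203.0.113.7"))     # same client -> same server every time
print(pick_backend("203.0.113.7"))
print(pick_backend("198.51.100.23"))   # other clients land elsewhere in the pool

Hashing the source port as well would spread a single busy client across several servers, at the cost of losing stickiness for that client.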
See also Application delivery controller Bridge router Multiprotocol Label Switching (MPLS) Residential gateway References External links What is the difference between a Layer-3 switch and a router? Multilayer Switching Networking hardware
4927855
https://en.wikipedia.org/wiki/Software%20peer%20review
Software peer review
In software development, peer review is a type of software review in which a work product (document, code, or other) is examined by the author's colleagues in order to evaluate the work product's technical content and quality. Purpose The purpose of a peer review is to provide "a disciplined engineering practice for detecting and correcting defects in software artifacts, and preventing their leakage into field operations", according to the Capability Maturity Model. When performed as part of each software development process activity, peer reviews identify problems that can be fixed early in the lifecycle. That is to say, a requirements problem identified by a peer review during the requirements analysis activity is cheaper and easier to fix than one found during the software architecture or software testing activities. The National Software Quality Experiment, evaluating the effectiveness of peer reviews, finds "a favorable return on investment for software inspections; savings exceeds costs by 4 to 1". To state it another way, it is four times more costly, on average, to identify and fix a software problem later. Distinction from other types of software review Peer reviews are distinct from management reviews, which are conducted by management representatives rather than by colleagues, and for management and control purposes rather than for technical evaluation. They are also distinct from software audit reviews, which are conducted by personnel external to the project to evaluate compliance with specifications, standards, contractual agreements, or other criteria. Review processes Peer review processes exist across a spectrum of formality, with relatively unstructured activities such as "buddy checking" towards one end of the spectrum, and more formal approaches such as walkthroughs, technical peer reviews, and software inspections at the other. The IEEE defines formal structures, roles, and processes for each of the last three. Management representatives are typically not involved in the conduct of a peer review except when included because of specific technical expertise or when the work product under review is a management-level document. This is especially true of line managers of other participants in the review. Processes for formal peer reviews, such as software inspections, define specific roles for each participant, quantify stages with entry/exit criteria, and capture software metrics on the peer review process. "Open source" reviews In the free / open source community, something like peer review has taken place in the engineering and evaluation of computer software. In this context, the rationale for peer review has its equivalent in Linus's law, often phrased "Given enough eyeballs, all bugs are shallow", meaning "If there are enough reviewers, all problems are easy to solve." Eric S. Raymond has written influentially about peer review in software development. References Software review Peer review
1030104
https://en.wikipedia.org/wiki/Darknet
Darknet
A dark net or darknet is an overlay network within the Internet that can only be accessed with specific software, configurations, or authorization, and often uses a unique customized communication protocol. Two typical darknet types are social networks (usually used for file hosting with a peer-to-peer connection), and anonymity proxy networks such as Tor via an anonymized series of connections. The term "darknet" was popularized by major news outlets to associate with Tor Onion services, when the infamous drug bazaar Silk Road used it, despite the terminology being unofficial. Technology such as Tor, I2P, and Freenet was intended to defend digital rights by providing security, anonymity, or censorship resistance and is used for both illegal and legitimate reasons. Anonymous communication between whistle-blowers, activists, journalists and news organisations is also facilitated by darknets through use of applications such as SecureDrop. Terminology The term originally described computers on ARPANET that were hidden, programmed to receive messages but not respond to or acknowledge anything, thus remaining invisible, in the dark. An account detailed how the first online transaction related to drugs transpired in 1971 when students of the Massachusetts Institute of Technology and Stanford University traded marijuana using ARPANET accounts in the former's Artificial Intelligence Laboratory. Since ARPANET, the usage of dark net has expanded to include friend-to-friend networks (usually used for file sharing with a peer-to-peer connection) and privacy networks such as Tor. The reciprocal term for a darknet is a clearnet or the surface web when referring to content indexable by search engines. The term "darknet" is often used interchangeably with "dark web" because of the quantity of hidden services on Tor's darknet. Additionally, the term is often inaccurately used interchangeably with the deep web because of Tor's history as a platform that could not be search-indexed. Mixing uses of both these terms has been described as inaccurate, with some commentators recommending the terms be used in distinct fashions. Origins "Darknet" was coined in the 1970s to designate networks isolated from ARPANET (the government-founded military/academical network which evolved into the Internet), for security purposes. Darknet addresses could receive data from ARPANET but did not appear in the network lists and would not answer pings or other inquiries. The term gained public acceptance following publication of "The Darknet and the Future of Content Distribution", a 2002 paper by Peter Biddle, Paul England, Marcus Peinado, and Bryan Willman, four employees of Microsoft who argued the presence of the darknet was the primary hindrance to the development of workable digital rights management (DRM) technologies and made copyright infringement inevitable. This paper described "darknet" more generally as any type of parallel network that is encrypted or requires a specific protocol to allow a user to connect to it. Sub-cultures Journalist J. D. Lasica, in his 2005 book Darknet: Hollywood's War Against the Digital Generation, described the darknet's reach encompassing file sharing networks. Subsequently, in 2014, journalist Jamie Bartlett in his book The Dark Net used the term to describe a range of underground and emergent subcultures, including camgirls, cryptoanarchists, darknet drug markets, self harm communities, social media racists, and transhumanists. 
Uses Darknets in general may be used for various reasons, such as: To better protect the privacy rights of citizens from targeted and mass surveillance Computer crime (cracking, file corruption, etc.) Protecting dissidents from political reprisal File sharing (warez, personal files, pornography, confidential files, illegal or counterfeit software, etc.) Sale of restricted goods on darknet markets Whistleblowing and news leaks Purchase or sale of illicit or illegal goods or services Circumventing network censorship and content-filtering systems, or bypassing restrictive firewall policies Software All darknets require specific software installed or network configurations made to access them, such as Tor, which can be accessed via a customized browser from Vidalia (aka the Tor browser bundle), or alternatively via a proxy configured to perform the same function. Active Tor is the most popular instance of a darknet, often mistakenly equated with darknet in general. Alphabetical list: anoNet is a decentralized friend-to-friend network built using VPN and software BGP routers. Bisq is the only true peer-to-peer fiat to cryptocurrency exchange. BitTorrent is a high performance semi-decentralized peer-to-peer communication protocol. Decentralized network 42 (not for anonymity but research purposes). Freenet is a popular DHT file hosting darknet platform. It supports friend-to-friend and opennet modes. GNUnet can be utilized as a darknet if the "F2F (network) topology" option is enabled. I2P (Invisible Internet Project) is an overlay proxy network that features hidden services called "Eepsites". IPFS has a browser extension that may backup popular webpages. OpenBazaar is an open source project developing a protocol for e-commerce transactions in a fully decentralized marketplace RetroShare is a friend-to-friend messenger communication and file transfer platform. It may be used as a darknet if DHT and Discovery features are disabled. Riffle is a government, client-server darknet system that simultaneously provides secure anonymity (as long as at least one server remains uncompromised), efficient computation, and minimal bandwidth burden. Secure Scuttlebutt is a peer-to peer communication protocol, mesh network, and self-hosted social media ecosystem Syndie is software used to publish distributed forums over the anonymous networks of I2P, Tor and Freenet. Tor (The onion router) is an anonymity network that also features a darknet – via its onion services. Tribler is an anonymous BitTorrent client with built in search engine. Zeronet is a DHT Web 2.0 hosting with Tor users. No longer supported StealthNet (discontinued) WASTE Defunct AllPeers Turtle F2F See also Crypto-anarchism Cryptocurrency Darknet market Dark web Deep web Private peer-to-peer (P2P) Sneakernet Virtual private network (VPN) References File sharing Virtual private networks Darknet markets Cyberspace Internet culture Internet terminology Dark web Network architecture Distributed computing architecture 1970s neologisms Internet architecture
14934336
https://en.wikipedia.org/wiki/2081%3A%20A%20Hopeful%20View%20of%20the%20Human%20Future
2081: A Hopeful View of the Human Future
2081: A Hopeful View of the Human Future is a 1981 book by Princeton physicist Gerard K. O'Neill. The book is an attempt to predict the social and technological state of humanity 100 years in the future. O'Neill's positive attitude towards both technology and human potential distinguished this book from the gloomy predictions of a Malthusian catastrophe made by contemporary scientists. Paul R. Ehrlich wrote in 1968 in The Population Bomb, "in the 1970s and 1980s hundreds of millions of people will starve to death". The Club of Rome's 1972 Limits to Growth predicted a catastrophic end to the Industrial Revolution within 100 years from resource exhaustion and pollution. O'Neill's contrary view had two main components. First, he analyzed the previous attempts to predict the future of society, including many catastrophes that had not materialized. Second, he extrapolated historical trends under the assumption that the obstacles identified by other authors would be overcome by five technological "Drivers of Change". He extrapolated an average American family income in 2081 of $1 million per year. Two developments based on his own research were responsible for much of his optimism. In The High Frontier: Human Colonies in Space, O'Neill described solar power satellites that provide unlimited clean energy, making it far easier for humanity to reach and exceed present developed-world living standards. Overpopulation pressures would be relieved as billions of people eventually emigrate to colonies in free space. These colonies would offer an Earth-like environment but with vastly higher productivity for industry and agriculture. These colonies and satellites would be constructed from asteroid or lunar materials launched into the desired orbits cheaply by the mass drivers O'Neill's group developed. Part I: The Art of Prophecy Previous futurist authors he cites: Edward Bellamy J.D. Bernal McGeorge Bundy Arthur C. Clarke George Darwin J. B. S. Haldane Robert Heilbroner Aldous Huxley Rudyard Kipling Thomas More George Orwell George Thompson Konstantin Tsiolkovski Jules Verne H.G. Wells Eugene Zamiatin Clarke Arthur C. Clarke's Profiles of the Future included a long list of predictions, many of which O'Neill endorsed. Two of the maxims that O'Neill quotes seem to sum up O'Neill's attitude as well. Part II: The Drivers of Change Sections are included on the five key "Drivers of Change" believed by O'Neill to be the focus of future development: Automation Space Colonies Communications Computers Energy O'Neill applied basic physics to understand the limits of possible change, using the history of the technology to extrapolate likely progress. He applied the history of computing to reason about how people and institutions will shape and be shaped by the likely changes. He predicted that future computers must run at very low voltage because of heat. The main basis of his technology extrapolation for computers is Moore's Law, one of the greatest successes of trend estimation in predicting human progress. He also predicted the social aspects of the future of computers. He identified computers as the most certain of his five "drivers of change", because their adoption could be driven primarily by individual or local decisions, while the other four, such as space colonies, depended on large-scale decision-making. He observed the success of minicomputers, calculators, and the first home computers, and predicted that every home would have a computer in a hundred years.
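O'Neill's computer predictions rest on the kind of trend extrapolation sketched below. The 1981 baseline transistor count and the two-year doubling period are illustrative assumptions for the example, not figures taken from the book.

# The style of trend extrapolation described above, applied to Moore's law.
def extrapolate(baseline, start_year, target_year, doubling_years=2.0):
    """Project a quantity forward assuming a fixed doubling period."""
    doublings = (target_year - start_year) / doubling_years
    return baseline * 2 ** doublings

transistors_1981 = 3.0e4   # rough order of magnitude of a 1981 microprocessor
for year in (2000, 2040, 2081):
    print(year, f"{extrapolate(transistors_1981, 1981, year):.2e}")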
With the aid of speculations by computer pioneers such as John von Neumann and the writers of "tracts" such as Zamyatin's We, O'Neill also predicted that privacy would be under siege from computers in 2081. O'Neill predicted that software engineering issues and the intractability of artificial intelligence problems would require massive programming efforts and very powerful processors to achieve truly usable computers. His prediction was based on the difficulties and failures of computer use he had observed in 1981, including a candid horror story of his own Princeton University library's attempt to computerize its operations. His computers of the future, represented by the robot butler his visitor to Earth encounters in 2081, included speaker-independent speech recognition and natural language processing. O'Neill correctly pointed out the huge difference between computers and human brains, and stated that, while a more human-like artificial brain is a worthy goal, computers will be vastly improved descendants of today's rather than truly intelligent and creative artificial brains. Part III: The World in 2081 This section was written as a series of dispatches home from "Eric C. Rawson", a native of a distant space colony called "Fox Cluster". By analogy with American religious colonists such as the Puritans and Mormons, O'Neill suggests that such a colony might have been founded by a group of pacifists who chose to live about twice as far from the Sun as Pluto in order to avoid involvement in Earth's wars. His calculations indicate that colonies at this distance could have Earth-level sunlight using a mirror the same weight as the colony itself. Eric pays a visit to the Earth of 2081 to take care of family business and explore a world that is nearly as foreign to him as it is to us. After each dispatch, O'Neill added a section that described his reasoning for each situation the visitor described, such as riding a "floater" train going thousands of miles per hour in vacuum, interacting with a household robot or visiting a fully enclosed Pennsylvania city with a tropical climate in midwinter. Each section was written from his perspective as a physicist. For example, his description of "Honolulu, Pennsylvania" included multiple roof layers that could be retracted in good weather. The city enjoyed an artificial tropical climate all year because of internal climate controls and advanced insulation. He also proposed magnetically levitated "floater" trains moving in very-low-pressure tunnels that would replace airplanes on heavily traveled routes. Part IV: Wild Cards This section explores not the most probable outcomes, but "the limits of the possible": how likely some scenarios O'Neill considered less probable are, and what they might mean. These included nuclear annihilation, attaining immortality, and contact with extraterrestrial civilizations. For this last case, he presents a thought experiment about how a hypothetical alien civilization, the "Primans", could explore the galaxy with self-replicating robots, monitoring every planetary system in the Galaxy without betraying their own position, and destroying intelligent life (by building giant mirrors to incinerate the planet) if they felt threatened. This experiment seems to prove that conflict or even surprise contact with an intelligent alien life form—that staple of science fiction—is highly unlikely. 
See also Orbiting skyhooks Prediction Futures studies 2000s in science and technology Technologies discussed Space advocacy Space technology Space colonization Solar power satellite Asteroid mining Space elevator Space manufacturing Space mining Space-based industry Domed city References Bibliography NSS review of 2081 1981 non-fiction books 1981 in the environment Futurology books Environmental non-fiction books Technology books Books about environmentalism Space advocacy Mining the Sky Asteroid mining 2081 American non-fiction books Thought experiments
2223691
https://en.wikipedia.org/wiki/Media%20100
Media 100
Media 100 is a manufacturer of video editing software and non-linear editing systems designed for professional cutting and editing. The editing systems can be used with AJA Video Systems, Blackmagic or Matrox hardware, or as a software-only solution with FireWire support, and run exclusively on Macs. The current software release is Media 100 Suite Version 2.1.8, which runs on macOS 10.14.x (Mojave), 10.13.4 (High Sierra), 10.12 (Sierra), OS X 10.11 (El Capitan), 10.10.x (Yosemite), 10.9.x (Mavericks), 10.8.x (Mountain Lion), 10.7.x (Lion) and Mac OS X 10.6.7 (Snow Leopard). In the past, the editing systems were nearly exclusively based on custom hardware boards (vincent601/P6000/HDX) to be placed into Apple Macintosh computers, but Microsoft Windows-based systems were available as well (iFinish, 844/X). Media 100 was established as a division of Marlboro, Massachusetts-based Data Translation, Inc., and was then spun off as an independent company in 1996. After absorbing or merging with several companies (Terran Interactive, Digital Origin, and Wired, Inc.) it entered bankruptcy proceedings, with its assets and employees acquired by Optibase in March 2004. It is owned by Boris FX, which acquired the company from Optibase in October 2005. Legacy products Media 100 for 68K and PowerPC Macintosh computers with NuBus slots. This system used two cards connected internally by a pair of short ribbon cables, and a breakout box with two cables, one connecting to an external port on each card. The highest software version supported is 2.6.2. "Vincent601", Media 100's first PCI version, released while the original NuBus model was still in production. Media 100i, based on the vincent601 or P6000 PCI boards, latest version: 7.5 for Mac OS 9.x, 8.2.3 for Mac OS X 10.4.x. Media 100 ICE (an accelerator card for rendering certain effect plugins faster on Adobe After Effects, Avid, and Media 100's own systems) iFinish (Windows version of Media 100i, based on the same hardware) Media 100 qx (the i / iFinish without the software, for use with the Macintosh and Windows versions of Adobe Premiere) 844/x (Windows-based real-time editing and compositing system) HDx (Mac-based real-time editing and compositing system with realtime HD up-/downscaling) Cleaner and PowerSuite (an ICE accelerated version of Cleaner) Cleaner was previously known as Media Cleaner Pro and Movie Cleaner Pro and was owned by Terran Interactive. The Cleaner product line was subsequently sold to discreet/Autodesk.
Media 100 HD Suite (Digital-only HDTV and SDTV NLE system) Media 100 HDe (Digital / Analog HDTV and SDTV NLE system) Media 100 SDe (Digital / Analog SDTV NLE system) Media 100 Producer (software-only version of Media 100 HD) Media 100 Producer Suite (software-only version of Media 100 HD with bundled Boris RED 4 plugin for graphics, titles, effects and so forth) Media 100 i Tune-Up (upgrade for the legacy Media 100i SDTV NLE system) Final Effects Complete (a collection of effect plugins for Adobe After Effects) Media 100 Suite Version 1 (can be run with or without hardware; supported hardware: Media 100 HDx, Blackmagic, Matrox and AJA Video Systems; the Universal app runs on PPC as well as Intel machines with Mac OS X 10.5.8 up to 10.6.7) Current products Media 100 Suite Version 2 (4K support, support for the Red Rocket accelerator, new motion editor, dropped support for PPC-based Macs, includes Boris RED 5 for titling/vector graphics) References External links http://www.media100.com/ http://www.media100.de/ Video editing software
36522251
https://en.wikipedia.org/wiki/CRIU
CRIU
Checkpoint/Restore In Userspace (CRIU, pronounced kree-oo) is a software tool for the Linux operating system. Using this tool, it is possible to freeze a running application (or part of it) and checkpoint it to persistent storage as a collection of files. One can then use the files to restore and run the application from the point it was frozen at. The distinctive feature of the CRIU project is that it is mainly implemented in user space, rather than in the kernel. The project is currently under active development, with a monthly release cycle for stable releases. History The initial version of the CRIU software was presented to the Linux developer community by Pavel Emelyanov, the OpenVZ kernel team leader, on 15 July 2011. In September 2011, the project was presented at the Linux Plumbers Conference. Most of the attendees took a positive view of the project, as evidenced by the fact that a number of kernel patches required for implementing it were included in the mainline kernel. Andrew Morton, however, was somewhat skeptical. Use The CRIU tool is being developed as part of the OpenVZ project, with the aim of replacing the in-kernel checkpoint/restore implementation. Though its main focus is supporting the migration of containers, it allows users to checkpoint and restore the current state of running processes and process groups. The tool can currently be used on x86-64 and ARM systems and supports the following features: processes (their hierarchy, PIDs, user and group authenticators (UID, GID, SID, etc.), system capabilities, threads, and running and stopped states), application memory (memory-mapped files and shared memory), open files, pipes and FIFOs, Unix domain sockets, network sockets (including TCP sockets in ESTABLISHED state, see below), System V IPC, timers, signals, terminals, and Linux kernel-specific system calls (inotify, signalfd, eventfd and epoll). Since kernel version 3.11, which was released on September 2, 2013, no kernel patching is required, because all of the required functionality has been merged into the Linux kernel mainline. TCP connection migration One of the initial project goals was to support the migration of TCP connections, the biggest challenge being to suspend and then restore only one side of a connection. This was necessary for performing the live migration of containers (along with all their active network connections) between physical servers, the main scenario of using the checkpoint/restore feature in OpenVZ. To cope with this problem, a new feature, "TCP repair mode", was implemented. The feature was included in version 3.5 of the Linux kernel mainline and provides users with additional means to disassemble and reconstruct TCP sockets without needing to exchange network packets with the opposite side of the connection. Similar projects The following projects provide functionality similar to CRIU: OpenVZ DMTCP BLCR Linux C/R References Further reading Linux software Linux-only free software
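As a rough illustration of the dump/restore workflow described above, the sketch below drives the criu command-line tool from Python. It is only a sketch: it assumes a criu binary on the PATH, root privileges, and a process tree started from a shell (hence the --shell-job flag); the PID and image directory are placeholders.

import subprocess
from pathlib import Path

def checkpoint(pid: int, images_dir: str) -> None:
    """Freeze the process tree rooted at `pid` and write its image files to `images_dir`."""
    Path(images_dir).mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["criu", "dump", "-t", str(pid), "-D", images_dir, "--shell-job"],
        check=True,
    )

def restore(images_dir: str) -> None:
    """Recreate the process tree from the image files in `images_dir`."""
    subprocess.run(
        ["criu", "restore", "-D", images_dir, "--shell-job"],
        check=True,
    )

if __name__ == "__main__":
    checkpoint(1234, "/tmp/ckpt")   # 1234 is a placeholder PID
    restore("/tmp/ckpt")

The equivalent shell invocations are criu dump -t <pid> -D <dir> --shell-job followed by criu restore -D <dir> --shell-job.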
17869570
https://en.wikipedia.org/wiki/Draw%20a%20Secret
Draw a Secret
Draw a Secret (DAS) is a graphical password input scheme developed by Ian Jermyn, Alain Mayer, Fabian Monrose, Michael K. Reiter and Aviel D. Rubin and presented in a paper at the 8th USENIX Security Symposium in August 1999. The scheme replaces alphanumeric password strings with a picture drawn on a grid. Instead of entering an alphanumeric password, this authentication method allows users to authenticate with a set of gestures drawn on a grid. The user's drawing is mapped to a grid on which the order of coordinate pairs used to draw the password is recorded as a sequence. New coordinates are inserted into the recorded "password" sequence when the user ends one stroke (the motion of pressing down on the screen or mouse to begin drawing followed by taking the stylus or mouse off to create a line or shape) and begins another on the grid. Overview In DAS, a password is a picture drawn free-form on a grid of size N x N. Each grid cell is denoted by two-dimensional discrete coordinates (x, y) ∈ [1, N] × [1, N]. A completed drawing, i.e., a secret, is encoded as the ordered sequence of cells that the user crosses whilst constructing the secret. The predominant argument in favor of graphical over alphanumeric passwords is the picture superiority effect, which describes the improved performance of the human mind in recalling images and objects over strings of text. This effect is utilized by DAS, as complex drawings are less difficult for the human mind to memorize than a long string of alphanumeric characters. This allows the user to input stronger and more secure sequences through graphical password input schemes than through conventional text input, with relative ease. Variations Background Draw a Secret (BDAS) This variation on the original DAS scheme is meant to improve both the security of the scheme and the ease of verification by the user. The same grid is used as in the original Draw a Secret, but a background image is shown over the grid. The background image aids in the reconstruction of difficult-to-remember passwords. This is because, when using the original system, the user must not only remember the strokes associated with the password but also the grid cells that the strokes pass through. This may introduce difficulty, as all the grid cells are alike and have no distinguishing features. With BDAS, the user can choose an image with unique features to place over the grid, which aids in correct placement of the drawing. A study done at Newcastle University showed that with a background image, participants tended to construct more complex passwords (e.g. with a greater length or stroke count) than others that had used DAS, though recall rates after a one-week period were almost identical for DAS and BDAS sequences. Rotational Draw a Secret (R-DAS) R-DAS is a variation on the original Draw a Secret system whereby the user is allowed to rotate the drawing grid, either between strokes in the sequence or after the entire sequence has been entered and the "secret" has been drawn. After one rotation is made, any following rotations in the same direction, without a counter-rotation in a different direction between them, are treated as one rotation.
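To make the cell-sequence encoding from the Overview concrete before turning to the R-DAS example below, here is a minimal sketch (not taken from the original paper): it assumes a 5 x 5 grid, strokes given as lists of points in normalized screen coordinates, and the pen-up marker (6) that also appears in the example that follows.

N = 5                     # the grid is N x N; cells are numbered 1..N in each dimension
PEN_UP = (6,)             # marker appended when a stroke ends, as in the example below

def to_cell(x: float, y: float) -> tuple:
    """Map a point in [0, 1) x [0, 1) screen coordinates to its 1-based grid cell."""
    return (min(int(x * N) + 1, N), min(int(y * N) + 1, N))

def encode(strokes) -> list:
    """Encode a drawing (a list of strokes, each a list of (x, y) points) as a cell sequence."""
    secret = []
    for stroke in strokes:
        for x, y in stroke:
            cell = to_cell(x, y)
            if not secret or secret[-1] != cell:   # record each cell once per crossing
                secret.append(cell)
        secret.append(PEN_UP)
    # NOTE: a full implementation would also interpolate between sampled points so that
    # every cell the pen actually crosses is recorded, as the scheme requires.
    return secret

# A stroke along the top row followed by a short vertical stroke:
print(encode([[(0.05, 0.1), (0.3, 0.1), (0.9, 0.1)], [(0.5, 0.1), (0.5, 0.5)]]))
# [(1, 1), (2, 1), (5, 1), (6,), (3, 1), (3, 3), (6,)]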
An example of the added password strength is shown below: If the original password is entered as follows (presented as the sequence of strokes through the grid): (1,1)(2,1)(3,1)(4,1)(5,1)(6)(5,1)(5,2)(5,3)(5,4)(5,5)(6)(1,1)(1,2)(1,3)(1,4)(1,5)(6)(3,1)(3,2)(3,3)(6) With R-DAS, multiple directional changes can be inserted to increase security: (1,1)(2,1)(3,1)(4,1)(5,1)(6) (-90) (5,1)(5,2)(5,3)(5,4)(5,5)(6) (+90) (-45) (1,1)(1,2)(1,3)(1,4)(1,5)(6) (+225) (3,1)(3,2)(3,3)(6) (+180) Security Issues Multiple Accepted Passwords The encoding of a particular secret has a one-to-many relationship with the possible drawings it can represent. This implies that more than one drawing may in fact be accepted as a successful authentication of the user. This is especially true with a small number of cells in the N x N grid. To resolve this issue, more cells can be included in the grid. This makes it more difficult to cross through all of the cells required to fulfill the password sequence. The cost of this added security is that it becomes harder for the legitimate user to reproduce the password: the more cells that are present in the grid, the more accurately the user must draw when entering the password in order to stroke through all of the required cells in the correct order. Graphical Dictionary Attacks Through the use of common "hotspots" or "points of interest" in a grid or background image, a graphical dictionary attack can be initiated to guess users' passwords. Other factors such as similar shapes and objects in the background image also create "click order" vulnerabilities, as these shapes may be clumped together and used in a sequence. These attacks are far more common against the Background variation of Draw a Secret, as it utilizes an image that can be used to exploit the vulnerabilities explained above. A study in 2013 also showed that users tend to go through similar password selection processes across different background images. Shoulder Surfing Attacks This form of attack is initiated by a bystander watching the user enter their password. This attack is present in most input schemes for authentication, but DAS schemes are especially vulnerable as the user's strokes are displayed on the screen for all to see. This is unlike alphanumeric text input, where the characters entered are not actually displayed on screen. Three techniques have been designed for protecting DAS and BDAS systems from shoulder surfing attacks: Decoy Strokes - the use of strokes which are entered simply to confuse potential onlookers; they may be differentiated by colors chosen by the user. Disappearing Strokes - each stroke is removed from the screen after it is entered by the user. Line Snaking - an extension of the disappearing strokes method, where shortly after a stroke is started, the end of the stroke begins to disappear, giving the appearance of a line "snaking" across the screen. Implementations The initial implementation of DAS was on PDAs (personal digital assistants). With the release of Windows 8, Microsoft included the option of switching to a "picture password". This is essentially an implementation of BDAS (as it requires the choice of a background picture) but is limited to a three-gesture sequence to set a password, reducing the actual security that BDAS provides over conventional alphanumeric passwords. References Computer access control Password authentication
52471220
https://en.wikipedia.org/wiki/Dominique%20Guinard
Dominique Guinard
Dominique "Dom" Guinard is the CTO of EVRYTHNG. He is a technologist, entrepreneur and developer with a career dedicated to building the Internet of Things both in the cloud and on embedded Things. He is particularly known for his early contributions to the Web of Things alongside with other researchers such as Vlad Trifa, Erik Wilde and Friedemann Mattern. Guinard is a published researcher, a book author and a recognized expert in Internet of Things technologies Career Guinard studied Computer Science at Université de Fribourg and graduated with a master's degree in computer science with a minor in business administration. During his studies he also worked at and co-founded several startups (Spoker, Dartfish, GMIPSoft), taught computer science and software developed at several private and public schools. Guinard began working on the Internet of Things in 2005 with Sun Microsystems working on RFID applications. He continued studying the field with his a master's thesis at Lancaster University on Ubiquitous Computing. After graduating from university, he went on to get his PhD in Computer Science at ETH Zurich. During his time as a PhD he also worked as a Research Associate for SAP where he met Vlad Trifa. Both focused on the Internet of Things applications at SAP especially looking at the integration of real-world devices such as wireless sensor networks to business processes and enterprise software (e.g., ERPs). The complexity of these integrations at the time lead them to look for simpler integration mechanisms. In 2007 they defined an application layer for the Internet of Things that uses Web standards called the Web of Things and founded the Webofthings.org community to promote the use of Web standards in the IoT. Guinard wrote his Ph.D thesis on the Web of Things, particularly looking at the physical mashups of Things on the Web. His thesis was granted an ETH Medal in 2012 . Towards the end of his Ph.D worked applying the Web of Things concepts to Smart Supply chains and IoT applications in manufacturing environments at the MIT Auto-ID Lab with Professor Sanjay Sarma. In 2011, Guinard co-founded EVRYTHNG together with Vlad Trifa, Niall Murphy and Andy Hobsbawm. The founding idea of EVRYTHNG was to create digital identities and Web APIs for all kinds of objects: from consumer goods to consumer electronics. As such, EVRYTHNG was the first commercial Web of Things platform. Dominique has been the CTO of EVRYTHNG since then, overseeing all the technical aspects of the platform. In 2015, Guinard co-authored the Web Thing Model which was accepted as an official W3C member submission. The Web Thing model is a first attempt at creating a simple Web based standard for the application layer of the Internet of Things. Publications Guinard published a number of scientific articles in journals and conferences covering many aspects of the Internet of Things and the Web of Things. One of his most cited publications is "Towards the Web of Things: Web Mashups for Embedded Devices" which lays the foundations for integrating everyday devices and sensors to the Web. Guinard co-authored a number of books related IoT and in particular "Building the Web of Things". This book was the first to provide an applicable step-by-step guide about how to implement Web-based smart products and applications using Node.js and the Raspberry Pi. References Living people ETH Zurich alumni Swiss computer scientists Internet of things 1981 births
27838
https://en.wikipedia.org/wiki/Sequence
Sequence
In mathematics, a sequence is an enumerated collection of objects in which repetitions are allowed and order matters. Like a set, it contains members (also called elements, or terms). The number of elements (possibly infinite) is called the length of the sequence. Unlike a set, the same elements can appear multiple times at different positions in a sequence, and unlike a set, the order does matter. Formally, a sequence can be defined as a function from natural numbers (the positions of elements in the sequence) to the elements at each position. The notion of a sequence can be generalized to an indexed family, defined as a function from an index set that may not be numbers to another set of elements. For example, (M, A, R, Y) is a sequence of letters with the letter 'M' first and 'Y' last. This sequence differs from (A, R, M, Y). Also, the sequence (1, 1, 2, 3, 5, 8), which contains the number 1 at two different positions, is a valid sequence. Sequences can be finite, as in these examples, or infinite, such as the sequence of all even positive integers (2, 4, 6, ...). The position of an element in a sequence is its rank or index; it is the natural number for which the element is the image. The first element has index 0 or 1, depending on the context or a specific convention. In mathematical analysis, a sequence is often denoted by letters in the form of , and , where the subscript n refers to the nth element of the sequence; for example, the nth element of the Fibonacci sequence is generally denoted as . In computing and computer science, finite sequences are sometimes called strings, words or lists, the different names commonly corresponding to different ways to represent them in computer memory; infinite sequences are called streams. The empty sequence ( ) is included in most notions of sequence, but may be excluded depending on the context. Examples and notation A sequence can be thought of as a list of elements with a particular order. Sequences are useful in a number of mathematical disciplines for studying functions, spaces, and other mathematical structures using the convergence properties of sequences. In particular, sequences are the basis for series, which are important in differential equations and analysis. Sequences are also of interest in their own right, and can be studied as patterns or puzzles, such as in the study of prime numbers. There are a number of ways to denote a sequence, some of which are more useful for specific types of sequences. One way to specify a sequence is to list all its elements. For example, the first four odd numbers form the sequence (1, 3, 5, 7). This notation is used for infinite sequences as well. For instance, the infinite sequence of positive odd integers is written as (1, 3, 5, 7, ...). Because notating sequences with ellipsis leads to ambiguity, listing is most useful for customary infinite sequences which can be easily recognized from their first few elements. Other ways of denoting a sequence are discussed after the examples. Examples The prime numbers are the natural numbers greater than 1 that have no divisors but 1 and themselves. Taking these in their natural order gives the sequence (2, 3, 5, 7, 11, 13, 17, ...). The prime numbers are widely used in mathematics, particularly in number theory where many results related to them exist. The Fibonacci numbers comprise the integer sequence whose elements are the sum of the previous two elements. 
The first two elements are either 0 and 1 or 1 and 1 so that the sequence is (0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...). Other examples of sequences include those made up of rational numbers, real numbers and complex numbers. The sequence (.9, .99, .999, .9999, ...), for instance, approaches the number 1. In fact, every real number can be written as the limit of a sequence of rational numbers (e.g. via its decimal expansion). As another example, is the limit of the sequence (3, 3.1, 3.14, 3.141, 3.1415, ...), which is increasing. A related sequence is the sequence of decimal digits of , that is, (3, 1, 4, 1, 5, 9, ...). Unlike the preceding sequence, this sequence does not have any pattern that is easily discernible by inspection. The On-Line Encyclopedia of Integer Sequences comprises a large list of examples of integer sequences. Indexing Other notations can be useful for sequences whose pattern cannot be easily guessed or for sequences that do not have a pattern such as the digits of . One such notation is to write down a general formula for computing the nth term as a function of n, enclose it in parentheses, and include a subscript indicating the set of values that n can take. For example, in this notation the sequence of even numbers could be written as . The sequence of squares could be written as . The variable n is called an index, and the set of values that it can take is called the index set. It is often useful to combine this notation with the technique of treating the elements of a sequence as individual variables. This yields expressions like , which denotes a sequence whose nth element is given by the variable . For example: One can consider multiple sequences at the same time by using different variables; e.g. could be a different sequence than . One can even consider a sequence of sequences: denotes a sequence whose mth term is the sequence . An alternative to writing the domain of a sequence in the subscript is to indicate the range of values that the index can take by listing its highest and lowest legal values. For example, the notation denotes the ten-term sequence of squares . The limits and are allowed, but they do not represent valid values for the index, only the supremum or infimum of such values, respectively. For example, the sequence is the same as the sequence , and does not contain an additional term "at infinity". The sequence is a bi-infinite sequence, and can also be written as . In cases where the set of indexing numbers is understood, the subscripts and superscripts are often left off. That is, one simply writes for an arbitrary sequence. Often, the index k is understood to run from 1 to ∞. However, sequences are frequently indexed starting from zero, as in In some cases, the elements of the sequence are related naturally to a sequence of integers whose pattern can be easily inferred. In these cases, the index set may be implied by a listing of the first few abstract elements. For instance, the sequence of squares of odd numbers could be denoted in any of the following ways. Moreover, the subscripts and superscripts could have been left off in the third, fourth, and fifth notations, if the indexing set was understood to be the natural numbers. In the second and third bullets, there is a well-defined sequence , but it is not the same as the sequence denoted by the expression. Defining a sequence by recursion Sequences whose elements are related to the previous elements in a straightforward way are often defined using recursion. 
This is in contrast to the definition of sequences of elements as functions of their positions. To define a sequence by recursion, one needs a rule, called recurrence relation to construct each element in terms of the ones before it. In addition, enough initial elements must be provided so that all subsequent elements of the sequence can be computed by successive applications of the recurrence relation. The Fibonacci sequence is a simple classical example, defined by the recurrence relation with initial terms and . From this, a simple computation shows that the first ten terms of this sequence are 0, 1, 1, 2, 3, 5, 8, 13, 21, and 34. A complicated example of a sequence defined by a recurrence relation is Recamán's sequence, defined by the recurrence relation with initial term A linear recurrence with constant coefficients is a recurrence relation of the form where are constants. There is a general method for expressing the general term of such a sequence as a function of ; see Linear recurrence. In the case of the Fibonacci sequence, one has and the resulting function of is given by Binet's formula. A holonomic sequence is a sequence defined by a recurrence relation of the form where are polynomials in . For most holonomic sequences, there is no explicit formula for expressing explicitly as a function of . Nevertheless, holonomic sequences play an important role in various areas of mathematics. For example, many special functions have a Taylor series whose sequence of coefficients is holonomic. The use of the recurrence relation allows a fast computation of values of such special functions. Not all sequences can be specified by a recurrence relation. An example is the sequence of prime numbers in their natural order (2, 3, 5, 7, 11, 13, 17, ...). Formal definition and basic properties There are many different notions of sequences in mathematics, some of which (e.g., exact sequence) are not covered by the definitions and notations introduced below. Definition In this article, a sequence is formally defined as a function whose domain is an interval of integers. This definition covers several different uses of the word "sequence", including one-sided infinite sequences, bi-infinite sequences, and finite sequences (see below for definitions of these kinds of sequences). However, many authors use a narrower definition by requiring the domain of a sequence to be the set of natural numbers. This narrower definition has the disadvantage that it rules out finite sequences and bi-infinite sequences, both of which are usually called sequences in standard mathematical practice. Another disadvantage is that, if one removes the first terms of a sequence, one needs reindexing the remainder terms for fitting this definition. In some contexts, to shorten exposition, the codomain of the sequence is fixed by context, for example by requiring it to be the set R of real numbers, the set C of complex numbers, or a topological space. Although sequences are a type of function, they are usually distinguished notationally from functions in that the input is written as a subscript rather than in parentheses, that is, rather than . There are terminological differences as well: the value of a sequence at the lowest input (often 1) is called the "first element" of the sequence, the value at the second smallest input (often 2) is called the "second element", etc. Also, while a function abstracted from its input is usually denoted by a single letter, e.g. 
f, a sequence abstracted from its input is usually written by a notation such as , or just as Here is the domain, or index set, of the sequence. Sequences and their limits (see below) are important concepts for studying topological spaces. An important generalization of sequences is the concept of nets. A net is a function from a (possibly uncountable) directed set to a topological space. The notational conventions for sequences normally apply to nets as well. Finite and infinite The length of a sequence is defined as the number of terms in the sequence. A sequence of a finite length n is also called an n-tuple. Finite sequences include the empty sequence ( ) that has no elements. Normally, the term infinite sequence refers to a sequence that is infinite in one direction, and finite in the other—the sequence has a first element, but no final element. Such a sequence is called a singly infinite sequence or a one-sided infinite sequence when disambiguation is necessary. In contrast, a sequence that is infinite in both directions—i.e. that has neither a first nor a final element—is called a bi-infinite sequence, two-way infinite sequence, or doubly infinite sequence. A function from the set Z of all integers into a set, such as for instance the sequence of all even integers ( ..., −4, −2, 0, 2, 4, 6, 8, ... ), is bi-infinite. This sequence could be denoted . Increasing and decreasing A sequence is said to be monotonically increasing if each term is greater than or equal to the one before it. For example, the sequence (an) is monotonically increasing if and only if an+1 ≥ an for all n ∈ N. If each consecutive term is strictly greater than (>) the previous term then the sequence is called strictly monotonically increasing. A sequence is monotonically decreasing, if each consecutive term is less than or equal to the previous one, and strictly monotonically decreasing, if each is strictly less than the previous. If a sequence is either increasing or decreasing it is called a monotone sequence. This is a special case of the more general notion of a monotonic function. The terms nondecreasing and nonincreasing are often used in place of increasing and decreasing in order to avoid any possible confusion with strictly increasing and strictly decreasing, respectively. Bounded If the sequence of real numbers (an) is such that all the terms are less than some real number M, then the sequence is said to be bounded from above. In other words, this means that there exists M such that for all n, an ≤ M. Any such M is called an upper bound. Likewise, if, for some real m, an ≥ m for all n greater than some N, then the sequence is bounded from below and any such m is called a lower bound. If a sequence is both bounded from above and bounded from below, then the sequence is said to be bounded. Subsequences A subsequence of a given sequence is a sequence formed from the given sequence by deleting some of the elements without disturbing the relative positions of the remaining elements. For instance, the sequence of positive even integers (2, 4, 6, ...) is a subsequence of the positive integers (1, 2, 3, ...). The positions of some elements change when other elements are deleted. However, the relative positions are preserved. Formally, a subsequence of the sequence is any sequence of the form , where is a strictly increasing sequence of positive integers. Other types of sequences Some other types of sequences that are easy to define include: An integer sequence is a sequence whose terms are integers.
A polynomial sequence is a sequence whose terms are polynomials. A positive integer sequence is sometimes called multiplicative, if anm = an am for all pairs n, m such that n and m are coprime. In other instances, sequences are often called multiplicative, if an = na1 for all n. Moreover, a multiplicative Fibonacci sequence satisfies the recursion relation an = an−1 an−2. A binary sequence is a sequence whose terms have one of two discrete values, e.g. base 2 values (0,1,1,0, ...), a series of coin tosses (Heads/Tails) H,T,H,H,T, ..., the answers to a set of True or False questions (T, F, T, T, ...), and so on. Limits and convergence An important property of a sequence is convergence. If a sequence converges, it converges to a particular value known as the limit. If a sequence converges to some limit, then it is convergent. A sequence that does not converge is divergent. Informally, a sequence has a limit if the elements of the sequence become closer and closer to some value L (called the limit of the sequence), and they become and remain arbitrarily close to L, meaning that given a real number ε greater than zero, all but a finite number of the elements of the sequence have a distance from L less than ε. For example, the sequence whose nth element is 1/n converges to the value 0. On the other hand, the sequences (n³) (which begins 1, 8, 27, …) and ((−1)ⁿ) (which begins −1, 1, −1, 1, …) are both divergent. If a sequence converges, then the value it converges to is unique. This value is called the limit of the sequence. The limit of a convergent sequence (an) is normally denoted lim an. If (an) is a divergent sequence, then the expression lim an is meaningless. Formal definition of convergence A sequence of real numbers (an) converges to a real number L if, for all ε > 0, there exists a natural number N such that for all n ≥ N we have |an − L| < ε. If (an) is a sequence of complex numbers rather than a sequence of real numbers, this last formula can still be used to define convergence, with the provision that |·| denotes the complex modulus. If (an) is a sequence of points in a metric space, then the formula can be used to define convergence, if the expression |an − L| is replaced by the expression d(an, L), which denotes the distance between an and L. Applications and important results If (an) and (bn) are convergent sequences, then the following limits exist, and can be computed as follows: lim (an + bn) = lim an + lim bn; lim (c an) = c lim an for all real numbers c; lim (an bn) = (lim an)(lim bn); and lim (an / bn) = lim an / lim bn, provided that bn ≠ 0 for all n and lim bn ≠ 0. Moreover: If an ≤ bn for all n greater than some N, then lim an ≤ lim bn. (Squeeze Theorem) If (cn) is a sequence such that an ≤ cn ≤ bn for all n > N and both (an) and (bn) converge to the same limit L, then (cn) is convergent, and lim cn = L. If a sequence is bounded and monotonic then it is convergent. A sequence is convergent if and only if all of its subsequences are convergent. Cauchy sequences A Cauchy sequence is a sequence whose terms become arbitrarily close together as n gets very large. The notion of a Cauchy sequence is important in the study of sequences in metric spaces, and, in particular, in real analysis. One particularly important result in real analysis is the Cauchy characterization of convergence for sequences: a sequence of real numbers is convergent (in the reals) if and only if it is Cauchy. In contrast, there are Cauchy sequences of rational numbers that are not convergent in the rationals, e.g. the sequence defined by x1 = 1 and xn+1 = xn/2 + 1/xn is Cauchy, but has no rational limit (its limit is the irrational number √2). More generally, any sequence of rational numbers that converges to an irrational number is Cauchy, but not convergent when interpreted as a sequence in the set of rational numbers.
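As a small numerical illustration of the example just given (a sketch, not part of any formal treatment), the following snippet generates the rational sequence x1 = 1, xn+1 = xn/2 + 1/xn with exact arithmetic and shows consecutive terms getting arbitrarily close together while the limit, the irrational number √2, lies outside the rationals.

from fractions import Fraction

x = Fraction(1)           # x1 = 1, represented exactly as a rational number
terms = [x]
for _ in range(6):
    x = x / 2 + 1 / x     # x_{n+1} = x_n/2 + 1/x_n; every term stays rational
    terms.append(x)

# Gaps between consecutive terms shrink rapidly, so the sequence is Cauchy in Q ...
print([float(abs(b - a)) for a, b in zip(terms, terms[1:])])

# ... but the terms approach sqrt(2), which is irrational, so the sequence
# has no limit within the rational numbers.
print(float(terms[-1]), 2 ** 0.5)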
Metric spaces that satisfy the Cauchy characterization of convergence for sequences are called complete metric spaces and are particularly nice for analysis. Infinite limits In calculus, it is common to define notation for sequences which do not converge in the sense discussed above, but which instead become and remain arbitrarily large, or become and remain arbitrarily negative. If becomes arbitrarily large as , we write In this case we say that the sequence diverges, or that it converges to infinity. An example of such a sequence is . If becomes arbitrarily negative (i.e. negative and large in magnitude) as , we write and say that the sequence diverges or converges to negative infinity. Series A series is, informally speaking, the sum of the terms of a sequence. That is, it is an expression of the form or , where is a sequence of real or complex numbers. The partial sums of a series are the expressions resulting from replacing the infinity symbol with a finite number, i.e. the Nth partial sum of the series is the number The partial sums themselves form a sequence , which is called the sequence of partial sums of the series . If the sequence of partial sums converges, then we say that the series is convergent, and the limit is called the value of the series. The same notation is used to denote a series and its value, i.e. we write . Use in other fields of mathematics Topology Sequences play an important role in topology, especially in the study of metric spaces. For instance: A metric space is compact exactly when it is sequentially compact. A function from a metric space to another metric space is continuous exactly when it takes convergent sequences to convergent sequences. A metric space is a connected space if and only if, whenever the space is partitioned into two sets, one of the two sets contains a sequence converging to a point in the other set. A topological space is separable exactly when there is a dense sequence of points. Sequences can be generalized to nets or filters. These generalizations allow one to extend some of the above theorems to spaces without metrics. Product topology The topological product of a sequence of topological spaces is the cartesian product of those spaces, equipped with a natural topology called the product topology. More formally, given a sequence of spaces , the product space is defined as the set of all sequences such that for each i, is an element of . The canonical projections are the maps pi : X → Xi defined by the equation . Then the product topology on X is defined to be the coarsest topology (i.e. the topology with the fewest open sets) for which all the projections pi are continuous. The product topology is sometimes called the Tychonoff topology. Analysis In analysis, when talking about sequences, one will generally consider sequences of the form which is to say, infinite sequences of elements indexed by natural numbers. It may be convenient to have the sequence start with an index different from 1 or 0. For example, the sequence defined by xn = 1/log(n) would be defined only for n ≥ 2. When talking about such infinite sequences, it is usually sufficient (and does not change much for most considerations) to assume that the members of the sequence are defined at least for all indices large enough, that is, greater than some given N. The most elementary type of sequences are numerical ones, that is, sequences of real or complex numbers. This type can be generalized to sequences of elements of some vector space. 
In analysis, the vector spaces considered are often function spaces. Even more generally, one can study sequences with elements in some topological space. Sequence spaces A sequence space is a vector space whose elements are infinite sequences of real or complex numbers. Equivalently, it is a function space whose elements are functions from the natural numbers to the field K, where K is either the field of real numbers or the field of complex numbers. The set of all such functions is naturally identified with the set of all possible infinite sequences with elements in K, and can be turned into a vector space under the operations of pointwise addition of functions and pointwise scalar multiplication. All sequence spaces are linear subspaces of this space. Sequence spaces are typically equipped with a norm, or at least the structure of a topological vector space. The most important sequences spaces in analysis are the ℓp spaces, consisting of the p-power summable sequences, with the p-norm. These are special cases of Lp spaces for the counting measure on the set of natural numbers. Other important classes of sequences like convergent sequences or null sequences form sequence spaces, respectively denoted c and c0, with the sup norm. Any sequence space can also be equipped with the topology of pointwise convergence, under which it becomes a special kind of Fréchet space called an FK-space. Linear algebra Sequences over a field may also be viewed as vectors in a vector space. Specifically, the set of F-valued sequences (where F is a field) is a function space (in fact, a product space) of F-valued functions over the set of natural numbers. Abstract algebra Abstract algebra employs several types of sequences, including sequences of mathematical objects such as groups or rings. Free monoid If A is a set, the free monoid over A (denoted A*, also called Kleene star of A) is a monoid containing all the finite sequences (or strings) of zero or more elements of A, with the binary operation of concatenation. The free semigroup A+ is the subsemigroup of A* containing all elements except the empty sequence. Exact sequences In the context of group theory, a sequence of groups and group homomorphisms is called exact, if the image (or range) of each homomorphism is equal to the kernel of the next: The sequence of groups and homomorphisms may be either finite or infinite. A similar definition can be made for certain other algebraic structures. For example, one could have an exact sequence of vector spaces and linear maps, or of modules and module homomorphisms. Spectral sequences In homological algebra and algebraic topology, a spectral sequence is a means of computing homology groups by taking successive approximations. Spectral sequences are a generalization of exact sequences, and since their introduction by , they have become an important research tool, particularly in homotopy theory. Set theory An ordinal-indexed sequence is a generalization of a sequence. If α is a limit ordinal and X is a set, an α-indexed sequence of elements of X is a function from α to X. In this terminology an ω-indexed sequence is an ordinary sequence. Computing In computer science, finite sequences are called lists. Potentially infinite sequences are called streams. Finite sequences of characters or digits are called strings. Streams Infinite sequences of digits (or characters) drawn from a finite alphabet are of particular interest in theoretical computer science. 
They are often referred to simply as sequences or streams, as opposed to finite strings. Infinite binary sequences, for instance, are infinite sequences of bits (characters drawn from the alphabet {0, 1}). The set C = {0, 1}∞ of all infinite binary sequences is sometimes called the Cantor space. An infinite binary sequence can represent a formal language (a set of strings) by setting the n th bit of the sequence to 1 if and only if the n th string (in shortlex order) is in the language. This representation is useful in the diagonalization method for proofs. See also Enumeration On-Line Encyclopedia of Integer Sequences Recurrence relation Sequence space Operations Cauchy product Examples Discrete-time signal Farey sequence Fibonacci sequence Look-and-say sequence Thue–Morse sequence List of integer sequences Types ±1-sequence Arithmetic progression Automatic sequence Cauchy sequence Constant-recursive sequence Geometric progression Harmonic progression Holonomic sequence Regular sequence Pseudorandom binary sequence Random sequence Related concepts List (computing) Net (topology) (a generalization of sequences) Ordinal-indexed sequence Recursion (computer science) Set (mathematics) Tuple Permutation Notes References External links The On-Line Encyclopedia of Integer Sequences Journal of Integer Sequences (free) Elementary mathematics
186467
https://en.wikipedia.org/wiki/Language%20education
Language education
Language education – the process and practice of teaching a second or foreign language – is primarily a branch of applied linguistics, but can be an interdisciplinary field. There are four main learning categories for language education: communicative competencies, proficiencies, cross-cultural experiences, and multiple literacies. Need Increasing globalization has created a great need for people in the workforce who can communicate in multiple languages. Common languages are used in areas such as trade, tourism, international relations, technology, media, and science. Many countries such as Korea (Kim Yeong-seo, 2009), Japan (Kubota, 1998) and China (Kirkpatrick & Zhichang, 2002) frame education policies to teach at least one foreign language at the primary and secondary school levels. However, some countries such as India, Singapore, Malaysia, Pakistan, and the Philippines use a second official language in their governments. According to GAO (2010), China has recently been putting enormous importance on foreign language learning, especially the English language. History Ancient to medieval period The need to learn foreign languages is as old as human history itself. In the Ancient Near East, Akkadian was the language of diplomacy, as in the Amarna letters. For many centuries, Latin was the dominant language of education, commerce, religion, and government in much of Europe, but it was displaced for many purposes by French, Italian, and English by the end of the 16th century. John Amos Comenius was one of many people who tried to reverse this trend. He wrote a complete course for learning Latin, covering the entire school curriculum, culminating in his Opera Didactica Omnia, 1657. In this work, Comenius also outlined his theory of language acquisition. He is one of the first theorists to write systematically about how languages are learned and about methods for teaching languages. He held that language acquisition must be allied with sensation and experience. Teaching must be oral. The schoolroom should have models of things, or else pictures of them. He published the world's first illustrated children's book, Orbis sensualium pictus. The study of Latin gradually diminished from the study of a living language to a mere subject in the school curriculum. This decline demanded a new justification for its study. It was then claimed that the study of Latin developed intellectual ability, and the study of Latin grammar became an end in and of itself. "Grammar schools" from the 16th to 18th centuries focused on teaching the grammatical aspects of Classical Latin. Advanced students continued grammar study with the addition of rhetoric. 18th century The study of modern languages did not become part of the curriculum of European schools until the 18th century. Based on the purely academic study of Latin, students of modern languages did much of the same exercises, studying grammatical rules and translating abstract sentences. Oral work was minimal, and students were instead required to memorize grammatical rules and apply these to decode written texts in the target language. This tradition-inspired method became known as the grammar-translation method. 19th and 20th centuries Innovation in foreign language teaching began in the 19th century and became very rapid in the 20th century. It led to a number of different and sometimes conflicting methods, each claiming to be a major improvement over the previous or contemporary methods. 
The earliest applied linguists included Jean Manesca, Heinrich Gottfried Ollendorff (1803–1865), Henry Sweet (1845–1912), Otto Jespersen (1860–1943), and Harold Palmer (1877–1949). They worked on setting language teaching principles and approaches based on linguistic and psychological theories, but they left many of the specific practical details for others to devise. The history of foreign-language education in the 20th century and the methods of teaching (such as those related below) might appear to be a history of failure. Very few students in U.S. universities who have a foreign language as a major attain "minimum professional proficiency". Even the "reading knowledge" required for a PhD degree is comparable only to what second-year language students read, and only very few researchers who are native English speakers can read and assess information written in languages other than English. Even a number of famous linguists are monolingual. However, anecdotal evidence for successful second or foreign language learning is easy to find, leading to a discrepancy between these cases and the failure of most language programs. This tends to make the research of second language acquisition emotionally charged. Older methods and approaches such as the grammar translation method and the direct method are dismissed and even ridiculed, as newer methods and approaches are invented and promoted as the only and complete solution to the problem of the high failure rates of foreign language students. Most books on language teaching list the various methods that have been used in the past, often ending with the author's new method. These new methods are usually presented as coming only from the author's mind, as the authors generally give no credence to what was done before and do not explain how it relates to the new method. For example, descriptive linguists seem to claim unhesitatingly that there were no scientifically based language teaching methods before their work (which led to the audio-lingual method developed for the U.S. Army in World War II). However, there is significant evidence to the contrary. It is also often inferred or even stated that older methods were completely ineffective or have died out completely, though in reality even the oldest methods are still in use (e.g. the Berlitz version of the direct method). Proponents of new methods have been so sure that their ideas are so new and so correct that they could not conceive that the older ones have enough validity to cause controversy. This was in turn caused by emphasis on new scientific advances, which has tended to blind researchers to precedents in older work. (p. 5) There have been two major branches in the field of language learning, the empirical and theoretical, and these have almost completely separate histories, with each gaining ground over the other at one time or another. Examples of researchers on the empiricist side are Jespersen, Palmer, and Leonard Bloomfield, who promote mimicry and memorization with pattern drills. These methods follow from the basic empiricist position that language acquisition results from habits formed by conditioning and drilling. In its most extreme form, language learning is seen as much the same as any other learning in any other species, human language being essentially the same as communication behaviors seen in other species. On the theoretical side are, for example, Francois Gouin, M.D. Berlitz, and Emile B.
De Sauzé, whose rationalist theories of language acquisition dovetail with linguistic work done by Noam Chomsky and others. These have led to a wider variety of teaching methods, ranging from the grammar-translation method and Gouin's "series method" to the direct methods of Berlitz and De Sauzé. With these methods, students generate original and meaningful sentences to gain a functional knowledge of the rules of grammar. This follows from the rationalist position that man is born to think and that language use is a uniquely human trait impossible in other species. Given that human languages share many common traits, the idea is that humans share a universal grammar which is built into our brain structure. This allows us to create sentences that we have never heard before but that can still be immediately understood by anyone who understands the specific language being spoken. The rivalry between the two camps is intense, with little communication or cooperation between them. 21st century Over time, language education has developed in schools and has become a part of the education curriculum around the world. In some countries, such as the United States, language education (also referred to as World Languages) has become a core subject along with main subjects such as English, Maths and Science. In some countries, such as Australia, it is so common nowadays for a foreign language to be taught in schools that the subject of language education is referred to as LOTE or Language Other Than English. In the majority of English-speaking education centers, French, Spanish, and German are the most popular languages to study and learn. English as a Second Language (ESL) is also available for students whose first language is not English and who are unable to speak it to the required standard. Teaching foreign language in classrooms Language education may take place as a general school subject or in a specialized language school. There are many methods of teaching languages. Some have fallen into relative obscurity and others are widely used; still others have a small following, but offer useful insights. While sometimes confused, the terms "approach", "method" and "technique" are hierarchical concepts. An approach is a set of assumptions about the nature of language and language learning, but does not involve procedure or provide any details about how such assumptions should be implemented in the classroom setting. Such assumptions can be related to second language acquisition theory. There are three principal "approaches": The structural view treats language as a system of structurally related elements to code meaning (e.g. grammar). The functional view sees language as a vehicle to express or accomplish a certain function, such as requesting something. The interactive view sees language as a vehicle for the creation and maintenance of social relations, focusing on patterns of moves, acts, negotiation and interaction found in conversational exchanges. This approach has been fairly dominant since the 1980s. A method is a plan for presenting the language material to be learned, and should be based upon a selected approach. In order for an approach to be translated into a method, an instructional system must be designed considering the objectives of the teaching/learning, how the content is to be selected and organized, the types of tasks to be performed, the roles of students, and the roles of teachers. Examples of structural methods are grammar translation and the audio-lingual method.
Examples of functional methods include the oral approach / situational language teaching. Examples of interactive methods include the direct method, the series method, communicative language teaching, language immersion, the Silent Way, Suggestopedia, the Natural Approach, Tandem Language Learning, Total Physical Response, Teaching Proficiency through Reading and Storytelling and Dogme language teaching. A technique (or strategy) is a very specific, concrete stratagem or trick designed to accomplish an immediate objective. Such are derived from the controlling method, and less directly, from the approach. Online and self-study courses Hundreds of languages are available for self-study, from scores of publishers, for a range of costs, using a variety of methods. The course itself acts as a teacher and has to choose a methodology, just as classroom teachers do. Audio recordings and books Audio recordings use native speakers, and one strength is helping learners improve their accent. Some recordings have pauses for the learner to speak. Others are continuous so the learner speaks along with the recorded voice, similar to learning a song. Audio recordings for self-study use many of the methods used in classroom teaching, and have been produced on records, tapes, CDs, DVDs and websites. Most audio recordings teach words in the target language by using explanations in the learner's own language. An alternative is to use sound effects to show meaning of words in the target language. The only language in such recordings is the target language, and they are comprehensible regardless of the learner's native language. Language books have been published for centuries, teaching vocabulary and grammar. The simplest books are phrasebooks to give useful short phrases for travelers, cooks, receptionists, or others who need specific vocabulary. More complete books include more vocabulary, grammar, exercises, translation, and writing practice. Also, various other "language learning tools" have been entering the market in recent years. Internet and software Software can interact with learners in ways that books and audio cannot: Some software records the learner, analyzes the pronunciation, and gives feedback. Software can present additional exercises in areas where a particular learner has difficulty, until the concepts are mastered. Software can pronounce words in the target language and show their meaning by using pictures instead of oral explanations. The only language in such software is the target language. It is comprehensible regardless of the learner's native language. Websites provide various services geared toward language education. Some sites are designed specifically for learning languages: Some software runs on the web itself, with the advantage of avoiding downloads, and the disadvantage of requiring an internet connection. Some publishers use the web to distribute audio, texts and software, for use offline. For example, various travel guides, for example Lonely Planet, offer software supporting language education. Some websites offer learning activities such as quizzes or puzzles to practice language concepts. Language exchange sites connect users with complementary language skills, such as a native Spanish speaker who wants to learn English with a native English speaker who wants to learn Spanish. Language exchange websites essentially treat knowledge of a language as a commodity, and provide a marketlike environment for the commodity to be exchanged. 
Users typically contact each other via chat, VoIP, or email. Language exchanges have also been viewed as a helpful tool to aid language learning at language schools. Language exchanges tend to benefit oral proficiency, fluency, colloquial vocabulary acquisition, and vernacular usage, rather than formal grammar or writing skills. Across Australasia, 'Education Perfect' – an online learning site- is frequently used as it enables teachers to monitor students' progress as students gain a "point" for every new word remembered. There is an annual international Education Perfect languages contest held in May. Many other websites are helpful for learning languages, even though they are designed, maintained and marketed for other purposes: All countries have websites in their own languages, which learners elsewhere can use as primary material for study: news, fiction, videos, songs, etc. In a study conducted by the Center for Applied Linguistics, it was noted that the use of technology and media has begun to play a heavy role in facilitating language learning in the classroom. With the help of the internet, students are readily exposed to foreign media (music videos, television shows, films) and as a result, teachers are taking heed of the internet's influence and are searching for ways to combine this exposure into their classroom teaching. Translation sites let learners find the meaning of foreign text or create foreign translations of text from their native language. Speech synthesis or text to speech (TTS) sites and software let learners hear pronunciation of arbitrary written text, with pronunciation similar to a native speaker. Course development and learning management systems such as Moodle are used by teachers, including language teachers. Web conferencing tools can bring remote learners together; e.g. Elluminate Live. Players of computer games can practice a target language when interacting in massively multiplayer online games and virtual worlds. In 2005, the virtual world Second Life started to be used for foreign language tuition, sometimes with entire businesses being developed. In addition, Spain's language and cultural institute Instituto Cervantes has an "island" on Second Life. Some Internet content is free, often from government and nonprofit sites such as BBC Online, Book2, Foreign Service Institute, with no or minimal ads. Some are ad-supported, such as newspapers and YouTube. Some require a payment. Learning strategies Language learning strategies have attracted increasing focus as a way of understanding the process of language acquisition. Listening as a way to learn Clearly listening is used to learn, but not all language learners employ it consciously. Listening to understand is one level of listening but focused listening is not something that most learners employ as a strategy. Focused listening is a strategy in listening that helps students listen attentively with no distractions. Focused listening is very important when learning a foreign language as the slightest accent on a word can change the meaning completely. Reading as a way to learn Many people read to understand but the strategy of reading text to learn grammar and discourse styles can also be employed. Speaking as a way to learn Alongside listening and reading exercises, practicing conversation skills is an important aspect of language acquisition. 
Language learners can gain experience in speaking foreign languages through in-person language classes, language meet-ups, university language exchange programs, joining online language learning communities (e.g. Conversation Exchange and Tandem), and traveling to a country where the language is spoken. Learning vocabulary Translation and rote memorization are the two strategies that have traditionally been employed. Other strategies can also be used, such as guessing based on contextual clues and spaced repetition with the use of various apps, games and tools (e.g. DuoLingo, LingoMonkey and Vocabulary Stickers); a minimal sketch of one such scheduling scheme appears below, after the Skill teaching discussion. Knowledge about how the brain works can be used to create strategies for remembering words. Learning Esperanto Esperanto, the most widely used international auxiliary language, was created in 1887 by L. L. Zamenhof, a Polish-Jewish ophthalmologist, with the aim of eliminating language barriers in international contact. Esperanto is a constructed language based on the Indo-European languages, incorporating features common to the Germanic languages. Its spelling and pronunciation are completely consistent with each other, and the stress of every word falls on the penultimate syllable. By learning its twenty-eight letters and mastering the phonetic rules, one can read and write any word. Because of this simplification and standardization, Esperanto is more easily mastered than other languages. This ease of learning helps build confidence, and learning Esperanto, as a learning strategy, can serve as a good introduction to foreign language study. Teaching strategies Blended learning Blended learning combines face-to-face teaching with distance education, frequently electronic, either computer-based or web-based. It has been a major growth point in the ELT (English Language Teaching) industry over the last ten years. Some people, though, use the phrase 'blended learning' to refer to learning taking place while the focus is on other activities. For example, playing a card game that requires calling for cards may allow blended learning of numbers (1 to 10). Skill teaching When talking about language skills, the four basic ones are listening, speaking, reading and writing. However, other, more socially based skills have been identified more recently, such as summarizing, describing and narrating. In addition, more general learning skills, such as study skills and knowing how one learns, have been applied to language classrooms. In the 1970s and 1980s, the four basic skills were generally taught in isolation in a very rigid order, such as listening before speaking. Since then, it has been recognized that we generally use more than one skill at a time, leading to more integrated exercises. Speaking is a skill that is often underrepresented in the traditional classroom because it is considered harder to teach and test; there are numerous texts on teaching and testing writing but relatively few on speaking. More recent textbooks stress the importance of students working with other students in pairs and groups, sometimes the entire class. Pair and group work give more students opportunities to participate actively. However, supervision of pairs and groups is important to make sure everyone participates as equally as possible. Such activities also provide opportunities for peer teaching, where weaker learners can find support from stronger classmates. 
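The spaced-repetition tools mentioned under Learning vocabulary schedule each word's next review according to how well it was recalled, spacing reviews further apart as recall improves. The following is a minimal sketch of one common scheme, the Leitner box system; the intervals and names used here are illustrative assumptions for the sketch, not a description of any of the apps named above.

```python
from datetime import date, timedelta

# Review intervals in days for each Leitner box; these defaults are assumptions
# made for this sketch, not values taken from any particular app.
INTERVALS = [1, 2, 4, 9, 21]

class Card:
    def __init__(self, word, translation):
        self.word = word
        self.translation = translation
        self.box = 0                  # new cards start in the first box
        self.due = date.today()

    def review(self, answered_correctly, today=None):
        """Move the card between boxes and schedule its next review."""
        today = today or date.today()
        if answered_correctly:
            # Promote the card; well-known words are reviewed less often.
            self.box = min(self.box + 1, len(INTERVALS) - 1)
        else:
            # Demote to the first box so the word comes back quickly.
            self.box = 0
        self.due = today + timedelta(days=INTERVALS[self.box])

def due_cards(deck, today=None):
    """Return the cards due for review today, least-known boxes first."""
    today = today or date.today()
    return sorted((c for c in deck if c.due <= today), key=lambda c: c.box)

# Example: a correct and an incorrect answer reschedule the two cards differently.
deck = [Card("hundo", "dog"), Card("kato", "cat")]
deck[0].review(answered_correctly=True)
deck[1].review(answered_correctly=False)
print([(c.word, c.box, c.due.isoformat()) for c in deck])
```

The point of the scheme is simply that review effort is concentrated on the words a learner keeps getting wrong, which is the behaviour the vocabulary apps mentioned above automate.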
Sandwich technique In foreign language teaching, the sandwich technique is the oral insertion of an idiomatic translation in the mother tongue between an unknown phrase in the learned language and its repetition, in order to convey meaning as rapidly and completely as possible. The mother tongue equivalent can be given almost as an aside, with a slight break in the flow of speech to mark it as an intruder. When modeling a dialogue sentence for students to repeat, the teacher not only gives an oral mother tongue equivalent for unknown words or phrases, but repeats the foreign language phrase before students imitate it: L2 => L1 => L2. For example, a German teacher of English might engage in the following exchange with the students: Teacher: "Let me try – lass mich versuchen – let me try." Students: "Let me try." Mother tongue mirroring Mother tongue mirroring is the adaptation of the time-honoured technique of literal or word-for-word translation for pedagogical purposes. The aim is to make foreign constructions salient and transparent to learners and, in many cases, spare them the technical jargon of grammatical analysis. It differs from literal translation and interlinear text as used in the past, since it takes into account the progress learners have made and focuses on only one specific structure at a time. As a didactic device, it can only be used to the extent that it remains intelligible to the learner, unless it is combined with a normal idiomatic translation. This technique is seldom referred to or used these days. Back-chaining Back-chaining is a technique used in teaching oral language skills, especially with polysyllabic or difficult words. The teacher pronounces the last syllable, the student repeats it, and then the teacher continues, working backwards from the end of the word to the beginning. For example, to teach the name 'Mussorgsky', a teacher will pronounce the last syllable, -sky, and have the student repeat it. The teacher then repeats it with -sorg- attached before (-sorg-sky), until only the first syllable remains to be added: Mus-sorg-sky. Code switching Code switching is a linguistic phenomenon in which the speaker consciously alternates between two or more languages according to the time, place, content, interlocutor and other factors. Code switching is most apparent in environments where the mother tongue does not play a dominant role in students' life and study, for example among children in bilingual or immigrant families. The ability to code-switch, which involves shifting between phonetic systems, vocabularies, language structures, modes of expression, ways of thinking and cultural frames, needs to be guided and developed in everyday communication. Most people, however, learn a foreign language in surroundings dominated by their native language, so their ability to code-switch is not stimulated and the efficiency of foreign language acquisition decreases. As a teaching strategy, code switching is therefore used to help students acquire conceptual competence and to provide a rich semantic context for understanding specific vocabulary. By region Practices in language education may vary by region; however, the underlying understandings which drive them are fundamentally similar. Rote repetition, drilling, memorisation and grammar conjugation are used the world over. There are also sometimes regional differences in preferred teaching methods. 
Language immersion is popular in some European countries, but is not used very much in the United States, in Asia or in Australia. By different life stage Early childhood education Early childhood is the fastest and most critical period for mastering language. Children's communication develops from non-verbal to verbal between the ages of one and five, and their mastery of language is largely acquired naturally by living in an environment of verbal communication. With good guidance and ample opportunities to communicate, children's language ability is readily developed and cultivated. Compulsory education Compulsory education is, for most people, the period in which they have access to a second or foreign language for the first time. In this period, students receive professional foreign language instruction in an academic atmosphere; they can get help and motivation from teachers, be stimulated by their peers, and undertake the specialized learning needed to master a great number of rules of vocabulary, grammar and verbal communication. Adult education Learning a foreign language during adulthood means investing in oneself by acquiring a new skill. At this stage, individuals have already developed the ability to supervise their own language learning; at the same time, however, the pressures of adult life can be an obstacle. Elderly education Compared to other life stages, this period is the hardest in which to learn a new language, owing to gradual brain deterioration and memory loss. Notwithstanding this difficulty, language education for seniors can slow such degeneration and promote active ageing. Language study holidays An increasing number of people are now combining holidays with language study in a country where the language is natively spoken. This enables the student to experience the target culture by meeting local people. Such a holiday often combines formal lessons, cultural excursions, leisure activities, and a homestay, perhaps with time to travel in the country afterwards. Language study holidays are popular across Europe (Malta and the UK being the most popular because almost everyone speaks English as a first language) and Asia due to the ease of transportation and the variety of nearby countries. These holidays have become increasingly popular in Central and South America, in countries such as Guatemala, Ecuador and Peru. As a consequence of this increasing popularity, several international language education agencies have flourished in recent years. Though education systems around the world invest enormous sums of money in language teaching, the outcomes in terms of getting students to actually speak the language(s) they are learning outside the classroom are often unclear. With the increasing prevalence of international business transactions, it is now important to have multiple languages at one's disposal; this is also evident in businesses outsourcing their departments to Eastern Europe. Minority language education Minority language education policy The principal policy arguments in favor of promoting minority language education are the need for multilingual workforces, intellectual and cultural benefits, and greater inclusion in the global information society. 
Access to education in a minority language is also seen as a human right as granted by the European Convention on Human Rights and Fundamental Freedoms, the European Charter for Regional or Minority Languages and the UN Human Rights Committee. Bilingual Education has been implemented in many countries including the United States, in order to promote both the use and appreciation of the minority language, as well as the majority language concerned. Materials and e-learning for minority language education Suitable resources for teaching and learning minority languages can be difficult to find and access, which has led to calls for the increased development of materials for minority language teaching. The internet offers opportunities to access a wider range of texts, audios and videos. Language learning 2.0 (the use of web 2.0 tools for language education) offers opportunities for material development for lesser-taught languages and to bring together geographically dispersed teachers and learners. Acronyms and abbreviations ALL: Apprenticeship Language Learning CALL: computer-assisted language learning CLIL: content and language integrated learning CELI: Certificato di Conoscenza della Lingua Italiana CLL: community language learning DELE: Diploma de Español como Lengua Extranjera DELF: diplôme d'études en langue française EFL: English as a foreign language EAL/D: English as an additional language or dialect EAP: English for academic purposes ELL: English language learning ELT: English language teaching ESL: English as a second language ESP: English for specific purposes FLL: foreign language learning FLT: foreign language teaching HLL: heritage language learning IATEFL: International Association of Teachers of English as a Foreign Language L1: first language, native language, mother tongue L2: second language (or any additional language) LDL: Lernen durch Lehren (German for learning by teaching) LOTE: Languages Other Than English SLA: second language acquisition TELL: technology-enhanced language learning TEFL: teaching English as a foreign language TEFLA: teaching English as a foreign language to adults TESOL: teaching English to speakers of other languages TEYL: teaching English to young learners TPR: Total Physical Response TPRS: Teaching Proficiency through Reading and Storytelling UNIcert is a European language education system of many universities based on the Common European Framework of Reference for Languages. See also English language learning and teaching for information on language teaching acronyms and abbreviations which are specific to English See also American Council on the Teaching of Foreign Languages Eikaiwa school Error analysis (linguistics) Foreign language anxiety Foreign language writing aid Foreign language reading aid Glossary of language teaching terms and ideas Language education by region Language festival Language MOOC Language policy Lexicography Linguistic rights List of language acquisition researchers Monolingual learner's dictionary Self access language learning centers Tandem language learning References Sources Australian-Japanese relations today. (2016). Skwirk. Retrieved 16 May 2016, from http://www.skwirk.com/p-c_s-16_u-430_t-1103_c-4268/australian-japanese-relations-today/nsw/australian-japanese-relations-today/conflict-consensus-and-care/changing-attitudes Parry, M. (2016). 
Australian university students and their Japanese host families in short term stays. The University of Queensland. Retrieved 16 May 2016, from https://espace.library.uq.edu.au/view/UQ:349330/s3213739_phd_submission.pdf Pérez-Milans, M (2013). Urban schools and English language education in late modern China: A Critical sociolinguistic ethnography. New York & London: Routledge. Gao, Xuesong (Andy). (2010).Strategic Language Learning.Multilingual Matters:Canada, 2010 Kim Yeong-seo (2009) "History of English education in Korea" Kirkpatrick, A & Zhichang, X (2002).”Chinese pragmatic norms and "China English". World Englishes. Vol. 21, pp. 269–279. Kubota, K (1998) "Ideologies of English in Japan" World Englishes Vol.17, No.3, pp. 295–306. Phillips, J. K. (2007). Foreign Language Education: Whose Definition?. The Modern Language Journal, 91(2), 266–268. American Council on the Teaching of Foreign Languages (2011). Language Learning in the 21st Century: 21st Century Skills Map. Further reading Bernhardt, E. B. (Ed.) (1992). Life in language immersion classrooms. Clevedon, England: Multilingual Matters, Ltd. Genesee, F. (1985). Second language learning through immersion: A review of U.S. programs. Review of Educational Research, 55(4), 541–561. Genesee, F. (1987). Learning Through Two Languages: Studies of Immersion and Bilingual Education. Cambridge, Mass.: Newbury House Publishers. Hult, F.M., & King, K.A. (Eds.). (2011). Educational linguistics in practice: Applying the local globally and the global locally. Clevedon, UK: Multilingual Matters. Hult, F.M., (Ed.). (2010). Directions and prospects for educational linguistics. New York: Springer. Lindholm-Leary, K. (2001). Theoretical and conceptual foundations for dual language education programs. In K. Lindholm-Leary, Dual language education (pp. 39–58). Clevedon, England: Multilingual Matters Ltd. McKay, Sharon; Schaetzel, Kirsten, Facilitating Adult Learner Interactions to Build Listening and Speaking Skills, CAELA Network Briefs, CAELA and Center for Applied Linguistics Meunier, Fanny; Granger, Sylviane, "Phraseology in foreign language learning and teaching", Amsterdam and Philadelphia : John Benjamins Publishing Company, 2008 Met, M., & Lorenz, E. (1997). Lessons from U.S. immersion programs: Two decades of experience. In R. Johnson & M. Swain (Eds.), Immersion education: International perspectives (pp. 243–264). Cambridge, UK: Cambridge University Press. Swain, M. & Johnson, R. K. (1997). Immersion education: A category within bilingual education. In R. K. Johnson & M. Swain (Eds.), Immersion education: International perspectives (pp. 1–16). NY: Cambridge University Press. Parker, J. L. (2020). Students' attitudes toward project-based learning in an intermediate Spanish course. International Journal of Curriculum and Instruction, 12(1), 80–97. http://ijci.wcci-international.org/index.php/IJCI/article/view/254/153 External links Language Academia UCLA Language Materials Project EF Education First BBC Learning English Applied linguistics
23291485
https://en.wikipedia.org/wiki/The%20Hackers%20Conference
The Hackers Conference
The Hackers Conference is an annual invitation-only gathering of designers, engineers and programmers to discuss the latest developments and innovations in the computer industry. On a daily basis, many hackers only interact virtually, and therefore rarely have face-to-face contact. The conference is a time for hackers to come together to share ideas. History The first Hackers Conference was organized in 1984 in Marin County, California, by Stewart Brand and his associates at Whole Earth and The Point Foundation. It was conceived in response to Steven Levy's book, Hackers: Heroes of the Computer Revolution, which inspired Brand to arrange a meeting between the individuals, or "hackers", the book named. The first conference's roughly 150 attendees included Steve Wozniak, Ted Nelson, Richard Stallman, John Draper, Richard Greenblatt, Robert Woodhead, and Bob Wallace. The gathering has been identified as instrumental in establishing the libertarian ethos attributed to cyberculture, and was the subject of a PBS documentary, produced by KQED: Hackers - Wizards of the Electronic Age. Participants at the original 1984 Hackers Conference Here is the list of participants at the original 1984 Hackers Conference, given in the contact list distributed to participants titled "List of Participants at the Hackers' Conference November 9–11, 1984" Arthur Abraham, Roe Adams, Phil Agre, Dick Ainsworth, Bob Albrecht, Bill Atkinson, Bill Bates, Allen Baum, Bruce Baumgart, Mike Beeler, Ward Bell, Gerry Berkowitz, Nancy Blachman, Steve Bobker, Stewart Bonn, Russell Brand, Stewart Brand, John Brockman, Dennis Brothers, Bill Budge, John Bumgarner, Bill Burns, Art Canfil, Steve Capps, Doug Carlston, Simon Cassidy, Dave Caulkins, Richard Cheshire, Fred Cisin, Mike Coffey, Margot Comstock, Rich Davis, Steven Dompier, Wes Dorman, John Draper, Mark Duchaineau, Les Earnest, Philip Elmer-DeWitt, Erik Fair, Richard Fateman, Lee Felsenstein, Jay Fenlason, Fabrice Florin, Andrew Fluegelman, Robert Frankston, Paul Freiberger, Rob Fulop, Robert Gaskins, Nasir Gebelli, Steve Gibson, Geoff Goodfellow, Richard Greenblatt, Roger Gregory, Leslie Grimm, Robert Hardy, Brian Harvey, Dick Heiser, Matt Herron, Andy Hertzfeld, Bruce Horn, David Hughes, John James, Tom Jennings, Jerry Jewell, Chris Jochumson, Ted Kaehler, Sat Tara Khalsa, Scott Kim, Peter LaDeau, Fred Lakin, Marc Le Brun, Jim Leeke, David Levitt, Steven Levy, Henry Lieberman, Efrem Lipkin, William Low, David Lubar, Scott Mace, John Markoff, David Maynard, Bob McConaghy, Roger Melen, Diana Merry, Mark Miller, Charles Moore, Michael Naimark, Ted Nelson, Terry Niksch, Guy Nouri, David Oster, Ray Ozzie, Donn Parker, Howard Pearlmutter, Mark Pelczarski, Michael Perry, Patricia Phelan, Tom Pittman, Eric Podietz, Kevin Poulsen, Jerry Pournelle, Larry Press, Steve Purcell, Christopher Reed, David Reed, Barbara Robertson, Michael Rogers, Pete Rowe, Peter Samson, Steve Saunders, Laura Scholl, Rich Schroeppel, Tom Scoville, Rony Sebok, Rhod Sharp, Bob Shur, Burrell Smith, David Snider, Tom Spence, Bud Spurgeon, Richard Stallman, Michael Swaine, David Taylor, Jack Trainor, Bud Tribble, Bruce H. Van Natta, Bob Wallace, Walter E. (Gene) Wallis, Bruce Webster, Ken Williams, Deborah Wise, Steve Witham, Robert Woodhead, Don Woods, Steve Wozniak, Fred Wright Logo Scott Kim designed the iconic Hackers Conference logo. 
References External links Official site Hackers - Wizards of the Electronic Age (video) Technology conferences Hacker culture Whole Earth Catalog Recurring events established in 1984
51170579
https://en.wikipedia.org/wiki/Software%20Heritage
Software Heritage
Software Heritage is a non-profit multi-stakeholder initiative unveiled in 2016 by Inria, and supported by UNESCO. Overview The stated mission of Software Heritage is to collect, preserve and share all software that is publicly available in source code form, with the goal of building a common, shared infrastructure at the service of industry, research, culture and society as a whole. Software source code is collected by crawling code hosting platforms, like GitHub, GitLab.com or Bitbucket, and package archives, like npm or PyPI, and ingested into a special data structure, a Merkle DAG, that is the core of the archive. Each artifact in the archive is associated with an identifier called a SWHID. In order to increase the chances of preserving the Software Heritage archive over the long term, a mirror program was established in 2018, joined by ENEA and FossID as of October 2020. History Development of Software Heritage began at Inria under the direction of computer scientists Roberto Di Cosmo and Stefano Zacchiroli in early 2015, and the project was officially announced to the public on June 30, 2016. In 2017 Inria signed an agreement with UNESCO for the long-term preservation of software source code and for making it widely available, in particular through the Software Heritage initiative. In June 2018, the Software Heritage Archive was opened at UNESCO headquarters. On July 4, 2018, Software Heritage was included in the French National Plan for Open Science. In October 2018 the strategy and vision underlying the mission of Software Heritage were published in Communications of the ACM. In November 2018, a group of forty international experts met at the invitation of Inria and UNESCO, which led to the publication in February 2019 of Paris Call: Software Source Code as Heritage for Sustainable Development. In November 2019, Inria signed an agreement with GitHub to improve the archival process for GitHub-hosted projects in the Software Heritage archive. As of October 2020, Software Heritage’s repository held over 143 million software projects in an archive of over 9.1 billion unique source files. Funding Software Heritage is a non-profit organization, funded largely from donations from supporting sponsors, that include private companies, public bodies and academic institutions. Software Heritage also seeks support for funding third parties interested in contributing to its mission. A grant from NLNet funded the work of Octobus and Tweag that led to rescuing 250.000 Mercurial repositories phased out from Bitbucket. A grant from the Alfred P. Sloan Foundation funds experts to develop new connectors for expanding coverage of the Software Heritage Archive Development and Community The Software Heritage infrastructure is built transparently and collaboratively. All the software developed in the process is released as free and open-source software. An ambassador program has been announced in December 2020 with the stated goal to grow the community of users and contributors. Awards In 2016 Software Heritage received the best community project award at Paris Open Source Summit 2016. In 2019 Software Heritage received the award of Academic Initiative from the Pôle Systematic. References External links History of the Internet Web archiving Web archiving initiatives Internet properties established in 2016 Digital preservation
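The Overview above notes that collected source code is stored in a Merkle DAG and that every artifact carries an intrinsic identifier, the SWHID. The sketch below illustrates what "intrinsic" means for a single file: the identifier is derived only from the file's bytes, using the Git-compatible blob hashing that SWHIDs for file contents are documented to follow. This is a simplified illustration under that assumption, not the project's own tooling.

```python
import hashlib

def content_swhid(data: bytes) -> str:
    """Compute a SWHID for a file's raw contents.

    Content identifiers are the SHA-1 of a Git-style header ("blob <length>\\0")
    followed by the bytes themselves, so they coincide with Git blob hashes.
    Identifiers for directories, revisions, releases and snapshots are built on
    top of these hashes, which is what makes the archive a Merkle DAG.
    """
    header = b"blob %d\x00" % len(data)
    return "swh:1:cnt:" + hashlib.sha1(header + data).hexdigest()

# The identifier depends only on the content, not on where or when it was
# crawled, so the same file found on GitHub, GitLab.com or PyPI gets one ID.
print(content_swhid(b"hello world\n"))
# swh:1:cnt:3b18e512dba79e4c8300dd08aeb37f8e728b8dad
```

Because identifiers are content-derived, a mirror such as the ones mentioned above can verify every object it receives simply by recomputing its hash.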
1562591
https://en.wikipedia.org/wiki/Vertical%20market%20software
Vertical market software
Vertical market software is aimed at addressing the needs of any given business within a discernible vertical market (a specific industry or market). While horizontal market software (such as word processors or spreadsheet programs) can be useful to a wide array of industries, vertical market software is developed for and customized to a specific industry's needs. Vertical market software is readily identifiable by the application-specific graphical user interface that defines it. One example of vertical market software is point-of-sale software. See also Horizontal market software Horizontal market Product software implementation method Enterprise resource planning Customer relationship management Content management system Supply chain management Resources Microsoft ships first Windows OS for vertical market from InfoWorld The Limits of Open Source - Vertical Markets Present Special Obstacles Software by type
2992958
https://en.wikipedia.org/wiki/SigmaTel
SigmaTel
SigmaTel was an American system-on-a-chip (SoC), electronics and software company headquartered in Austin, Texas. It designed AV media player/recorder SoCs, reference circuit boards, SoC software development kits built around a custom cooperative kernel with all SoC device drivers (including USB mass storage and the AV decoder DSP), media player/recorder applications, and controller chips for multifunction peripherals. SigmaTel became Austin's largest IPO as of 2003, when it began trading publicly on NASDAQ. The company employed electrical and computer engineers and other professionals with semiconductor industry experience in Silicon Hills, the second-largest IC design region in the United States after Silicon Valley. SigmaTel (trading symbol SGTL) was acquired by Freescale Semiconductor in 2008 and delisted from NASDAQ. History In the 1990s and early 2000s, SigmaTel produced audio codecs that went into the majority of PC sound cards; Creative's Sound Blaster used mainly SigmaTel and ADI codecs. This business expanded to onboard audio for computer motherboards and to MP3 players. In 2004, SigmaTel SoCs were found in over 70% of all flash-memory-based MP3 devices sold in the global market. However, SigmaTel lost its last iPod socket in 2006, when its chips were not used in the next-generation iPod Shuffle. PortalPlayer was SigmaTel's largest competitor but was bought by Nvidia after PortalPlayer's chips lost their socket in the iPod. SigmaTel was voted "Best Place to Work in Austin 2005" by the Austin Chronicle. In July 2005, SigmaTel acquired the rights to various software technologies sold by Digital Networks North America (a subsidiary of D&M Holdings and owner of Rio Audio). On July 25, 2006, Integrated Device Technology, Inc. (IDT) announced its acquisition of SigmaTel, Inc.'s AC'97 and High Definition Audio (HD-Audio) PC and notebook audio codec product lines for approximately $72 million in cash, along with the SigmaTel intellectual property and employee teams necessary for continuing the existing product roadmap; the deal was expected to close by the end of July. SigmaTel also won a spot in Samsung televisions: sales of the SGTV5800 TV audio solution, which could be used in analog, digital and hybrid televisions, ramped up, and SigmaTel later introduced the SGTV5900, which was anticipated to supplant the SGTV5800. In mid-2007 SigmaTel introduced a portable QVGA (320×240) video decoder, and support for higher resolutions using WMV and MPEG4 followed. Some SigmaTel microcontrollers, such as the STDC982G, were used in printers manufactured by Samsung and sold under the Xerox brand; Kodak all-in-one printers also used SigmaTel ICs. SigmaTel's equity at one point traded as much as $100 million below book value. Its peak share price was $45, and its first-day IPO high was around $18. After the SGTL IPO in 2003, Austin's next biggest IPO was the later spinoff of Freescale Semiconductor by Motorola Corporation. Over 150 models of MP3/WMA players used SigmaTel SDK3.1 and the STMP35xx SoC with its MS DRM10 capabilities. On February 4, 2008, Freescale Semiconductor announced that it had entered into a definitive agreement to acquire SigmaTel for $110 million. The agreement closed in the second quarter of 2008, and all SGTL shares were purchased by Freescale for $3 each. Freescale continued developing and selling the STMP3 portable AV SoC product line, comprising the ARM9-based STMP37xx and STMP36xx AV SoCs and the DSP56k-based STMP35xx portable AV SoC; product information was hosted on Freescale's ARM-based controller site. 
Freescale's i.MX2 (ARM9) and i.MX3 (ARM11) multimedia SoC product lines were integrated with the STMP3xxx product line, particularly its analog SoC features, resulting in a stronger portable multimedia product portfolio. On February 25, 2009, Freescale laid off 70% of the former SigmaTel team as part of a company-wide reduction in force. No new products from the SigmaTel design teams would be created; a skeleton crew was retained to support existing OEM customers using the existing chips until those chips reached their end-of-life phase. Freescale integrated analog IP from SigmaTel into its competing product lines and pursued component- and real-time-OS device-driver-based support for OEMs, rather than the complete turnkey hardware and software system design approach with which SigmaTel had powered hundreds of millions of portable media players. SigmaTel won MP3 player integrated-circuit patent infringement suits at the U.S. International Trade Commission after the STMP35xx principal firmware engineer documented how SigmaTel firmware used its patents related to dynamic voltage and frequency scaling, and U.S. customs physically destroyed Actions Semiconductor products at the U.S. border for intellectual property infringement. SigmaTel settled all patent litigation and in 2007 entered into a cross-licensing agreement with the Zhuhai, China-based Actions Semiconductor Co. Ltd. Both companies also agreed not to pursue possible third-party IP infringements or new legal action against each other and their respective customers for three years. Consequently, all of Actions' current and future products could be imported into the U.S. market without restriction. Products SigmaTel offered a line of efficient audio and video codec chips that were integrated into many desktop computers, notebooks, and audio playback devices, notably MP3/WMA players. Other products included microcontrollers for digital appliances, portable compressed video decoders and TV audio products. The line of popular audio chips included the portable STMP35xx and STMP36xx and the AV-capable STMP37xx SoCs. A key technology was advanced device-driver support for a broad array of multi-vendor raw NAND flash memory, used for program storage and virtual memory in lieu of discrete RAM, for AV file storage, and for new audio recordings. The STMP35xx SoC was sold into over 150 million portable media players. Former IBM engineer Dave Baker (Ph.D. EE, UT Austin) and Texas Instruments alumnus Danny Mulligan (EE, MBA) led the SoC design team at SigmaTel. Pre-IPO engineer Jonathan L. Nesbitt, a UT Austin Electrical and Computer Engineering alumnus who had worked in Motorola's Advanced Media Platforms division, was the principal lead for the STMP35xx SoC Software Development Kit from 2006 to 2009. Major contributing principal embedded software engineers from the pre-IPO period included Thor Thayer, ex-Motorolan Jeff Gleason on audio DSP, UT Austin alumnus Marc Jordan on boot ROM and USB, J.C. Pina, and MIT physicists Gray Abbott and William (Bill) Gordon. Other principals formerly with the Motorola Advanced Media Platforms division, which later became Freescale's multimedia group, included Matt Henson of Carnegie Mellon University and Janna Garafolo. Former Motorolan Tom Zudock served as VP of software and managed the software team for the STMP35xx and STMP36xx SDKs. Other technologists from the company are members of the LinkedIn group SigmaTel Alumni. 
Several IC fabs in Asia, including TSMC in Taiwan, were used to build SoC wafers. Audio encoding and recording in MP3 and WAV formats to a wide variety of flash memory were supported from a microphone, the SigmaTel FM IC (STFM1000) digital audio source, or line-in. Printed circuit board (PCB) layouts and reference schematics were provided to OEM and original design manufacturer (ODM) customers, simplifying manufacturing. Turnkey portable media player software (a custom RTOS, framework, and applications) was a large component of the company's success; SigmaTel provided this SoC software to equipment manufacturers building portable audio and video players around its chips. SigmaTel's audio chips were found in Dell laptops, several Dell desktops, Sony Vaio notebooks, and numerous other audio playback devices. The STMP35xx was an audio system-on-a-chip (SoC) that required no external RAM, voltage converters, battery chargers, headphone capacitors, analog-to-digital converters, digital-to-analog converters, or amplifiers. Over 150 portable audio product models were based on the STMP35xx SDK3, and over 150 million such portable audio player SoCs were sold from 2002 to 2006. The first-generation iPod Shuffle used the SigmaTel STMP35xx and its Software Development Kit v2.6. Other products using that SigmaTel SoC and software included the Dell Ditty, the Creative MuVo, Philips players, and many others. Audio quality for the chip was rated as the best in the industry. SDK3.1x added Microsoft DRM10 support, enabling interoperability with services such as the Rhapsody million-song subscription service, Napster, and Yahoo! Music Engine. See also SigmaTel STMP3700 References External links Archived company page SigmaTel's logo SigmaTel/Freescale datasheets reference schematics and other info Companies established in 1993 Electronics companies of the United States Fabless semiconductor companies Digital signal processors Defunct semiconductor companies of the United States
28777163
https://en.wikipedia.org/wiki/Microsoft%20Innovation%20Center
Microsoft Innovation Center
Microsoft Innovation Centers (MICs) are local government organizations, universities, industry organizations, or software or hardware vendors that partner with Microsoft with the common goal of fostering the growth of local software economies. These are state-of-the-art technology facilities open to students, developers, IT professionals, entrepreneurs, startups and academic researchers. While each Center tunes its programs to local needs, they all provide similar content and services designed to accelerate technology advances and stimulate local software economies through skills and professional training, industry partnerships and innovation. As of 10 September 2010, there were 115 Microsoft Innovation Centers worldwide, most of which were open to the public. It was also reported that Microsoft had proposed to build about 100 innovation centers in India and several in China, and some innovation centers have begun to develop in Pakistan. Overview Microsoft Innovation Centers offer a comprehensive set of programs and services to foster innovation and grow sustainable local software economies. Primary areas of focus include: Skills and Intellectual Capital: The Skills Accelerator focuses on intellectual capital and people enablement with software, business management and marketing courses, software development courses, and employment programs for students. Industry Partnerships: The Partnership Accelerator focuses on enabling successful partnerships by connecting people and organizations in the innovation ecosystem. The MICs do this by offering programs on partnering with Microsoft, and by cultivating local and regional industry alliances that support the growth of software 'industry clusters' and software quality assurance programs. Solutions and Innovation: The Innovation Accelerator focuses on enhancing local capacity for innovation through hands-on engagements. This includes labs for ISVs, start-ups, partners, students, and entrepreneurs. MIC programs and activities Imagine Cup Student to Business MIC Technical Trainee Program Business Skills Development Technical Skills Development Industry Cluster Prototype Development Business Incubator / Startup Incubation Microsoft for Startups Product Testing IT Academy Partner Showcase Technology Competence Center Executive Academy Developer Camps References External links Microsoft Innovation Center Website Innovation Center Innovation organizations Research and development in the United States
143140
https://en.wikipedia.org/wiki/NCUBE
NCUBE
nCUBE was a series of parallel computing computers from the company of the same name. Early generations of the hardware used a custom microprocessor. With its final generations of servers, nCUBE no longer designed custom microprocessors for machines, but used server-class chips manufactured by a third party in massively parallel hardware deployments, primarily for the purposes of on-demand video. Company history Founding and early growth nCUBE was founded in 1983 in Beaverton, Oregon, by a group of Intel employees (Steve Colley, Bill Ricardson, John Palmer, Doran Wilde, Dave Jurasek) frustrated by Intel's reluctance to enter the parallel computing market, though Intel released its iPSC/1 in the same year as the first nCUBE was released. In December 1985, the first generation of nCUBE's hypercube machines were released. The second generation (N2) was launched in June 1989. The third generation (N3) was released in 1995. The fourth generation (N4) was released in 1999. In 1988, Larry Ellison invested heavily in nCUBE and became the company's majority shareholder. The company's headquarters were relocated to Foster City, California, to be closer to the Oracle Corporation. In 1994, Ronald Dilbeck became CEO and set nCUBE on a fast track to an initial public offering. Pivot to video In 1996, Ellison downsized nCUBE. Dilbeck left and Ellison took over as acting CEO, redirecting the company to become Oracle's Network Computer division. After the network computer diversion, nCUBE resumed development on video servers. nCUBE deployed its first VOD video server in Dubai's Burj al-Arab hotel. In 1999, nCUBE announced it was acquiring SkyConnect, a seven-year-old software company based in Louisville, Colorado, which developed digital advertising and VOD software for cable television. In the 1990s, nCUBE shifted its focus from the parallel computing market and, by 1999, had identified itself as a video on demand (VOD) solutions provider, shipping over 100 VOD systems delivering 17,000 streams and establishing a relationship with Microsoft TV. The company was once again on IPO fast-track, only to be halted again after the bursting of dot-com bubble. Lawsuits and dot-com aftermath In 2000, SeaChange International filed a patent infringement suit against nCUBE, alleging its nCUBE MediaCube-4 product infringed on a SeaChange patent. A jury upheld the validity of SeaChange's patent and awarded damages. The U.S. Court of Appeals for the Federal Circuit subsequently overturned the ruling on June 29, 2005. A separate lawsuit against SeaChange was filed by nCUBE in 2001 after it acquired the patents from Oracle's interactive television division. nCUBE claimed that SeaChange's video server offering violated its VOD patent on delivery to set-top boxes. nCUBE won the lawsuit and was awarded over $2 million in damages. SeaChange appealed, but the decision was upheld in 2004. On the business front, the dot-com bubble burst and ensuing recession as well as lawsuits meant that nCUBE was not doing well. In April 2001 nCUBE laid off 17% of its workforce and began closing offices (Foster City in 2002 and Louisville in 2003) to downsize and consolidate the company around its Beaverton manufacturing office. Also in 2002, Ellison stepped down and named former SkyConnect CEO Michael J. Pohl as CEO. Acquired In January 2005, nCUBE was acquired by C-COR for approximately $89.5 million, with an SEC filing for the purchase in October 2004. In December 2007, C-COR was acquired by the ARRIS. 
Computer models nCUBE 10 One of the first nCUBE machines to be released was the nCUBE 10 of late 1985. It was originally called the NCUBE/ten, but the name changed over time. These machines were based on a set of custom chips, where each compute node had a processor chip with a 32-bit ALU, a 64-bit IEEE 754 FPU, special communication instructions, and 128 KB of RAM. A node delivered 2 MIPS, 500 kiloFLOPS (32-bit single precision), or 300 kiloFLOPS (64-bit double precision), and there were 64 nodes per board. The host board, based on an Intel 80286, ran Axis, a custom Unix-like operating system, and each compute node ran a 4 KB kernel, Vertex. The name nCUBE 10 referred to the machine's ability to build an order-ten hypercube, supporting 1,024 CPUs in a single machine. Some of the modules were used strictly for input/output; these included the nChannel storage control card, frame buffers, and the InterSystem card that allowed nCUBEs to be attached to each other. At least one host board needed to be installed, acting as the terminal driver; it could also partition the machine into "sub-cubes" and allocate them separately to different users. nCUBE 2 For the second series the naming was changed, and the company created the single-chip nCUBE 2 processor. This was otherwise similar to the nCUBE 10's CPU, but ran faster, at 25 MHz, to provide about 7 MIPS and 3.5 megaFLOPS; this was later improved to 30 MHz in the 2S model. RAM was increased as well, with 4 to 16 MB on a "single wide" 1 inch × 3.5 inch module; "double wide" modules doubled that, and double-wide, double-sided modules quadrupled it. The I/O cards generally had less RAM, with different backend interfaces to support SCSI, HIPPI and other protocols. Each nCUBE 2 CPU also included 13 I/O channels running at 20 Mbit/s. One of these was dedicated to I/O duties, while the other twelve were used as the interconnect system between CPUs. Each channel used wormhole routing to forward messages. The machines themselves were wired up as order-twelve hypercubes, allowing for up to 4,096 CPUs in a single machine. Each module ran a 200 KB microkernel called nCX, but the system now used a Sun Microsystems workstation as the front end and no longer needed the Host Controller. nCX included a parallel filesystem that could do 96-way striping for high performance. The C and C++ languages were available, as were NQS, Linda, and Parasoft's Express, supported by an in-house compiler team. The largest nCUBE 2 system installed was at Sandia National Laboratories, a 1,024-CPU system that reached 1.91 gigaFLOPS in testing. In addition to the nCX operating system, it also ran the SUNMOS lightweight kernel for research purposes. Researchers Robert Benner, John Gustafson and Gary Montry of the Parallel Processing Division of Sandia National Laboratories first won the $100 Karp Prize and then won the first Gordon Bell Prize in 1987 using the nCUBE 10. nCUBE-3 The nCUBE-3 CPU used a 64-bit arithmetic logic unit (ALU). Its improvements included a process shrink to 0.5 μm, allowing the speed to be increased to 50 MHz (with plans for 66 and 100 MHz). The CPU was also superscalar and included 16 KB instruction and data caches and a memory management unit for virtual memory support. Additional I/O links were added, with 2 dedicated to I/O and 16 for interconnects, allowing for up to 65,536 CPUs in the hypercube. The channels operated at 100 Mbit/s, owing to the use of 2-bit parallel lines instead of the serial lines used previously. 
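Across these generations the interconnect was a binary hypercube: each node's address is an n-bit number, two nodes are wired together exactly when their addresses differ in a single bit, and a message needs at most n hops, one per differing bit, so the hop count equals the Hamming distance between the source and destination addresses. The sketch below illustrates that addressing arithmetic with dimension-order ("e-cube") routing, a common fixed-routing scheme for hypercubes; it is an illustration of the topology only, not of nCUBE's actual routing hardware or firmware.

```python
def neighbors(node: int, order: int) -> list[int]:
    """Nodes directly linked to `node` in an order-`order` hypercube.

    Flipping each of the `order` address bits gives the directly wired
    neighbors, so an order-12 machine has 12 links per node and 2**12 = 4096
    nodes in total.
    """
    return [node ^ (1 << dim) for dim in range(order)]

def fixed_route(src: int, dst: int) -> list[int]:
    """Dimension-order ("e-cube") route from src to dst.

    Differing address bits are corrected lowest dimension first; the number
    of hops equals the Hamming distance between the two addresses.
    """
    path, node = [src], src
    diff = src ^ dst
    dim = 0
    while diff:
        if diff & 1:              # this address bit differs: cross that link
            node ^= 1 << dim
            path.append(node)
        diff >>= 1
        dim += 1
    return path

# Example on a small order-4 cube (16 nodes): route from node 0b0011 to 0b1100.
print(neighbors(0b0011, 4))         # [2, 1, 7, 11]
print(fixed_route(0b0011, 0b1100))  # [3, 2, 0, 4, 12]: four hops, Hamming distance 4
```

A fixed scheme like this always takes the same path for a given source and destination, which is the behaviour the nCUBE-3 supplemented with the adaptive routing described next.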
The nCUBE-3 also added fault-tolerant adaptive routing support in addition to fixed routing, although in retrospect the rationale for this is not entirely clear. A fully loaded nCUBE-3 machine could use up to 65,536 processors, for 3 million MIPS and 6.5 teraFLOPS; the maximum memory would be 65 TB, with a network I/O capability of 24 TB/second. The design was thus weighted toward I/O, which is usually the limiting factor. The nChannel board provided 16 I/O channels, each supporting transfers at 20 MB/s. A microkernel was developed for the nCUBE-3 machine but was never completed, having been abandoned in favor of Plan 9's Transit operating system. nCUBE-4 The nCUBE-4 marked the transition to commodity processors, with each node containing an Intel IA32 server-class CPU. The n4 also brought an exclusive focus on video streaming rather than scientific applications. Each hub contained one hypercube node, one CPU, a pair of PCI buses, and up to 12 SCSI drives. The n4 was followed by the n4x, the n4x r2, and the n4x r3; the last two were based on the Serverworks chipset rather than Intel's. The nCUBE-5 was very similar to the n4 family but incorporated two hypercube nodes in each hub and only supported video streaming over Gigabit Ethernet. In 1999, nCUBE announced the MediaCUBE 4, which scaled from 80 simultaneous 3 Mbit/s streams up to 44,000 simultaneous VOD streams, with concurrent MPEG-2, MPEG-1 and mid-bit-rate encoding formats. See also Ametek INMOS transputer iWarp Parsytec SUPRENUM References External links nCUBE Corporation (description of their machines) Beaverton, Oregon Defunct companies based in Oregon Computer companies of the United States Massively parallel computers Supercomputers Companies established in 1983 1983 establishments in Oregon 2005 disestablishments in Oregon Plan 9 from Bell Labs 2005 mergers and acquisitions Privately held companies based in Oregon
50353492
https://en.wikipedia.org/wiki/Early%20Learning%20House
Early Learning House
Early Learning House is a collection of four main educational video games and two compilations for the Windows and Macintosh platforms, developed by Theatrix Interactive, Inc. and published by Edmark software. Each different game focuses on a particular major learning category with selectable skill settings for preschooler, kindergarten and elementary learners. Millie's Math House (1992) on mathematics, Bailey's Book House (1993) on language, Sammy's Science House (1994) on science, and Trudy's Time and Place House (1995) on history and geography. A spin-off, Stanley's Sticker Stories (1996), sees players create animated storybooks with the series' characters. Millie & Bailey Preschool and Millie & Bailey Kindergarten each contain the combined activities from two of the four software products. In addition the programs can be configured by an adult mode to suit students with special needs. Most of the activities in every game have two modes, one to allow learners to explore and try it out for themselves and the other for learners to follow specific tasks set by the game characters. Learners also have the option to print pictures of creative activities and record sounds in phonics activities. Later the games were re-developed by Houghton Mifflin Harcourt Learning Technology and re-published by The Learning Company with newer graphics and additional activities. Production ERAC created an agreement with IBM Canada's K-12 Division, with support from the British Columbia Ministry of Education, to provide The Edmark House Series software to British Canadian schools and districts for free. Enhanced versions of the products were announced on September 25, 1995, which included new activities, added difficulty levels, and a Dear Parents Video Presentation. Games The purpose of the series is to "provide students with a positive environment to explore early learning concepts". Millie's Math House was released in October 1992 (Enhanced in August 1995) and stars the cow Millie. It primarily focuses on counting, quantities and simple figures divided into nine different activities (seven in earlier versions and six in first version). Bailey's Book House was released in June 1993, and stars the cat Bailey. It primarily focuses on reading, playing with words and phonics divided into nine different activities (seven in earlier versions and five in first version). Sammy's Science House was released in June 1994, and stars the snake Sammy. It primarily focuses on biology, experiments and matter divided into seven different activities (five in earlier versions). The Windows 95 version of the game was shipped July 31, 1995. Trudy's Time and Place House was released in September 1995, and stars the crocodile Trudy. It primarily focuses on geography, simulation and time divided into seven different activities (five in earlier versions). Edmark also released software with two house series combined together which included half of the respective software's activities: Millie & Bailey Preschool and Millie & Bailey Kindergarten. Millie & Bailey is a two-part edutainment video game series featuring the titles Millie & Bailey Kindergarten and Millie & Bailey Preschool. Edmark repurposed activities from its Early Learning House titles Millie's Math House, Bailey's Book House, and Sammy's Science House into the two multisubject Millie & Bailey games. The former three games could still be purchased individually. Edmark Singles were added to the titles' main menus. 
These grade-specific school versions contained teacher's guides and toll-free technical support. Both titles were shipped for the 1997 holiday season. Reception Critical reception The New York Times deemed Edmark an "impressive series", adding that "all four programs are a lot of fun". A reviewer from SuperKids said Millie's Math House was an "excellent introductory math program for pre-schoolers" that was both educational and fun, while adding that the sound and graphics were adequate. A reviewer from TechWithKids thought the title was "well thought out" and offered a "supportive" environment within which the player could learn, noting that it was suitable for both the classroom and home. The game was reviewed in the Oppenheim Toy Portfolio Guide Book where the authors described the "six quality math games" as appropriate for children aged three to six. A reviewer from SuperKids said Bailey's Book House was a "classic" and a "must-have" within the early learning genre. Charles Law of PC Alamode Magazine said the game was "multifaceted", and thought it would help young learners "catch up and keep up". In Jill Fain Lehman's article A Review of Kids' Software for Children with ASD, she deemed the activity Sorting Station from Sammy's Science House a "very good classification game". Ellen Adams wrote that the title offered an "excellent" introduction to science for young children, and thought that the game's entertainment was heightened due to the "constant encouragement". Childhood Education said the game was an "inviting exploration program" and "excellent introduction" to the subject matter. MacUser gave Trudy's Time and Place House a perfect 5 out of 5 score, and named it one of 1996's top 50 CD-ROMs. Referring to Millie & Bailey Kindergarten, Emergency Librarian felt "Edmark has truly picked the best of the best to include on this CD [compilation]". Visual Literacy singled out the storyboarding minigame 'Make a Move' in Kindergarten. Young Kids and Computers deemed it an "excellent variety pack" and felt the activities were "well designed". Consumer reports home computer buying guide 2000 noted it as a prime example of “early-learning” software alongside the Freddi Fish series by Humongous. Child Care Information Exchange wrote of Millie & Bailey Preschool, "overall [it] continues to set the standard for appropriate content, active involvement, and for clever embedding of the learning content in preschool computer activities." Exchange reported that "preschoolers ask to play the program over and over again". Awards The Early Learning House games had earned 40 awards around the time of their creation. 
Year | Title | Award
1992 | Millie's Math House | MacUser Magazine Editors' Choice Award for Best Children's Program
1993 | Millie's Math House | Software Publishers Association Excellence in Software Award for Best Early Education Program
1993 | Millie's Math House | CODiE Award for Best Early Education Program
1994 | Millie's Math House | Oppenheim Toy Portfolio Award
1994 | Bailey's Book House | Parent's Choice Award for Best Early Childhood Software
1994 | Sammy's Science House | Parent's Choice Award for Software
1995 | Sammy's Science House | Family Channel Seal of Quality
1996 | Trudy's Time and Place House | Software Publishers Association Excellence in Software Award for Best Early Education Program
1996 | Trudy's Time and Place House | CODiE Award for Best Early Childhood (K-3) Education Software Program
1998 | Millie & Bailey Kindergarten | CODiE Award for Best Education Software Upgrade
References External links Lander Edmark House Series at Educational Resources Edmark Awards Millie's Math House Awards Bailey's Book House Awards Sammy's Science House Awards Trudy's Time and Place House Awards MacOS games Windows games Houghton Mifflin Harcourt franchises Children's educational video games Educational video games Video games developed in the United States Video game franchises introduced in 1992
2291804
https://en.wikipedia.org/wiki/Microsoft%20Office%20Live%20Meeting
Microsoft Office Live Meeting
Microsoft Office Live Meeting is a discontinued commercial subscription-based web conferencing service operated by Microsoft. Live Meeting included software installed on client PCs and used a central server for all clients to connect to. Microsoft now produces Skype for Business, an enterprise unified communications product that can be rolled out either on-premises or in the cloud. Overview Microsoft Office Live Meeting was a separate piece of software installed on a user's PC (the Windows-based meeting console). The software was made available for free download from the Microsoft website. There was also a Java-based console offering the functionality of the earlier release, which also operated in Mac OS X and Solaris environments. The desktop client for Live Meeting was not compatible with the Mac in either Firefox or Safari 3.x; however, non-Windows users could connect to a web-based Live Meeting if the meeting organizer published an HTTP URL to access the meeting. Live Meeting was convergence software (i.e., it allowed integration with an audio conference): using the web interface, users could control PSTN lines (mute all parties except themselves, eject parties, etc.). User accounts were grouped together in Conference Centers (a unique URL) starting with www.livemeeting.com/cc/. . . or www.placeware.com/cc/. . . Users could join a Live Meeting session free of charge; charges for Live Meeting were levied on an account basis. Accounts were mostly supplied by resellers (global telecoms companies), which charged per-minute or monthly standing fees. With the introduction of Office 365, Live Meeting customers were encouraged to move to Microsoft Lync Server. Live Meeting 2007 With Live Meeting 2007, Microsoft offered both a hosted model for Microsoft Office Live Meeting 2007 and a CPE (customer premises equipment) solution, namely Office Communications Server 2007. In addition to Microsoft directly hosting Microsoft Office Live Meeting 2007, hosting partners also offered it as a fee-based service. Whether attendees used the Live Meeting service or Office Communications Server 2007 (OCS 2007) to power their web conference, they were able to use the same client software. New features included:
Rich media presentations (including Windows Media and Flash)
Live webcam video
"Panoramic video" with Microsoft RoundTable
Multi-party two-way VoIP audio
PSTN and VoIP audio integration
Active speaker indicator
Public events page
Advanced testing and grading
High fidelity recordings
Personal recordings
Virtual Breakout Rooms
"Handout" distribution (file transfer)
Live Meeting Web Access (MWA) was redesigned in this release to provide a user experience nearly identical to the new Windows-based Live Meeting client. One benefit was that Live Meeting Web Access was a Java applet and therefore ran on non-Windows operating systems such as Linux, Solaris, and Mac OS X. The Live Meeting product was also intended to operate with the Polycom CX5000 (formerly known as the Microsoft RoundTable), a 360-degree video camera optimized to work with Microsoft Office Live Meeting 2007. One new feature included in this version allowed the Microsoft Office Live Meeting client to automatically switch the larger video window to the actively speaking participant. This auto-switch feature was not specific to the Polycom CX5000 product; it worked with any USB-based camera. The main advantage of the CX5000 was its 360-degree camera view, suitable for conference rooms with several participants. 
With specially designed microphones, the CX5000 was able to determine the location of the active speaker and then tell Microsoft Office Live Meeting which camera angle to focus on. History Live Meeting originated as the product of a separate company, PlaceWare. Microsoft acquired PlaceWare to improve upon NetMeeting, its own web conferencing technology, and subsequently dropped development of NetMeeting. See also Comparison of office suites Web conferencing Comparison of web conferencing software Collaborative software References External links Web conferencing Live Meeting Teleconferencing Videotelephony
62398615
https://en.wikipedia.org/wiki/PALISADE%20%28software%29
PALISADE (software)
PALISADE is an open-source, cross-platform software library that provides implementations of lattice cryptography building blocks and homomorphic encryption schemes. History PALISADE adopted the open, modular design principles of its predecessor, the SIPHER software library from the DARPA PROCEED program. SIPHER development began in 2010, with a focus on modular open design principles to support rapid application deployment over multiple FHE schemes and hardware accelerator back-ends, including mobile, FPGA and CPU-based computing systems. PALISADE began building from the earlier SIPHER designs in 2014, with an open-source release in 2017 and substantial improvements roughly every six months thereafter. PALISADE development was originally funded by the DARPA PROCEED and SafeWare programs, with subsequent improvements funded by additional DARPA programs, IARPA, the NSA, NIH, ONR, the United States Navy, the Sloan Foundation and commercial entities such as Duality Technologies. PALISADE has subsequently been used in commercial offerings, for example by Duality Technologies, which raised funding in a seed round and a later Series A round led by Intel Capital. Features PALISADE includes the following features (a brief C++ usage sketch follows below):
Post-quantum public-key encryption
Fully homomorphic encryption (FHE)
  Brakerski/Fan-Vercauteren (BFV) scheme for integer arithmetic with RNS optimizations
  Brakerski-Gentry-Vaikuntanathan (BGV) scheme for integer arithmetic with RNS optimizations
  Cheon-Kim-Kim-Song (CKKS) scheme for real-number arithmetic with RNS optimizations
  Ducas-Micciancio (FHEW) scheme for Boolean circuit evaluation with optimizations
  Chillotti-Gama-Georgieva-Izabachene (TFHE) scheme for Boolean circuit evaluation with extensions
Multiparty extensions of FHE
  Threshold FHE for BGV, BFV, and CKKS schemes
  Proxy re-encryption for BGV, BFV, and CKKS schemes
Digital signature
Identity-based encryption
Ciphertext-policy attribute-based encryption
Availability There are several known git repositories/ports for PALISADE:
C++: PALISADE Stable Release (official stable release repository), PALISADE Preview Release (official development/preview release repository), PALISADE Digital Signature Extensions, PALISADE Attribute-Based Encryption Extensions (includes identity-based encryption and ciphertext-policy attribute-based encryption)
JavaScript / WebAssembly: PALISADE WebAssembly (official WebAssembly port)
Python: Python Demos (official Python demos)
FreeBSD: PALISADE (FreeBSD port)
References Homomorphic encryption Cryptographic software Free and open-source software
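To make the feature list above more concrete, the sketch below shows how an application might use PALISADE's BFV scheme to add and multiply encrypted integer vectors. It is a minimal sketch modeled on the simple integer examples shipped with PALISADE v1.x; the factory function genCryptoContextBFVrns, its parameter order, and the surrounding method names are taken from that release and should be treated as assumptions that may differ in other versions.

```cpp
// Minimal BFV sketch modeled on PALISADE v1.x's simple-integers example.
// The factory signature and enum names below are assumed from that release.
#include "palisade.h"

#include <iostream>
#include <vector>

using namespace lbcrypto;

int main() {
  // Illustrative BFV parameters: plaintext modulus, noise parameter,
  // security level, and multiplicative depth.
  int plaintextModulus = 65537;
  double sigma = 3.2;
  SecurityLevel securityLevel = HEStd_128_classic;
  uint32_t depth = 2;

  // Build a crypto context for the RNS variant of BFV and enable the
  // encryption and somewhat-homomorphic-evaluation features.
  CryptoContext<DCRTPoly> cc =
      CryptoContextFactory<DCRTPoly>::genCryptoContextBFVrns(
          plaintextModulus, securityLevel, sigma, 0, depth, 0, OPTIMIZED);
  cc->Enable(ENCRYPTION);
  cc->Enable(SHE);

  // Generate a key pair plus the relinearization key needed for EvalMult.
  auto keys = cc->KeyGen();
  cc->EvalMultKeyGen(keys.secretKey);

  // Pack two integer vectors into plaintexts and encrypt them.
  std::vector<int64_t> v1 = {1, 2, 3, 4};
  std::vector<int64_t> v2 = {5, 6, 7, 8};
  Plaintext p1 = cc->MakePackedPlaintext(v1);
  Plaintext p2 = cc->MakePackedPlaintext(v2);
  auto c1 = cc->Encrypt(keys.publicKey, p1);
  auto c2 = cc->Encrypt(keys.publicKey, p2);

  // Homomorphic element-wise addition and multiplication on ciphertexts.
  auto cAdd = cc->EvalAdd(c1, c2);
  auto cMul = cc->EvalMult(c1, c2);

  // Decrypt and display the results.
  Plaintext rAdd, rMul;
  cc->Decrypt(keys.secretKey, cAdd, &rAdd);
  cc->Decrypt(keys.secretKey, cMul, &rMul);
  rAdd->SetLength(v1.size());
  rMul->SetLength(v1.size());
  std::cout << "sum:     " << rAdd << std::endl;  // expected {6, 8, 10, 12}
  std::cout << "product: " << rMul << std::endl;  // expected {5, 12, 21, 32}
  return 0;
}
```

If it builds against the targeted release, the two printed plaintexts decrypt to the element-wise sum and product of the inputs, computed entirely on ciphertexts; the same CryptoContext pattern applies to the BGV and CKKS schemes listed above.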
35235977
https://en.wikipedia.org/wiki/Baldur%27s%20Gate%3A%20Enhanced%20Edition
Baldur's Gate: Enhanced Edition
Baldur's Gate: Enhanced Edition is a remake of the 1998 role-playing video game Baldur's Gate, developed by Overhaul Games, a division of Beamdog, and published by Atari. It was released for Microsoft Windows on November 28, 2012, with additional releases between December 2012 and November 2014 for iPad, OS X, Android and Linux, and most recently for Xbox One, PlayStation 4, and Nintendo Switch on October 15, 2019. The remake combines the original game, Baldur's Gate, with its expansion Baldur's Gate: Tales of the Sword Coast, retaining the original elements from both (story, in-game locations, gameplay and characters) while including additions such as a separate arena adventure entitled The Black Pits and a number of improvements, some of which were imported from Baldur's Gate II: Shadows of Amn. On March 31, 2016, an expansion was released for the remake, Baldur's Gate: Siege of Dragonspear, which focuses on the events following the conclusion of Baldur's Gate that lead up to Baldur's Gate II: Shadows of Amn. Gameplay Much like the original game, Baldur's Gate: Enhanced Edition follows the rules of 2nd Edition Advanced Dungeons & Dragons, licensed by Wizards of the Coast, and features both single-player and multiplayer modes, while much of the gameplay, such as moving between locations, the "paper doll" equipment system, and inventory management, remains the same as in the original game. The Enhanced Edition brings a number of new features; Beamdog stated that the Overhaul team had added several hundred improvements to the original game. Features created for the remake include cross-platform functionality for the multiplayer mode, allowing players on different platforms to play with each other; a stand-alone arena adventure, The Black Pits, in which players form a party of six adventurers and battle increasingly difficult opponents; four additional characters, each with their own dialogue and some with their own associated adventures, along with bonus quest enemies described as posing a "more vigorous challenge"; new class kits (the Dwarven Defender, Shadowdancer, Dragon Disciple, Dark Moon Monk and Sun Soul Monk kits); a few additional locations; an achievement system (for Steam versions only); and two difficulty modes, Story Mode and Legacy of Bhaal. Story Mode is an enhanced version of Easy difficulty, in which party members cannot die, have a strength stat of 25 (regardless of the actual value) and are immune to most negative effects, while all enemies can be damaged and killed easily. Legacy of Bhaal, in contrast, is an enhanced version of Insane difficulty, with enemies having more hit points, better saving throws, improved THAC0 and more attacks per round, and the difficulty cannot be changed once selected by the player. 
Some of the major improvements to the original game that are incorporated into the remake include a revamped user interface, the ability to play at higher resolutions as well as different viewing modes including widescreen, increased flexibility to mod the game, a new renderer, multiplayer matchmaking abilities (at the time of launch, this function was in a beta state), updates to the map and journal system, a raised level cap, and the importation of improvements and some elements from both Baldur's Gate II: Shadows of Amn and its expansion pack, including classes, subraces and class kits that were not available in the original game; in a later patch the romance element from Baldur's Gate II was also incorporated, though only for the new characters. The developers included an auto-update function, with further modifications made to the remake based on suggestions from users. The iPad and Android versions are described as a radical departure from the game's original interface, allowing for zooming in and out via multi-touch gestures, which allows for larger text. The tablet versions allow users to swipe between screens instead of clicking on tabs. Ex-BioWare employee and creative director for the Enhanced Edition Trent Oster said, "When I describe playing a Baldur's Gate combat scenario to someone, I use the analogy of a football playbook. ... When you think about Baldur's Gate in this light, the iPad makes so much sense. In fact, I think Baldur's Gate is almost the perfect game for the touch interface—it was just released a decade early." Synopsis Much of the game retains the original setting of Baldur's Gate and its expansion - that of the Forgotten Realms continent of Faerûn, the city of Baldur's Gate and the locations south of it within the regions of the Sword Coast and the Western Heartlands, including Beregost, Nashkel, Durlag's Tower and Candlekeep. The main story, too, is retained - the player creates a character and takes them across the game's setting to investigate the iron crisis that is plunging Baldur's Gate towards a war with the neighbouring nation of Amn, uncovering it as the work of the game's main villain, Sarevok. All of the original characters that can join the player's party, and the quests that can be undertaken from both the main game and the expansion, are still present within the remake. The Enhanced Edition adds four non-player characters (NPCs) that can join the party, with three incorporating standalone story-lines (an element used with NPCs in Baldur's Gate II), of which two feature new locations. The standalone arena adventure created for the remake, The Black Pits, features a story set before the main events of Baldur's Gate, in which players create their own party of adventurers, in a similar fashion to the Icewind Dale series, and undertake battles in an area situated in a small portion of the Underdark, though there is little to do in terms of exploration beyond the holding cells the party can move around within. The Black Pits Prior to the events of Baldur's Gate, a colony of Duergar dwarves is defeated by a mad drow sorcerer named Baeloth, who imprisons them and forces them to create an entertainment complex of his own design within the Underdark. 
Through his complex, the Black Pits, Baeloth assumes the title of "Baeloth the Entertainer", drawing in living creatures, monsters and adventurers from throughout the realms, either by invitation or by forcefully capturing them, and pitting them against each other for loot. All who are brought to the Pits are imprisoned under the effects of a geas and cannot escape, eventually being killed during one of Baeloth's matches; the sole exception was a champion who, after winning all their matches, was pitted against Baeloth himself and died against the sorcerer. Baeloth's latest victims, a party of six adventurers (whose origins and past remain a mystery), survive their qualifying match and find themselves coerced into fighting in the Black Pits, learning that the drow is aided by a djinn named Najim, while their holding cells include several prisoners forced to operate as merchants, and a beholder who offers advice on the matches the party encounter. Over time, with each successful match, Baeloth becomes increasingly frustrated at their survival against the tougher monsters he brings in, while some of the merchants express support for their victories, along with hope that they may become free. Eventually, after the group become champions of the Pits, Baeloth decides to take them on himself, expecting to defeat and eliminate them so as to provide the crowds with some new fighters to root for. However, the match ends with Baeloth himself defeated and his geas on all of the complex's prisoners lifted. Najim, grateful for his freedom, helps the party escape, knowing that the duergar clan they freed will see them as outsiders and not accept them. The party swiftly leave by a portal, and enjoy their freedom soon afterwards. Development Work on creating the Enhanced Editions of the first two Baldur's Gate games began with negotiations between Beamdog and the games' owners, Atari and Wizards of the Coast, which lasted for approximately fourteen months before a contract was arranged and agreed upon. Along with this, the developer also sought a license from BioWare for use of the Infinity Engine for the original game's remake. During development, Beamdog made the decision to openly expand the number of platforms that the two Enhanced Editions would be released on, and announced the remakes of Baldur's Gate and Baldur's Gate II on March 15, 2012, adding that Baldur's Gate: Enhanced Edition was to have a November 2012 release date and would be available for both PC and iPad, with the iPad version designed to be compatible with all three generations (at the time) of the handheld device; on March 29, 2012, the developer further announced that the game was being designed to work on OS X, with the main art content creation program used to make content for the re-forged Infinity Engine revealed as 3DS Max. 
Plans for two console versions of the game were shelved after developer Trent Oster revealed that an Xbox 360 version could not be made because the controller was not a good fit for it, and that a Wii U version would not be made after the developers' negative experience with Nintendo while developing MDK2 HD. Another console version, planned for the PlayStation 3, was also cancelled after negotiations between Sony and Beamdog (the former having contacted the developer first about this) broke down over the funding needed to redesign the game for the console; Oster revealed that the redesign cost would have been high, due to the amount of work required to fix the controls and make the game's UI work on the PlayStation 3. Oster later revealed that, due to fan demand, time to give more thought to a console edition had been allocated for after the release of Baldur's Gate II: Enhanced Edition. The team also initially looked into a retail edition due to fan demand, later clarifying that it would likely be a collector's edition, but subsequent discussion made the possibility unlikely. For assistance on the game, Nat Jones was recruited as the project's art director, senior Dungeons & Dragons writer Dave Gross was brought in as a writer, and Sam Hulick composed additional music for the game. Overhaul also reached out to the modding community of the original Baldur's Gate games for further assistance in their efforts to revive the classic RPG. Speculation raised in August 2012 that the resurrection of Black Isle Studios, the producers of the original Baldur's Gate games, would see Overhaul co-develop the game with them was dismissed by Beamdog, which responded by saying that there were no plans to do so, although they wished the new studio luck. It was revealed that the game's first playable demo would be at PAX Prime 2012. While plans were made to release the game on September 18, 2012, the launch date was pushed back due to the amount of work being done to create it in 16 different languages, including English, French, German and Spanish, further improvements to its gameplay, and the fixing of glitches pointed out to the developers before launch. To make up for the delay, Overhaul revealed that the game would include further additions that the original did not have, including new characters, areas, story content and hours of gameplay. Release The game was officially released on November 30, 2012, but due to contract obligations, the game's Windows version was launched on the Beamdog website and also via download from its Client program (although the Beamdog Client is not required to play the game after initial activation), while the iPad version was launched on Apple's App Store, and the OS X version on the Mac App Store. On January 16, 2013, the game was released on Steam. The Linux version of the game was released the following year on November 27, 2014, after being delayed earlier that year on July 1, with Trent Oster explaining on Twitter that the decision to do so was to wait until after the release of a patch to update the game to version 1.3, as part of an improved commitment to quality releases. On June 19, 2013, Baldur's Gate: Enhanced Edition was removed from sale on Beamdog's website and the Apple App Store due to contractual issues. The issues were resolved within less than a month, and the game became available for sale at both outlets once again on August 15. 
Extensive further work was done on the game, with patches released by the developer for all available versions to fix glitches and problems encountered by players. News of updates was often released on the official forums; on August 29, 2014, Beamdog announced the roll-out of a patch (version 1.3.2053) on its forums, stating that it was to be made immediately available to all versions of the game and adding that it had been "..submitted to Apple for approval" prior to rolling it out on the Mac and iTunes App Stores. Console versions for Nintendo Switch, PlayStation 4, and Xbox One were released on October 15, 2019 by Beamdog and Skybound Games. Reception Baldur's Gate: Enhanced Edition received mostly favorable reviews from critics; on aggregate review website Metacritic the PC version holds an overall score of 78 out of 100. Shacknews praised the remake, calling the game "a truly enhanced version of a classic game". IGN gave the game a score of 8.1, saying "Despite a dearth of immediately obvious changes, Baldur's Gate has aged well, and new players will find many hours' worth of fun if they approach it with an understanding of its increasingly antiquated framework." Role-playing game website GameBanshee gave the game a poor review, listing many bugs and oversights and commenting that they "cannot in good conscience recommend Baldur's Gate: Enhanced Edition, considering the original is still available, has hundreds of mods and bug fix packs, costs $10 USD less, and is just as great as it's ever been." GameSpy concurred with this, questioning whether $20 was a fair price and adding that it "would be more impressive if we didn't already have such easy access to the original and its mods, though its cross-platform support will be nice for other platforms." As of 2015, the Enhanced Edition had sold over a million copies. Sequels Overhaul Games revealed in an interview on August 27, 2012, that, although they had not received a contract to make it, the developers would make Baldur's Gate 3 if both of the enhanced editions of Baldur's Gate and Baldur's Gate II did well financially and if the team demonstrated the ability to successfully make their own original content, describing it as their "long-term goal". Furthermore, they stated that the success of the enhanced editions, which had fueled their decision to remake Icewind Dale, would likely see them attempt to produce an overhaul of Planescape: Torment using rules from Baldur's Gate: Enhanced Edition and the Infinity Enhanced Engine, and that it would possibly set the stage for a sequel to Torment. Following its launch, the game was succeeded by Baldur's Gate II: Enhanced Edition on November 15, 2013, a remake that combined Baldur's Gate II: Shadows of Amn with its expansion Baldur's Gate II: Throne of Bhaal while including the new classes and content from Baldur's Gate: Enhanced Edition, four new characters with new locations and quests tied to them, and a new standalone arena adventure; it was made available for all of the same platforms. Following its success, Overhaul Games also produced a remake of Icewind Dale, entitled Icewind Dale: Enhanced Edition, released on October 30, 2014. 
An expansion entitled Baldur's Gate: Siege of Dragonspear was announced on July 10, 2015, aiming to add around 25 to 30 hours of new content and focusing on events that begin after the conclusion of Baldur's Gate and end just before the start of Baldur's Gate II; the expansion was released on March 31, 2016, accompanied by a streaming event on Twitch and an AMA on the Baldur's Gate subreddit. References External links 2012 video games Android (operating system) games Baldur's Gate video games Infinity Engine games IOS games Linux games MacOS games Multiplayer and single-player video games Nintendo Switch games PlayStation 4 games Role-playing video games Video game remakes Video games developed in Canada Video games featuring protagonists of selectable gender Video games with expansion packs Windows games Xbox One games