12791444
https://en.wikipedia.org/wiki/SuperHappyDevHouse
SuperHappyDevHouse
SuperHappyDevHouse (a.k.a. SHDH) is an international series of social events which organizers originally conceived as parties for hackers and thinkers. Founded May 29, 2005 by Jeff Lindsay and David Weekly (founder of PBwiki), SHDH in Silicon Valley began by hosting 150 to 200 people every six weeks at rotating venues throughout the San Francisco Bay Area and Silicon Valley, California. The unusual name derived from a popular 1991 Saturday Night Live satire, Happy Fun Ball, which lampooned TV commercials and the NERF Ball. Weekly lived in a house nicknamed "SuperHappyFunHouse" after SNL's commercial parody, and that name was given yet another twist as SuperHappyDevHouse. Global expansion By 2008, SHDH had expanded globally with CocoaDevHouse (London); SuperHappyDevClub (Cambridge, UK); Cologne DevHouse (Cologne, Germany); SuperHappyDevFlat (Zurich, Switzerland); SuperHappyHackerHouse (Vancouver, Canada); SuperHappyDevHouse Aotearoa (New Zealand); PhoenixDevHouse (Arizona); BostonDevHouse (Cambridge, Massachusetts); and DevHouse Pittsburgh (Pennsylvania). There was a DevHouse in Hermosillo, Mexico in September 2008, marking the first Latin American DevHouse, followed shortly thereafter by a Mexico City DevHouse on November 1, 2008, GuadalajaraDevHouse (Guadalajara, Mexico) on September 6, 2009, and later by a DevHouse in Mérida, Mexico on February 28, 2010. People, whether technical or creative, gather at a private home for not-so-serious productivity and socialization. Organizers say they're trying to "resurrect the spirit of the Homebrew Computer Club." It is a non-exclusive event intended for passionate and creative technical people who want to have some fun, learn new things, and meet new people. There is no admission charge; the event is paid for by donations and sponsorships. The original SHDH was held in Hillsborough, California and later in Los Gatos, California. In August 2009, SHDH spawned Hacker Dojo, a hackerspace in Mountain View, California, to allow for 24/7 programming, hackathons, classes, workshops, and lectures. SHDH now operates as an activity of Hacker Dojo Corporation. References Wired News commentary mentions DevHouse 10/25/06 Mercury News Business Cover Article on SHDH, 6/17/07 News.com coverage of SHDH19 Boston Globe mention of DevHouse Boston, 8/11/07 San Francisco Chronicle: 11 Things: SuperHappyDevHouse 10/2/08 Fast Company: Does The Tech Industry's Obsessive Party Fetish Pay Off? External links Computer-related events Recurring events established in 2005
19481730
https://en.wikipedia.org/wiki/2008%20Golden%20Helmet%20%28Poland%29
2008 Golden Helmet (Poland)
The 48th Golden Helmet was the 2008 edition of the Golden Helmet, organized by the Polish Motor Union (PZM). It took place on October 25 at the Olympic Stadium in Wrocław, Poland. The defending winner was Grzegorz Walasek from Zielona Góra. The final was originally scheduled for October 17, but after the German Grand Prix was cancelled and the final Grand Prix was rescheduled for October 18, the Main Commission for Speedway Sport decided to change the date. After top riders withdrew, the Main Commission decided that the top three riders would qualify automatically for the 2010 Speedway Grand Prix Qualification without going through the domestic qualifications. The Golden Helmet was won by Damian Baliński of Unia Leszno, who beat Jarosław Hampel (also Unia Leszno) and Adrian Miedziński (Unibax Toruń). It was the first time Baliński had won the Golden Helmet; he had finished second in 2005. Hampel had been third in 2003, and Miedziński had never won a medal in this competition. Starting positions draw The Main Commission for Speedway Sport (Główna Komisja Sportu Żużlowego, GKSŻ), which is part of the Polish Motor Union, nominated 20 riders (12 plus 3 track reserves from the Ekstraliga and 4 plus 1 from the First League). Rafał Dobrucki, Tomasz Jędrzejak and Norbert Kościuch later replaced the injured Adrian Gomólski, Tomasz Gapiński and Krzysztof Buczkowski. After the final's date was changed, Janusz Kołodziej had to withdraw, because on October 24 he was due to start in a friendly match between the United States and the Rest of the World. A few days before the final, Tomasz Gollob, Rune Holta, Rafał Dobrucki and Robert Kościecha withdrew. They were followed by Krzysztof Kasprzak, Krzysztof Jabłoński, Tomasz Chrzanowski, Sebastian Ułamek, Wiesław Jaguś, Mariusz Węgrzyk, Marcin Rempała, Maciej Kuciapa and Patryk Pawlaszczyk. Original riders Krzysztof Kasprzak (Unia Leszno SSA) Adrian Gomólski (KM Ostrów) Tomasz Chrzanowski (Lotos Gdańsk) Adrian Miedziński (Unibax Toruń) Tomasz Gapiński (CKM Złomrex Włókniarz Częstochowa) Sebastian Ułamek (CKM Złomrex Włókniarz Częstochowa) Jarosław Hampel (Unia Leszno SSA) Tomasz Gollob (Caeleum Stal Gorzów) Damian Baliński (Unia Leszno SSA) Piotr Świderski (RKM Rybnik) Grzegorz Walasek (ZKŻ Kronopol Zielona Góra) Robert Kościecha (Unibax Toruń) Rune Holta (Caeleum Stal Gorzów) Janusz Kołodziej (Unia Tarnów ŻSSA) Adam Skórnicki (PSŻ Milion Team Poznań) Wiesław Jaguś (Unibax Toruń) Rafał Dobrucki (ZKŻ Kronopol Zielona Góra) Tomasz Jędrzejak (ATLAS Wrocław) Piotr Protasiewicz (ZKŻ Kronopol Zielona Góra) Krzysztof Buczkowski (Polonia Bydgoszcz) Note: riders in bold type are Wrocław riders. 
Final participants Sławomir Drabik (Unia Tarnów ŻSSA) Piotr Protasiewicz (ZKŻ Kronopol Zielona Góra) None Adrian Miedziński (Unibax Toruń) Tomasz Jędrzejak (Atlas Wrocław) Roman Povazhny (Marma Polskie Folie Rzeszów) Jarosław Hampel (Unia Leszno SSA) Norbert Kościuch (PSŻ Poznań) Damian Baliński (Unia Leszno SSA) Piotr Świderski (RKM Rybnik) Grzegorz Walasek (ZKŻ Kronopol Zielona Góra) Daniel Jeleniewski (Atlas Wrocław) Grzegorz Zengota (ZKŻ Kronopol Zielona Góra) Rafał Trojanowski (PSŻ Milion Team Poznań) Adam Skórnicki (PSŻ Milion Team Poznań) Jacek Rempała (Unia Tarnów ŻSSA) Patryk Pawlaszczyk (RKM Rybnik) Heat details 2008-10-25, 19:00 CEST Referee: Wojciech Grodzki Heat after heat Miedziński, Drabik, Protasiewicz Hampel, Kościuch, Jędrzejak, Povazhny Baliński, Świderski, Walasek, Jeleniewski (d1) Skórnicki, Rempała, Zengota, Trojanowski Baliński, Jędrzejak, Zengota, Drabik Protasiewicz, Świderski, Trojanowski, Povazhny Skórnicki, Hampel, Walasek Miedziński, Kościuch, Jeleniewski, Rempała Povazhny, Walasek, Rempała, Drabik Jeleniewski, Skórnicki, Protasiewicz, Jędrzejak Baliński, Kościuch, Trojanowski Miedziński, Hampel, Świderski, Zengota Hampel, Jeleniewski, Trojanowski, Drabik Zengota, Walasek, Protasiewicz, Kościuch Świderski, Jędrzejak, Rempała Baliński, Povazhny, Miedziński, Skórnicki Drabik, Kościuch, Świderski, Skórnicki Hampel, Baliński, Protasiewicz, Rempała Povazhny, Jeleniewski, Zengota Jędrzejak, Walasek, Miedziński, Trojanowski References See also Speedway in Poland Helmet Golden 2008
66056725
https://en.wikipedia.org/wiki/Kerala%20University%20of%20Digital%20Sciences%2C%20Innovation%20and%20Technology
Kerala University of Digital Sciences, Innovation and Technology
Kerala University of Digital Sciences, Innovation and Technology, also known as Digital University Kerala (DUK), is a state university located in Technocity, Mangalapuram, Pallippuram, Thiruvananthapuram, Kerala, India. It was established in 2020 by upgrading the Indian Institute of Information Technology and Management, Kerala (IIITM-K), which had been established in 2000. It is the 14th state university in Kerala and the first digital university in the state. History The Indian Institute of Information Technology and Management, Kerala (IIITM-K) was established in 2000. It originally operated from a campus in Technopark in Thiruvananthapuram until the construction of the campus at Technocity, and was affiliated to the Cochin University of Science and Technology. In 2020 IIITM-K was upgraded to a university through an ordinance of the Government of Kerala. Campus The residential campus is located within Technocity in Thiruvananthapuram. Leadership The Governor of Kerala is, by virtue of office, the Chancellor of the university, and the Minister in charge of the Information Technology Department is the Pro-Chancellor of the university. The Government of Kerala appointed Saji Gopinath as the first Vice-Chancellor of the university. Major schools School of Computer Science and Engineering School of Electronic Systems and Automation School of Digital Sciences School of Informatics School of Digital Humanities and Liberal Arts Kerala Blockchain Academy The institute also houses the Kerala Blockchain Academy, the second such academy in the country. The Kerala Blockchain Academy is also an associate member of the Hyperledger community. References External links Science and technology in Kerala Research institutes in Thiruvananthapuram Universities in Thiruvananthapuram 2020 establishments in Kerala
68445379
https://en.wikipedia.org/wiki/Kathryn%20Strutynski
Kathryn Strutynski
Kathryn Betty Strutynski (née Latimer) (5 February 1931 – 9 April 2010) was a mathematician and computer scientist who taught at Brigham Young University and the Naval Postgraduate School. Besides jobs at Pan Am Airways and Bechtel Corporation, she worked at Digital Research, where she contributed to the development of CP/M, the first mainstream operating system for microcomputers. Early life and education Kathryn Betty Latimer was born on 5 February 1931 in Nephi, Utah, USA. Her father was Andrew Hans Latimer and her mother was Henriette Norton. Latimer obtained an undergraduate degree in mathematics from Brigham Young University in 1953, and taught in Utah for two years. Career In the early 1950s, she moved to San Francisco, where she worked at Pan Am Airways under the supervision of her husband's college roommate, doing research. When he moved to Convair, she became responsible for his former job, all the charter bids at the Western Division of Pan Am. When Pan Am consolidated its offices in New York, Latimer was offered moving expenses to relocate to New York, but she declined the offer. After Pan Am, Kathryn Latimer worked at McGraw-Hill and the estimating department of Bechtel Corporation in the same city. When the company decided to purchase a mainframe computer, Latimer was sent to take every class given at IBM. In 1952 and 1953, she built the company's first database retrieval system, with 10 engineers working under her charge, renting computer time since they did not have a mainframe computer at that time. The database was used for a period of ten years. In 1958, she married Alfred Waldemar Strutynski. Kathryn Strutynski's husband moved to Monterey, California to work for RTT. The couple lived in Carmel Valley Village, where she worked at the Naval Postgraduate School (NPS) from 1967 and completed a master's degree in computer programming at the same time. Strutynski was given system responsibility for the VM operating system at the NPS. At the same time, Gary Kildall also taught at the NPS and was interested in operating systems. They became friends, and together they studied and made unofficial changes to IBM's VM on the 360 and 370. Digital Research Kathryn Strutynski left NPS and, in 1978/1979, became the fourth employee of Digital Research, Inc. She adapted CP/M-80 for the Apple II and worked on CP/M 2.0, CP/M 2.2, CP/M Plus, and DESPOOL, a background spooler for printing (utilizing simple multi-tasking), as well as on the system guides. She also was the project manager for CP/M-86, Concurrent CP/M-86 and Concurrent PC DOS. Around 1985, Strutynski returned to work for NPS at the W. R. Church Computer Center, where she built up the PC lab and taught MS-DOS and WordPerfect courses as Manager of Microcomputing Support and Learning Resource Centers. In her later years she ran Strutynski Associates in Carmel. Personal life Kathryn Latimer met Alfred Waldemar Strutynski in a German dance hall. They married in 1958 and moved to Carmel when her husband started working at RTT. She worked for the Naval Postgraduate School in Monterey and later at Digital Research. Death Kathryn Strutynski's husband was hospitalized at Garden East Community Hospital during the third week of March 2010. She died three weeks later, on 9 April 2010, at her daughter's home in Calgary, at the age of 79. Her husband died two days later. In popular culture Harold Evans wrote about her in his book "They Made America" (2004). 
For the reworked paperback issue (2006), Strutynski spent many hours working with Evans to update the chapter of his book on the birth of CP/M. See also Intel INTERP (Digital Research CP/M-80 with 6 MHz Advanced Logic Systems (ALS) "The CP/M Card" for the Apple II) History of personal computers References Further reading [1:22]; Bill Selmeier (ed.) 2006-05-20 (NB. At the Naval Post Graduate School, describes how Strutynski became friends with Gary Kildall, founder of Digital Research.) [2:53]; Luanne Johnson (ed.) 2006-05-22 (NB. Describes how Gary Kildall hired Strutynski into Digital Research and a little on how everyone worked together.) [8:23]; Bill Selmeier (ed.) 2006-05-24 (NB. About tasks, working relations, and stories from the very earliest years of Digital Research Incorporated.) [2:56]; Bill Selmeier (ed.) 2006-04-18/2006-04-28 [8:01]; Bill Selmeier (ed.) 2006-05-24 (NB. Relates some of Strutynski's experiences developing CP/M 2.2.) [3:40]; Bill Selmeier (ed.) 2006-05-24 (NB. Strutynski talks about her CP/M-86 recollections.) [7:21]; Bill Selmeier (ed.) 2006-05-24 (NB. Relates why Strutynski regretfully had to leave Digital Research and then reflects back on her many experiences at the company.) (1 page) (1 page) (NB. Reprinted from Digital Research News Vol. 2 no. 3.) https://www.retrotechnology.com/dri/d_dri_refs.html American women computer scientists American computer scientists Digital Research employees CP/M people 1931 births 2010 deaths People from Nephi, Utah Brigham Young University alumni Naval Postgraduate School faculty
2848481
https://en.wikipedia.org/wiki/Evernote
Evernote
Evernote is an app designed for note taking, organizing, task management, and archiving. It is developed by the Evernote Corporation, headquartered in Redwood City, California. The app allows users to create notes, which can be text, drawings, photographs, audio, or saved web content. Notes are stored in notebooks and can be tagged, annotated, edited, searched, given attachments, and exported. Evernote is cross-platform, with a web client as well as applications on Android, iOS, macOS, and Microsoft Windows. It is free to use with monthly usage limits, and offers paid plans for expanded or lifted limits. Company After being founded in 2000 by Russian-American computer entrepreneur Stepan Pachikov, EverNote Corporation ('EverNote' stylized with a capital 'N' by then) started marketing software for Windows desktop PCs, Tablet PCs and handheld devices like the handwriting recognition software ritePen and the notetaking and web clipping application EverNote (also with a capital 'N'), a Windows application which stored notes on an 'infinite roll of paper'. Under the new CEO Phil Libin, the company shifted its focus to the Web, smartphones and also the Apple Mac, starting with Evernote (now with lower-case 'n') 3.0 in 2008. The Evernote Web service launched into open beta on June 24, 2008 and reached 11 million users in July 2011. In October 2010, the company raised a US$20 million funding round led by DoCoMo Capital with participation from Morgenthaler Ventures and Sequoia Capital. Since then, the company raised an additional $50 million in funding led by Sequoia Capital and Morgenthaler Ventures, and another $70 million in funding led by Meritech Capital and CBC Capital. On November 30, 2012, Evernote raised another $85 million in funding led by AGC Equity Partners/m8 Capital and Valiant Capital Partners. On November 9, 2014, Evernote raised an additional $20 million in funding from Nikkei, Inc. On May 7, 2013, TechCrunch reported that Evernote launched Yinxiang Biji Business into the Chinese market at the Global Mobile Internet Conference. Linda Kozlowski was named the Chief Operating Officer of Evernote in June 2015, after more than two years with the company, but left before the end of the year. Libin stepped down as CEO in July 2015 and was replaced by former Google Glass executive Chris O'Neill, but remained Executive Chairman. In October 2015, the Evernote Corp. announced that the company was laying off 18 percent of its workforce and would be closing three out of 10 global offices. In September 2016, Libin stepped down as Executive Chairman to focus on other business ventures. In February 2017, CEO O'Neill stated in a blog post that the business was now cash-flow positive. Sequoia Capital, one of Evernote's equity owners, said, "It's great when a company starts to raise non-dilutive capital every day, which is called revenue." In August 2018, Chief Technical Officer Anirban Kundu, Chief Financial Officer Vincent Toolan, Chief Product Officer Erik Wrobel, and head of HR Michelle Wagner left the company. Wrobel and Wagner both joined in 2016. On September 18, 2018, 54 employees—about 15 percent of the workforce—were laid off. In a blog post, O'Neill said, "After a successful 2017, I set incredibly aggressive goals for Evernote in 2018. Though we have steadily grown, we committed too many resources too quickly. We built up areas of our business in ways that have proven to be inefficient. 
Going forward, we are streamlining certain functions, like sales, so we can continue to speed up and scale others, like product development and engineering." On October 29, 2018, Evernote announced that Ian Small, former CEO of TokBox, would replace O'Neill as CEO of Evernote. Architecture Coding and versions In 2010, the programming language used to write Evernote's software was changed from C# for version 3.5 to C++ in version 4.0 to improve performance. Data entry As well as the keyboard entry of typed notes, Evernote supports image capture from cameras on supported devices, and the recording of voice notes. In some situations, text that appears in captured images can be recognized using OCR and annotated. Evernote also supports touch and tablet screens with handwriting recognition. Evernote web-clipping plugins are available for the most popular Internet browsers that allow marked sections of webpages to be captured and clipped to Evernote. If no section of a webpage has been highlighted, Evernote can clip the full page. Evernote also supports the ability to e-mail notes to the service, allowing for automated note entry via e-mail rules or filters. Where suitable hardware is available, Evernote can automatically add geolocation tags to notes. As of November 2018, Evernote Pro integrates directly with Google Drive, Microsoft Outlook, Microsoft Teams, and Slack, and Evernote Pro adds an integration with Salesforce. All versions of Evernote also support integrations through IFTTT and Zapier. In 2013, Evernote deprecated its direct integration with Twitter in favor of these third-party services. Data storage and access On supported operating systems, Evernote allows users to store and edit notes on their local machine, using a SQLite database in Windows. Users with Internet access and an Evernote account can also have their notes automatically synchronized with a master copy held on Evernote's servers. This approach lets a user access and edit their data across multiple machines and operating system platforms, but still view, input and edit data when an Internet connection is not available. However, notes stored on Evernote servers are not encrypted. Where Evernote client software is not available, online account-holders can access their note archive via a web interface or through a media device. The service also allows selected files to be shared for viewing and editing by other users. The Evernote software can be downloaded and used as "stand-alone" software without using the online portion of an Evernote account (online registration is required for initial setup, however), but it will not be able to upload files to the Evernote server, or use the server to synchronize or share files between different Evernote installations. Also, no image or Image-PDF (Premium only) recognition and indexing will take place if the software is used entirely offline. Accounts Evernote is a free online service that allows users to upgrade to Premium or Business accounts. All Free, Plus and Premium Evernote accounts have a maximum limit of 100,000 notes and 250 notebooks. Basic customers can upload 60 MB of data each month. Plus customers get a 1 GB upload limit, offline notes on mobile devices, as well as passcode lock for mobile devices. Emails can also be sent to their Evernote account. 
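The local-database-plus-synchronization design described above under "Data storage and access" can be sketched in a few lines of Python. This is a minimal illustration only: the table layout, the dirty-flag bookkeeping, and the in-memory stand-in for a server are assumptions made for the sake of the example, not Evernote's actual storage format or API.

import sqlite3

# Minimal sketch of a "local store + server sync" pattern (hypothetical schema,
# not Evernote's real one). Notes live in a local SQLite database so they stay
# readable and editable offline; sync() reconciles them with a master copy.
class LocalNoteStore:
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS notes ("
            "id INTEGER PRIMARY KEY, title TEXT, body TEXT, dirty INTEGER)"
        )

    def add_note(self, title, body):
        cur = self.db.execute(
            "INSERT INTO notes (title, body, dirty) VALUES (?, ?, 1)",
            (title, body),
        )
        self.db.commit()
        return cur.lastrowid

    def edit_note(self, note_id, body):
        # Offline edits only touch the local copy and mark it unsynchronized.
        self.db.execute(
            "UPDATE notes SET body = ?, dirty = 1 WHERE id = ?", (body, note_id)
        )
        self.db.commit()

    def sync(self, server):
        # Push every unsynchronized note to the (simulated) master copy,
        # then clear the dirty flags so both sides agree.
        for note_id, title, body in self.db.execute(
            "SELECT id, title, body FROM notes WHERE dirty = 1"
        ):
            server[note_id] = {"title": title, "body": body}
        self.db.execute("UPDATE notes SET dirty = 0")
        self.db.commit()

if __name__ == "__main__":
    server = {}                                 # stand-in for the remote master copy
    store = LocalNoteStore()
    nid = store.add_note("Groceries", "milk, eggs")
    store.edit_note(nid, "milk, eggs, coffee")  # works while offline
    store.sync(server)                          # reconcile once a connection exists
    print(server)

The point of the pattern, as described in the section above, is that notes remain viewable and editable from the local database while offline, and a later synchronization step reconciles them with the master copy held on the server.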
Premium subscribers are granted 10 GB of new uploaded data every month, faster word recognition in images, heightened security, PDF annotation, Context (which surfaces notes and news articles related to the open note), and the ability to search text within PDF documents. They also receive additional options for notebook sharing. Each of the Free, Plus and Premium account types allows notebook sharing with other Evernote users; however, the accounts are distinguished by editing capabilities. With regard to shared notebooks, editing permissions for non-paying account holders may only be granted by Premium Evernote subscribers. The free service does not make files available offline on iOS and Android devices; while sometimes they are available from cache, editing these files can cause conflicts when synchronizing. With the full version of Evernote Business, users sync and view work documents through a variety of platforms, such as Mac, iPhone and iPad, the Web, Windows and Android devices. Files that can be uploaded include spreadsheets, presentations, notes and design mock-ups. In addition, administrators can monitor company progress and individual employees through the admin console. In June 2016, Evernote announced a limit of two devices for users of its free Basic account and raised prices for its premium service tiers. Non-paying Evernote users are able to sync notes between two devices. Plus lets users store notes offline and upload up to 1 GB of data, while Premium adds document-parsing features and 10 GB of additional storage. From early April 2018, Evernote Plus was no longer available for purchase; however, users who already had a Plus subscription could maintain it as long as their subscription remained active. Supported platforms Evernote clients are available for Android, iOS (iPad, iPhone, and iPod Touch), macOS, Microsoft Windows, and the Web. Additionally, portable versions of Evernote are available for flash drives and U3 drives. There is currently no officially supported native client for BSD or Linux, but the company provides an API for external Linux clients. There is substantial variation in supported features on different platforms. For example, it is possible to edit Rich Text Format and sketches on Windows; on Mac it is possible to edit rich text but only view sketches; and on the iPad only plain text could be edited prior to version 4.1.0 (August 2011). Web clipping support is installed by default on the Internet Explorer and Safari browsers when the Evernote software is installed under Windows or macOS. Evernote web-clipping plugins are also available for the Firefox, Google Chrome, Opera, and Yandex browsers, and need to be downloaded and installed separately for the respective browser. The Evernote email-clipper is automatically installed in Microsoft Office Outlook if the desktop version is installed on the same computer. There is also a Thunderbird email plugin, which must be installed separately from the Thunderbird client. Apps and tools Scannable Scannable captures paper quickly, transforming it into high-quality scans ready to save or share. Skitch Skitch is a free screenshot editing and sharing utility for OS X, iOS, Windows, and Android. The app permits the user to add shapes and text to an image, and then share it online. Images can also be exported to various image formats. Originally developed by Plasq, Skitch was acquired by Evernote on August 18, 2011. 
On December 17, 2015, Evernote announced that it would be ending support for Skitch for Windows, Windows Touch, iOS, and Android on January 22, 2016. Evernote said it would continue to offer Skitch for Mac. Web Clipper Evernote Web Clipper is a simple browser extension that lets a user capture full-page articles, images, selected text, important emails, and any web page. Partnerships Blinkist The book-summarizing service Blinkist lets members synchronise their highlighted text passages to Evernote. A note is created for each book, with the title of the book as the note title. Deutsche Telekom On March 25, 2013, Evernote announced a partnership with Deutsche Telekom to provide German customers with free access to Evernote Premium for one year. In January 2014 the partnership was expanded to additional European markets. Moleskine In August 2012, Moleskine partnered with Evernote to produce a digital-friendly notebook with specially designed pages and stickers for smartphone syncing. Samsung All Samsung Galaxy Note 3 phablets included a free one-year subscription to Evernote Premium. Telefónica Digital On August 13, 2013, The New York Times reported that Telefónica Digital and Evernote entered into a global partnership agreement, giving Brazilian customers free access to Evernote Premium for one year. Under this global deal, Telefónica users in Costa Rica, Guatemala, Panama, the UK and Spain were also offered the promotion. Incidents Data loss The service has experienced several cases of losing customer data. Denial-of-service attacks On June 11, 2014, Evernote suffered a crippling distributed denial-of-service attack that prevented customers from accessing their information; the attackers demanded a ransom, which Evernote refused to pay. A denial-of-service attack on August 8, 2014, resulted in a brief period of downtime for evernote.com; service was quickly restored. Security breach On March 2, 2013, Evernote revealed that hackers had gained access to their network and been able to access user information, including usernames, email addresses, and hashed passwords. All users were asked to reset their passwords. In the wake of this, Evernote accelerated plans to implement optional two-factor authentication for all users. Privacy controversy In December 2016, Evernote announced that its privacy policy would be changing in January 2017, leading to claims that the policy allowed employees of the firm to access users' content in some situations. In response to the concerns, Evernote apologised and announced the policy would not be implemented, and that its employees would not have access to users' content unless users opted in. Evernote v10 controversy In late 2020, Evernote released Evernote v10, written from scratch in the Electron framework, to replace older versions on multiple platforms. Some users noted that the new app was much slower than the legacy Windows and iOS apps, had many features removed, and did not work with some default keyboard layouts (such as Turkish, Latvian and Polish) because of conflicts with hardcoded key bindings. There was no option to disable or change hotkeys in Evernote v10. See also Comparison of notetaking software List of personal information managers Springpad, a similar notetaking tool that was often compared to Evernote References External links NeverNote.sourceforge.net. 
Note-taking software Web annotation Cloud storage Proprietary cross-platform software Mobile software Companies based in Redwood City, California Android (operating system) software BlackBerry software IOS software Symbian software American companies established in 2007 Windows Phone software Data synchronization Internet properties established in 2007 Privately held companies based in California Webby Award winners Collaborative real-time editors File hosting Presentation software Online word processors Online office suites WatchOS software
81952
https://en.wikipedia.org/wiki/Polydorus
Polydorus
In Greek mythology, Polydorus or Polydoros (meaning "many-gift[ed]") referred to several different people. Polydorus, son of Phineus and Cleopatra, and brother of Polydector (Polydectus). These two sons by his first wife were blinded by Phineus at the instigation of their stepmother, Idaea, who accused them of corrupting her virtue. Prince Polydorus, son of King Cadmus and the goddess Harmonia, who fathered Labdacus by his wife Nycteis. Polydorus, an Argive, son of Hippomedon and Euanippe, daughter of Elatus. Pausanias lists him as one of the Epigoni, who attacked Thebes in retaliation for the deaths of their fathers, the Seven against Thebes, who died attempting the same thing. Prince Polydorus, a Trojan, King Priam's youngest son. Polydorus, a Ceteian warrior who participated in the Trojan War. During the siege of Troy, he was killed by Odysseus with his sword, along with Aenus, another Ceteian. (Ceteius is the name of a stream in Asia Minor.) Polydorus (son of Astyanax) Polydorus, one of the Suitors of Penelope, who came from Zacynthus along with 43 other wooers. He, with the other suitors, was shot dead by Odysseus with the assistance of Eumaeus, Philoetius, and Telemachus. In history, Polydorus was: Polydorus of Sparta (reigned from c. 741 to c. 665 BC) In art, Polydorus was: One of the three Rhodian sculptors who created the sculpture Laocoön and His Sons and signed the Sperlonga sculptures See also Polydora Notes References Apollodorus, The Library with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. ISBN 0-674-99135-4. Online version at the Perseus Digital Library. Greek text available from the same website. Gaius Julius Hyginus, Fabulae from The Myths of Hyginus translated and edited by Mary Grant. University of Kansas Publications in Humanistic Studies. Online version at the Topos Text Project. Pausanias, Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. Online version at the Perseus Digital Library. Pausanias, Graeciae Descriptio. 3 vols. Leipzig, Teubner. 1903. Greek text available at the Perseus Digital Library. Quintus Smyrnaeus, The Fall of Troy, translated by A. S. Way. Loeb Classical Library Volume 19. London: William Heinemann, 1913. Online version at theio.com. Quintus Smyrnaeus, The Fall of Troy. Arthur S. Way. London: William Heinemann; New York: G.P. Putnam's Sons. 1913. Greek text available at the Perseus Digital Library. William Smith. A Dictionary of Greek and Roman biography and mythology, s.v. Polydorus-1, Polydorus-2 & Polydorus-3. London. John Murray: printed by Spottiswoode and Co., New-Street Square and Parliament Street. 1849. Princes in Greek mythology People of the Trojan War Suitors of Penelope Characters in Greek mythology Greek mythology of Thrace Mythology of Argos Theban mythology
9444359
https://en.wikipedia.org/wiki/Technostress
Technostress
Technostress has been defined as the negative psychological link between people and the introduction of new technologies. Where ergonomics is the study of how humans react to and physically fit with machines in their environment, technostress results from altered habits of work and collaboration brought about by the use of modern information technologies in office and home settings. People experience technostress when they cannot adapt to or cope with information technologies in a healthy manner. They feel compulsive about being connected and sharing constant updates, feel forced to respond to work-related information in real time, and engage in almost habitual multi-tasking. They feel compelled to work faster because information flows faster, and have little time to spend on sustained thinking and creative analysis. Craig Brod, a leader in the field of technostress research, states that technostress is "a modern disease of adaptation caused by an inability to cope with the new computer technologies in a healthy manner." Some of the earliest scholarly studies of technostress in the field of Management Information Systems show that technostress is an undesirable phenomenon spawned by the use of computing and communication devices such as computers, tablets and smartphones. Newer empirical research in this field, however, indicates that technostress has both positive and negative aspects. Research also suggests that technostress depends on gender, age and computer literacy: women experience lower technostress than men, older people experience less technostress at work than younger people, and those with greater computer literacy experience lower technostress. First use of the term As early as 1984, Brod defined technostress as a "modern disease of adaptation caused by an inability to cope with the new computer technologies in a healthy manner". Technostress was later treated as the name for mental stress related to technology use, including excessive physiological and emotional arousal. More recent definitions refer to physical, behavioral, and psychological strain in response to ICT dependence, increasing computer complexity, and accelerated ICT-driven changes to work. Atanasoff and Venable consider that models and definitions fall into three main categories: transactional and perceived stress, biology, and occupational health. Symptoms and outcomes Psychological stress can manifest itself physically, and technostress likewise has a number of symptoms. The anxiety expressed by those experiencing technostress can include insomnia, loss of temper, irritability and frustration, and, if not dealt with, can lead to errors in judgement and poor job performance. In the 21st century, people who work with technology are the ones who most commonly experience technostress. People sit facing computer monitors for long periods, which results in physical strain. In the 21st-century work environment, people spend many hours a day at work because it is critical to their security and job satisfaction. However, these demands are becoming increasingly hazardous to their health. In a technological world, providing people with an appropriate and safe physical environment is a necessity. Too much exposure to computer monitors is associated with emotional stress, and people are emotionally affected by technostress in their workplaces. Consequences of technostress include decreased job satisfaction, organizational commitment and productivity. 
A periodic assessment is necessary to check the level of technostress affecting professionals, especially its physical and emotional aspects. Managers should organize technology-based training for employees to make them comfortable with technologies and aware of their harmful effects. It is important for employees to consistently update their technological skills. Institutions, companies, and agencies need to employ IT specialists and troubleshooters to maximize system accessibility and provide a level of comfort to employees. The causes of technostress include: The quick pace of technological change Lack of proper training An increased workload Lack of standardization within technologies The reliability of hardware and software Four Aspects of Technostress: Physical aspects are eye strain, backaches, headaches, stiff shoulders, neck pain, joint pain, dry mouth and throat, muscle tension, stomach discomfort, keyboard-related injuries, chest pain, rapid heart rate, irritable bowel syndrome, increased blood pressure and difficulty in breathing. Emotional aspects include irritability, loss of temper, a high state of anxiety when separated from a computer monitor, feelings of indifference, frustration, lack of appreciation, depression, guilt, feeling fearful, paranoia that leads to avoiding computers, and negative attitudes. Behavioral aspects consist of feeling overly comfortable with computers, overspending on computers, insomnia, uncooperativeness and unwillingness, using computer terms in non-computer conversation, smoking, social withdrawal in favor of terminal time, cruising computer stores and drinking alcohol. Psychological aspects include information overload (the need to find, analyze, evaluate, and apply information in the right context with the available resources); frustration from underwork, underemployment or purely routine jobs; fears about job security, where people worry that computers may replace human roles; professional jealousy produced by technological competency; de-motivation due to prolonged periods of technological activity; and uncertainty about one's job role caused by increased time working with technology. There are five conditions that are classified as "technostress creators": "Techno-overload" describes situations where use of computers forces people to work more and work faster. "Techno-invasion" describes being "always exposed", where people can potentially be reached anywhere and at any time and feel the need to be constantly connected. The regular work day is extended, office work is done at all sorts of hours, and it is almost impossible to "cut away." "Techno-complexity" describes situations where the complex computer systems used at work force people to spend time and effort learning and understanding how to use new applications and to update their skills. People find the variety of applications, functions, and jargon intimidating and consequently feel stressed. "Techno-insecurity" is associated with situations where people feel threatened about losing their jobs to other people who have a better understanding of new gadgets and computing devices. "Techno-uncertainty" relates to the short life cycles of computer systems. Continuing changes and upgrades do not give people a chance to acquire experience with a particular system. People find this unsettling because their knowledge becomes rapidly outdated and they are required to re-learn things very rapidly and often. 
Positive and negative sub-processes A recent empirical study on technostress reframes technostress by conceptualizing it in terms of a holistic technostress process involving two critical subprocesses: the techno-eustress subprocess and the techno-distress subprocess. This holistic technostress model frames technostress as a process that hinges on individuals appraising environmental conditions as challenge techno-stressors, defined as techno-stressors that individuals tend to appraise as related to promoting task accomplishment, or hindrance techno-stressors, defined as techno-stressors that individuals tend to appraise as a barrier to task accomplishment. The challenge and hindrance techno-stressors are related to positive and negative psychological states in the individual, respectively, and in turn, positive and negative individual and organizational outcomes. The holistic technostress model and its components were empirically validated in the context of healthcare. The holistic technostress process model: "Environmental conditions" represent potential sources of technology-related stressful situations that may interact with any other environmental conditions such as role demands, task demands, interpersonal and behavioral expectations, job conditions, and workplace policies, among others. The "Techno-eustress subprocess" involves: "Challenge techno-stressors" are technostressors that individuals tend to appraise as related to promoting task accomplishment. Examples include usefulness and involvement facilitation. "Positive psychological responses" are positive psychological responses to a technostressor, as indicated by the presence of positive psychological states. "Positive outcomes" are positive individual and organizational consequences related to the positive psychological state of the individual. Examples include an increase in job satisfaction and a decrease in turnover intention. The "Techno-distress subprocess" involves: "Hindrance techno-stressors" are technostressors that individuals tend to appraise as a barrier or obstacle to task accomplishment. Examples include unreliability, insecurity, and overload. "Negative psychological responses" are negative psychological responses to a technostressor, as indicated by the presence of negative psychological states. "Negative outcomes" are negative individual and organizational consequences related to the negative psychological state of the individual. Examples include a decrease in job satisfaction and an increase in attrition and turnover intention. The techno-stressors, psychological responses, and outcomes are governed by three evaluation processes: An "Appraisal process" is an intrapersonal process through which individuals appraise the technology-associated environmental conditions as challenges or hindrances. A "Decision process" is an intrapersonal process through which individuals decide how to respond either positively or negatively to the appraised technostressor. This process occurs before the psychological response but after the individual has determined if the environmental condition represents a challenge or hindrance techno-stressor. A "Performance process" is an intrapersonal process through which individuals decide how to act on their psychological response. This process occurs after the psychological response but before the individual determines his or her outcome. 
Coping strategies Technostress can be treated by adopting user-friendly software, educating people about new technology, and creating a better level of reassurance, patience, stability and communication within the job environment. Another option is to avoid or restrict the use of technology. Ways to reduce or eliminate technostress include stress management activities such as exercise, meditation, progressive muscle relaxation, positive self-talk, staying healthy and having a healthy diet. Taking frequent breaks from technology, keeping a schedule, counseling, being aware of technostress, and establishing a teamwork relationship with colleagues may also help. See also Information overload Computer rage Technophobia Culture shock Internet fear Notes and references General Califf, C. B., Sarker, S., & Sarker, S. (2020). The Bright and Dark Sides of Technostress: A Mixed-Methods Study Involving Healthcare IT. MIS Quarterly, Vol. 44, No. 2, pp. 809–856. Tarafdar, M., Cooper, C.L., Stich, J. (2019). The technostress trifecta – techno eustress, techno distress and design: Theoretical directions and an agenda for research. Information Systems Journal, Volume 30, Issue 1. Bachiller, R. T. (2001): Technostress among library staff and patrons of the U. P. Diliman libraries. University of the Philippines, Diliman. Caguiat, C.A. (2001): Effects of stress and burnout on librarians in selected academic libraries in Metro Manila. University of the Philippines, Diliman. Dimzon, S. (2007): Technostress among PAARL members and their management strategies: a basis for a staff development program. PUP, Manila. Riedl, R., Kindermann, H., Auinger, A., Javor, A. (2012): Technostress From a Neurobiological Perspective: System Breakdown Increases the Stress Hormone Cortisol in Computer Users. Business & Information Systems Engineering, Vol. 4, No. 2, pp. 61–69. Riedl, R. (2013). On the Biology of Technostress: Literature Review and Research Agenda. The DATABASE for Advances in Information Systems, Vol. 44, No. 1, pp. 18–55. Riedl, R., Kindermann, H., Auinger, A., Javor, A. (2013): Computer Breakdown as a Stress Factor during Task Completion under Time Pressure: Identifying Gender Differences Based on Skin Conductance. Advances in Human-Computer Interaction, Volume 2013, Article ID 420169. External links Technostress and the Organization by Nina Davis-Millis Technostress in Social Networking Sites Psychological stress Digital media use and mental health
5112484
https://en.wikipedia.org/wiki/Eric%20Lengyel
Eric Lengyel
Eric Lengyel is a computer scientist specializing in game engine development, computer graphics, and geometric algebra. He holds a Ph.D. in computer science from the University of California, Davis and a master's degree in mathematics from Virginia Tech. Lengyel is an expert in font rendering technology for 3D applications and is the inventor of the Slug font rendering algorithm, which allows glyphs to be rendered directly from outline data on the GPU with full resolution independence. Lengyel is also the inventor of the Transvoxel algorithm, which is used to seamlessly join multiresolution voxel data at boundaries between different levels of detail that have been triangulated with the Marching cubes algorithm. Among his many written contributions to the field of game development, Lengyel is the author of the four-volume book series Foundations of Game Engine Development. The first volume, covering the mathematics of game engines, was published in 2016 and is now known for its unique treatment of Grassmann algebra. The second volume, covering a wide range of rendering topics, was published in 2019. Lengyel is also the author of the textbook Mathematics for 3D Game Programming and Computer Graphics and the editor for the three-volume Game Engine Gems book series. Lengyel founded Terathon Software in 2000 and is currently President and Chief Technology Officer at the company, where he leads development of the C4 Engine. He has previously worked in the advanced technology group at Naughty Dog, and before that was the lead programmer for the fifth installment of Sierra's popular RPG adventure series Quest for Glory. In addition to the C4 Engine, Lengyel is the creator of the Open Data Description Language (OpenDDL) and the Open Game Engine Exchange (OpenGEX) file format. Lengyel is originally from Reynoldsburg, Ohio, but now lives in Lincoln, California. He is a cousin of current Ohioan and "Evolution of Dance" creator Judson Laipply. Games Eric Lengyel is credited on the following games: Formula One Championship Edition (2007), Sony Computer Entertainment America, Inc. Heavenly Sword (2007), Sony Computer Entertainment America, Inc. Ratchet & Clank Future: Tools of Destruction (2007), Sony Computer Entertainment America, Inc. Warhawk (2007), Sony Computer Entertainment America, Inc. MotorStorm (2006), Sony Computer Entertainment Incorporated Resistance: Fall of Man (2006), Sony Computer Entertainment Incorporated Jak 3 (2004), Sony Computer Entertainment America, Inc. Quest for Glory V: Dragon Fire (1998), Sierra On-Line, Inc. Patents Eric Lengyel is the primary inventor on the following patents: Method for rendering resolution-independent shapes directly from outline control points Graphics processing apparatus, graphics library module and graphics processing method References External links List of publications by Eric Lengyel Moby Games rap sheet Year of birth missing (living people) Living people University of California, Davis alumni Video game programmers Computer graphics professionals Virginia Tech alumni People from Columbus, Ohio People from Reynoldsburg, Ohio People from Lincoln, California Sierra On-Line employees American chief technology officers
1085663
https://en.wikipedia.org/wiki/Turkix
Turkix
Turkix was a live Linux distribution, capable of installing itself on a hard disk using a graphical wizard. The main goal of Turkix was to provide a very user-friendly Linux environment. Turkix was based on the Mandriva distribution, and its visual style was similar to that of Windows XP. The first two releases of Turkix (1.0 and 1.9) were in Turkish and Azerbaijani only, but later releases added support for more languages, especially English. The last stable release of Turkix was 3.0, and the last unstable release was 10.0 Alpha, at which point the project was abandoned. Release history References RPM-based Linux distributions Turkish-language Linux distributions Discontinued Linux distributions Linux distributions
1448960
https://en.wikipedia.org/wiki/Negros%20Oriental%20State%20University
Negros Oriental State University
Negros Oriental State University is the only state university in the province of Negros Oriental, Philippines. Its Main Campus is located in Dumaguete City and has the greatest number of academic programs and student organizations. It also has 7 satellite campuses all over the province. Formerly the Central Visayas Polytechnic College, it was converted into a state university for students from the Visayas and Mindanao. The Main Campus is on Kagawasan Avenue, Dumaguete City, beside the provincial capitol building of Negros Oriental. History The beginnings of what is now the Negros Oriental State University date back to 1907, from a single woodworking class at what was then the Negros Oriental Provincial School, the forerunner of the present Negros Oriental High School. As more industrial art subjects were added, a separate arts and trade school on the secondary level was established, called the Negros Oriental Trade School, which became the East Visayan School of Arts and Trades in 1956 and the Central Visayas Polytechnic College in 1983. In 2004, it was converted into what is now the Negros Oriental State University. The Negros Oriental Provincial School The Negros Oriental Provincial High School was the precursor of what is now the Negros Oriental High School. It opened in Dumaguete on September 1, 1902. The "Provincial School," as it was simply referred to at the time, arose when the principal stress in the program of public instruction of the American Civil Government in the Philippines was simply the introduction of the most basic academic program at the elementary and secondary levels. It was one of the 23 high schools in the country at that time. 1907 The school was started as a small shop on the intermediate level (fifth grade). It was located on the City Hall grounds and was an adjunct to the provincial schools, which had both intermediate and high school classes. Mr. Leonard Brendenstein, a foreigner, was in charge of the school, and woodworking was the only course offered at that time. 1916 Mr. Candido Alcazar became the principal of the school, and the only course offered was still woodworking. The sixth and seventh grades were opened and conducted in the same small shop. 1922 Mr. Teodoro Senador Sr. took over as principal, with Miss Salud Blanco, Mr. Estanislao Alviola Sr. and Mr. Fermin Canlas as teachers. There were at that time 258 intermediate pupils. The curriculum offered by the Provincial School included English, reading, grammar, composition, arithmetic, geography, US history, and spelling. There was also a sewing class, which served as a prototype of the vocational arts and trades education in the public schools of the province. The Negros Oriental Trade School (NOTS) The Negros Oriental Trade School (NOTS) was ordered to be created on December 3, 1927 by virtue of Act No. 3377 of the Philippine Legislature. The school was officially named NEGROS ORIENTAL TRADE SCHOOL (NOTS) and became a separate trade school on the secondary level to stress the promotion of education in trades and industries. In July 1928, with Mr. Flaviano Santos as principal, the first-year class had 25 students. Though it was already considered a separate institution, its students continued to take their academic courses in English and Mathematics at the Negros Oriental Provincial High School. With the growth in size of the Negros Oriental Trade School, it became imperative for it to have its own campus. Thus on July 26, 1930, Mr. 
Paul Wittman, the Division Superintendent of Schools for Negros Oriental, petitioned Governor-General Dwight F. Davis to reserve, as the future campus of the Negros Oriental Trade School, a piece of property adjoining the Catholic town cemetery, which lay at what was then the outskirts of Dumaguete. In 1930, Mr. Flaviano Santos continued to be the principal, with two sections of the first year and one section of the second year. Mr. Fermin Canlas taught drawing and Mr. Estanislao Alviola Sr. taught shopwork. Building Construction was introduced as a course alongside woodworking. In 1932, NOTS was transferred from its premises on the ground floor of the Municipal Hall to its present campus. That same year, it conferred diplomas on its first 18 graduates, with Julian Abrasado and Sixto Dilicano as valedictorian and salutatorian respectively. Upon the transfer of Mr. Flaviano Santos, Mr. Isabelo Sarmiento assumed office as principal of the school. He served less than a year on account of his transfer to Bohol. In 1933, Mr. Vicente Enrile took over the principalship of the school for a short time, and he was later transferred to the Zamboanga Trade School. A permanent L-shaped building costing more than P35,000.00 was constructed from national funds. From 1934 to 1941, Mr. Vicente Macairan was the principal. Shop courses were housed in the concrete building and students took their academic subjects at the Negros Oriental High School. Mr. Simplicio Mamicpic headed the academic department. About 1938, Building Construction was offered as a course. The members of the class of 1942 of NOTS were never to finish the school year. Like all other schools throughout the country, the Negros Oriental Trade School was closed. Some of its male faculty and students rose to join the colors during World War II. The Negros Oriental Trade School campus was used by the US Army in 1945 as quarters for captured Japanese prisoners of war. NOTS was then reopened in July 1946. In 1946, Mr. Francisco Apilado took charge of the school when it reopened. Later, Mr. Roberto Angeles became the principal of the school until his transfer to Agusan as Industrial Supervisor. Then Mr. Proceso Gabor became the principal. Electricity and Automechanics were new courses offered. Reparation machines were acquired from Leyte to augment the technical shop equipment. In 1948, the Related Subjects building was constructed from provincial funds. The academic department was headed by Mr. Esperidion G. Heceta after the liberation. After serving for a year, Mr. Heceta was promoted to principal of the Larena Sub-Provincial High School. Mr. Fermin C. Santos took over the headship of the department. In view of the BPS ruling that National (Insular) teachers be placed in the national schools, Mr. Santos was persuaded by the Division Superintendent of Schools to exchange places with Mr. Pedro S. Flores, a National (Insular) teacher of the Negros Oriental High School. Mr. Flores did not stay long in this capacity and Mr. Santos was called back to assume the position of academic department head. In 1950, Mr. Marcelo Bonilla headed the school as principal. The Girls Trade semi-permanent building was constructed. Courses for girls were offered for the first time and 24 girls enrolled. The total enrolment was 865 and there were 40 teachers. The school became coeducational for the first time. 
In 1951, the enrolment soared to 1,476, with 311 girls and 1,165 boys. The faculty and staff totaled 66. From 1953 to 1955, under the PHILCUSA-FOA Program, equipment and supplies were given to the school. Some buildings were constructed under the foreign aid program. Equipment and machinery in the Machine Shop, Woodworking, and Sheet Metal shops were installed. The enrolment of the school further rose to 1,943 in the school year 1954-55, and the personnel and teachers totaled 84. Mr. Teodulfo Despojo was the principal when Mr. Marcelo Bonilla was promoted to Superintendent of the Zamboanga School of Arts and Trades. The East Visayan School of Arts and Trades (EVSAT) By virtue of Republic Act No. 1579, signed into law on June 16, 1956, the Negros Oriental Trade School became the EAST VISAYAN SCHOOL OF ARTS AND TRADES (EVSAT). Under this new status, EVSAT was headed by a "Superintendent" with a "Principal" assisting him in administering the academic program of the school. The responsibility for the financial support of the school also shifted from the shoulders of the province of Negros Oriental to the national government. The most salient developments in the life of the school at this time included its rise in status to the collegiate level, the diversification of its technical curriculum, and the increase in buildings, machinery, and equipment. The implementation took effect during the school year 1957-58. Mr. Mariano P. Dagdag became the first Superintendent of the school and Mr. Julian A. Corpuz assumed office as principal, replacing Mr. Despojo, who was transferred to the Agusan Trade School. Technical education college courses such as machine shop technology, electricity technology, technical drafting, technical building construction and girls' trade technical courses were offered. In 1957, during the administration of Mr. Mariano P. Dagdag, technical education courses on the collegiate level were offered for the first time. These included technical machine shop, technical building construction, technical automotive mechanics, and a number of girls' trades technical courses. In 1959, Mr. Gregorio P. Espinosa took over as the second superintendent of the school on February 9, succeeding Mr. Dagdag upon the latter's transfer. In 1960, Evening Opportunity Classes were introduced for the first time, to make trade education accessible to adults and out-of-school youth, and in 1961, three other government schools in Negros Oriental were placed under the administration and supervision of EVSAT. These were the Negros Oriental National Agricultural School (NONAS) in Bayawan, the Guihulngan Vocational School in Guihulngan, and the Bais School of Fisheries in Bais City. To the three was subsequently added the Larena National Vocational School in Larena, Siquijor. In 1965, EVSAT was authorized by virtue of Republic Act No. 4401 to offer a teacher education program leading to the degree of Bachelor of Science in Industrial Education. This raised EVSAT to the full status of a collegiate institution and pointed in a fresh direction which in time was to bring an entirely new character to the institution. In 1975, new shop courses in Marine Engineering and Electronics, and Saturday classes in Practical Arts, were offered for the first time. EVSAT was also authorized to offer a four-year technical education program leading to the degree of Bachelor of Science in Industrial Technology (BSIT), with a major in industrial management and supervision. 
The need for candidates for the BSIE degree, major in industrial arts, to have laboratory classes in which to do practice teaching led to MECS authorization in 1976 for EVSAT to open elementary classes, at first in Grades V and VI. The full elementary school program began at the start of the new school year in June 1977. In later years, a high school was added as a second laboratory school. In 1976, EVSAT's graduate program was inaugurated, starting with the Master of Education degree.

The Central Visayas Polytechnic College

By virtue of Batas Pambansa No. 401, passed on April 14, 1983 and signed into law by President Ferdinand E. Marcos on June 10, 1983, the Central Visayas Polytechnic College came into being. The state college was the result of the merger of three government institutions in Negros Oriental, namely the East Visayan School of Arts and Trades in Dumaguete City, the Bais School of Fisheries in Okiot, Bais City, and the Guihulngan Vocational School in Guihulngan City, Negros Oriental. In its educational task, the primary responsibility of the Central Visayas Polytechnic College was "to give professional and technical training in science and technology, advanced specialized instruction in literature, philosophy, arts and sciences, besides providing for the promotion of scientific and technological researchers." The State College was authorized to offer undergraduate courses in liberal arts, engineering, fisheries, agriculture, and short-term vocational courses for the development of middle-level skills. It was also authorized to offer graduate courses, after the passage of five years and at the discretion of its Board of Trustees.

On December 11, 1986, Atty. Marcelo C. Jalandoon was formally appointed as the first President of the Central Visayas Polytechnic College. President Jalandoon's administration of CVPC encompassed the transition period after Martial Law, when Philippine education was faced with the great challenge of responding to the compelling need to stabilize the country's political situation by solidifying its economic foundations and fulfilling popular expectations of a better life, now that freedom had been recovered from the morass of oppressive days. By 1991, with the solid foundation established by the earnest efforts of CVPC's previous administrators and the unflagging commitment to service of its faculty and staff, Dr. Henry A. Sojor was appointed by Philippine President Corazon C. Aquino on August 1, 1991, as the second President of the College. He took his oath of office five days later before the Secretary of the Department of Education, Culture and Sports, Dr. Isidro Carino.

Negros Oriental State University

Republic Act No. 9299 was signed by President Gloria Macapagal Arroyo on June 25, 2004, and the Central Visayas Polytechnic College (CVPC) was converted into a state university, now known as the Negros Oriental State University (NORSU), integrating therewith the Genaro Goñi Memorial College in the City of Bais, the Siaton Community College in the Municipality of Siaton, and the Mabinay Institute of Technology in the Municipality of Mabinay.

Colleges & courses offered

The Dumaguete campuses of NORSU are composed of nine colleges offering several undergraduate degrees.
The Graduate School is located at the Main Campus.

College of Agriculture, Forestry and Fisheries
Bachelor of Science in Agriculture Major in Animal Science, Agronomy, Agribusiness
Bachelor of Science in Forestry

College of Arts and Sciences
Bachelor of Arts Major in Social Science, General Curriculum
Bachelor of Mass Communication
Bachelor of Science in Biology
Bachelor of Science in Chemistry
Bachelor of Science in Computer Science
Bachelor of Science in Geology
Bachelor of Science in Information Technology
Bachelor of Science in Mathematics
Bachelor of Science in Psychology

College of Business Administration
Associate in Hospitality Management
Associate in Secretarial Science
Bachelor of Science in Accountancy
Bachelor of Science in Business Administration Major in Financial Management, Human Resource Development Management
Bachelor of Science in Hospitality Management
Bachelor of Science in Office Administration
Bachelor of Science in Tourism Management

College of Criminal Justice Education
Bachelor of Science in Criminology

College of Teacher Education (Center of Development in Teacher Education)
Bachelor of Elementary Education Area of Specialization in Special Education, Early Childhood Education, General Curriculum
Bachelor of Secondary Education Major in English, Mathematics, Filipino, Social Studies, Biological Science, Physical Science, TLE, MAPEH

College of Engineering and Architecture
Bachelor of Science in Architecture
Bachelor of Science in Civil Engineering
Bachelor of Science in Computer Engineering
Bachelor of Science in Electrical Engineering
Bachelor of Science in Electronics and Communications Engineering
Bachelor of Science in Geodetic Engineering
Bachelor of Science in Geothermal Engineering
Bachelor of Science in Mechanical Engineering

College of Industrial Technology
Associate in Industrial Technology Major in Architectural Drafting, Automotive, Civil, Computer, Electrical, Electronics, Food, Mechanical, and Refrigeration & Air-conditioning Technology
Diploma of Technology Major in Automotive, Computer, Electronics, Electrical, and Mechanical Technology
Bachelor of Science in Aviation Maintenance Major in Airframe and Powerplant, and Avionics
Bachelor of Science in Industrial Technology Major in Architectural Drafting, Automotive, Civil, Computer, Electrical, Electronics, Food, Mechanical, and Refrigeration & Air-conditioning Technology
Bachelor of Technological Education Major in Computer, and Electrical Technology
Bachelor of Technology Major in Computer, and Electrical Technology
Short-Term Courses
Continuing Education Programs in Technology, Driving, Computer, and others

College of Nursing, Pharmacy and Allied Health Sciences
Bachelor of Science in Nursing
Bachelor of Science in Pharmacy

College of Law
Bachelor of Laws (LLB)

Graduate School

The satellite campuses of NORSU also offer several bachelor's and associate degrees.
Bais City Campuses

College of Arts and Sciences
Associate in Information Technology
Bachelor of Science in Computer Science
Bachelor of Science in Information Technology

College of Business Administration
Bachelor of Science in Business Administration Major in Financial Management, Human Resource Development Management, and Marketing Management
Bachelor of Science in Hospitality Management
Bachelor of Science in Office Administration
Bachelor of Science in Tourism Management

College of Criminal Justice Education
Bachelor of Science in Criminology

College of Education
Bachelor of Elementary Education Area of Specialization in General Curriculum
Bachelor of Secondary Education Major in English, Mathematics, Filipino, Social Studies, Biological Science, TLE, MAPEH

College of Industrial Technology
Associate in Medical-Dental-Nursing Assistant
Bachelor of Science in Fisheries
Bachelor of Science in Industrial Technology Major in Automotive, Computer, Electrical, and Electronics Technology

Bayawan-Sta. Catalina Campus

College of Agriculture and Forestry
Bachelor of Agricultural Technology Major in Animal Husbandry
Bachelor of Science in Agriculture Major in Crop Science
Bachelor of Science in Forestry

College of Arts and Sciences
Bachelor of Science in Computer Science
Bachelor of Science in Information Technology

College of Business Administration
Associate in Hospitality Management
Bachelor of Science in Accountancy
Bachelor of Science in Business Administration Major in Human Resource Development Management
Bachelor of Science in Hospitality Management
Bachelor of Science in Office Administration
Bachelor of Science in Tourism Management

College of Criminal Justice Education
Bachelor of Science in Criminology

College of Education
Bachelor of Elementary Education Area of Specialization in General Curriculum, and Special Education
Bachelor of Secondary Education Major in English, Mathematics, Filipino, Physical Science, MAPEH

College of Industrial Technology
Associate in Industrial Technology Major in Computer Technology
Bachelor of Science in Industrial Technology Major in Automotive, Computer, Electrical, Electronics, and Refrigeration & Air-conditioning Technology

Guihulngan City Campus

College of Agriculture, Forestry and Fisheries
Bachelor of Science in Agriculture Major in Agronomy, and Animal Science

College of Arts and Sciences
Bachelor of Science in Computer Science

College of Business Administration
Bachelor of Science in Business Administration Major in Human Resource Development Management
Bachelor of Science in Hospitality Management
Bachelor of Science in Office Administration

College of Criminal Justice Education
Bachelor of Science in Criminology

College of Education
Bachelor of Elementary Education Area of Specialization in General Curriculum
Bachelor of Secondary Education Major in English, Mathematics, Social Studies, TLE

College of Industrial Technology
Bachelor of Science in Industrial Technology Major in Automotive, Computer, Electrical, Electronics, and Food Technology

Mabinay Campus

College of Technology
Bachelor of Science in Agriculture Major in Agronomy
Bachelor of Science in Computer Science
Bachelor of Science in Industrial Technology Major in Automotive, and Computer Technology

College of Business Administration
Bachelor of Science in Business Administration Major in Human Resource Development Management
Bachelor of Science in Hospitality Management

College of Criminal Justice Education
Bachelor of Science in Criminology

College of Education
Bachelor of Elementary Education Area of Specialization in General Curriculum
Bachelor of Secondary Education Major in Filipino

Siaton Campus

College of Arts, Sciences and Education
Bachelor of Science in Information Technology
Bachelor of Elementary Education Area of Specialization in General Curriculum
Bachelor of Secondary Education Major in English, Mathematics, Social Studies

College of Business Administration
Bachelor of Science in Business Administration Major in Human Resource Development Management
Bachelor of Science in Hospitality Management

College of Criminal Justice Education
Bachelor of Science in Criminology

Pamplona Campus

College of Agriculture, Forestry and Fisheries
Bachelor of Science in Agriculture Major in Agronomy, and Animal Science
Bachelor of Science in Forestry

Campuses

Main Campus - Kagawasan Ave., Dumaguete City
Bajumpandan Campus - Bajumpandan, Dumaguete City
Bais City Campus 1 (formerly Bais School of Fisheries) - Okiot, Bais City
Bais City Campus 2 (formerly Genaro Goñi Memorial College) - Quezon St., Bais City
Bayawan-Sta. Catalina Campus (formerly Negros Oriental National Agricultural School) - Nat'l Highway, Caranoche, Santa Catalina
Guihulngan City Campus (formerly Guihulngan Vocational School) - Nat'l Highway, Guihulngan City
Mabinay Campus (formerly Mabinay Institute of Technology) - Old Namangka, Mabinay
Siaton Campus (formerly Siaton Community College) - Progresso St., Brgy. III, Siaton
Pamplona Campus - Pamplona (an extension of the Main Campus College of Agriculture, Forestry and Fisheries)

Student life

NORSU is known as the school for poor but deserving students. As a state university, NORSU is covered by the free tuition law and thus attracts even more students. NORSU consistently produces many board passers and topnotchers every year.

Enrollment and Population

The university adopted a bi-semestral system wherein students enroll twice each year. Starting S.Y. 2019-2020, NORSU opened the school year in August, following the mandate of CHED. The Dumaguete campuses alone have an average enrollment of about 12,000 students per semester, while university-wide enrollment reaches more than 20,000. NORSU does not only cater to Oriental Negrenses but also to students from other provinces, notably Siquijor, Negros Occidental, Cebu and Zamboanga del Norte. Some may come from farther provinces, even from Luzon. As of the 1st semester of SY 2017-2018, the population was more or less 12,000 students in the main campuses.

Pylon

Pylon is the official yearbook of the Negros Oriental State University System. Pylon consists of four (4) departments: the Creative Design & Photography Department, the Creative Writing Department, the Information, Equipment and Record Management Department, and the Multimedia & Information System Department.

Student Government

The student government of Negros Oriental State University, named the Negros Oriental State University - Federation of Student Governments (NORSU-FSG), is composed of all the student governments of the NORSU system, namely:
a. Student Governments of Dumaguete City I
b. Student Governments of Dumaguete City II
c. Student Governments of Bayawan-Sta. Catalina Campus
d. Student Governments of Siaton Campus
e. Student Governments of Bais Campuses
f. Student Governments of Guihulngan Campus
g. Student Governments of Mabinay Campus
h. Student Governments of Pamplona Campus

The NORSUnian Weekly Publication

The NORSUnian is the official weekly student publication of the Negros Oriental State University system.
The NORSUnian is one of the three (3) acclaimed student publications in the Philippines which come out weekly, together with The Philippine Collegian of the University of the Philippines in Metro Manila and The Weekly Sillimanian of Silliman University in Dumaguete City.

Hugyawan Festival

A major event of Negros Oriental State University is the Hugyawan Festival, a merrymaking activity highlighting NORSU's Foundation Day celebration. It features a spectacular parade of colorful costumes, festivities, humorous gimmicks and merrymaking in the streets of Dumaguete City by the different colleges and satellite campuses of NORSU, showcasing the unique way of life of the inhabitants of Negros Oriental and, at the same time, capturing the customary response of the Negrenses towards Nature, Fate, and what God has given them, which is thanksgiving through celebration, merrymaking and revelry. Hugyawan comes from the Cebuano term "hugyaw", or revelry, and is a condensation of the phrase "hugot sa pagbayaw", which literally translates to a sincere tribute or heartfelt offering to God, country, culture and to ourselves. On the cultural front, "hugyaw" means "to make very loud noise using drums and other musical instruments or any other indigenous materials that would make varied types of noise". Making noise is done while dancing, jumping and parading on the streets. It is participated in by all the campuses of the NORSU System.

NORSU-ROTC

Negros Oriental State University ROTC is the only program in the Philippines offering Air Force ROTC, Naval ROTC and Army ROTC together with the Civic Welfare Training Service (CWTS) and Literacy Training Service (LTS), pursuant to Republic Act 9163, otherwise known as "The National Service Training Program (NSTP) Act of 2001".

See also
The NORSUnian

References

External links
NORSU Official University Website
The NORSUnian Official Website
NORSU Official Yearbook

Educational institutions established in 1927
Universities and colleges in Negros Oriental
State universities and colleges in the Philippines
Education in Dumaguete
Philippine Association of State Universities and Colleges
1927 establishments in the Philippines
34672370
https://en.wikipedia.org/wiki/QDA%20Miner
QDA Miner
QDA Miner is a mixed methods and qualitative data analysis software package developed by Provalis Research. The program was designed to assist researchers in managing, coding and analyzing qualitative data. QDA Miner was first released in 2004 after being developed by Normand Peladeau. The latest version, 6, was released in September 2020.

QDA Miner is widely used software for qualitative research. It is used by market researchers, survey companies, government, education researchers, crime and fraud detection experts, journalists and others. The data typically used with this qualitative research software comes from (among others) journal articles, scripts from TV or radio news, social media (such as Facebook, Twitter or reviews from websites), interviews or focus group transcripts, and open-ended questions from surveys.

Release history
QDA Miner 1: January 2004
QDA Miner 2: June 2006
QDA Miner 3: October 2007
QDA Miner 4: December 2011
QDA Miner 4 Lite: November 2012, a free variant of QDA Miner with reduced functionality
QDA Miner 5: December 2016
QDA Miner 6: September 2020

Features of QDA Miner 6
New Grid mode for coding shorter responses
Quotation Matrices
Enhanced annotation feature
Word frequency analysis and interactive word cloud
Importation of Nexis UNI and Factiva files
Improved importation of Excel, CSV and TSV files
Deviation Table
Export of results to Tableau Software
Numerical transformation
Binning
Support of missing values
Silhouette plot
Date transformation
Improved code filtering feature
Donut, Radar, 100% Stacked Bar and Area Charts
Ordering of series in comparison charts
Color coding of items in Correspondence Plot
Improved Bubble Chart
Link Analysis Buffer
New Table Format and Table Editor
Several new options and interface improvements have also been made to existing dialog boxes (code color selection, graphic options, etc.) and to management and analysis features.

Features of QDA Miner 5
Import different formats of documents and images: PDF, Word, Excel, HTML, RTF, SPSS files, JPEG, etc.
Import data from Facebook, Twitter, Reddit and RSS feeds within the software
Import directly from reference manager tools and emails
Perform GIS mapping with qualitative data
Text retrieval tools: Keyword Retrieval, Query-by-Example, Cluster Extraction
Statistical functions: coding frequencies, cluster analysis, coding sequences, coding by variables
Visualization tools: multidimensional scaling, heatmaps, correspondence analysis graphic, proximity plot
GeoTagging (GIS) and Time-Tagging tools
Report manager tool to store queries and analysis results, tables and graphs, research notes and quotes

References

QDA software
Science software for Windows
2172801
https://en.wikipedia.org/wiki/Clancy%20Eccles
Clancy Eccles
Clancy Eccles (9 December 1940 in Dean Pen, St. Mary, Jamaica – 30 June 2005 in Spanish Town, Jamaica) was a Jamaican ska and reggae singer, songwriter, arranger, promoter, record producer and talent scout. Known mostly for his early reggae works, he brought a political dimension to this music. His house band was known as The Dynamites.

Biography

The son of a tailor and builder, Eccles spent his childhood in the countryside of the parish of Saint Mary, and had an itinerant childhood due to his father's need to travel Jamaica seeking work. He regularly attended church and became influenced by spiritual singing. In his words: "One of my uncles was a spiritual revivalist, who always did this heavy type of spiritual singing, and I got to love that". Eccles's professional singing career began as a teenager, working the north-coast hotel circuit in the mid-1950s. In his late teens, he moved to Ocho Rios, where he performed at night in various shows with artists such as The Blues Busters, Higgs & Wilson and Buster Brown. He moved to Kingston in 1959, where he started his recording career. He first recorded for Coxsone Dodd, who had organised a talent show in which Eccles took part.

Eccles had a Jamaican hit in 1961 with the early ska song "Freedom", which was recorded in 1959 and featured on Dodd's sound system for two years before it was released. It was one of the first Jamaican songs with socially oriented lyrics. The song discussed the concept of repatriation to Africa, an idea developed by the growing Rastafari movement. The song became the first Jamaican hit to be used for political purposes; Alexander Bustamante, founder of the Jamaican Labour Party and at that time Chief Minister of Jamaica, adopted it for his fight against the Federation of the West Indies in 1960. In the following years, Eccles had other successful songs, mixing boogie/rhythm and blues influences with ska rhythms, such as "River Jordan" and "Glory Hallelujah". In 1962, he started promoting concerts and set up his Christmas Morning talent show, first with Dodd, then on his own. He organised concerts for The Clarendonians in 1963, and for The Wailers in 1964 and 1965. He launched other talent search contests, with Battle of the Stars, Clancy Eccles Revue, Independent Revue and Reggae Soul Revue, from which emerged stars such as Barrington Levy and Culture.

Starting in 1963, he recorded with producers such as Charlie Moo (Leslie Kong's business partner) and Sonia Pottinger's husband, Lyndon. He could not make a living from his music, so he quit in 1965 to work as a tailor in Annotto Bay. During this period, he made stage outfits for musicians such as Kes Chin, The Mighty Vikings, Byron Lee and the Dragonaires, Carlos Malcolm and The Blues Busters. He went back to music in 1967, producing his own recordings as well as those of other artists. He scored a hit with Eric 'Monty' Morris' reggae song "Say What You're Saying", and with his own song "Feel The Rhythm", one of several records that were instrumental in the shift from rocksteady to reggae. Eccles has also been credited with deriving the name 'reggae' from 'streggae', Kingston slang for a good-time girl. Eccles' first hit, "What Will Your Mama Say", was released by the recently created United Kingdom label Pama Records. In 1968, his song "Fattie Fattie" became a skinhead reggae classic, along with his productions of recordings by the toasting DJ King Stitt ("Fire Corner", "Van Cleef", "Herbman Shuffle").
Eccles recorded many organ-led instrumentals with his session band The Dynamites (the same band as Derrick Harriott's), featuring Jackie Jackson, Hux Brown, Paul Douglas, Winston Wright, Gladstone Anderson, Winston Grennan, Joe Isaacs, and Hugh Malcolm, with Johnny Moore and Bobby Ellis both contributing trumpet in different sessions. In 1970, Eccles helped pave the way for the dub music genre by releasing an instrumental version of "Herbman Shuffle" called "Phantom", with a mix focusing on the bass line. Eccles launched different record labels for his works: Clansone, New Beat and Clandisc (the latter also the name of a sub-label set up by Trojan Records for Eccles' UK releases). He recorded artists such as Alton Ellis, Joe Higgs, the Trinidadian Lord Creator ("Kingston Town"), Larry Marshall, Hemsley Morris, Earl Lawrence, The Beltones, Glen Ricks, Cynthia Richards, Buster Brown and Beres Hammond. Appreciated by musicians for his fairness and sense of equity, he helped Lee Perry set up his Upsetter record label in 1968 after Perry left Dodd's employment, and helped Winston 'Niney' Holmes (later known as 'The Observer') record his first hit as a producer in 1971 ("Blood & Fire").

A socialist militant, Eccles was appointed as an adviser on the music industry to Michael Manley's People's National Party (PNP) and took part in Jamaica's 1972 general election by organising a "Bandwagon" featuring musicians such as Bob Marley & the Wailers, Dennis Brown, Max Romeo, Delroy Wilson and Inner Circle, performing around the island in support of Manley's campaign. Throughout the 1970s, he remained close to Manley and wrote several songs in praise of the PNP program, including his hits "Power for the People", "Rod of Correction" and "Generation Belly". Eccles' political interests meant that he spent less time on music, although in the late 1970s he had further success as a producer with recordings by Tito Simon and Exuma the Obeah Man, as well as collaborations with King Tubby. After the 1970s, new Eccles recordings were rare, and he concentrated on live concert promotion and re-issues of his back catalogue. In the 1980s, Eccles slowed down his musical activities, and he never met success again, apart from a few political songs, such as "Dem Mash Up The Country" in 1985.

Eccles died on 30 June 2005 in Spanish Town Hospital from complications of a heart attack. Eccles' son, Clancy Eccles Jr., has followed his father into the music business, initially performing as simply "Clancy".

Discography

Singles before 1967
"River Jordan" / "I Live And I Love" – 1960 – Blue Beat, produced by Coxsone Dodd
"Freedom" / "More Proof" – 1960 – Blue Beat, produced by Coxsone Dodd
"Judgement" / "Baby Please" – 1963 – Island Records, produced for Charlie Moo
"I'm The Greatest" – 1963 – produced by Mike Shadad
"Glory Hallelujah" – 1963 – Island Records, produced by Coxsone Dodd
"Sammy No Dead" / "Roam Jerusalem" – 1965 – Ska Beat, produced by Lyndon Pottinger
"Miss Ida" – 1965 – Ska Beat

Compilations after 1967

Clancy Eccles
Clancy Eccles – Freedom – 1969 – Clandisc/Trojan
Clancy Eccles – 1967–1983 – Joshua's Rod of Correction – Jamaican Gold (1996)
Clancy Eccles – Top of the Ladder – 1973 – Big Shot/Trojan

Clancy Eccles & The Dynamites
The Dynamites – Fire Corner – 1969 – Clandisc
Clancy Eccles & The Dynamites – Herbsman Reggae – 1970 – Clandisc
Clancy Eccles & The Dynamites – Top of the Ladder – 1973 – Big Shot/Trojan
The Dynamites – The Wild Bunch Are The Dynamites – 1967–71 – Jamaican Gold (1996)
Clancy Eccles & The Dynamites – Nyah Reggae Rock – 1969–70 – Jamaican Gold (1997)

Clancy Eccles productions
King Stitt – Reggae Fire Beat – 1969–70 – Jamaican Gold (1996)
Cynthia Richards & Friends – Foolish Fool – 1970 – Clandisc
Tito Simon – Just Tito Simon – 1973 – Horse/Trojan, coproduced by Joe Sinclair
Various – Clancy Eccles – Fatty Fatty – 1967–70 – Trojan (1998)
Various – Clancy Eccles Presents His Reggae Revue – Rock Steady Intensified – 1967–72 – Heartbeat Records (1990)
Various – Kingston Town: 18 Reggae Hits – Heartbeat Records (1993)
Various – Clancy Eccles – Feel The Rhythm – 1966–68 – Jamaican Gold (2000)
Various – Clancy Eccles' Rock Steady Reggae Revue at Sombrero Club – 1967–69 – Jamaican Gold (2001)
Various – Clancy Eccles' Reggae Revue At The Ward Theatre – 1969–70 – Jamaican Gold (2001)
Various – Clancy Eccles' Reggae Revue At The VIP Club – 1970–73 – Jamaican Gold (2001)
Various – Clancy Eccles' Reggae Revue At The Carib Theatre – 1973–86 – Jamaican Gold (2001)
Various – Clancy Eccles: Freedom – An Anthology – Trojan (October 2005)

Notes

References
Steve Barrow & Peter Dalton (2004) The Rough Guide to Reggae, 3rd edn., Rough Guides
Mel Cooke (2005) "Spacious setting, good musical atmosphere – At Andy's Place", Jamaica Gleaner, 14 September 2005
David Katz (2005) "Obituaries: Clancy Eccles", The Independent, 5 August 2005
Colin Larkin (1998) The Virgin Encyclopedia of Reggae, Virgin Books
Norman Munroe (2003) "A Moonlight Serenade", Jamaica Observer, 19 February 2003
Dave Thompson (2002) Reggae & Caribbean Music, Backbeat Books
Basil Walters (2005) "Remembering Clancy Eccles", Jamaica Observer, 10 July 2005

1940 births
2005 deaths
People from Saint Mary Parish, Jamaica
Jamaican reggae musicians
Jamaican record producers
Island Records artists
Trojan Records artists
58863
https://en.wikipedia.org/wiki/G%C3%B6del%27s%20incompleteness%20theorems
Gödel's incompleteness theorems
Gödel's incompleteness theorems are two theorems of mathematical logic that are concerned with the limits of provability in formal axiomatic theories. These results, published by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics. The theorems are widely, but not universally, interpreted as showing that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible. The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e., an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem, an extension of the first, shows that the system cannot demonstrate its own consistency. Employing a diagonal argument, Gödel's incompleteness theorems were the first of several closely related theorems on the limitations of formal systems. They were followed by Tarski's undefinability theorem on the formal undefinability of truth, Church's proof that Hilbert's Entscheidungsproblem is unsolvable, and Turing's theorem that there is no algorithm to solve the halting problem.

Formal systems: completeness, consistency, and effective axiomatization

The incompleteness theorems apply to formal systems that are of sufficient complexity to express the basic arithmetic of the natural numbers and which are consistent and effectively axiomatized. These concepts are detailed below. Particularly in the context of first-order logic, formal systems are also called formal theories. In general, a formal system is a deductive apparatus that consists of a particular set of axioms along with rules of symbolic manipulation (or rules of inference) that allow for the derivation of new theorems from the axioms. One example of such a system is first-order Peano arithmetic, a system in which all variables are intended to denote natural numbers. In other systems, such as set theory, only some sentences of the formal system express statements about the natural numbers. The incompleteness theorems are about formal provability within these systems, rather than about "provability" in an informal sense. There are several properties that a formal system may have, including completeness, consistency, and the existence of an effective axiomatization. The incompleteness theorems show that systems which contain a sufficient amount of arithmetic cannot possess all three of these properties.

Effective axiomatization

A formal system is said to be effectively axiomatized (also called effectively generated) if its set of theorems is a recursively enumerable set. This means that there is a computer program that, in principle, could enumerate all the theorems of the system without listing any statements that are not theorems. Examples of effectively generated theories include Peano arithmetic and Zermelo–Fraenkel set theory (ZFC). The theory known as true arithmetic consists of all true statements about the standard integers in the language of Peano arithmetic. This theory is consistent and complete, and contains a sufficient amount of arithmetic. However it does not have a recursively enumerable set of axioms, and thus does not satisfy the hypotheses of the incompleteness theorems.
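The phrase "a computer program that, in principle, could enumerate all the theorems" can be made concrete with a short sketch. The code below is only an illustration of the idea under stated assumptions: it presumes a hypothetical computable checker is_valid_derivation and a hypothetical conclusion_of function for some fixed proof system, neither of which refers to any real library or to any particular formal theory.

```python
from itertools import count, product

def all_strings(alphabet):
    """Yield every finite string over the given alphabet, shortest first."""
    for length in count(1):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

def enumerate_theorems(alphabet, is_valid_derivation, conclusion_of):
    """Enumerate the theorems of an effectively axiomatized system.

    is_valid_derivation(s): hypothetical computable check that the string s
        encodes a correct derivation from the axioms.
    conclusion_of(s): hypothetical function returning the final formula
        (the theorem proved) of the derivation encoded by s.
    """
    for s in all_strings(alphabet):
        if is_valid_derivation(s):
            yield conclusion_of(s)
```

The generator never halts and may repeat theorems, but every theorem eventually appears, which is exactly what it means for the set of theorems to be recursively enumerable.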
Completeness

A set of axioms is (syntactically, or negation-) complete if, for any statement in the axioms' language, either that statement or its negation is provable from the axioms. This is the notion relevant for Gödel's first incompleteness theorem. It is not to be confused with semantic completeness, which means that the set of axioms proves all the semantic tautologies of the given language. In his completeness theorem (not to be confused with the incompleteness theorems described here), Gödel proved that first-order logic is semantically complete. But it is not syntactically complete, since there are sentences expressible in the language of first-order logic that can be neither proved nor disproved from the axioms of logic alone. In a mere system of logic it would be absurd to expect syntactic completeness. But in a system of mathematics, thinkers such as Hilbert had believed that it was just a matter of time to find such an axiomatization that would allow one to either prove or disprove (by proving its negation) each and every mathematical formula. A formal system might be syntactically incomplete by design, as logics generally are. Or it may be incomplete simply because not all the necessary axioms have been discovered or included. For example, Euclidean geometry without the parallel postulate is incomplete, because some statements in the language (such as the parallel postulate itself) cannot be proved from the remaining axioms. Similarly, the theory of dense linear orders is not complete, but becomes complete with an extra axiom stating that there are no endpoints in the order. The continuum hypothesis is a statement in the language of ZFC that is not provable within ZFC, so ZFC is not complete. In this case, there is no obvious candidate for a new axiom that resolves the issue. The theory of first-order Peano arithmetic seems to be consistent. Assuming this is indeed the case, note that it has an infinite but recursively enumerable set of axioms, and can encode enough arithmetic for the hypotheses of the incompleteness theorem. Thus, by the first incompleteness theorem, Peano arithmetic is not complete. The theorem gives an explicit example of a statement of arithmetic that is neither provable nor disprovable in Peano's arithmetic. Moreover, this statement is true in the usual model. In addition, no effectively axiomatized, consistent extension of Peano arithmetic can be complete.

Consistency

A set of axioms is (simply) consistent if there is no statement such that both the statement and its negation are provable from the axioms, and inconsistent otherwise. That is to say, a consistent axiomatic system is one that is free from contradiction. Peano arithmetic is provably consistent from ZFC, but not from within itself. Similarly, ZFC is not provably consistent from within itself, but ZFC + "there exists an inaccessible cardinal" proves ZFC is consistent because if κ is the least such cardinal, then Vκ sitting inside the von Neumann universe is a model of ZFC, and a theory is consistent if and only if it has a model. If one takes all statements in the language of Peano arithmetic as axioms, then this theory is complete, has a recursively enumerable set of axioms, and can describe addition and multiplication. However, it is not consistent. Additional examples of inconsistent theories arise from the paradoxes that result when the axiom schema of unrestricted comprehension is assumed in set theory.
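One consequence of (negation-) completeness defined above is worth spelling out: if a consistent system is complete and effectively axiomatized, then every sentence of its language can be decided mechanically by scanning the theorem enumeration for either the sentence or its negation. The sketch below is only an illustration of that argument; it reuses the hypothetical enumerate_theorems generator from the previous sketch and assumes a hypothetical negation_of helper.

```python
def decide(sentence, theorems, negation_of):
    """Decide a sentence of a consistent, complete, effectively axiomatized
    system by scanning an enumeration of its theorems.

    theorems: an iterable that eventually lists every theorem, for example
        the enumerate_theorems generator sketched earlier.
    negation_of: hypothetical helper producing the negation of a sentence.
    """
    target_negation = negation_of(sentence)
    for theorem in theorems:            # completeness guarantees termination:
        if theorem == sentence:         # either the sentence itself ...
            return True
        if theorem == target_negation:  # ... or its negation eventually appears
            return False
```

For systems containing enough arithmetic, such a decision procedure cannot exist, which is one way of seeing why completeness must fail; this connection is developed in the "Relationship with computability" section below.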
Systems which contain arithmetic

The incompleteness theorems apply only to formal systems which are able to prove a sufficient collection of facts about the natural numbers. One sufficient collection is the set of theorems of Robinson arithmetic Q. Some systems, such as Peano arithmetic, can directly express statements about natural numbers. Others, such as ZFC set theory, are able to interpret statements about natural numbers into their language. Either of these options is appropriate for the incompleteness theorems. The theory of algebraically closed fields of a given characteristic is complete, consistent, and has an infinite but recursively enumerable set of axioms. However it is not possible to encode the integers into this theory, and the theory cannot describe arithmetic of integers. A similar example is the theory of real closed fields, which is essentially equivalent to Tarski's axioms for Euclidean geometry. So Euclidean geometry itself (in Tarski's formulation) is an example of a complete, consistent, effectively axiomatized theory. The system of Presburger arithmetic consists of a set of axioms for the natural numbers with just the addition operation (multiplication is omitted). Presburger arithmetic is complete, consistent, and recursively enumerable and can encode addition but not multiplication of natural numbers, showing that for Gödel's theorems one needs the theory to encode not just addition but also multiplication. Dan Willard has studied some weak families of arithmetic systems which allow enough arithmetic as relations to formalise Gödel numbering, but which are not strong enough to have multiplication as a function, and so fail to prove the second incompleteness theorem; that is to say, these systems are consistent and capable of proving their own consistency (see self-verifying theories).

Conflicting goals

In choosing a set of axioms, one goal is to be able to prove as many correct results as possible, without proving any incorrect results. For example, we could imagine a set of true axioms which allow us to prove every true arithmetical claim about the natural numbers. In the standard system of first-order logic, an inconsistent set of axioms will prove every statement in its language (this is sometimes called the principle of explosion), and is thus automatically complete. A set of axioms that is both complete and consistent, however, proves a maximal set of non-contradictory theorems. The pattern illustrated in the previous sections with Peano arithmetic, ZFC, and ZFC + "there exists an inaccessible cardinal" cannot generally be broken. Here ZFC + "there exists an inaccessible cardinal" cannot be proved consistent from within itself. It is also not complete, as illustrated by the continuum hypothesis, which remains unresolved in the theory ZFC + "there exists an inaccessible cardinal". The first incompleteness theorem shows that, in formal systems that can express basic arithmetic, a complete and consistent finite list of axioms can never be created: each time an additional, consistent statement is added as an axiom, there are other true statements that still cannot be proved, even with the new axiom. If an axiom is ever added that makes the system complete, it does so at the cost of making the system inconsistent. It is not even possible for an infinite list of axioms to be complete, consistent, and effectively axiomatized.
First incompleteness theorem

Gödel's first incompleteness theorem first appeared as "Theorem VI" in Gödel's 1931 paper "On Formally Undecidable Propositions of Principia Mathematica and Related Systems I". The hypotheses of the theorem were improved shortly thereafter by using Rosser's trick. The resulting theorem (incorporating Rosser's improvement) may be paraphrased in English as follows, where "formal system" includes the assumption that the system is effectively generated.

First Incompleteness Theorem: "Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e., there are statements of the language of F which can neither be proved nor disproved in F." (Raatikainen 2015)

The unprovable statement GF referred to by the theorem is often called "the Gödel sentence" for the system F. The proof constructs a particular Gödel sentence for the system F, but there are infinitely many statements in the language of the system that share the same properties, such as the conjunction of the Gödel sentence and any logically valid sentence. Each effectively generated system has its own Gödel sentence. It is possible to define a larger system F’ that contains the whole of F plus GF as an additional axiom. This will not result in a complete system, because Gödel's theorem will also apply to F’, and thus F’ also cannot be complete. In this case, GF is indeed a theorem in F’, because it is an axiom. Because GF states only that it is not provable in F, no contradiction is presented by its provability within F’. However, because the incompleteness theorem applies to F’, there will be a new Gödel statement GF′ for F’, showing that F’ is also incomplete. GF′ will differ from GF in that GF′ will refer to F’, rather than F.

Syntactic form of the Gödel sentence

The Gödel sentence is designed to refer, indirectly, to itself. The sentence states that, when a particular sequence of steps is used to construct another sentence, that constructed sentence will not be provable in F. However, the sequence of steps is such that the constructed sentence turns out to be GF itself. In this way, the Gödel sentence GF indirectly states its own unprovability within F. To prove the first incompleteness theorem, Gödel demonstrated that the notion of provability within a system could be expressed purely in terms of arithmetical functions that operate on Gödel numbers of sentences of the system. Therefore, the system, which can prove certain facts about numbers, can also indirectly prove facts about its own statements, provided that it is effectively generated. Questions about the provability of statements within the system are represented as questions about the arithmetical properties of numbers themselves, which would be decidable by the system if it were complete. Thus, although the Gödel sentence refers indirectly to sentences of the system F, when read as an arithmetical statement the Gödel sentence directly refers only to natural numbers. It asserts that no natural number has a particular property, where that property is given by a primitive recursive relation. As such, the Gödel sentence can be written in the language of arithmetic with a simple syntactic form. In particular, it can be expressed as a formula in the language of arithmetic consisting of a number of leading universal quantifiers followed by a quantifier-free body (these formulas are at level Π⁰₁ of the arithmetical hierarchy).
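In schematic notation (illustrative only, with Prov_F the provability predicate of F, Proof_F(x, y) the primitive recursive relation "x codes an F-derivation of the formula with Gödel number y", and corner quotes for Gödel numbering), the fixed-point property and the simple syntactic form just described can be summarized as follows:

```latex
% Schematic form of the Goedel sentence G_F (notation is illustrative).
\[
  F \vdash \; G_F \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner)
  \qquad \text{(fixed-point / diagonal property)}
\]
\[
  G_F \;\equiv\; \forall x \,\neg\,\mathrm{Proof}_F(x, \ulcorner G_F \urcorner)
  \qquad \text{(a } \Pi^0_1 \text{ statement about natural numbers)}
\]
```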
Via the MRDP theorem, the Gödel sentence can be re-written as a statement that a particular polynomial in many variables with integer coefficients never takes the value zero when integers are substituted for its variables.

Truth of the Gödel sentence

The first incompleteness theorem shows that the Gödel sentence GF of an appropriate formal theory F is unprovable in F. Because, when interpreted as a statement about arithmetic, this unprovability is exactly what the sentence (indirectly) asserts, the Gödel sentence is, in fact, true. For this reason, the sentence GF is often said to be "true but unprovable". However, since the Gödel sentence cannot itself formally specify its intended interpretation, the truth of the sentence GF may only be arrived at via a meta-analysis from outside the system. In general, this meta-analysis can be carried out within the weak formal system known as primitive recursive arithmetic, which proves the implication Con(F)→GF, where Con(F) is a canonical sentence asserting the consistency of F. Although the Gödel sentence of a consistent theory is true as a statement about the intended interpretation of arithmetic, the Gödel sentence will be false in some nonstandard models of arithmetic, as a consequence of Gödel's completeness theorem. That theorem shows that, when a sentence is independent of a theory, the theory will have models in which the sentence is true and models in which the sentence is false. As described earlier, the Gödel sentence of a system F is an arithmetical statement which claims that no number exists with a particular property. The incompleteness theorem shows that this claim will be independent of the system F, and the truth of the Gödel sentence follows from the fact that no standard natural number has the property in question. Any model in which the Gödel sentence is false must contain some element which satisfies the property within that model. Such a model must be "nonstandard" – it must contain elements that do not correspond to any standard natural number.

Relationship with the liar paradox

Gödel specifically cites Richard's paradox and the liar paradox as semantical analogues to his syntactical incompleteness result in the introductory section of "On Formally Undecidable Propositions in Principia Mathematica and Related Systems I". The liar paradox is the sentence "This sentence is false." An analysis of the liar sentence shows that it cannot be true (for then, as it asserts, it is false), nor can it be false (for then, it is true). A Gödel sentence G for a system F makes a similar assertion to the liar sentence, but with truth replaced by provability: G says "G is not provable in the system F." The analysis of the truth and provability of G is a formalized version of the analysis of the truth of the liar sentence. It is not possible to replace "not provable" with "false" in a Gödel sentence because the predicate "Q is the Gödel number of a false formula" cannot be represented as a formula of arithmetic. This result, known as Tarski's undefinability theorem, was discovered independently both by Gödel, when he was working on the proof of the incompleteness theorem, and by the theorem's namesake, Alfred Tarski.

Extensions of Gödel's original result

Compared to the theorems stated in Gödel's 1931 paper, many contemporary statements of the incompleteness theorems are more general in two ways.
These generalized statements are phrased to apply to a broader class of systems, and they are phrased to incorporate weaker consistency assumptions. Gödel demonstrated the incompleteness of the system of Principia Mathematica, a particular system of arithmetic, but a parallel demonstration could be given for any effective system of a certain expressiveness. Gödel commented on this fact in the introduction to his paper, but restricted the proof to one system for concreteness. In modern statements of the theorem, it is common to state the effectiveness and expressiveness conditions as hypotheses for the incompleteness theorem, so that it is not limited to any particular formal system. The terminology used to state these conditions was not yet developed in 1931 when Gödel published his results.

Gödel's original statement and proof of the incompleteness theorem requires the assumption that the system is not just consistent but ω-consistent. A system is ω-consistent if it is not ω-inconsistent, and is ω-inconsistent if there is a predicate P such that for every specific natural number m the system proves ~P(m), and yet the system also proves that there exists a natural number n such that P(n). That is, the system says that a number with property P exists while denying that it has any specific value. The ω-consistency of a system implies its consistency, but consistency does not imply ω-consistency. Rosser (1936) strengthened the incompleteness theorem by finding a variation of the proof (Rosser's trick) that only requires the system to be consistent, rather than ω-consistent. This is mostly of technical interest, because all true formal theories of arithmetic (theories whose axioms are all true statements about natural numbers) are ω-consistent, and thus Gödel's theorem as originally stated applies to them. The stronger version of the incompleteness theorem that only assumes consistency, rather than ω-consistency, is now commonly known as Gödel's incompleteness theorem and as the Gödel–Rosser theorem.

Second incompleteness theorem

For each formal system F containing basic arithmetic, it is possible to canonically define a formula Cons(F) expressing the consistency of F. This formula expresses the property that "there does not exist a natural number coding a formal derivation within the system F whose conclusion is a syntactic contradiction." The syntactic contradiction is often taken to be "0=1", in which case Cons(F) states "there is no natural number that codes a derivation of '0=1' from the axioms of F." Gödel's second incompleteness theorem shows that, under general assumptions, this canonical consistency statement Cons(F) will not be provable in F. The theorem first appeared as "Theorem XI" in Gödel's 1931 paper "On Formally Undecidable Propositions in Principia Mathematica and Related Systems I". In the following statement, the term "formalized system" also includes an assumption that F is effectively axiomatized.

Second Incompleteness Theorem: "Assume F is a consistent formalized system which contains elementary arithmetic. Then F ⊬ Cons(F)."

This theorem is stronger than the first incompleteness theorem because the statement constructed in the first incompleteness theorem does not directly express the consistency of the system. The proof of the second incompleteness theorem is obtained by formalizing the proof of the first incompleteness theorem within the system F itself.
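A compressed, textbook-style version of that formalization, written schematically and relying on the Hilbert–Bernays derivability conditions D1–D3 stated in the next subsection, runs roughly as follows; here ⊥ stands for a fixed contradiction such as 0=1, so Cons(F) abbreviates ¬Prov(⌜⊥⌝). The notation is illustrative and is not Gödel's original wording.

```latex
% Sketch: F proves Cons(F) -> G_F, so F cannot prove Cons(F).
\begin{align*}
  &F \vdash G_F \rightarrow \neg\mathrm{Prov}(\ulcorner G_F\urcorner)
     && \text{fixed-point property of } G_F\\
  &F \vdash \mathrm{Prov}(\ulcorner G_F\urcorner) \rightarrow
     \mathrm{Prov}(\ulcorner \neg\mathrm{Prov}(\ulcorner G_F\urcorner)\urcorner)
     && \text{by D1 and D3 applied to the line above}\\
  &F \vdash \mathrm{Prov}(\ulcorner G_F\urcorner) \rightarrow
     \mathrm{Prov}(\ulcorner \mathrm{Prov}(\ulcorner G_F\urcorner)\urcorner)
     && \text{by D2}\\
  &F \vdash \mathrm{Prov}(\ulcorner G_F\urcorner) \rightarrow
     \mathrm{Prov}(\ulcorner \bot \urcorner)
     && \text{combining the two previous lines via D1, D3}\\
  &F \vdash \mathrm{Cons}(F) \rightarrow \neg\mathrm{Prov}(\ulcorner G_F\urcorner)
     && \text{contrapositive of the previous line}\\
  &F \vdash \mathrm{Cons}(F) \rightarrow G_F
     && \text{by the fixed-point property again}
\end{align*}
```

If F proved Cons(F), it would then also prove G_F, contradicting the first incompleteness theorem for consistent, effectively axiomatized F; hence F does not prove Cons(F).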
Expressing consistency

There is a technical subtlety in the second incompleteness theorem regarding the method of expressing the consistency of F as a formula in the language of F. There are many ways to express the consistency of a system, and not all of them lead to the same result. The formula Cons(F) from the second incompleteness theorem is a particular expression of consistency. Other formalizations of the claim that F is consistent may be inequivalent in F, and some may even be provable. For example, first-order Peano arithmetic (PA) can prove that "the largest consistent subset of PA" is consistent. But, because PA is consistent, the largest consistent subset of PA is just PA, so in this sense PA "proves that it is consistent". What PA does not prove is that the largest consistent subset of PA is, in fact, the whole of PA. (The term "largest consistent subset of PA" is meant here to be the largest consistent initial segment of the axioms of PA under some particular effective enumeration.)

The Hilbert–Bernays conditions

The standard proof of the second incompleteness theorem assumes that the provability predicate ProvA(P) satisfies the Hilbert–Bernays provability conditions. Letting #(P) represent the Gödel number of a formula P, the provability conditions say:
1. If F proves P, then F proves ProvA(#(P)).
2. F proves condition 1; that is, F proves ProvA(#(P)) → ProvA(#(ProvA(#(P)))).
3. F proves ProvA(#(P → Q)) ∧ ProvA(#(P)) → ProvA(#(Q)) (an analogue of modus ponens).
There are systems, such as Robinson arithmetic, which are strong enough to meet the assumptions of the first incompleteness theorem, but which do not prove the Hilbert–Bernays conditions. Peano arithmetic, however, is strong enough to verify these conditions, as are all theories stronger than Peano arithmetic.

Implications for consistency proofs

Gödel's second incompleteness theorem also implies that a system F1 satisfying the technical conditions outlined above cannot prove the consistency of any system F2 that proves the consistency of F1. This is because such a system F1 can prove that if F2 proves the consistency of F1, then F1 is in fact consistent. For the claim that F1 is consistent has the form "for all numbers n, n has the decidable property of not being a code for a proof of contradiction in F1". If F1 were in fact inconsistent, then F2 would prove for some n that n is the code of a contradiction in F1. But if F2 also proved that F1 is consistent (that is, that there is no such n), then it would itself be inconsistent. This reasoning can be formalized in F1 to show that if F2 is consistent, then F1 is consistent. Since, by the second incompleteness theorem, F1 does not prove its consistency, it cannot prove the consistency of F2 either. This corollary of the second incompleteness theorem shows that there is no hope of proving, for example, the consistency of Peano arithmetic using any finitistic means that can be formalized in a system the consistency of which is provable in Peano arithmetic (PA). For example, the system of primitive recursive arithmetic (PRA), which is widely accepted as an accurate formalization of finitistic mathematics, is provably consistent in PA. Thus PRA cannot prove the consistency of PA. This fact is generally seen to imply that Hilbert's program, which aimed to justify the use of "ideal" (infinitistic) mathematical principles in the proofs of "real" (finitistic) mathematical statements by giving a finitistic proof that the ideal principles are consistent, cannot be carried out.
The corollary also indicates the epistemological relevance of the second incompleteness theorem. It would actually provide no interesting information if a system F proved its consistency. This is because inconsistent theories prove everything, including their consistency. Thus a consistency proof of F in F would give us no clue as to whether F really is consistent; no doubts about the consistency of F would be resolved by such a consistency proof. The interest in consistency proofs lies in the possibility of proving the consistency of a system F in some system F’ that is in some sense less doubtful than F itself, for example weaker than F. For many naturally occurring theories F and F’, such as F = Zermelo–Fraenkel set theory and F’ = primitive recursive arithmetic, the consistency of F’ is provable in F, and thus F’ cannot prove the consistency of F by the above corollary of the second incompleteness theorem. The second incompleteness theorem does not rule out altogether the possibility of proving the consistency of some theory T, only doing so in a theory that T itself can prove to be consistent. For example, Gerhard Gentzen proved the consistency of Peano arithmetic in a different system that includes an axiom asserting that the ordinal called ε0 is wellfounded; see Gentzen's consistency proof. Gentzen's theorem spurred the development of ordinal analysis in proof theory. Examples of undecidable statements There are two distinct senses of the word "undecidable" in mathematics and computer science. The first of these is the proof-theoretic sense used in relation to Gödel's theorems, that of a statement being neither provable nor refutable in a specified deductive system. The second sense, which will not be discussed here, is used in relation to computability theory and applies not to statements but to decision problems, which are countably infinite sets of questions each requiring a yes or no answer. Such a problem is said to be undecidable if there is no computable function that correctly answers every question in the problem set (see undecidable problem). Because of the two meanings of the word undecidable, the term independent is sometimes used instead of undecidable for the "neither provable nor refutable" sense. Undecidability of a statement in a particular deductive system does not, in and of itself, address the question of whether the truth value of the statement is well-defined, or whether it can be determined by other means. Undecidability only implies that the particular deductive system being considered does not prove the truth or falsity of the statement. Whether there exist so-called "absolutely undecidable" statements, whose truth value can never be known or is ill-specified, is a controversial point in the philosophy of mathematics. The combined work of Gödel and Paul Cohen has given two concrete examples of undecidable statements (in the first sense of the term): The continuum hypothesis can neither be proved nor refuted in ZFC (the standard axiomatization of set theory), and the axiom of choice can neither be proved nor refuted in ZF (which is all the ZFC axioms except the axiom of choice). These results do not require the incompleteness theorem. Gödel proved in 1940 that neither of these statements could be disproved in ZF or ZFC set theory. In the 1960s, Cohen proved that neither is provable from ZF, and the continuum hypothesis cannot be proved from ZFC. 
In 1973, Saharon Shelah showed that the Whitehead problem in group theory is undecidable, in the first sense of the term, in standard set theory. Gregory Chaitin produced undecidable statements in algorithmic information theory and proved another incompleteness theorem in that setting. Chaitin's incompleteness theorem states that for any system that can represent enough arithmetic, there is an upper bound c such that no specific number can be proved in that system to have Kolmogorov complexity greater than c. While Gödel's theorem is related to the liar paradox, Chaitin's result is related to Berry's paradox.

Undecidable statements provable in larger systems

These are natural mathematical equivalents of the Gödel "true but undecidable" sentence. They can be proved in a larger system which is generally accepted as a valid form of reasoning, but are undecidable in a more limited system such as Peano Arithmetic. In 1977, Paris and Harrington proved that the Paris–Harrington principle, a version of the infinite Ramsey theorem, is undecidable in (first-order) Peano arithmetic, but can be proved in the stronger system of second-order arithmetic. Kirby and Paris later showed that Goodstein's theorem, a statement about sequences of natural numbers somewhat simpler than the Paris–Harrington principle, is also undecidable in Peano arithmetic. Kruskal's tree theorem, which has applications in computer science, is also undecidable from Peano arithmetic but provable in set theory. In fact Kruskal's tree theorem (or its finite form) is undecidable in a much stronger system codifying the principles acceptable on the basis of a philosophy of mathematics called predicativism. The related but more general graph minor theorem (2003) has consequences for computational complexity theory.

Relationship with computability

The incompleteness theorem is closely related to several results about undecidable sets in recursion theory. Kleene presented a proof of Gödel's incompleteness theorem using basic results of computability theory. One such result shows that the halting problem is undecidable: there is no computer program that can correctly determine, given any program P as input, whether P eventually halts when run with a particular given input. Kleene showed that the existence of a complete effective system of arithmetic with certain consistency properties would force the halting problem to be decidable, a contradiction. This method of proof has also been presented by several other authors. Matiyasevich's solution to Hilbert's 10th problem can likewise be used to obtain a proof of Gödel's first incompleteness theorem. Matiyasevich proved that there is no algorithm that, given a multivariate polynomial p(x1, x2,...,xk) with integer coefficients, determines whether there is an integer solution to the equation p = 0. Because polynomials with integer coefficients, and integers themselves, are directly expressible in the language of arithmetic, if a multivariate integer polynomial equation p = 0 does have a solution in the integers then any sufficiently strong system of arithmetic T will prove this. Moreover, if the system T is ω-consistent, then it will never prove that a particular polynomial equation has a solution when in fact there is no solution in the integers.
Thus, if T were complete and ω-consistent, it would be possible to determine algorithmically whether a polynomial equation has a solution by merely enumerating proofs of T until either "p has a solution" or "p has no solution" is found, in contradiction to Matiyasevich's theorem. Hence it follows that T cannot be ω-consistent and complete. Moreover, for each consistent effectively generated system T, it is possible to effectively generate a multivariate polynomial p over the integers such that the equation p = 0 has no solutions over the integers, but the lack of solutions cannot be proved in T. The existence of recursively inseparable sets can also be used to prove the first incompleteness theorem. This proof is often extended to show that systems such as Peano arithmetic are essentially undecidable. Chaitin's incompleteness theorem gives a different method of producing independent sentences, based on Kolmogorov complexity. Like the proof presented by Kleene that was mentioned above, Chaitin's theorem only applies to theories with the additional property that all their axioms are true in the standard model of the natural numbers. Gödel's incompleteness theorem is distinguished by its applicability to consistent theories that nonetheless include statements that are false in the standard model; these theories are known as ω-inconsistent. Proof sketch for the first theorem The proof by contradiction has three essential parts. To begin, choose a formal system that meets the proposed criteria: Statements in the system can be represented by natural numbers (known as Gödel numbers). The significance of this is that properties of statements—such as their truth and falsehood—will be equivalent to determining whether their Gödel numbers have certain properties, and that properties of the statements can therefore be demonstrated by examining their Gödel numbers. This part culminates in the construction of a formula expressing the idea that "statement S is provable in the system" (which can be applied to any statement "S" in the system). In the formal system it is possible to construct a number whose matching statement, when interpreted, is self-referential and essentially says that it (i.e. the statement itself) is unprovable. This is done using a technique called "diagonalization" (so-called because of its origins as Cantor's diagonal argument). Within the formal system this statement permits a demonstration that it is neither provable nor disprovable in the system, and therefore the system cannot in fact be ω-consistent. Hence the original assumption that the proposed system met the criteria is false. Arithmetization of syntax The main problem in fleshing out the proof described above is that it seems at first that to construct a statement p that is equivalent to "p cannot be proved", p would somehow have to contain a reference to p, which could easily give rise to an infinite regress. Gödel's ingenious technique is to show that statements can be matched with numbers (often called the arithmetization of syntax) in such a way that "proving a statement" can be replaced with "testing whether a number has a given property". This allows a self-referential formula to be constructed in a way that avoids any infinite regress of definitions. The same technique was later used by Alan Turing in his work on the Entscheidungsproblem.
In simple terms, a method can be devised so that every formula or statement that can be formulated in the system gets a unique number, called its Gödel number, in such a way that it is possible to mechanically convert back and forth between formulas and Gödel numbers. The numbers involved might be very long indeed (in terms of number of digits), but this is not a barrier; all that matters is that such numbers can be constructed. A simple example is how English can be stored as a sequence of numbers for each letter and then combined into a single larger number: The word hello is encoded as 104-101-108-108-111 in ASCII, which can be converted into the number 104101108108111. The logical statement x=y => y=x is encoded as 120-061-121-032-061-062-032-121-061-120 in ASCII, which can be converted into the number 120061121032061062032121061120. In principle, proving a statement true or false can be shown to be equivalent to proving that the number matching the statement does or doesn't have a given property. Because the formal system is strong enough to support reasoning about numbers in general, it can support reasoning about numbers that represent formulae and statements as well. Crucially, because the system can support reasoning about properties of numbers, the results are equivalent to reasoning about provability of their equivalent statements. Construction of a statement about "provability" Having shown that in principle the system can indirectly make statements about provability, by analyzing properties of those numbers representing statements it is now possible to show how to create a statement that actually does this. A formula F(x) that contains exactly one free variable x is called a statement form or class-sign. As soon as x is replaced by a specific number, the statement form turns into a bona fide statement, and it is then either provable in the system, or not. For certain formulas one can show that for every natural number n, F(n) is true if and only if it can be proved (the precise requirement in the original proof is weaker, but for the proof sketch this will suffice). In particular, this is true for every specific arithmetic operation between a finite number of natural numbers, such as "2 × 3 = 6". Statement forms themselves are not statements and therefore cannot be proved or disproved. But every statement form F(x) can be assigned a Gödel number denoted by G(F). The choice of the free variable used in the form F(x) is not relevant to the assignment of the Gödel number G(F). The notion of provability itself can also be encoded by Gödel numbers, in the following way: since a proof is a list of statements which obey certain rules, the Gödel number of a proof can be defined. Now, for every statement p, one may ask whether a number x is the Gödel number of its proof. The relation between the Gödel number of p and x, the potential Gödel number of its proof, is an arithmetical relation between two numbers. Therefore, there is a statement form Bew(y) that uses this arithmetical relation to state that a Gödel number of a proof of y exists: Bew(y) = ∃ x (y is the Gödel number of a formula and x is the Gödel number of a proof of the formula encoded by y). The name Bew is short for beweisbar, the German word for "provable"; this name was originally used by Gödel to denote the provability formula just described. Note that "Bew(y)" is merely an abbreviation that represents a particular, very long, formula in the original language of T; the string "Bew" itself is not claimed to be part of this language.
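The toy letter-by-letter encoding used in the hello example above can be made concrete with a short sketch. This is only an illustration of the general idea of mechanically converting between statements and numbers; it is not Gödel's actual numbering (which was based on prime factorizations), and the function names are purely illustrative.

```python
def godel_number(s: str) -> int:
    """Toy Gödel numbering: concatenate three-digit ASCII codes,
    as in the 'hello' -> 104101108108111 example above."""
    return int("".join(f"{ord(c):03d}" for c in s))

def decode(n: int) -> str:
    """Invert the toy encoding by splitting the digits back into three-digit codes."""
    digits = str(n)
    digits = digits.zfill((len(digits) + 2) // 3 * 3)  # restore a dropped leading zero
    return "".join(chr(int(digits[i:i + 3])) for i in range(0, len(digits), 3))

assert godel_number("hello") == 104101108108111
assert decode(godel_number("x=y => y=x")) == "x=y => y=x"
```

Gödel's own encoding, and the Bew formula built on top of it, are far more elaborate, but the essential point is the same: the correspondence between statements and numbers is mechanical in both directions, so statements about provability can be treated as statements about numbers.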
An important feature of the formula Bew(y) is that if a statement p is provable in the system then Bew(G(p)) is also provable. This is because any proof of p would have a corresponding Gödel number, the existence of which causes Bew(G(p)) to be satisfied. Diagonalization The next step in the proof is to obtain a statement which, indirectly, asserts its own unprovability. Although Gödel constructed this statement directly, the existence of at least one such statement follows from the diagonal lemma, which says that for any sufficiently strong formal system and any statement form F there is a statement p such that the system proves p ↔ F(G(p)). By letting F be the negation of Bew(x), we obtain the theorem p ↔ ~Bew(G(p)) and the p defined by this roughly states that its own Gödel number is the Gödel number of an unprovable formula. The statement p is not literally equal to ~Bew(G(p)); rather, p states that if a certain calculation is performed, the resulting Gödel number will be that of an unprovable statement. But when this calculation is performed, the resulting Gödel number turns out to be the Gödel number of p itself. This is similar to the following sentence in English: ", when preceded by itself in quotes, is unprovable.", when preceded by itself in quotes, is unprovable. This sentence does not directly refer to itself, but when the stated transformation is made the original sentence is obtained as a result, and thus this sentence indirectly asserts its own unprovability. The proof of the diagonal lemma employs a similar method. Now, assume that the axiomatic system is ω-consistent, and let p be the statement obtained in the previous section. If p were provable, then Bew(G(p)) would be provable, as argued above. But p asserts the negation of Bew(G(p)). Thus the system would be inconsistent, proving both a statement and its negation. This contradiction shows that p cannot be provable. If the negation of p were provable, then Bew(G(p)) would be provable (because p was constructed to be equivalent to the negation of Bew(G(p))). However, for each specific number x, x cannot be the Gödel number of the proof of p, because p is not provable (from the previous paragraph). Thus on one hand the system proves there is a number with a certain property (that it is the Gödel number of the proof of p), but on the other hand, for every specific number x, we can prove that it does not have this property. This is impossible in an ω-consistent system. Thus the negation of p is not provable. Thus the statement p is undecidable in our axiomatic system: it can neither be proved nor disproved within the system. In fact, to show that p is not provable only requires the assumption that the system is consistent. The stronger assumption of ω-consistency is required to show that the negation of p is not provable. Thus, if p is constructed for a particular system: If the system is ω-consistent, it can prove neither p nor its negation, and so p is undecidable. If the system is consistent, it may have the same situation, or it may prove the negation of p. In the latter case, we have a statement ("not p") which is false but provable, and the system is not ω-consistent. If one tries to "add the missing axioms" to avoid the incompleteness of the system, then one has to add either p or "not p" as axioms. But then the definition of "being a Gödel number of a proof" of a statement changes, which means that the formula Bew(x) is now different.
Thus when we apply the diagonal lemma to this new Bew, we obtain a new statement p, different from the previous one, which will be undecidable in the new system if it is ω-consistent. Proof via Berry's paradox Boolos (1989) sketches an alternative proof of the first incompleteness theorem that uses Berry's paradox rather than the liar paradox to construct a true but unprovable formula. A similar proof method was independently discovered by Saul Kripke. Boolos's proof proceeds by constructing, for any computably enumerable set S of true sentences of arithmetic, another sentence which is true but not contained in S. This gives the first incompleteness theorem as a corollary. According to Boolos, this proof is interesting because it provides a "different sort of reason" for the incompleteness of effective, consistent theories of arithmetic. Computer verified proofs The incompleteness theorems are among a relatively small number of nontrivial theorems that have been transformed into formalized theorems that can be completely verified by proof assistant software. Gödel's original proofs of the incompleteness theorems, like most mathematical proofs, were written in natural language intended for human readers. Computer-verified proofs of versions of the first incompleteness theorem were announced by Natarajan Shankar in 1986 using Nqthm, by Russell O'Connor in 2003 using Coq, and by John Harrison in 2009 using HOL Light. A computer-verified proof of both incompleteness theorems was announced by Lawrence Paulson in 2013 using Isabelle. Proof sketch for the second theorem The main difficulty in proving the second incompleteness theorem is to show that various facts about provability used in the proof of the first incompleteness theorem can be formalized within a system S using a formal predicate for provability. Once this is done, the second incompleteness theorem follows by formalizing the entire proof of the first incompleteness theorem within the system S itself. Let p stand for the undecidable sentence constructed above, and assume for purposes of obtaining a contradiction that the consistency of the system S can be proved from within the system S itself. This is equivalent to proving the statement "System S is consistent". Now consider the statement c, where c = "If the system S is consistent, then p is not provable". The proof of sentence c can be formalized within the system S, and therefore the statement c can be proved in the system S. Observe then that if we could prove that the system S is consistent (i.e. the hypothesis of c), then we would have proved that p is not provable. But p was constructed to be equivalent to the assertion of its own unprovability, so proving "p is not provable" amounts to proving p itself, and this contradicts the first incompleteness theorem, by which p is not provable in a consistent S. Note that this is why we require formalizing the first incompleteness theorem in S: to prove the second incompleteness theorem, we obtain a contradiction with the first incompleteness theorem, which we can do only by showing that the first theorem holds in S. So we cannot prove that the system S is consistent, and the statement of the second incompleteness theorem follows. Discussion and implications The incompleteness results affect the philosophy of mathematics, particularly versions of formalism, which use a single system of formal logic to define their principles.
Consequences for logicism and Hilbert's second problem The incompleteness theorem is sometimes thought to have severe consequences for the program of logicism proposed by Gottlob Frege and Bertrand Russell, which aimed to define the natural numbers in terms of logic. Bob Hale and Crispin Wright argue that it is not a problem for logicism because the incompleteness theorems apply equally to first order logic as they do to arithmetic. They argue that only those who believe that the natural numbers are to be defined in terms of first order logic have this problem. Many logicians believe that Gödel's incompleteness theorems struck a fatal blow to David Hilbert's second problem, which asked for a finitary consistency proof for mathematics. The second incompleteness theorem, in particular, is often viewed as making the problem impossible. Not all mathematicians agree with this analysis, however, and the status of Hilbert's second problem is not yet decided (see "Modern viewpoints on the status of the problem"). Minds and machines Authors including the philosopher J. R. Lucas and physicist Roger Penrose have debated what, if anything, Gödel's incompleteness theorems imply about human intelligence. Much of the debate centers on whether the human mind is equivalent to a Turing machine, or by the Church–Turing thesis, any finite machine at all. If it is, and if the machine is consistent, then Gödel's incompleteness theorems would apply to it. Hilary Putnam (1960) suggested that while Gödel's theorems cannot be applied to humans, since they make mistakes and are therefore inconsistent, it may be applied to the human faculty of science or mathematics in general. Assuming that it is consistent, either its consistency cannot be proved or it cannot be represented by a Turing machine. Avi Wigderson (2010) has proposed that the concept of mathematical "knowability" should be based on computational complexity rather than logical decidability. He writes that "when knowability is interpreted by modern standards, namely via computational complexity, the Gödel phenomena are very much with us." Douglas Hofstadter, in his books Gödel, Escher, Bach and I Am a Strange Loop, cites Gödel's theorems as an example of what he calls a strange loop, a hierarchical, self-referential structure existing within an axiomatic formal system. He argues that this is the same kind of structure which gives rise to consciousness, the sense of "I", in the human mind. While the self-reference in Gödel's theorem comes from the Gödel sentence asserting its own unprovability within the formal system of Principia Mathematica, the self-reference in the human mind comes from the way in which the brain abstracts and categorises stimuli into "symbols", or groups of neurons which respond to concepts, in what is effectively also a formal system, eventually giving rise to symbols modelling the concept of the very entity doing the perception. Hofstadter argues that a strange loop in a sufficiently complex formal system can give rise to a "downward" or "upside-down" causality, a situation in which the normal hierarchy of cause-and-effect is flipped upside-down. In the case of Gödel's theorem, this manifests, in short, as the following: "Merely from knowing the formula's meaning, one can infer its truth or falsity without any effort to derive it in the old-fashioned way, which requires one to trudge methodically "upwards" from the axioms. This is not just peculiar; it is astonishing.
Normally, one cannot merely look at what a mathematical conjecture says and simply appeal to the content of that statement on its own to deduce whether the statement is true or false." (I Am a Strange Loop.) In the case of the mind, a far more complex formal system, this "downward causality" manifests, in Hofstadter's view, as the ineffable human instinct that the causality of our minds lies on the high level of desires, concepts, personalities, thoughts and ideas, rather than on the low level of interactions between neurons or even fundamental particles, even though according to physics the latter seems to possess the causal power. "There is thus a curious upside-downness to our normal human way of perceiving the world: we are built to perceive “big stuff” rather than “small stuff”, even though the domain of the tiny seems to be where the actual motors driving reality reside." (I Am a Strange Loop.) Paraconsistent logic Although Gödel's theorems are usually studied in the context of classical logic, they also have a role in the study of paraconsistent logic and of inherently contradictory statements (dialetheia). Graham Priest (1984, 2006) argues that replacing the notion of formal proof in Gödel's theorem with the usual notion of informal proof can be used to show that naive mathematics is inconsistent, and uses this as evidence for dialetheism. The cause of this inconsistency is the inclusion of a truth predicate for a system within the language of the system. Stewart Shapiro (2002) gives a more mixed appraisal of the applications of Gödel's theorems to dialetheism. Appeals to the incompleteness theorems in other fields Appeals and analogies are sometimes made to the incompleteness theorems in support of arguments that go beyond mathematics and logic. Several authors have commented negatively on such extensions and interpretations, including Torkel Franzén (2005); Panu Raatikainen (2005); Alan Sokal and Jean Bricmont (1999); and Jeremy Stangroom and Ophelia Benson (2006). Stangroom and Benson, for example, quote from Rebecca Goldstein's comments on the disparity between Gödel's avowed Platonism and the anti-realist uses to which his ideas are sometimes put. Sokal and Bricmont criticize Régis Debray's invocation of the theorem in the context of sociology; Debray has defended this use as metaphorical (ibid.). History After Gödel published his proof of the completeness theorem as his doctoral thesis in 1929, he turned to a second problem for his habilitation. His original goal was to obtain a positive solution to Hilbert's second problem. At the time, theories of the natural numbers and real numbers similar to second-order arithmetic were known as "analysis", while theories of the natural numbers alone were known as "arithmetic". Gödel was not the only person working on the consistency problem. Ackermann had published a flawed consistency proof for analysis in 1925, in which he attempted to use the method of ε-substitution originally developed by Hilbert. Later that year, von Neumann was able to correct the proof for a system of arithmetic without any axioms of induction. By 1928, Ackermann had communicated a modified proof to Bernays; this modified proof led Hilbert to announce his belief in 1929 that the consistency of arithmetic had been demonstrated and that a consistency proof of analysis would likely soon follow. After the publication of the incompleteness theorems showed that Ackermann's modified proof must be erroneous, von Neumann produced a concrete example showing that its main technique was unsound.
In the course of his research, Gödel discovered that although a sentence which asserts its own falsehood leads to paradox, a sentence that asserts its own non-provability does not. In particular, Gödel was aware of the result now called Tarski's indefinability theorem, although he never published it. Gödel announced his first incompleteness theorem to Carnap, Feigel and Waismann on August 26, 1930; all four would attend the Second Conference on the Epistemology of the Exact Sciences, a key conference in Königsberg the following week. Announcement The 1930 Königsberg conference was a joint meeting of three academic societies, with many of the key logicians of the time in attendance. Carnap, Heyting, and von Neumann delivered one-hour addresses on the mathematical philosophies of logicism, intuitionism, and formalism, respectively . The conference also included Hilbert's retirement address, as he was leaving his position at the University of Göttingen. Hilbert used the speech to argue his belief that all mathematical problems can be solved. He ended his address by saying, This speech quickly became known as a summary of Hilbert's beliefs on mathematics (its final six words, "Wir müssen wissen. Wir werden wissen!", were used as Hilbert's epitaph in 1943). Although Gödel was likely in attendance for Hilbert's address, the two never met face to face . Gödel announced his first incompleteness theorem at a roundtable discussion session on the third day of the conference. The announcement drew little attention apart from that of von Neumann, who pulled Gödel aside for conversation. Later that year, working independently with knowledge of the first incompleteness theorem, von Neumann obtained a proof of the second incompleteness theorem, which he announced to Gödel in a letter dated November 20, 1930 . Gödel had independently obtained the second incompleteness theorem and included it in his submitted manuscript, which was received by Monatshefte für Mathematik on November 17, 1930. Gödel's paper was published in the Monatshefte in 1931 under the title "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I" ("On Formally Undecidable Propositions in Principia Mathematica and Related Systems I"). As the title implies, Gödel originally planned to publish a second part of the paper in the next volume of the Monatshefte; the prompt acceptance of the first paper was one reason he changed his plans . Generalization and acceptance Gödel gave a series of lectures on his theorems at Princeton in 1933–1934 to an audience that included Church, Kleene, and Rosser. By this time, Gödel had grasped that the key property his theorems required is that the system must be effective (at the time, the term "general recursive" was used). Rosser proved in 1936 that the hypothesis of ω-consistency, which was an integral part of Gödel's original proof, could be replaced by simple consistency, if the Gödel sentence was changed in an appropriate way. These developments left the incompleteness theorems in essentially their modern form. Gentzen published his consistency proof for first-order arithmetic in 1936. Hilbert accepted this proof as "finitary" although (as Gödel's theorem had already shown) it cannot be formalized within the system of arithmetic that is being proved consistent. The impact of the incompleteness theorems on Hilbert's program was quickly realized. 
Bernays included a full proof of the incompleteness theorems in the second volume of Grundlagen der Mathematik (1939), along with additional results of Ackermann on the ε-substitution method and Gentzen's consistency proof of arithmetic. This was the first full published proof of the second incompleteness theorem. Criticisms Finsler used a version of Richard's paradox to construct an expression that was false but unprovable in a particular, informal framework he had developed. Gödel was unaware of this paper when he proved the incompleteness theorems (Collected Works Vol. IV., p. 9). Finsler wrote to Gödel in 1931 to inform him about this paper, which Finsler felt had priority for an incompleteness theorem. Finsler's methods did not rely on formalized provability, and had only a superficial resemblance to Gödel's work . Gödel read the paper but found it deeply flawed, and his response to Finsler laid out concerns about the lack of formalization (). Finsler continued to argue for his philosophy of mathematics, which eschewed formalization, for the remainder of his career. Zermelo In September 1931, Ernst Zermelo wrote to Gödel to announce what he described as an "essential gap" in Gödel's argument (). In October, Gödel replied with a 10-page letter (, ), where he pointed out that Zermelo mistakenly assumed that the notion of truth in a system is definable in that system (which is not true in general by Tarski's undefinability theorem). But Zermelo did not relent and published his criticisms in print with "a rather scathing paragraph on his young competitor" (). Gödel decided that to pursue the matter further was pointless, and Carnap agreed (). Much of Zermelo's subsequent work was related to logics stronger than first-order logic, with which he hoped to show both the consistency and categoricity of mathematical theories. Wittgenstein Ludwig Wittgenstein wrote several passages about the incompleteness theorems that were published posthumously in his 1953 Remarks on the Foundations of Mathematics, in particular one section sometimes called the "notorious paragraph" where he seems to confuse the notions of "true" and "provable" in Russell's system. Gödel was a member of the Vienna Circle during the period in which Wittgenstein's early ideal language philosophy and Tractatus Logico-Philosophicus dominated the circle's thinking. There has been some controversy about whether Wittgenstein misunderstood the incompleteness theorem or just expressed himself unclearly. Writings in Gödel's Nachlass express the belief that Wittgenstein misread his ideas. Multiple commentators have read Wittgenstein as misunderstanding Gödel , although Juliet Floyd and , as well as have provided textual readings arguing that most commentary misunderstands Wittgenstein. On their release, Bernays, Dummett, and Kreisel wrote separate reviews on Wittgenstein's remarks, all of which were extremely negative . The unanimity of this criticism caused Wittgenstein's remarks on the incompleteness theorems to have little impact on the logic community. In 1972, Gödel stated: "Has Wittgenstein lost his mind? Does he mean it seriously? He intentionally utters trivially nonsensical statements" , and wrote to Karl Menger that Wittgenstein's comments demonstrate a misunderstanding of the incompleteness theorems writing: Since the publication of Wittgenstein's Nachlass in 2000, a series of papers in philosophy have sought to evaluate whether the original criticism of Wittgenstein's remarks was justified. 
argue that Wittgenstein had a more complete understanding of the incompleteness theorem than was previously assumed. They are particularly concerned with the interpretation of a Gödel sentence for an ω-inconsistent system as actually saying "I am not provable", since the system has no models in which the provability predicate corresponds to actual provability. argues that their interpretation of Wittgenstein is not historically justified, while argues against Floyd and Putnam's philosophical analysis of the provability predicate. explores the relationship between Wittgenstein's writing and theories of paraconsistent logic. See also Gödel machine Gödel's completeness theorem Gödel's speed-up theorem Gödel, Escher, Bach Löb's Theorem Minds, Machines and Gödel Münchhausen trilemma Non-standard model of arithmetic Proof theory Provability logic Quining Tarski's undefinability theorem Theory of everything#Gödel's incompleteness theorem Third Man Argument Typographical Number Theory References Citations Articles by Gödel Kurt Gödel, 1931, "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, I", Monatshefte für Mathematik und Physik, v. 38 n. 1, pp. 173–198. —, 1931, "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, I", in Solomon Feferman, ed., 1986. Kurt Gödel Collected works, Vol. I. Oxford University Press, pp. 144–195. . The original German with a facing English translation, preceded by an introductory note by Stephen Cole Kleene. —, 1951, "Some basic theorems on the foundations of mathematics and their implications", in Solomon Feferman, ed., 1995. Kurt Gödel Collected works, Vol. III, Oxford University Press, pp. 304–323. . Translations, during his lifetime, of Gödel's paper into English None of the following agree in all translated words and in typography. The typography is a serious matter, because Gödel expressly wished to emphasize "those metamathematical notions that had been defined in their usual sense before . . ." . Three translations exist. Of the first John Dawson states that: "The Meltzer translation was seriously deficient and received a devastating review in the Journal of Symbolic Logic; "Gödel also complained about Braithwaite's commentary . "Fortunately, the Meltzer translation was soon supplanted by a better one prepared by Elliott Mendelson for Martin Davis's anthology The Undecidable . . . he found the translation "not quite so good" as he had expected . . . [but because of time constraints he] agreed to its publication" (ibid). (In a footnote Dawson states that "he would regret his compliance, for the published volume was marred throughout by sloppy typography and numerous misprints" (ibid)). Dawson states that "The translation that Gödel favored was that by Jean van Heijenoort" (ibid). For the serious student another version exists as a set of lecture notes recorded by Stephen Kleene and J. B. Rosser "during lectures given by Gödel at to the Institute for Advanced Study during the spring of 1934" (cf commentary by and beginning on p. 41); this version is titled "On Undecidable Propositions of Formal Mathematical Systems". In their order of publication: B. Meltzer (translation) and R. B. Braithwaite (Introduction), 1962. On Formally Undecidable Propositions of Principia Mathematica and Related Systems, Dover Publications, New York (Dover edition 1992), (pbk.) This contains a useful translation of Gödel's German abbreviations on pp. 33–34. 
As noted above, typography, translation and commentary is suspect. Unfortunately, this translation was reprinted with all its suspect content by Stephen Hawking editor, 2005. God Created the Integers: The Mathematical Breakthroughs That Changed History, Running Press, Philadelphia, . Gödel's paper appears starting on p. 1097, with Hawking's commentary starting on p. 1089. Martin Davis editor, 1965. The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable problems and Computable Functions, Raven Press, New York, no ISBN. Gödel's paper begins on page 5, preceded by one page of commentary. Jean van Heijenoort editor, 1967, 3rd edition 1967. From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931, Harvard University Press, Cambridge Mass., (pbk). van Heijenoort did the translation. He states that "Professor Gödel approved the translation, which in many places was accommodated to his wishes." (p. 595). Gödel's paper begins on p. 595; van Heijenoort's commentary begins on p. 592. Martin Davis editor, 1965, ibid. "On Undecidable Propositions of Formal Mathematical Systems." A copy with Gödel's corrections of errata and Gödel's added notes begins on page 41, preceded by two pages of Davis's commentary. Until Davis included this in his volume this lecture existed only as mimeographed notes. Articles by others George Boolos, 1989, "A New Proof of the Gödel Incompleteness Theorem", Notices of the American Mathematical Society, v, 36, pp. 388–390 and p. 676, reprinted in Boolos, 1998, Logic, Logic, and Logic, Harvard Univ. Press. Bernd Buldt, 2014, "The Scope of Gödel's First Incompleteness Theorem", Logica Universalis, v. 8, pp. 499–552. Arthur Charlesworth, 1980, "A Proof of Godel's Theorem in Terms of Computer Programs", Mathematics Magazine, v. 54 n. 3, pp. 109–121. Martin Davis, 2006, "The Incompleteness Theorem", Notices of the AMS, v. 53 n. 4, pp. 414. Jean van Heijenoort, 1963, "Gödel's Theorem" in Edwards, Paul, ed., Encyclopedia of Philosophy, v. 3. Macmillan, pp. 348–57. Geoffrey Hellman, 1981, "How to Gödel a Frege-Russell: Gödel's Incompleteness Theorems and Logicism", Noûs, v. 15 n. 4, Special Issue on Philosophy of Mathematics, pp. 451–468. David Hilbert, 1900, "Mathematical Problems." English translation of a lecture delivered before the International Congress of Mathematicians at Paris, containing Hilbert's statement of his Second Problem. Martin Hirzel, 2000, "On formally undecidable propositions of Principia Mathematica and related systems I.." An English translation of Gödel's paper. Archived from the original. Sept. 16, 2004. Makoto Kikuchi and Kazayuki Tanaka, 1994, "On formalization of model-theoretic proofs of Gödel's theorems", Notre Dame Journal of Formal Logic, v. 5 n. 3, pp. 403–412. Stephen Cole Kleene, 1943, "Recursive predicates and quantifiers", reprinted from Transactions of the American Mathematical Society, v. 53 n. 1, pp. 41–73 in Martin Davis 1965, The Undecidable (loc. cit.) pp. 255–287. Panu Raatikainen, 2015, "Gödel's Incompleteness Theorems", Stanford Encyclopedia of Philosophy. Accessed April 3, 2015. Panu Raatikainen, 2005, "On the philosophical relevance of Gödel's incompleteness theorems", Revue Internationale de Philosophie 59 (4):513-534. John Barkley Rosser, 1936, "Extensions of some theorems of Gödel and Church", reprinted from the Journal of Symbolic Logic, v. 1 (1936) pp. 87–91, in Martin Davis 1965, The Undecidable (loc. cit.) pp. 230–235. 
—, 1939, "An Informal Exposition of proofs of Gödel's Theorem and Church's Theorem", Reprinted from the Journal of Symbolic Logic, v. 4 (1939) pp. 53–60, in Martin Davis 1965, The Undecidable (loc. cit.) pp. 223–230 C. Smoryński, 1982, "The incompleteness theorems", in Jon Barwise, ed., Handbook of Mathematical Logic, North-Holland, pp. 821–866. Dan E. Willard, 2001, "Self-Verifying Axiom Systems, the Incompleteness Theorem and Related Reflection Principles", Journal of Symbolic Logic, v. 66 n. 2, pp. 536–596. Books about the theorems Francesco Berto. There's Something about Gödel: The Complete Guide to the Incompleteness Theorem John Wiley and Sons. 2010. Norbert Domeisen, 1990. Logik der Antinomien. Bern: Peter Lang. 142 S. 1990. . . Torkel Franzén, 2005. Gödel's Theorem: An Incomplete Guide to its Use and Abuse. A.K. Peters. Douglas Hofstadter, 1979. Gödel, Escher, Bach: An Eternal Golden Braid. Vintage Books. . 1999 reprint: .  —, 2007. I Am a Strange Loop. Basic Books. . . Stanley Jaki, OSB, 2005. The drama of the quantities. Real View Books. Per Lindström, 1997. Aspects of Incompleteness, Lecture Notes in Logic v. 10. J.R. Lucas, FBA, 1970. The Freedom of the Will. Clarendon Press, Oxford, 1970. Ernest Nagel, James Roy Newman, Douglas Hofstadter, 2002 (1958). Gödel's Proof, revised ed. . Rudy Rucker, 1995 (1982). Infinity and the Mind: The Science and Philosophy of the Infinite. Princeton Univ. Press. Peter Smith, 2007. An Introduction to Gödel's Theorems. Cambridge University Press. N. Shankar, 1994. Metamathematics, Machines and Gödel's Proof, Volume 38 of Cambridge tracts in theoretical computer science. Raymond Smullyan, 1987. Forever Undecided - puzzles based on undecidability in formal systems —, 1991. Godel's Incompleteness Theorems. Oxford Univ. Press. —, 1994. Diagonalization and Self-Reference. Oxford Univ. Press. —, 2013. The Godelian Puzzle Book: Puzzles, Paradoxes and Proofs. Courier Corporation. . Hao Wang, 1997. A Logical Journey: From Gödel to Philosophy. MIT Press. Miscellaneous references Francesco Berto, 2009, "The Gödel Paradox and Wittgenstein's Reasons" Philosophia Mathematica (III) 17. Rebecca Goldstein, 2005, Incompleteness: the Proof and Paradox of Kurt Gödel, W. W. Norton & Company. John Harrison, 2009, "Handbook of Practical Logic and Automated Reasoning", Cambridge University Press, David Hilbert and Paul Bernays, Grundlagen der Mathematik, Springer-Verlag. John Hopcroft and Jeffrey Ullman 1979, Introduction to Automata Theory, Languages, and Computation, Addison-Wesley, James P. Jones, "Undecidable Diophantine Equations", Bulletin of the American Mathematical Society, v. 3 n. 2, 1980, pp. 859–862. Stephen Cole Kleene, 1967, Mathematical Logic. Reprinted by Dover, 2002. Russell O'Connor, 2005, "Essential Incompleteness of Arithmetic Verified by Coq", Lecture Notes in Computer Science v. 3603, pp. 245–260. Lawrence Paulson, 2013, "A machine-assisted proof of Gödel's incompleteness theorems for the theory of hereditarily finite sets", Review of Symbolic Logic, v. 7 n. 3, 484–498. Graham Priest, 1984, "Logic of Paradox Revisited", Journal of Philosophical Logic, v. 13,` n. 2, pp. 153–179. —, 2004, Wittgenstein's Remarks on Gödel's Theorem in Max Kölbel, ed., Wittgenstein's lasting significance, Psychology Press, pp. 207–227. —, 2006, In Contradiction: A Study of the Transconsistent, Oxford University Press, Hilary Putnam, 1960, Minds and Machines in Sidney Hook, ed., Dimensions of Mind: A Symposium. New York University Press. Reprinted in Anderson, A. 
R., ed., 1964. Minds and Machines. Prentice-Hall: 77. Wolfgang Rautenberg, 2010, A Concise Introduction to Mathematical Logic, 3rd. ed., Springer, Victor Rodych, 2003, "Misunderstanding Gödel: New Arguments about Wittgenstein and New Remarks by Wittgenstein", Dialectica v. 57 n. 3, pp. 279–313. Stewart Shapiro, 2002, "Incompleteness and Inconsistency", Mind, v. 111, pp 817–32. Alan Sokal and Jean Bricmont, 1999, Fashionable Nonsense: Postmodern Intellectuals' Abuse of Science, Picador. Joseph R. Shoenfield (1967), Mathematical Logic. Reprinted by A.K. Peters for the Association for Symbolic Logic, 2001. Jeremy Stangroom and Ophelia Benson, Why Truth Matters, Continuum. George Tourlakis, Lectures in Logic and Set Theory, Volume 1, Mathematical Logic, Cambridge University Press, 2003. Avi Wigderson, 2010, "The Gödel Phenomena in Mathematics: A Modern View", in Kurt Gödel and the Foundations of Mathematics: Horizons of Truth, Cambridge University Press. Hao Wang, 1996, A Logical Journey: From Gödel to Philosophy, The MIT Press, Cambridge MA, . External links . . Paraconsistent Logic § Arithmetic and Gödel's Theorem entry in the Stanford Encyclopedia of Philosophy. MacTutor biographies: Kurt Gödel. Gerhard Gentzen. What is Mathematics:Gödel's Theorem and Around by Karlis Podnieks. An online free book. World's shortest explanation of Gödel's theorem using a printing machine as an example. October 2011 RadioLab episode about/including Gödel's Incompleteness theorem How Gödel’s Proof Works by Natalie Wolchover, Quanta Magazine, July 14, 2020. and Gödel's incompleteness theorems formalised in Isabelle/HOL ` Theorems in the foundations of mathematics Mathematical logic Model theory Proof theory Epistemology Metatheorems Incompleteness theorems
42557451
https://en.wikipedia.org/wiki/Amlogic
Amlogic
Amlogic Inc. (sometimes stylized AMLogic) is a fabless semiconductor company that was founded on March 14, 1995 in Santa Clara, California and is predominantly focused on designing and selling system on a chip integrated circuits. Like most fabless companies in the industry, the company outsources the actual manufacturing of its chips to third-party independent chip manufacturers such as TSMC. Its main target applications as of 2021 are entertainment devices such as Android TV-based devices and IPTV/OTT set-top boxes, media dongles, smart TVs and tablets. It has offices in Shanghai, Shenzhen, Beijing, Xi'an, Chengdu, Hefei, Nanjing, Qingdao, Taipei, Hong Kong, Seoul, Mumbai, London, Munich, Indianapolis, Milan and Santa Clara, California. It developed Video CD player chips and later chips for DVD players and other applications involving MPEG2 decoding. Amlogic was involved in the creation of the HVD (High-Definition Versatile Disc) standard promoted in China as an alternative to DVD video disks used in DVD players. The company was a player in the developing Chinese tablet processor market since 2010–2013. Amlogic is an ARM licensee and uses the ARM architecture in the majority of its products . According to a joint press release with ARM in 2013, it was the first company to use ARM's Mali-450 GPU in a configuration with six cores or more. Products Tablet computer SoC AML8726 family Amlogic AML8726-M Legacy single core ARM Cortex A9-based SoC with ARM Mali-400 GPU released in 2011, with a 16-bit DRAM interface and manufactured on a 65 nm process. Amlogic AML8726-M3 Legacy single-core ARM Cortex A9-based SoC with ARM Mali-400 GPU, released in 2012, with a 16-bit DRAM interface and manufactured on a 45 nm process. Amlogic MX (also known as AML8726-M6) Dual-core ARM Cortex A9-based SoC with ARM Mali-400 MP2 GPU, released in 2012 on a 40 nm process. M8 family (announced 2013) Amlogic M802 (originally called AML8726-M8) Quad-core ARM Cortex A9-based SoC with ARM Mali-450 MP6 GPU running at 600 MHz. Supports 4 GB DRAM and 4K2K display output. 64-bit DRAM interface, manufactured on a 28 nm HPM process. Amlogic M801 Similar to M802 but with DRAM limited to 2 GB and display output limited to 1080p. Amlogic M805 Quad-core ARM Cortex-A5-based SoC with Mali-450 MP2 GPU in a reduced-size 12 mm x 12 mm LFBGA package. The M801/802 uses a new version of ARM's Cortex-A9 core (A9r4) that theoretically allows for higher clock speeds and lower power consumption compared to older versions of the Cortex A9 core such as the A9r3 used in Rockchip's RK3188. Originally scheduled to be in production as early as the middle of 2013 in the form of the AML8726-M8, , only one tablet (Onda V975M) has been announced using a chip from the M8 family. A few manufacturers have shown Android TV boxes using the M802 (Shenzhen Tomato Technology, Tronsmart, Eny Technology and GeniaTech). It has been noted that some devices using the M802 may run hot and use a heatsink for cooling. This is common among other popular OTT set top boxes such as the Amazon FireTV which uses a metal shell to disperse heat, much like a heatsink would. TV SoCs Media player SoCs (S8 family) Amlogic also offers SoC products (S802, S805, and S812) specifically targeting Android TV boxes and OTT set-top boxes (which are variations of similar SoCs in the M series targeting tablets). Amlogic S802 Similar to M802, quad-core ARM Cortex-A9-based SoC with ARM Mali-450 MP6 GPU. 
Amlogic S805 A low cost SoC similar to M805 with quad-core ARM Cortex A5-based SoC with Mali-450 MP2 GPU running at 500 MHz, with hardware support for HEVC/H.265 decoding up to 1080p. Amlogic S812 Quad-core ARM Cortex-A9-based SoC with ARM Mali-450 MP6 GPU running at 600 MHz with hardware support for HEVC/H.265 decoding up to 4K. S8**-H models include Dolby/DTS licenses. Media player SoCs (S9 family) First 64-bit Amlogic Products lineup. Amlogic S905 Quad-core ARM Cortex-A53-based SoC with a Mali-450 MP3 GPU running at 750 MHz, supports hardware decoding up to 4K@60fps for multiple formats including H.265 10-bit, H.264, AVS+. Amlogic S905X Similar to S905 except it supports up to 4K@60fps VP9 profile-2 hardware decoding, HDR, HDMI 2.0a, and has a built-in DAC. Amlogic S905L Similar to S905X except it supports HDMI 2.0b but lacks VP9 decoding, camera interface and TS inputs. Amlogic S905D Similar to S905X except it supports DVP (Digital Video Port) interface. Amlogic S905W A low cost variant of the S905X, it supports video decoding only up to 4K@30fps. Amlogic S905Z Similar to S905X (VP9 hardware decoding, HDR, 4K@60fps ...), but no more details known about it, used in the third generation Amazon Fire TV and the Fire TV Cube. Amlogic S912 Octa-core ARM Cortex-A53-based SoC (Big.LITTLE configuration 4x1.5 GHz and 4x1.0 GHz) with a Mali-T820 MP3 GPU running at 600 MHz. S9**(*)-H models include Dolby/DTS licenses. Amlogic S805X A low cost version of S905X SoC with 1.2 GHz quad-core ARM Cortex-A53-based SoC with a Mali-450 MP3 GPU, with hardware support for HEVC/H.265/VP9 decoding up to 1080p. Devices based on them are already on the market running Android 5.1 to 7.1. They are usually paired with 1 GB, 2 GB or 3 GB RAM and 8 GB to 64 GB flash memory, and they have features such as Gigabit LAN and dual-band 2.4G/5G 802.11ac Wi-Fi. S905X was scheduled to be released in Q1 2016 while S905D and S912 were scheduled for Q2 2016. All three of the SoCs have Android Marshmallow and Buildroot Linux SDKs released. Media player SoCs (S9 family gen 2) At IBC 2018 Amlogic showed to the public for the first time its second-generation SoCs for media players on reference boards running Android TV 9. Amlogic S905X2 Quad-core ARM Cortex-A53-based SoC with a Mali-G31 MP2 "Dvalin" GPU; it adds to the first-generation SoCs support for HDMI 2.1 at 4K60 and for the HDR formats Dolby Vision and TCH Prime. Amlogic S905Y2 Similar to the S905X2, but built for smaller HDMI dongles; because of that it loses some features such as Ethernet, the DVP (Digital Video Port) interface and CVBS (composite video). Amlogic S922X Quad-core ARM Cortex-A73+Dual-Core ARM Cortex-A53-based SoC with a Mali-G52 MP4 GPU. Media player SoCs (S9 family gen 3) Amlogic S905X3 quad core Cortex-A55 SoC. The S905X3 has an optional Neural Network Accelerator with 1.2 TOPS NN inference accelerator supporting TensorFlow and Caffe. Arm Mali G31 MP2 GPU with support for OpenGL ES 3.2, Vulkan 1.0 and OpenCL 2.0. Amlogic S922D Quad-core ARM Cortex-A73+Dual-Core ARM Cortex-A53-based SoC with a Mali-G52 MP4 GPU. The S922D has a Neural Network Accelerator with 2.5 TOPS (16-bit?) and 5.0 TOPS (8-bit?) NN inference accelerator supporting TensorFlow and Caffe. Media player SoCs (S8 & S9 family gen 4) According to a leaked roadmap, Amlogic was to launch its next generation of media processors starting from Q4 2019. The main new feature is support of AV1 video decoding in hardware.
Three new SoCs are in development: Amlogic S905X4 (Q4 2019): Mid-range SoC pin-compatible with S905X2 and -X3 processors. Adds 4k 120fps AV1 decoding. Amlogic S805X2 (Q2 2020): Low-end SoC with at least 1080p AV1 decoding and unknown CPU and GPU. Amlogic S908X (Q3 2020): High-end SoC with 8K 60fps AV1 and AVS3 decoding, HDMI 2.1 and unknown CPU and GPU. Smart speakers and audio applications SoCs In Q3 2017 Amlogic released new SoCs targeting smart speakers and audio applications. Amlogic A111 Quad-core ARM Cortex-A5-based SoC,2-channel I2S input and output, TDM/PCM input and output, up to 8 channels, S/PDIF output, ethernet 100M and RGB888 output Amlogic A112 Quad-core ARM Cortex-A53-based SoC, 8-channel I2S and S/PDIF input and output, TDM/PCM input and output, up to 8 channels, 2-channel PDM input, ethernet 1Gig and LVDS/MIPI-DSI panel output Amlogic A113 Similar to A112 except it support 16 I2S channels, 8 PDM channels. Amlogic A311X Support 2ch sensor input maximum 8M pixel ISP. Neural Network Accelerator up to 5 Tops. Quad core ARM Cortex-A73 and dual core ARM Cortex-A53 high performance CPU architecture. Low latency 1080p H.265/H.264 60fps encoder. USB3.0/PCIE High speed data interface. Power management auxiliary processor. Amlogic A311D Hexa-core SoC featuring 4x ARM Cortex-A73 cores and 2 ARM Cortex-A53 cores. The GPU would be a 4-core Mali-G52 ARM with support for Vulkan 1.1, OpenGL 3.2 and OpenCL 2.2. It also has a Neural Processing Unit (NPU) for AI inference. The VPU supports 4K2K@60 Hz with CEC, HDR and 4K decoding h.265, VP9 and AVS2. Smart Vision series SoCs Amlogic C308X quad core Cortex-A55 SoC. Dual-core HiFi-4 Acoustic/Audio DSP. It also has a Neural Processing Unit (NPU) for AI inference. The VPU supports 4K@30fps + 1080P@30fps . Amlogic C305X Neural Processing Unit (NPU) for AI inference. Dual core Cortex-A35 SoC. The VPU supports 5M@30fps + 1080P@30fps . Wireless Connectivity series products Amlogic W155S1 Amlogic W155S1 is an integrated Wi-Fi and Bluetooth combo chip. It has a host interface of SDIO3.0 for Wi-Fi and UART HS for Bluetooth. Wi-Fi is designed to be fully compliant with IEEE 802.11ac standard and operated at both 2.4GHz and 5GHz band. It can support up-to 80MHz bandwidth and PHY data rate of 433Mbps. Located in the same die is the Bluetooth system that can support both Classic BDR/EDR and BLE mode. Automotive Electronics series products Amlogic V901D 64-bit quad core ARM Cortex-A55 CPU, ARM Mali-G31 MP2 GPU processor, Neural Network Processor up to 1 Tops, HIFI 4 DSP for ultra-low power far-field voice, Automotive AEC-Q100 grade 3, HW UHD 4K AV1/H.265/VP9 10-bit video decoder, DolbyVision, HDR10/10+, HLG, Prime HDR, HDMI 2.1 receivers with dynamic HDR, ALLM, eARC and HDCP 1.4/2.2/2.3 support, PDM/I2S/TDM interface for far-field voice. Other products The Amlogic MX, S802 and S805 SoCs are also targeted at media dongles. Amlogic also offers SoCs targeting smart TVs and projectors, including M6L, M6C, M6D, M948,T826, T828, T866, T868, T962, T966 and T968. Comparison table Markets and sales Amlogic does not publish sales or financial information on its website. The company is listed as a client of several venture capital firms. In the market for SoCs targeting Chinese tablet manufacturers and manufacturers of Android media players, TV boxes and media dongles, it faces competition primarily from Rockchip, Allwinner Technology, Actions Semiconductor, MediaTek, Intel and Realtek. 
Amlogic was reported to be fourth largest application processor supplier for Chinese tablets in 2012. For Q2 2014, Amlogic was reported to be the fifth largest supplier, after Rockchip, MediaTek, Allwinner and Actions Semiconductor. Chinese SoC suppliers that do not have cellular baseband technology are at a disadvantage compared to companies such as MediaTek that also supply the smartphone market as white-box tablet makers increasingly add phone functionality to their products. In 2011, the AML8726-M was selected as one of the "hottest" processors by EE Times China, while in 2012, the AML8726-MX won EE Times-China's Processor of the Year award. Open source commitment Amlogic maintains a website dedicated to providing source code for the Linux kernel and Android SDK supporting Amlogic chips and reference designs. The Linux kernel source code is freely available, and has recently (as of April 2014) been updated to support certain chips in the M8 family as well as the older MX family, with Android versions up to 4.4 (KitKat) being supported (based on Linux kernel version 3.10.x). However, the Android SDK requires a NDA and is only available to business partners. The source code includes Linux kernel 3.10.10, U-Boot, Realtek and Broadcom Wi-Fi drivers, NAND drivers, "TVIN" drivers, and kernel space GPU drivers for the Mali-400/450 GPU. XBMC/Kodi Amlogic S805 / M805 / S806 / M806 / S812 Android video decoding compatibility list: http://kodi.wiki/view/Android_hardware#Compatible_chipsets However an effort to push Linux upstream support for the GX ARM64 lineup is ongoing on http://linux-meson.com/. Currently only the AML8726MX, S802, S805 and S905 SoC are booting headless on Linux 4.8. But S905X, S905D and S912 Headless support is expected for Linux 4.10. See also Allwinner Technology Actions Semiconductor Leadcore Technology MediaTek Nufront Rockchip UNISOC References 1995 establishments in California Technology companies established in 1995 ARM architecture Embedded microprocessors System on a chip Fabless semiconductor companies Semiconductor companies of the United States Companies based in Santa Clara, California Technology companies based in the San Francisco Bay Area
28015549
https://en.wikipedia.org/wiki/App%20Inventor%20for%20Android
App Inventor for Android
MIT App Inventor is a web application integrated development environment originally provided by Google, and now maintained by the Massachusetts Institute of Technology (MIT). It allows newcomers to computer programming to create application software (apps) for two operating systems (OS): Android and iOS, which is in final beta testing. It is free and open-source software released under dual licensing: a Creative Commons Attribution ShareAlike 3.0 Unported license, and an Apache License 2.0 for the source code. It uses a graphical user interface (GUI) very similar to the programming languages Scratch and StarLogo, which allows users to drag and drop visual objects to create an application that can run on Android devices, while an App Inventor Companion (the program that allows apps to be run and debugged on a device) for iOS devices is still under development. In creating App Inventor, Google drew upon significant prior research in educational computing, and work done within Google on online development environments. App Inventor and the other projects are based on and informed by constructionist learning theories, which emphasize that programming can be a vehicle for engaging powerful ideas through active learning. As such, it is part of an ongoing movement in computers and education that began with the work of Seymour Papert and the MIT Logo Group in the 1960s, and has also manifested itself with Mitchel Resnick's work on Lego Mindstorms and StarLogo. App Inventor also supports the use of cloud data via an experimental Firebase Realtime Database component. History The application was made available through request on July 12, 2010, and released publicly on December 15, 2010. The App Inventor team was led by Hal Abelson and Mark Friedman. In the second half of 2011, Google released the source code, terminated its server, and provided funding to create The MIT Center for Mobile Learning, led by App Inventor creator Hal Abelson and fellow MIT professors Eric Klopfer and Mitchel Resnick. The MIT version was launched in March 2012. On December 6, 2013 (the start of the Hour of Code), MIT released App Inventor 2, renaming the original version "App Inventor Classic". Major differences are: The blocks editor in the original version ran in a separate Java process, using the Open Blocks Java library for creating visual blocks programming languages and programming. Open Blocks is distributed by MIT's Scheller Teacher Education Program (STEP) and is derived from master's thesis research by Ricarose Roque. Professor Eric Klopfer and Daniel Wendel of the Scheller Program supported the distribution of Open Blocks under an MIT License. Open Blocks visual programming is closely related to StarLogo TNG, a project of STEP, and Scratch, a project of the MIT Media Lab's Lifelong Kindergarten Group led by Mitchel Resnick. App Inventor 2 replaced Open Blocks with Blockly, a blocks editor that runs within a web browser. The MIT AI2 Companion app enables real-time debugging on connected devices via Wi-Fi or Universal Serial Bus (USB). In addition to this, the user may use an "on computer" emulator available for Windows, MacOS, and Linux.
See also Android software development Logo (programming language) Lego Mindstorms HyperNext Windows Phone App Studio References External links Tutorials and Sample Apps 2010 software App Inventor Integrated development environments Massachusetts Institute of Technology software Mobile software programming tools Visual programming languages Programming languages created in 2010
13578949
https://en.wikipedia.org/wiki/Physical%20unclonable%20function
Physical unclonable function
A physical unclonable function (sometimes also called physically unclonable function, which refers to a weaker security metric than a physical unclonable function), or PUF, is a physical object that for a given input and conditions (challenge), provides a physically defined "digital fingerprint" output (response) that serves as a unique identifier, most often for a semiconductor device such as a microprocessor. PUFs are most often based on unique physical variations which occur naturally during semiconductor manufacturing. A PUF is a physical entity embodied in a physical structure. Today, PUFs are usually implemented in integrated circuits and are typically used in applications with high security requirements, more specifically cryptography. History Early references about systems that exploit the physical properties of disordered systems for authentication purposes date back to Bauder in 1983 and Simmons in 1984. Naccache and Frémanteau provided an authentication scheme in 1992 for memory cards. The terms POWF (physical one-way function) and PUF (physical unclonable function) were coined in 2001 and 2002, the latter publication describing the first integrated PUF where, unlike PUFs based on optics, the measurement circuitry and the PUF are integrated onto the same electrical circuit (and fabricated on silicon). Starting in 2010, PUF gained attention in the smartcard market as a promising way to provide "silicon fingerprints", creating cryptographic keys that are unique to individual smartcards. PUFs are now established as a secure alternative to battery-backed storage of secret keys in commercial FPGAs, such as the Xilinx Zynq Ultrascale+, and Altera Stratix 10. Concept PUFs depend on the uniqueness of their physical microstructure. This microstructure depends on random physical factors introduced during manufacturing. These factors are unpredictable and uncontrollable, which makes it virtually impossible to duplicate or clone the structure. Rather than embodying a single cryptographic key, PUFs implement challenge–response authentication to evaluate this microstructure. When a physical stimulus is applied to the structure, it reacts in an unpredictable (but repeatable) way due to the complex interaction of the stimulus with the physical microstructure of the device. This exact microstructure depends on physical factors introduced during manufacture which are unpredictable (like a fair coin). The applied stimulus is called the challenge, and the reaction of the PUF is called the response. A specific challenge and its corresponding response together form a challenge–response pair or CRP. The device's identity is established by the properties of the microstructure itself. As this structure is not directly revealed by the challenge–response mechanism, such a device is resistant to spoofing attacks. Using a fuzzy extractor or the fuzzy commitment scheme that are provably suboptimal in terms of storage and privacy leakage amount or using nested polar codes that can be made asymptotically optimal, one can extract a unique strong cryptographic key from the physical microstructure. The same unique key is reconstructed every time the PUF is evaluated. The challenge–response mechanism is then implemented using cryptography. PUFs can be implemented with a very small hardware investment compared to other cryptographic primitives that provide unpredictable input/output behaviour such as pseudo-random functions. In some cases PUFs can even be built from existing hardware with the right properties. 
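As an illustration of the challenge–response usage described above, the following sketch simulates enrollment and authentication against a table of CRPs. The SimulatedPUF class is only a software stand-in (a keyed hash seeded with random bytes) for what would, in a real device, be a measurement of the physical microstructure; all names and sizes are illustrative.

```python
import hashlib
import os
import random

class SimulatedPUF:
    """Software stand-in for a hardware PUF: a fixed random value plays the role
    of uncontrollable manufacturing variation. A real PUF derives its responses
    from physical measurements rather than from a stored secret."""
    def __init__(self):
        self._variation = os.urandom(16)              # stands in for process variation
    def response(self, challenge: bytes) -> bytes:
        return hashlib.sha256(self._variation + challenge).digest()[:8]

# Enrollment: the verifier records a table of challenge-response pairs (CRPs).
device = SimulatedPUF()
crp_table = {}
for _ in range(100):
    c = os.urandom(8)
    crp_table[c] = device.response(c)

# Authentication: pick an unused challenge and compare the device's answer.
challenge, expected = random.choice(list(crp_table.items()))
assert device.response(challenge) == expected         # the genuine device answers correctly
del crp_table[challenge]                               # each CRP is used only once
```

In a real deployment the verifier must also tolerate a few flipped response bits between evaluations, which is where the error-correction techniques discussed below come in.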
Unclonability means that each PUF device has a unique and unpredictable way of mapping challenges to responses, even if it was manufactured with the same process as a similar device, and it is infeasible to construct a PUF with the same challenge–response behavior as another given PUF because exact control over the manufacturing process is infeasible. Mathematical unclonability means that it should be very hard to compute an unknown response given the other CRPs or some of the properties of the random components from a PUF. This is because a response is created by a complex interaction of the challenge with many or all of the random components. In other words, given the design of the PUF system, without knowing all of the physical properties of the random components, the CRPs are highly unpredictable. The combination of physical and mathematical unclonability renders a PUF truly unclonable. Note that a PUF is "unclonable" using the same physical implementation, but once a PUF key is extracted, there's generally no problem to clone the key – the output of the PUF – using other means. Because of these properties PUFs can be used as a unique and untamperable device identifier. PUFs can also be used for secure key generation and storage as well as for a source of randomness. Types Over 40 types of PUF have been suggested. These range from PUFs that evaluate an intrinsic element of a pre-existing integrated electronic system to concepts that involve explicitly introducing random particle distributions to the surface of physical objects for authentication. All PUFs are subject to environmental variations such as temperature, supply voltage and electromagnetic interference, which can affect their performance. Therefore, rather than just being random, the real power of a PUF is its ability to be different between devices, but simultaneously to be the same under different environmental conditions on the same device. Error correction In many applications it is important that the output is stable. If the PUF is used for a key in cryptographic algorithms it is necessary that error correction be done to correct any errors caused by the underlying physical processes and reconstruct exactly the same key each time under all operating conditions. In principle there are two basic concepts: Pre-Processing and Post-Processing Error Correction. Strategies have been developed which lead SRAM PUF to become more reliable over time without degrading the other PUF quality measures such as security and efficiency. Research at Carnegie Mellon University into various PUF implementations found that some error reduction techniques reduced errors in PUF response in a range of ~70 percent to ~100 percent. Research at the University of Massachusetts Amherst to improve the reliability of SRAM PUF-generated keys posited an error correction technique to reduce the error rate. Joint reliability–secrecy coding methods based on transform coding are used to obtain significantly higher reliabilities for each bit generated from a PUF such that low-complexity error-correcting codes such as BCH codes suffice to satisfy a block error probability constraint of 1 bit errors out of 1 billion bits. Nested polar codes are used for vector quantisation and error correction jointly. Their performance is asymptotically optimal in terms of, for a given blocklength, the maximum number of secret bits generated, minimum amount of private information leaked about the PUF outputs, and minimum storage required. 
The fuzzy commitment scheme and fuzzy extractors are shown to be suboptimal in terms of the minimum storage. Availability PUF technology can be licensed from several companies including eMemory, or its subsidiary, PUFsecurity, Enthentica, ICTK, Intrinsic ID, Invia, QuantumTrace and Verayo. PUF technology has been implemented in several hardware platforms including Microsemi SmartFusion2, NXP SmartMX2, Coherent Logix HyperX, InsideSecure MicroXsafe, Altera Stratix 10, Redpine Signals WyzBee and Xilinx Zynq Ultrascale+. Vulnerabilities In 2011, university research showed that delay-based PUF implementations are vulnerable to side-channel attacks, and recommended that countermeasures be employed in the design to prevent this type of attack. Also, improper implementation of a PUF could introduce "backdoors" to an otherwise secure system. In June 2012, Dominik Merli, a scientist at the Fraunhofer Research Institution for Applied and Integrated Security (AISEC), further claimed that PUFs introduce more entry points for hacking into a cryptographic system and that further investigation into the vulnerabilities of PUFs is required before PUFs can be used in practical security-related applications. The presented attacks are all on PUFs implemented in insecure systems, such as FPGAs or static RAM (SRAM). It is also important to ensure that the environment is suitable for the needed security level, as otherwise attacks taking advantage of temperature and other variations may be possible. In 2015, some studies claimed that it is possible to attack certain kinds of PUFs with low-cost equipment in a matter of milliseconds. A team at Ruhr-Universität Bochum, Germany, demonstrated a method of creating a model of XOR Arbiter PUFs and thus being able to predict their response to any kind of challenge. Their method requires only four CRPs, which even on resource-constrained devices should take no more than about 200 ms to produce. Using this method and a $25 device or an NFC-enabled smartphone, the team was able to clone PUF-based RFID cards while they were stored in a user's wallet in a back pocket. Provable machine learning attacks The attacks mentioned above range from invasive to non-invasive attacks. One of the most widely studied classes of non-invasive attack is the machine learning (ML) attack. Since the introduction of PUFs, it has been questioned whether these primitives are vulnerable to this type of attack. In the absence of thorough analysis and mathematical proofs of the security of PUFs, ad hoc attacks against PUFs have been introduced in the literature. Consequently, the countermeasures presented to cope with these attacks are of limited effectiveness. In line with these efforts, it has been asked whether PUFs can be designed as circuits that are provably hard to break. In response, a mathematical framework has been suggested, in which provable ML algorithms against several known families of PUFs have been introduced. Along with this provable ML framework, property testing algorithms have been reintroduced in the hardware security community, and made publicly accessible, as a means of assessing the security of PUFs against ML attacks. These algorithms trace their roots back to well-established fields of research, namely property testing, machine learning theory, and Boolean analysis. ML attacks can also be applied to PUFs because the majority of pre- and post-processing methods applied to date ignore the effect of correlations between PUF-circuit outputs. 
For instance, obtaining one bit by comparing two ring oscillator outputs is one method of decreasing the correlation. However, this method does not remove all correlations. Therefore, classic transforms from the signal-processing literature are applied to raw PUF-circuit outputs to decorrelate them before quantizing the outputs in the transform domain to generate bit sequences. Such decorrelation methods can help to overcome correlation-based information leakage about the PUF outputs even if the ambient temperature and supply voltage change. Optical PUFs Optical PUFs rely on a random optical multiple-scattering medium, which serves as a token. Optical PUFs offer a promising approach to the development of entity authentication schemes that are robust against many of the aforementioned attacks. However, their security against emulation attacks can be ensured only in the case of quantum readout (see the links below), or when the database of challenge–response pairs is somehow encrypted. See also Hardware Trojan Quantum Readout of PUFs Random number generation Defense strategy (computing) References External links "Physical Unclonable Functions and Applications", by Srini Devadas and others, MIT Ultra-low-cost true randomness AND physical fingerprinting Cryptographic primitives Applications of randomness
8318184
https://en.wikipedia.org/wiki/ImageMagick
ImageMagick
ImageMagick is a free and open-source cross-platform software suite for displaying, creating, converting, modifying, and editing raster images. Created in 1987 by John Cristy, it can read and write over 200 image file formats. It and its components are widely used in open-source applications. History ImageMagick was created in 1987 by John Cristy when working at DuPont, to convert 24-bit images (16 million colors) to 8-bit images (256 colors), so they could be displayed on most screens at the time. It was freely released in 1990 when DuPont agreed to transfer copyright to ImageMagick Studio LLC, still currently the project maintainer organization. In May 2016, it was reported that ImageMagick had a vulnerability through which an attacker can execute arbitrary code on servers that use the application to edit user-uploaded images. Security experts including CloudFlare researchers observed actual use of the vulnerability in active hacking attempts. The security flaw was due to ImageMagick calling backend tools without first properly checking to ensure path and file names are free of improper shell commands. The vulnerability did not affect ImageMagick distributions that included a properly configured security policy. Features and capabilities The software mainly consists of a number of command-line interface utilities for manipulating images. ImageMagick does not have a robust graphical user interface to edit images as do Adobe Photoshop and GIMP, but does include – for Unix-like operating systems – a basic native X Window GUI (called IMDisplay) for rendering and manipulating images and API libraries for many programming languages. The program uses magic numbers to identify image file formats. A number of programs, such as Drupal, MediaWiki, phpBB, and vBulletin, can use ImageMagick to create image thumbnails if installed. ImageMagick is also used by other programs, such as LyX, for converting images. ImageMagick has a fully integrated Perl binding called PerlMagick, as well as many others: G2F (Ada), MagickCore (C), MagickWand (C), ChMagick (Ch), ImageMagickObject (COM+), Magick++ (C++), JMagick (Java), L-Magick (Lisp), NMagick (Neko/Haxe), MagickNet (.NET), PascalMagick (Pascal), MagickWand for PHP (PHP), IMagick (PHP), PythonMagick (Python), RMagick (Ruby), or TclMagick (Tcl/TK). File format conversion One of the basic and thoroughly-implemented features of ImageMagick is its ability to efficiently and accurately convert images between different file formats (it uses the command convert to achieve this). Color quantization The number of colors in an image can be reduced to an arbitrary number and this is done by weighing the most prominent color values present among the pixels of the image. A related capability is the posterization artistic effect, which also reduces the number of colors represented in an image. The difference between this and standard color quantization is that while in standard quantization the final palette is selected based upon a weighting of the prominence of existing colors in the image, posterization creates a palette of colors smoothly distributed across the spectrum represented in the image. Whereas with standard color quantization all of the final color values are ones that were in the original image, the color values in a posterized image may not have been present in the original image but are in between the original color values. 
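The conversion and color-reduction features described above are typically driven from the command line or from scripts. A minimal illustrative sketch in Python follows; the file names are placeholders, and it assumes ImageMagick's convert tool is installed and on the system path.

import subprocess

# Convert a PNG to JPEG (format conversion).
subprocess.run(["convert", "input.png", "output.jpg"], check=True)

# Reduce the image to 16 colors (color quantization).
subprocess.run(["convert", "input.png", "-colors", "16", "reduced.png"], check=True)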
Dithering A fine control is provided for the dithering that occurs during color and shading alterations, including the ability to generate halftone dithering. Liquid rescaling In 2008, support for liquid rescaling was added. This feature allows, for example, rescaling 4:3 images into 16:9 images without distorting the image. Artistic effects ImageMagick includes a variety of filters and features intended to create artistic effects: Charcoal sketch transform Posterization OpenCL ImageMagick can use OpenCL to use an accelerated graphics card (GPU) for processing. Deep color The Q8 version supports up-to 8 bits-per-pixel component (8-bit grayscale, 24- or 32-bit RGB color). The Q16 version supports up-to 16 bits-per-pixel component (16-bit grayscale, up-to 48- or 64-bit RGB color). Other Below are some other features of ImageMagick: Format conversion: convert an image from one format to another (e.g. PNG to JPEG). Transform: resize, rotate, crop, flip or trim an image. (Applies these without generation loss on JPEG files, where possible.) Transparency: render portions of an image invisible. Draw: add shapes or text to an image. Decorate: add a border or frame to an image. Special effects: blur, sharpen, threshold, or tint an image. Animation: assemble a GIF animation file from a sequence of images. Text and comments: insert descriptive or artistic text in an image. Image identification: describe the format and attributes of an image. Composite: overlap one image over another. Montage: juxtapose image thumbnails on an image canvas. Generalized pixel distortion: correct for, or induce image distortions including perspective. Morphology of shapes: extract features, describe shapes and recognize patterns in images. Motion picture support: read and write the common image formats used in digital film work. Image calculator: apply a mathematical expression to an image or image channels. Discrete Fourier transform: implements forward and inverse DFT. Color management: accurate color management with color profiles or in lieu of – built-in gamma compression or expansion as demanded by the colorspace. High-dynamic-range images: accurately represent the wide range of intensity levels found in real scenes ranging from the brightest direct sunlight to the deepest darkest shadows. Encipher or decipher an image: convert ordinary images into unintelligible gibberish and back again. Virtual pixel support: convenient access to pixels outside the image region. Large image support: read, process, or write mega-, giga-, or tera-pixel image sizes. Threads of execution support: ImageMagick is thread safe and most internal algorithms execute in parallel to take advantage of speed-ups offered by multi-core processor chips. Heterogeneous distributed processing: certain algorithms are OpenCL-enabled to take advantage of speed-ups offered by executing in concert across heterogeneous platforms consisting of CPUs, GPUs, and other processors. Distributed pixel cache: offload intermediate pixel storage to one or more remote servers. ImageMagick on the iPhone: convert, edit, or compose images on an iOS computing device such as the iPhone or iPad. Distribution ImageMagick is cross-platform, and runs on Microsoft Windows and Unix-like systems including Linux, macOS, iOS, Android, Solaris, Haiku and FreeBSD. The project's source code can be compiled for other systems, including AmigaOS 4.0 and MorphOS. It has been run under IRIX. 
Related software GraphicsMagick is a fork of ImageMagick 5.5.2 made in 2002, emphasizing the cross-release stability of the programming API and command-line options. GraphicsMagick emerged as a result of irreconcilable differences in the developers' group. See also DevIL GD Graphics Library Netpbm References Further reading External links ImageMagick Security Policy – best practices that strongly encourage configuring a security policy suited to the local environment IM Examples – Examples of CLI Usage – provides many small examples demonstrating its vast range of capabilities Fred's ImageMagick Scripts – provides a plethora of shell scripts using ImageMagick to do more complex tasks How to automate PDF structural testing using ImageMagick – demonstrates the convert, compare and collate features of ImageMagick Critical ImageMagick vulnerability – ImageMagick suffers from a vulnerability that allows malformed images to force a Web server to execute code 1990 software Command-line software Free graphics software Free raster graphics editors Free software programmed in C Graphics libraries Graphics software IRIX software Java platform software Screenshot software
298512
https://en.wikipedia.org/wiki/Blaster%20%28computer%20worm%29
Blaster (computer worm)
Blaster (also known as Lovsan, Lovesan, or MSBlast) was a computer worm that spread on computers running operating systems Windows XP and Windows 2000 during August 2003. The worm was first noticed and started spreading on August 11, 2003. The rate that it spread increased until the number of infections peaked on August 13, 2003. Once a network (such as a company or university) was infected, it spread more quickly within the network because firewalls typically did not prevent internal machines from using a certain port. Filtering by ISPs and widespread publicity about the worm curbed the spread of Blaster. In September 2003, Jeffrey Lee Parson, an 18-year-old from Hopkins, Minnesota, was indicted for creating the B variant of the Blaster worm; he admitted responsibility and was sentenced to an 18-month prison term in January 2005. The author of the original A variant remains unknown. Creation and effects According to court papers, the original Blaster was created after security researchers from the Chinese group Xfocus reverse engineered the original Microsoft patch that allowed for execution of the attack. The worm spreads by exploiting a buffer overflow discovered by the Polish security research group Last Stage of Delirium in the DCOM RPC service on the affected operating systems, for which a patch had been released one month earlier in MS03-026 and later in MS03-039. This allowed the worm to spread without users opening attachments simply by spamming itself to large numbers of random IP addresses. Four versions have been detected in the wild. These are the most well-known exploits of the original flaw in RPC, but there were in fact another 12 different vulnerabilities that did not see as much media attention. The worm was programmed to start a SYN flood against port 80 of windowsupdate.com if the system date is after August 15 and before December 31 and after the 15th day of other months, thereby creating a distributed denial of service attack (DDoS) against the site. The damage to Microsoft was minimal as the site targeted was windowsupdate.com, rather than windowsupdate.microsoft.com, to which the former was redirected. Microsoft temporarily shut down the targeted site to minimize potential effects from the worm. The worm's executable, MSBlast.exe, contains two messages. The first reads: I just want to say LOVE YOU SAN!! This message gave the worm the alternative name of Lovesan. The second reads: billy gates why do you make this possible ? Stop making money and fix your software!! This is a message to Bill Gates, the co-founder of Microsoft and the target of the worm. The worm also creates the following registry entry so that it is launched every time Windows starts: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run\ windows auto update=msblast.exe Timeline May 28, 2003: Microsoft releases a patch that would protect users from an exploit in WebDAV that Welchia used. (Welchia used the same exploit as MSBlast but had an additional method of propagation that was fixed in this patch. This method was only used after 200,000 RPC DCOM attacks - the form that MSBlast used.) July 5, 2003: Timestamp for the patch that Microsoft releases on the 16th. July 16, 2003: Microsoft releases a patch that would protect users from the yet unknown MSBlast. At the same time they also released a bulletin describing the exploit. Around July 16, 2003: White hat hackers create proof-of-concept code verifying that the unpatched systems are vulnerable. The code was not released. 
July 17, 2003: CERT/CC releases a warning and suggests blocking port 135. July 21, 2003: CERT/CC suggests also blocking ports 139 and 445. July 25, 2003: xFocus releases information on how to exploit the RPC bug that Microsoft released the July 16 patch to fix. August 1, 2003: The U.S. issues an alert to be on the lookout for malware exploiting the RPC bug. Sometime prior to August 11, 2003: Other viruses using the RPC exploit exist. August 11, 2003: Original version of the worm appears on the Internet. August 11, 2003: Symantec Antivirus releases a rapid release protection update. August 11, 2003, evening: Antivirus and security firms issued alerts to run Windows Update. August 12, 2003: The number of infected systems is reported at 30,000. August 13, 2003: Two new worms appear and begin to spread. (Sophos, a variant of MSBlast and W32/RpcSpybot-A, a totally new worm that used the same exploit) August 15, 2003: The number of infected systems is reported at 423,000. August 16, 2003: DDoS attack against windowsupdate.com starts. (Largely unsuccessful because that URL is merely a redirect to the real site, windowsupdate.microsoft.com.) August 18, 2003: Microsoft issues an alert regarding MSBlast and its variants. August 18, 2003: The related helpful worm, Welchia, appears on the internet. August 19, 2003: Symantec upgrades their risk assessment of Welchia to "high" (category 4). August 25, 2003: McAfee lowers their risk assessment to "Medium". August 27, 2003: A potential DDoS attack against HP is discovered in one variant of the worm. January 1, 2004: Welchia deletes itself. January 13, 2004: Microsoft releases a stand-alone tool to remove the MSBlast worm and its variants. February 15, 2004: A variant of the related worm Welchia is discovered on the internet. February 26, 2004: Symantec lowers their risk assessment of the Welchia worm to "Low" (category 2). March 12, 2004: McAfee lowers their risk assessment to "Low". April 21, 2004: Another variant is discovered. January 28, 2005: The creator of the "B" variant of MSBlaster is sentenced to 18 months in prison. Side effects Although the worm can only spread on systems running Windows 2000 or Windows XP, it can cause instability in the RPC service on systems running other versions of Windows NT, including Windows Server 2003 and Windows XP Professional x64 Edition. In particular, the worm does not spread in Windows Server 2003 because Windows Server 2003 was compiled with the /GS switch, which detected the buffer overflow and shut the RPCSS process down. When infection occurs, the buffer overflow causes the RPC service to crash, leading Windows to display the following message and then automatically reboot, usually after 60 seconds. This was the first indication many users had an infection; it often occurred a few minutes after every startup on compromised machines. A simple resolution to stop countdown is to run the "shutdown /a" command, causing some side effects such as an empty (without users) Welcome Screen. The Welchia worm had a similar effect. Months later, the Sasser worm surfaced, which caused a similar message to appear. See also Botnet BlueKeep (security vulnerability) Conficker Gameover ZeuS Helpful worm Operation Tovar Nachia (computer worm) Sasser (computer worm) Spam Timeline of computer viruses and worms Tiny Banker Trojan Torpig List of convicted computer criminals Zeus (malware) Zombie (computer science) References Windows malware Exploit-based worms Hacking in the 2000s
47796548
https://en.wikipedia.org/wiki/Amaryllo
Amaryllo
Amaryllo Inc. is a multinational company founded in Amsterdam, the Netherlands, operating in the AI-as-a-Service market. It develops biometric robotic technologies, real-time data mining, a camera robot, fast object recognition, an encrypted P2P network, and flexible cloud storage. Amaryllo developed and acquired patents for a new type of robotic camera that is claimed to "talk, hear, sense, recognize human faces, and track intruders". It also claims to have made the world's first security robot based on the WebRTC protocol, iCamPRO FHD, which won the 2015 CES Best of Innovation Award in the Embedded Technology category. Its home security robots are claimed to employ 256-bit encryption and run on the WebRTC protocol. Amaryllo products are sold in over 100 countries across 6 continents. History Amaryllo revealed its first smart home security products at Internationale Funkausstellung Berlin (IFA) 2013 with a Skype-enabled IP camera called iCam HD. Amaryllo announced its second Skype-certified smart home product, iBabi HD, at CES 2014. The company was chosen as a "Cool Vendor" by Gartner in Connected Home 2014. Amaryllo introduced WebRTC-based smart home products after Microsoft terminated embedded Skype services in mid-2014. Since then, Amaryllo has been developing camera robots with auto-tracking and facial recognition technologies. Its camera robots, ATOM AR3 and ATOM AR3S, were introduced in late 2016. It focuses on wired and wireless technology based on AI services. Biometric Robotic Technologies Facial Recognition Amaryllo debuted its facial recognition technologies on its new auto-tracking model, ATOM, at IFA 2016. ATOM is designed to recognize human faces by learning them. It is claimed to take 0.5 seconds to detect a human face and another 0.5 seconds to identify a person, totaling 1 second to recognize a human face. It can recognize over 100 people simultaneously. The ASUS SmartHome platform has integrated ATOM. Embedded Auto-Tracking Amaryllo uses a multi-core processor embedded in its cameras to build the tracking system into a single unit. Amaryllo security drones act as individual security robots that track moving objects without requiring many commands from remote computers. With 1920 × 1080 resolution, they are claimed to be able to track intruders over 30 feet away. Infrared lights are hidden behind a mask for aesthetic reasons and are activated when the environment becomes dark, so that the cameras can operate in darkness. Multiple Sensor Network The drones have multiple motion sensors around them for "360-degree" tracking; once a sensor is triggered, the embedded CPU turns the drone toward the detected direction to follow the object. This lets a single unit cover multiple directions without additional cameras, reducing cost. The robots are also claimed to "talk to intruders if they are spotted" and to track them. Object Recognition Amaryllo develops cloud-based artificial intelligence with its camera robots to recognize objects such as faces, human bodies, vehicles, animals, and airplanes. It uses "real-time picture frame analysis" to identify over 100 human faces within seconds. Once a human-like face is recognized, the robots deliver face snapshots to smart devices. This patent-pending method is claimed to eliminate false alerts generated by passive infrared sensors. Interactive Services Amaryllo robots are linked to Google services. The robots can give alerts about emails and appointments, and say "Hello", "Good Morning", "Good Afternoon", and so on when they detect user-defined events such as motion, audio, or face detection. 
They are wirelessly connected to networks, so they are aware of the local time and can report the time on an hourly basis, acting as a regular clock. Additional interactive voice features have been reported. P2P Communications and Cloud Service Dynamic P2P Server Amaryllo claims to have been the first to "establish global Peer-to-Peer (P2P) server based on WebRTC protocol in smart home service". Amaryllo Live is a plug-in-free H.264-based browser service for accessing its cameras over the Internet. It runs on the WebRTC protocol and was initially supported by Firefox. Other browsers have pledged to support the WebRTC H.264-based codec. Video Alert Amaryllo offers free and paid cloud storage plans, including video alerts for urgent video messages from smart devices. Amaryllo launched its Urgent Home Care Service with the introduction of iCare FHD, which gives alerts on remote devices. It can detect faces and send real-time face-alert video to family members. Data Analytics Service Soteria Amaryllo expanded its business to B2B smart retailers by introducing the Soteria service in 2018, which is claimed to "employ biometric analytics cameras with cloud intelligence". Amaryllo Cloud Storage (ACS) Amaryllo Cloud is Amaryllo's dedicated cloud storage service, which began in January 2021. The service is available on Windows, Android, Linux, and iOS via a browser or the Amaryllo Cloud app. The service is also scheduled to be available in VR via the Metaverse starting in 2022. Account Options Amaryllo Cloud offers multiple account types, including free accounts and monthly, yearly, and lifetime storage plans. Free Accounts - 10GB. Monthly Plans - 100GB to 10TB. Annual Plans - 100GB to 10TB. Lifetime Plans - 50GB to 10TB. Features Users can share storage with up to nine other people, and the service claims to offer unlimited bandwidth. Encryption Amaryllo Cloud claims to use "AES 256-bit encryption", a claim that cannot be independently verified because the products are closed-source. References Robotics Home automation companies
1204623
https://en.wikipedia.org/wiki/Trifid%20cipher
Trifid cipher
The trifid cipher is a classical cipher invented by Félix Delastelle and described in 1902. Extending the principles of Delastelle's earlier bifid cipher, it combines the techniques of fractionation and transposition to achieve a certain amount of confusion and diffusion: each letter of the ciphertext depends on three letters of the plaintext and up to three letters of the key. The trifid cipher uses a table to fractionate each plaintext letter into a trigram, mixes the constituents of the trigrams, and then applies the table in reverse to turn these mixed trigrams into ciphertext letters. Delastelle notes that the most practical system uses three symbols for the trigrams: In order to split letters into three parts, it is necessary to represent them by a group of three signs or numbers. Knowing that n objects, combined in trigrams in all possible ways, give n × n × n = n³, we recognize that three is the only value for n; two would only give 2³ = 8 trigrams, while four would give 4³ = 64, but three give 3³ = 27. Description As discussed above, the cipher requires a 27-letter mixed alphabet: we follow Delastelle by using a plus sign as the 27th letter. A traditional method for constructing a mixed alphabet from a key word or phrase is to write out the unique letters of the key in order, followed by the remaining letters of the alphabet in the usual order. For example, the key FELIX MARIE DELASTELLE yields the mixed alphabet FELIXMARDSTBCGHJKNOPQUVWYZ+. To each letter in the mixed alphabet we assign one of the 27 trigrams (111, 112, …, 333) by populating a 3 × 3 × 3 cube with the letters of the mixed alphabet, and using the Cartesian coordinates of each letter as the corresponding trigram. From this cube we build tables for enciphering letters as trigrams and deciphering trigrams as letters. The encryption protocol divides the plaintext into groups of fixed size (plus possibly one short group at the end): this confines encoding errors to the group in which they occur, an important consideration for ciphers that must be implemented by hand. The group size should be coprime to 3 to get the maximum amount of diffusion within each group: Delastelle gives examples with groups of 5 and 7 letters. He describes the encryption step as follows: We start by writing vertically under each letter, the numerical trigram that corresponds to it in the enciphering alphabet: then proceeding horizontally as if the numbers were written on a single line, we take groups of three numbers, look them up in the deciphering alphabet, and write the result under each column. For example, if the message is aide-toi, le ciel t'aidera, and the group size is 5, then encryption proceeds as follows:
a i d e-t o i l e c i e l t'a i d e r a
1 1 1.1 2 3 1 1.1 2 1 1 1.2 1 1 1 1.1 1
3.2 3 1.1 1.2 1 1.2 2.1 1 1.3 2.3 1 3.3
1 1.3 2 2 1 1.3 2 1 1 2.3 2 1 1 3.2 2 1
F M J F V O I S S U F T F P U F E Q Q C
In this table the periods delimit the trigrams as they are read horizontally in each group, thus in the first group we have 111 = F, 123 = M, 231 = J, and so on. Notes References Classical ciphers
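The encryption procedure above can be reproduced in a few lines of code. The following Python sketch is illustrative only; it hard-codes the mixed alphabet and group size from the example and returns the same ciphertext, FMJFVOISSUFTFPUFEQQC.

KEY_ALPHABET = "FELIXMARDSTBCGHJKNOPQUVWYZ+"  # mixed alphabet derived from the key above

def trifid_encrypt(plaintext, alphabet=KEY_ALPHABET, group=5):
    # Keep only characters present in the alphabet (drops spaces and punctuation).
    letters = [c for c in plaintext.upper() if c in alphabet]
    # Letter -> trigram of coordinates 1..3, walking row-major through the 3 x 3 x 3 cube.
    to_trigram = {c: (i // 9 + 1, i % 9 // 3 + 1, i % 3 + 1) for i, c in enumerate(alphabet)}
    from_trigram = {v: k for k, v in to_trigram.items()}
    ciphertext = []
    for start in range(0, len(letters), group):
        block = [to_trigram[c] for c in letters[start:start + group]]
        # Write the trigrams vertically, then read the three rows horizontally...
        digits = [t[row] for row in range(3) for t in block]
        # ...and regroup the digits into new trigrams, which become ciphertext letters.
        ciphertext += [from_trigram[tuple(digits[i:i + 3])] for i in range(0, len(digits), 3)]
    return "".join(ciphertext)

print(trifid_encrypt("aide-toi, le ciel t'aidera"))  # FMJFVOISSUFTFPUFEQQC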
4482233
https://en.wikipedia.org/wiki/Coverity
Coverity
Coverity is a proprietary static code analysis tool from Synopsys. This product enables engineers and security teams to find and fix software defects. Coverity started as an independent software company in 2002 at the Computer Systems Laboratory at Stanford University in Palo Alto, California. It was founded by Benjamin Chelf, Andy Chou, and Seth Hallem with Stanford professor Dawson Engler as a technical adviser. The headquarters was moved to San Francisco. In June 2008, Coverity acquired Solidware Technologies. In February 2014, Coverity announced an agreement to be acquired by Synopsys, an electronic design automation company, for $350 million net of cash on hand. Products Coverity is a static code analysis tool for C, C++, C#, Java, JavaScript, PHP, Python, .NET, ASP.NET, Objective-C, Go, JSP, Ruby, Swift, Fortran, Scala, VB.NET, iOS, and Typescript. It also supports more than 70 different frameworks for Java, JavaScript, C# and other languages. Coverity Scan is a free static-analysis cloud-based service for the open source community. Applications Under a United States Department of Homeland Security contract in 2006, the tool was used to examine over 150 open source applications for bugs; 6000 bugs found by the scan were fixed across 53 projects. National Highway Traffic Safety Administration used the tool in its 2010-2011 investigation into reports of sudden unintended acceleration in Toyota vehicles. The tool was used by CERN on the software employed in the Large Hadron Collider and in the NASA Jet Propulsion Laboratory during the flight software development of the Mars rover Curiosity. References Static program analysis tools Software testing tools Software companies based in California Companies based in San Francisco Software companies of the United States
2254865
https://en.wikipedia.org/wiki/FromSoftware
FromSoftware
FromSoftware, Inc. is a Japanese video game development company founded in November 1986 and a subsidiary of Kadokawa Corporation. The company is best known for their Armored Core and Souls series, including the related games Bloodborne, Sekiro: Shadows Die Twice and Elden Ring, known for their high levels of difficulty. History FromSoftware was founded as a productivity software developer in Tokyo, Japan, on November 1, 1986. Their first video game did not come until 1994, when they released King's Field as a launch title for the PlayStation. The game did not see a release in North America, although a 1995 sequel would later be released in North America bearing the same title and known as King's Field II in Japan. After releasing a third title in that series, FromSoftware moved on to release Echo Night as well as Shadow Tower in 1998. IGN would later note that the latter was "effectively a King's Field follow-up" as it shared many of the gameplay conventions with it. Also during this time FromSoftware would release Armored Core, the first in a mech game series which would go on to spawn many sequels. The making of Armored Core solidified the company's development skills, and in July 1999, they released the multiplayer action game Frame Gride for the Sega Dreamcast. When the PlayStation 2 was launched in 2000, FromSoftware supported the system with the two RPGs Eternal Ring, which like the King's Field series is a first person RPG, and Evergrace, a more conventional action RPG viewed from a third person perspective. In addition to these titles, FromSoftware published Tenchu: Wrath of Heaven, a stealth game that combines action and adventure elements. The company also released a pair of sequels to their PlayStation 1 offerings with King's Field IV and Shadow Tower Abyss. FromSoftware also released the Lost Kingdoms titles for the GameCube, a competing sixth generation console. IGN would note however that during this generation FromSoftware's focus would shift from RPGs to mech games due in part to the success of the Armored Core series. In 2002, FromSoftware released the mech action game Murakumo: Renegade Mech Pursuit for the Xbox before entering the mobile game market, where they released another King's Field title. In 2004, they released another Xbox title, Metal Wolf Chaos. In 2005, FromSoftware would start to produce a series of licensed games based on the various anime properties under the banner Another Century's Episode. In the same year, the company hosted the video game industry's first internship that let students experience game development through a game creation kit, Adventure Player, for the PlayStation Portable. In 2008, FromSoftware underwent a stock split before entering the Nintendo Wii market to release Tenchu: Shadow Assassins. After the success of Dark Souls in 2011, Hidetaka Miyazaki became the president of FromSoftware in May 2014. In April 2014, Kadokawa Corporation announced its intention to purchase the company from former shareholder Transcosmos. The deal was finalized on May 21, 2014. In December 2015, FromSoftware was nominated for developer of the year at The Game Awards 2015, but lost to CD Projekt Red. In January 2016, FromSoftware established a studio in Fukuoka that focuses on creating computer-generated imagery (CGI) assets for their games. Games With fifteen games developed, the Armored Core series is the studio's longest running franchise. 
The most recent title, Armored Core: Verdict Day, was released worldwide in September 2013 for the PlayStation 3 and Xbox 360. Earlier, less notable outside Japan, titles include the Enchanted Arms, King's Field, Chromehounds, Otogi, and Tenchu series. In 2009, FromSoftware released Demon's Souls for the PlayStation 3, which brought them international exposure. Its spiritual successor, Dark Souls, was released in 2011. In March 2014, Dark Souls II, was released, while Dark Souls III was released in 2016. A title inspired by the Souls series, Bloodborne, was released in March 2015. The Souls series, along with Bloodborne, received widespread critical acclaim, as well as strong sales domestically and internationally. They have also received a number of awards, primarily those for the role-playing genre, including multiple "RPG of the Year" and Game of the Year awards. Since release, Dark Souls and Bloodborne have been cited by many publications to be among the greatest games of all time. In April 2016, FromSoftware revealed that they were working on a new intellectual property, as well as stating their intent to return to the Armored Core series. Two games, the PlayStation VR exclusive Déraciné and the multiplatform Sekiro: Shadows Die Twice, were announced at E3 2018. An action role-playing game featuring the collaboration of FromSoftware president Hidetaka Miyazaki and A Song of Ice and Fire series author George R. R. Martin, titled Elden Ring, was released in 2022. References External links Japanese companies established in 1986 Software companies based in Tokyo Kadokawa Corporation subsidiaries Japanese brands Video game companies of Japan Video game companies established in 1986 Video game development companies Video game publishers 2014 mergers and acquisitions
41669016
https://en.wikipedia.org/wiki/Fundamental%20Analysis%20Software
Fundamental Analysis Software
Fundamental analysis software automates analysis that supports fundamental analysts in their review of a company's financial statements and valuation. Features The following are the most common features of fundamental analysis applications. Backtesting Enables traders to test fundamental analysis strategies or algorithms to see what kind of return they would have achieved if they had invested based on that strategy or algorithm in the past. Backtest results will typically display total and annualized returns compared to a benchmark such as the S&P 500. In addition to returns, backtest results will also display volatility statistics such as average beta or maximum drawdown. Scanner Stock scanning, or screening, is the most common feature of fundamental analysis software. Scanners enable users to 'scan' the market, be it stocks, options, currencies etc., to identify investment opportunities that meet a user's specific investment criteria. Using a fundamental analysis scanner, a user could, for example, scan the market to identify stocks with below industry average PE Ratios and above industry average sales growth. Alerts Alerts are a common feature of fundamental analysis software. Alerts will typically notify the investor to buy or sell a stock, or notify an investor when a stock enters or exits his/her saved strategy. When alert conditions are met, a notification is typically communicated via an on screen pop up or sent as an email. Data Feed Fundamental analysis software is typically used with end of day (EOD), delayed or real time data feeds. EOD data feeds provide the end of day close, open, high, and low price for the given equity and is typically updated once a day at market close. Delayed data is typically delayed 15 to 30 minutes depending on the exchange and is the most commonly used data feed type. Real time data feeds provide tick by tick 'real time' data. Built in Strategies Most fundamental analysis software includes built in strategies that have been validated with backtesting to give positive returns on average. Some examples of built in strategies include: Stock Investor Pro's IBD Stable 70 or Equities Lab's Dividend Champions. Users will typically find a built in strategy that aligns with their interests and investment style then follow the strategy to keep up with stocks that pass the strategy. Broker Interface Some fundamental analysis software can be integrated with brokerage platforms to enable traders to place trades, or to update the users portfolio with what is actually held in the brokerage account. References Business software Fundamental analysis
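As a rough illustration of the scanning example above (below-industry-average P/E combined with above-industry-average sales growth), such a screen reduces to a simple filter over a table of fundamentals. The Python/pandas sketch below is hypothetical; the tickers, figures and column names are invented for illustration.

import pandas as pd

# Hypothetical fundamentals table; in practice this would come from a data feed.
fundamentals = pd.DataFrame({
    "ticker":          ["AAA", "BBB", "CCC"],
    "pe":              [12.0, 25.0, 9.5],
    "industry_pe":     [18.0, 18.0, 18.0],
    "sales_growth":    [0.12, 0.03, 0.20],
    "industry_growth": [0.08, 0.08, 0.08],
})

# Scan: below-industry-average P/E and above-industry-average sales growth.
matches = fundamentals[(fundamentals["pe"] < fundamentals["industry_pe"]) &
                       (fundamentals["sales_growth"] > fundamentals["industry_growth"])]
print(matches["ticker"].tolist())  # ['AAA', 'CCC']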
2741879
https://en.wikipedia.org/wiki/HMAC-based%20one-time%20password
HMAC-based one-time password
HMAC-based one-time password (HOTP) is a one-time password (OTP) algorithm based on hash-based message authentication codes (HMAC). It is a cornerstone of the Initiative for Open Authentication (OATH). HOTP was published as an informational IETF RFC 4226 in December 2005, documenting the algorithm along with a Java implementation. Since then, the algorithm has been adopted by many companies worldwide (see below). The HOTP algorithm is a freely available open standard. Algorithm The HOTP algorithm provides a method of authentication by symmetric generation of human-readable passwords, or values, each used for only one authentication attempt. The one-time property follows directly from the single use of each counter value. Parties intending to use HOTP must establish some parameters; typically these are specified by the authenticator, and either accepted or not by the authenticated: A cryptographic hash method, H (default is SHA-1) A secret key, K, which is an arbitrary byte string, and must remain private A counter, C, which counts the number of iterations A HOTP value length, d (6–10, default is 6, and 6–8 is recommended) Both parties compute the HOTP value derived from the secret key K and the counter C. Then the authenticator checks its locally generated value against the value supplied by the authenticated. The authenticator and the authenticated increment the counter, C, independently of each other, where the latter may increase ahead of the former, so a resynchronisation protocol is wise. RFC 4226 does not actually require such a protocol, but does make a recommendation. This simply has the authenticator repeatedly try verification ahead of its counter through a window of size s. The authenticator's counter continues forward of the value at which verification succeeds, and requires no action by the authenticated. The recommendation is made that persistent throttling of HOTP value verification take place, to address the values' relatively small size and thus their vulnerability to brute-force attacks. It is suggested that verification be locked out after a small number of failed attempts, or that each failed attempt attract an additional (linearly increasing) delay. 6-digit codes are commonly provided by proprietary hardware tokens from a number of vendors, informing the default value of d. Truncation extracts 31 bits, or ≈ 9.3 decimal digits, meaning that d can be at most 10, with the 10th digit providing less extra variation, taking values of 0, 1, and 2 (i.e., 0.3 digits). After verification, the authenticator can authenticate itself simply by generating the next HOTP value, returning it, and then the authenticated can generate their own HOTP value to verify it. Note that counters are guaranteed to be synchronised at this point in the process. The HOTP value is the human-readable design output, a d-digit decimal number (without omission of leading 0s): HOTP value = HOTP(K, C) mod 10^d That is, the value is the d least significant base-10 digits of HOTP. HOTP is a truncation of the hash-based message authentication code (HMAC) of the counter, C (under the key, K, and hash function, H). HOTP(K, C) = truncate(HMAC(K, C)) where C, the counter, must be used big-endian. Truncation first takes the 4 least significant bits of the MAC and uses them as a byte offset, i. truncate(MAC) = extract31(MAC, MAC[(19 × 8) + 4:(19 × 8) + 7]) where ':' here is used to extract bits from a starting bit number up to and including an ending bit number, where these bit numbers are 0-origin. 
The use of '19' in the above formula relates to the size of the output from the hash function. With the default of SHA-1, the output is 20 bytes and so the last byte is byte 19 (0-origin). That index i is used to select 31 bits from MAC, starting at bit i * 8 + 1. extract31(MAC, i) = MAC[i × 8 + 1:i × 8 + (4 × 8) − 1] 31 bits is a single bit short of a 4-byte word. Thus, the value can be placed inside such a word without using the sign bit (the most significant bit). This is done to definitely avoid doing modular arithmetic on negative numbers, as this has many differing definitions and implementations. Tokens Both hardware and software tokens are available from various vendors, for some of them see references below. Hardware tokens implementing OATH HOTP tend to be significantly cheaper than their competitors based on proprietary algorithms. As of 2010, OATH HOTP hardware tokens can be purchased for a marginal price. Some products can be used for strong passwords as well as OATH HOTP. Software tokens are available for (nearly) all major mobile/smartphone platforms (J2ME, Android, iPhone, BlackBerry, Maemo, macOS, and Windows Mobile). Reception Although the reception from some of the computer press has been negative during 2004 and 2005, after IETF adopted HOTP as RFC 4226 in December 2005, various vendors started to produce HOTP compatible tokens and/or whole authentication solutions. According to a paper on strong authentication (entitled "Road Map: Replacing Passwords with OTP Authentication") published by Burton Group (a division of Gartner, Inc.) in 2010, "Gartner's expectation is that the hardware OTP form factor will continue to enjoy modest growth while smartphone OTPs will grow and become the default hardware platform over time." See also Initiative for Open Authentication S/KEY Time-based one-time password algorithm References External links RFC 4226: HOTP: An HMAC-Based One-Time Password Algorithm RFC 6238: TOTP: Time-Based One-Time Password Algorithm RFC 6287: OCRA: OATH Challenge-Response Algorithm Initiative For Open Authentication Implementation of RFC 4226 - HOPT Algorithm Step by step Python implementation in a Jupyter Notebook Internet protocols Cryptographic algorithms Computer access control protocols
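The whole computation described above fits in a few lines. The following Python sketch is illustrative only, using the default SHA-1 hash and d = 6; the key shown is the test key from RFC 4226, for which a counter of 0 yields the value 755224.

import hashlib
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    # The counter C is encoded as an 8-byte big-endian value.
    message = struct.pack(">Q", counter)
    mac = hmac.new(key, message, hashlib.sha1).digest()
    # Dynamic truncation: the low 4 bits of the last byte give the byte offset i.
    offset = mac[-1] & 0x0F
    # Take 4 bytes starting at the offset and clear the top bit, leaving 31 bits.
    value = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    # The HOTP value is the d least significant decimal digits, keeping leading zeros.
    return str(value % 10 ** digits).zfill(digits)

print(hotp(b"12345678901234567890", 0))  # 755224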
2326082
https://en.wikipedia.org/wiki/Internet%20Communications%20Engine
Internet Communications Engine
The Internet Communications Engine, or Ice, is an open-source RPC framework developed by ZeroC. It provides SDKs for C++, C#, Java, JavaScript, MATLAB, Objective-C, PHP, Python, Ruby and Swift, and can run on various operating systems, including Linux, Windows, macOS, iOS and Android. Ice implements a proprietary application layer communications protocol, called the Ice protocol, that can run over TCP, TLS, UDP, WebSocket and Bluetooth. As its name indicates, Ice can be suitable for applications that communicate over the Internet, and includes functionality for traversing firewalls. History Initially released in February 2003, Ice was influenced by the Common Object Request Broker Architecture (CORBA) in its design, and indeed was created by several influential CORBA developers, including Michi Henning. However, according to ZeroC, it was smaller and less complex than CORBA because it was designed by a small group of experienced developers, instead of suffering from design by committee. In 2004, it was reported that a game called "Wish" by a company named Mutable Realms used Ice. In 2008, it was reported that Big Bear Solar Observatory had used the software since 2005. The source code repository for Ice has been hosted on GitHub since May 2015. Components Ice components include object-oriented remote-object-invocation, replication, grid-computing, failover, load-balancing, firewall-traversal and publish-subscribe services. To gain access to those services, applications are linked to a stub library or assembly, which is generated from a language-independent IDL-like syntax called Slice. IceStorm is an object-oriented publish-and-subscribe framework that also supports federation and quality of service. Unlike other publish-subscribe frameworks such as Tibco Software's Rendezvous or SmartSockets, message content consists of objects of well-defined classes rather than of structured text. IceGrid is a suite of frameworks that provide object-oriented load balancing, failover, object-discovery and registry services. IcePatch facilitates the deployment of Ice-based software. For example, a user who wishes to deploy new functionality and/or patches to several servers may use IcePatch. Glacier is a proxy-based service that enables communication through firewalls, thus making Ice an internet communication engine. IceBox IceBox is a service-oriented architecture container for executable services implemented in .dll or .so libraries. This is a lighter alternative to building an entire executable for every service. Slice Slice is a ZeroC-proprietary file format that programmers follow to edit computer-language-independent declarations and definitions of classes, interfaces, structures and enumerations. Slice definition files are used as input to the stub-generating process. The stub in turn is linked to applications and servers that should communicate with one another based on the interfaces and classes declared or defined by the Slice definitions. Apart from CORBA, classes and interfaces support inheritance and abstract classes. In addition, Slice provides configuration options in the form of macros and attributes to direct the code generation process. An example is the directive to generate an STL list<double> template instead of the default, which is to generate an STL vector<double> template. 
See also Cisco's Etch Google's gRPC SOAP Apache Thrift Microsoft's WCF Notes External links https://github.com/zeroc-ice/ice Inter-process communication Grid computing products Application layer protocols Remote procedure call Object request broker
1950428
https://en.wikipedia.org/wiki/Wallace%20v.%20International%20Business%20Machines%20Corp.
Wallace v. International Business Machines Corp.
Wallace v. International Business Machines Corp., 467 F.3d 1104 (7th Cir. 2006), was a significant case in the development of free software. The case decided, at the Court of Appeals for the Seventh Circuit, that in United States law the GNU General Public License (GPL) did not contravene federal antitrust laws. Daniel Wallace, a United States citizen, sued the Free Software Foundation (FSF) for price fixing. In a later lawsuit, he unsuccessfully sued IBM, Novell, and Red Hat. Wallace claimed that free Linux prevented him from making a profit from selling his own operating system. FSF lawsuit On April 28, 2005, Daniel Wallace filed suit against the FSF in the U.S. District Court for the Southern District of Indiana, stating that the GPL, by requiring copies of computer software licensed under it to be made available freely, and possibly even at no cost, is tantamount to price fixing. In November 2005 the case was dismissed without prejudice, and Wallace filed multiple amended complaints in an effort to satisfy the requirements of an antitrust allegation. His fourth and final amended complaint was dismissed on March 20, 2006, by Judge John Daniel Tinder, and Wallace was ordered to pay the FSF's costs. In its decision to grant the motion to dismiss, the Court ruled that Wallace had failed to allege any antitrust injury on which his claim could be based, since Wallace was obligated to claim not only that he had been injured but also that the market had. The Court instead found that [T]he GPL encourages, rather than discourages, free competition and the distribution of computer operating systems, the benefits of which directly pass to consumers. These benefits include lower prices, better access and more innovation. The Court also noted that prior cases have established that the Sherman Act was enacted to assure customers the benefits of price competition, and have emphasized the act's primary purpose of protecting the economic freedom of participants in the relevant market. This decision thus supports the right of authors and content creators to offer their creations free of charge. IBM, Novell, and Red Hat lawsuit In 2006, Daniel Wallace filed a lawsuit against the software companies IBM, Novell, and Red Hat, who profit from the distribution of open-source software, specifically the Linux operating system. Wallace's allegation was that these software companies were engaging in anticompetitive price fixing. On May 16, 2006, Judge Richard L. Young dismissed the case with prejudice: Wallace has had two chances to amend his complaint [...]. His continuing failure to state an antitrust claim indicates that the complaint has "inherent internal flaws." [...] Wallace will not be granted further leave to file an amended complaint because the court finds that such amendment would be futile. Wallace later filed an appeal in the Seventh Circuit Appeal Court, where his case was heard de novo in front of a three-judge panel led by Frank Easterbrook. He lost his appeal, with the judge citing a number of problems with his complaint. See also IBM v. Papermaster References External links Plaintiff Daniel Wallace's Memorandum on Motion for Summary Judgment (Groklaw) United States antitrust case law United States intellectual property case law Free software Computer case law Free Software Foundation IBM Red Hat Novell Linux
220314
https://en.wikipedia.org/wiki/Instruction%20pipelining
Instruction pipelining
In computer science, instruction pipelining is a technique for implementing instruction-level parallelism within a single processor. Pipelining attempts to keep every part of the processor busy with some instruction by dividing incoming instructions into a series of sequential steps (the eponymous "pipeline") performed by different processor units with different parts of instructions processed in parallel. Concept and motivation In a pipelined computer, instructions flow through the central processing unit (CPU) in stages. For example, it might have one stage for each step of the von Neumann cycle: Fetch the instruction, fetch the operands, do the instruction, write the results. A pipelined computer usually has "pipeline registers" after each stage. These store information from the instruction and calculations so that the logic gates of the next stage can do the next step. This arrangement lets the CPU complete an instruction on each clock cycle. It is common for even numbered stages to operate on one edge of the square-wave clock, while odd-numbered stages operate on the other edge. This allows more CPU throughput than a multicycle computer at a given clock rate, but may increase latency due to the added overhead of the pipelining process itself. Also, even though the electronic logic has a fixed maximum speed, a pipelined computer can be made faster or slower by varying the number of stages in the pipeline. With more stages, each stage does less work, and so the stage has fewer delays from the logic gates and could run at a higher clock rate. A pipelined model of computer is often the most economical, when cost is measured as logic gates per instruction per second. At each instant, an instruction is in only one pipeline stage, and on average, a pipeline stage is less costly than a multicycle computer. Also, when made well, most of the pipelined computer's logic is in use most of the time. In contrast, out of order computers usually have large amounts of idle logic at any given instant. Similar calculations usually show that a pipelined computer uses less energy per instruction. However, a pipelined computer is usually more complex and more costly than a comparable multicycle computer. It typically has more logic gates, registers and a more complex control unit. In a like way, it might use more total energy, while using less energy per instruction. Out of order CPUs can usually do more instructions per second because they can do several instructions at once. In a pipelined computer, the control unit arranges for the flow to start, continue, and stop as a program commands. The instruction data is usually passed in pipeline registers from one stage to the next, with a somewhat separated piece of control logic for each stage. The control unit also assures that the instruction in each stage does not harm the operation of instructions in other stages. For example, if two stages must use the same piece of data, the control logic assures that the uses are done in the correct sequence. When operating efficiently, a pipelined computer will have an instruction in each stage. It is then working on all of those instructions at the same time. It can finish about one instruction for each cycle of its clock. But when a program switches to a different sequence of instructions, the pipeline sometimes must discard the data in process and restart. This is called a "stall." Much of the design of a pipelined computer prevents interference between the stages and reduces stalls. 
Number of steps The number of dependent steps varies with the machine architecture. For example: The 1956–61 IBM Stretch project proposed the terms Fetch, Decode, and Execute that have become common. The classic RISC pipeline comprises: Instruction fetch Instruction decode and register fetch Execute Memory access Register write back The Atmel AVR and the PIC microcontroller each have a two-stage pipeline. Many designs include pipelines as long as 7, 10 and even 20 stages (as in the Intel Pentium 4). The later "Prescott" and "Cedar Mill" Netburst cores from Intel, used in the last Pentium 4 models and their Pentium D and Xeon derivatives, have a long 31-stage pipeline. The Xelerated X10q Network Processor has a pipeline more than a thousand stages long, although in this case 200 of these stages represent independent CPUs with individually programmed instructions. The remaining stages are used to coordinate accesses to memory and on-chip function units. As the pipeline is made "deeper" (with a greater number of dependent steps), a given step can be implemented with simpler circuitry, which may let the processor clock run faster. Such pipelines may be called superpipelines. A processor is said to be fully pipelined if it can fetch an instruction on every cycle. Thus, if some instructions or conditions require delays that inhibit fetching new instructions, the processor is not fully pipelined. History Seminal uses of pipelining were in the ILLIAC II project and the IBM Stretch project, though a simple version was used earlier in the Z1 in 1939 and the Z3 in 1941. Pipelining began in earnest in the late 1970s in supercomputers such as vector processors and array processors. One of the early supercomputers was the Cyber series built by Control Data Corporation. Its main architect, Seymour Cray, later headed Cray Research. Cray developed the XMP line of supercomputers, using pipelining for both multiply and add/subtract functions. Later, Star Technologies added parallelism (several pipelined functions working in parallel), developed by Roger Chen. In 1984, Star Technologies added the pipelined divide circuit developed by James Bradley. By the mid-1980s, pipelining was used by many different companies around the world. Pipelining was not limited to supercomputers. In 1976, the Amdahl Corporation's 470 series general purpose mainframe had a 7-step pipeline, and a patented branch prediction circuit. Hazards The model of sequential execution assumes that each instruction completes before the next one begins; this assumption is not true on a pipelined processor. A situation where the expected result is problematic is known as a hazard. Imagine the following two register instructions to a hypothetical processor: 1: add 1 to R5 2: copy R5 to R6 If the processor has the 5 steps listed in the initial illustration (the 'Basic five-stage pipeline' at the start of the article), instruction 1 would be fetched at time t1 and its execution would be complete at t5. Instruction 2 would be fetched at t2 and would be complete at t6. The first instruction might deposit the incremented number into R5 as its fifth step (register write back) at t5. But the second instruction might get the number from R5 (to copy to R6) in its second step (instruction decode and register fetch) at time t3. It seems that the first instruction would not have incremented the value by then. The above code invokes a hazard. 
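The timing that makes this code hazardous can be made concrete with a short calculation. The sketch below assumes the five-stage pipeline listed above, abbreviated here as IF, ID, EX, MEM and WB, with one cycle per stage; it illustrates the timing argument only and does not model any particular processor. It shows that the second instruction reads R5 two cycles before the first instruction writes the incremented value back, which is exactly the situation that the stalling and forwarding techniques described below are designed to handle.

# Classic five-stage pipeline, one cycle per stage (an assumed simplification).
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def stage_cycle(issue_cycle, stage):
    """Cycle in which an instruction fetched at issue_cycle occupies a given stage."""
    return issue_cycle + STAGES.index(stage)

# Instruction 1 ("add 1 to R5") is fetched at t1, instruction 2 ("copy R5 to R6") at t2.
write_of_r5 = stage_cycle(1, "WB")   # instruction 1 updates R5 in write-back: t5
read_of_r5 = stage_cycle(2, "ID")    # instruction 2 reads R5 in decode: t3

print(f"R5 is written at t{write_of_r5} but read at t{read_of_r5}")
if read_of_r5 < write_of_r5:
    # Without an interlock, instruction 2 would copy the old, unincremented value.
    print(f"read-after-write hazard: stall {write_of_r5 - read_of_r5} cycles,"
          " or forward the result from an earlier stage")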
Writing computer programs in a compiled language might not raise these concerns, as the compiler could be designed to generate machine code that avoids hazards. Workarounds In some early DSP and RISC processors, the documentation advises programmers to avoid such dependencies in adjacent and nearly adjacent instructions (called delay slots), or declares that the second instruction uses an old value rather than the desired value (in the example above, the processor might counter-intuitively copy the unincremented value), or declares that the value it uses is undefined. The programmer may have unrelated work that the processor can do in the meantime; or, to ensure correct results, the programmer may insert NOPs into the code, partly negating the advantages of pipelining. Solutions Pipelined processors commonly use three techniques to work as expected when the programmer assumes that each instruction completes before the next one begins: The pipeline could stall, or cease scheduling new instructions until the required values are available. This results in empty slots in the pipeline, or bubbles, in which no work is performed. An additional data path can be added that routes a computed value to a future instruction elsewhere in the pipeline before the instruction that produced it has been fully retired, a process called operand forwarding. The processor can locate other instructions which are not dependent on the current ones and which can be immediately executed without hazards, an optimization known as out-of-order execution. Branches A branch out of the normal instruction sequence often involves a hazard. Unless the processor can give effect to the branch in a single time cycle, the pipeline will continue fetching instructions sequentially. Such instructions cannot be allowed to take effect because the programmer has diverted control to another part of the program. A conditional branch is even more problematic. The processor may or may not branch, depending on a calculation that has not yet occurred. Various processors may stall, may attempt branch prediction, and may be able to begin to execute two different program sequences (eager execution), each assuming the branch is or is not taken, discarding all work that pertains to the incorrect guess. A processor with an implementation of branch prediction that usually makes correct predictions can minimize the performance penalty from branching. However, if branches are predicted poorly, it may create more work for the processor, such as flushing from the pipeline the incorrect code path that has begun execution before resuming execution at the correct location. Programs written for a pipelined processor deliberately avoid branching to minimize possible loss of speed. For example, the programmer can handle the usual case with sequential execution and branch only on detecting unusual cases. Using programs such as gcov to analyze code coverage lets the programmer measure how often particular branches are actually executed and gain insight with which to optimize the code. In some cases, a programmer can handle both the usual case and unusual case with branch-free code. Special situations Self-modifying programs The technique of self-modifying code can be problematic on a pipelined processor. In this technique, one of the effects of a program is to modify its own upcoming instructions. 
If the processor has an instruction cache, the original instruction may already have been copied into a prefetch input queue and the modification will not take effect. Some processors such as the Zilog Z280 can configure their on-chip cache memories for data-only fetches, or as part of their ordinary memory address space, and avoid such difficulties with self-modifying instructions. Uninterruptible instructions An instruction may be uninterruptible to ensure its atomicity, such as when it swaps two items. A sequential processor permits interrupts between instructions, but a pipelining processor overlaps instructions, so executing an uninterruptible instruction renders portions of ordinary instructions uninterruptible too. The Cyrix coma bug would hang a single-core system using an infinite loop in which an uninterruptible instruction was always in the pipeline. Design considerations Speed Pipelining keeps all portions of the processor occupied and increases the amount of useful work the processor can do in a given time. Pipelining typically reduces the processor's cycle time and increases the throughput of instructions. The speed advantage is diminished to the extent that execution encounters hazards that require execution to slow below its ideal rate. A non-pipelined processor executes only a single instruction at a time. The start of the next instruction is delayed not based on hazards but unconditionally. A pipelined processor's need to organize all its work into modular steps may require the duplication of registers, which increases the latency of some instructions. Economy By making each dependent step simpler, pipelining can enable complex operations more economically than adding complex circuitry, such as for numerical calculations. However, a processor that declines to pursue increased speed with pipelining may be simpler and cheaper to manufacture. Predictability Compared to environments where the programmer needs to avoid or work around hazards, use of a non-pipelined processor may make it easier to program and to train programmers. The non-pipelined processor also makes it easier to predict the exact timing of a given sequence of instructions. Illustrated example To the right is a generic pipeline with four stages: fetch, decode, execute and write-back. The top gray box is the list of instructions waiting to be executed, the bottom gray box is the list of instructions that have had their execution completed, and the middle white box is the pipeline. The execution is as follows: Pipeline bubble A pipelined processor may deal with hazards by stalling and creating a bubble in the pipeline, resulting in one or more cycles in which nothing useful happens. In the illustration at right, in cycle 3, the processor cannot decode the purple instruction, perhaps because the processor determines that decoding depends on results produced by the execution of the green instruction. The green instruction can proceed to the Execute stage and then to the Write-back stage as scheduled, but the purple instruction is stalled for one cycle at the Fetch stage. The blue instruction, which was due to be fetched during cycle 3, is stalled for one cycle, as is the red instruction after it. Because of the bubble (the blue ovals in the illustration), the processor's Decode circuitry is idle during cycle 3. Its Execute circuitry is idle during cycle 4 and its Write-back circuitry is idle during cycle 5. When the bubble moves out of the pipeline (at cycle 6), normal execution resumes. 
But everything now is one cycle late. It will take 8 cycles (cycle 1 through 8) rather than 7 to completely execute the four instructions shown in colors. See also Wait state Classic RISC pipeline Notes References External links Branch Prediction in the Pentium Family (Archive.org copy) ArsTechnica article on pipelining Counterflow Pipeline Processor Architecture Pipeline
18831
https://en.wikipedia.org/wiki/Mathematics
Mathematics
Mathematics (from Ancient Greek máthēma, "knowledge, study, learning") is an area of knowledge, which includes the study of such topics as numbers (arithmetic and number theory), formulas and related structures (algebra), shapes and spaces in which they are contained (geometry), and quantities and their changes (calculus and analysis). There is no general consensus about its exact scope or epistemological status. Most mathematical activity consists of discovering and proving (by pure reasoning) properties of abstract objects. These objects are either abstractions from nature (such as natural numbers or lines), or (in modern mathematics) abstract entities of which certain properties, called axioms, are stipulated. A proof consists of a succession of applications of some deductive rules to already known results, including previously proved theorems, axioms and (in the case of abstraction from nature) some basic properties that are considered as true starting points of the theory under consideration. The result of a proof is called a theorem. Mathematics is widely used in science for modeling phenomena. This enables the extraction of quantitative predictions from experimental laws. For example, the movement of planets can be predicted with high accuracy using Newton's law of gravitation combined with mathematical computation. The independence of mathematical truth from any experimentation implies that the accuracy of such predictions depends only on the adequacy of the model for describing reality. So when some inaccurate predictions arise, it means that the model must be improved or changed, not that the mathematics is wrong. For example, the perihelion precession of Mercury cannot be explained by Newton's law of gravitation, but is accurately explained by Einstein's general relativity. This experimental validation of Einstein's theory shows that Newton's law of gravitation is only an approximation (which is still very accurate in everyday life). Mathematics is essential in many fields, including natural sciences, engineering, medicine, finance, computer science and social sciences. Some areas of mathematics, such as statistics and game theory, are developed in direct correlation with their applications, and are often grouped under the name of applied mathematics. Other mathematical areas are developed independently from any application (and are therefore called pure mathematics), but practical applications are often discovered later. A fitting example is the problem of integer factorization, which goes back to Euclid, but which had no practical application before its use in the RSA cryptosystem (for the security of computer networks). Mathematics has been a human activity from as far back as written records exist. However, the concept of a "proof" and its associated "mathematical rigour" first appeared in Greek mathematics, most notably in Euclid's Elements. Mathematics developed at a relatively slow pace until the Renaissance, when algebra and infinitesimal calculus were added to arithmetic and geometry as main areas of mathematics. Since then, the interaction between mathematical innovations and scientific discoveries has led to a rapid increase in the rate of mathematical discoveries. At the end of the 19th century, the foundational crisis of mathematics led to the systematization of the axiomatic method. 
This, in turn, gave rise to a dramatic increase in the number of mathematics areas and their fields of applications; a witness to this is the Mathematics Subject Classification, which lists more than sixty first-level areas of mathematics. Areas of mathematics Before the Renaissance, mathematics was divided into two main areas: arithmetic, devoted to the manipulation of numbers, and geometry, devoted to the study of shapes. There were also some pseudosciences, such as numerology and astrology, that were not clearly distinguished from mathematics. Around the Renaissance, two new main areas appeared. The introduction of mathematical notation led to algebra, which, roughly speaking, consists of the study and the manipulation of formulas. Calculus, shorthand for infinitesimal calculus and integral calculus, is the study of continuous functions, which model the change of, and the relationships between, varying quantities (variables). This division into four main areas remained valid until the end of the 19th century, although some areas, such as celestial mechanics and solid mechanics, which were often considered as mathematics, are now considered as belonging to physics. Also, some subjects developed during this period, such as probability theory and combinatorics, predate the division of mathematics into distinct areas; they only later became regarded as autonomous areas of their own. At the end of the 19th century, the foundational crisis in mathematics and the resulting systematization of the axiomatic method led to an explosion in the number of areas of mathematics. The Mathematics Subject Classification contains more than 60 first-level areas. Some of these areas correspond to the older division into four main areas. This is the case for number theory (the modern name for higher arithmetic) and geometry. However, there are several other first-level areas that have "geometry" in their name or are commonly considered as belonging to geometry. Algebra and calculus do not appear as first-level areas, but are each split into several first-level areas. Other first-level areas did not exist at all before the 20th century (for example, category theory, homological algebra, and computer science) or were not considered before as mathematics, such as mathematical logic and foundations (including model theory, computability theory, set theory, proof theory, and algebraic logic). Number theory Number theory started with the manipulation of numbers, that is, natural numbers, and later expanded to integers and rational numbers. Number theory was formerly called arithmetic, but nowadays this term is mostly used for the methods of calculation with numbers. A specificity of number theory is that many problems that can be stated very elementarily are very difficult, and, when solved, have solutions that require very sophisticated methods coming from various parts of mathematics. A notable example is Fermat's Last Theorem, which was stated in 1637 by Pierre de Fermat and proved only in 1994 by Andrew Wiles, using, among other tools, algebraic geometry (more specifically scheme theory), category theory and homological algebra. Another example is Goldbach's conjecture, which asserts that every even integer greater than 2 is the sum of two prime numbers. Stated in 1742 by Christian Goldbach, it remains unproven despite considerable effort. 
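Statements like Goldbach's conjecture can be checked mechanically for small cases even though no proof is known. The following Python sketch merely illustrates the statement that every even integer greater than 2 is the sum of two primes by brute-force search over small numbers; the function names are arbitrary, and such finite checking is of course not a proof.

def is_prime(n):
    """Trial division, adequate for the small numbers used here."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_pair(n):
    """Return primes (p, q) with p + q == n for an even n > 2, or None if none exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

# Check the conjecture for every even number up to 100.
for n in range(4, 101, 2):
    assert goldbach_pair(n) is not None
print(goldbach_pair(28))   # for example, (5, 23)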
In view of the diversity of the studied problems and the solving methods, number theory is presently split into several subareas, which include analytic number theory, algebraic number theory, geometry of numbers (method oriented), Diophantine equations and transcendence theory (problem oriented). Geometry Geometry is, with arithmetic, one of the oldest branches of mathematics. It started with empirical recipes concerning shapes, such as lines, angles and circles, which were developed mainly for the needs of surveying and architecture. A fundamental innovation was the elaboration of proofs by the ancient Greeks: it is not sufficient to verify by measurement that, say, two lengths are equal. Such a property must be proved by abstract reasoning from previously proven results (theorems) and basic properties that are considered self-evident because they are too basic to be the subject of a proof (postulates). This principle, which is foundational for all mathematics, was elaborated for the sake of geometry, and was systematized by Euclid around 300 BC in his book Elements. The resulting Euclidean geometry is the study of shapes and their arrangements constructed from lines, planes and circles in the Euclidean plane (plane geometry) and the (three-dimensional) Euclidean space. Euclidean geometry was developed without a change of methods or scope until the 17th century, when René Descartes introduced what is now called Cartesian coordinates. This was a major change of paradigm, since instead of defining real numbers as lengths of line segments (see number line), it allowed the representation of points using numbers (their coordinates), and the use of algebra and, later, calculus for solving geometrical problems. This split geometry into two parts that differ only in their methods: synthetic geometry, which uses purely geometrical methods, and analytic geometry, which uses coordinates systematically. Analytic geometry allows the study of new shapes, in particular curves that are not related to circles and lines; these curves are defined either as graphs of functions (whose study led to differential geometry), or by implicit equations, often polynomial equations (which spawned algebraic geometry). Analytic geometry makes it possible to consider spaces of dimension higher than three (it suffices to consider more than three coordinates), which are no longer a model of the physical space. Geometry expanded quickly during the 19th century. A major event was the discovery (in the second half of the 19th century) of non-Euclidean geometries, which are geometries where the parallel postulate is abandoned. This is, besides Russell's paradox, one of the starting points of the foundational crisis of mathematics, by calling into question the truth of the aforementioned postulate. This aspect of the crisis was solved by systematizing the axiomatic method, and adopting the view that the truth of the chosen axioms is not a mathematical problem. In turn, the axiomatic method allows for the study of various geometries obtained either by changing the axioms or by considering properties that are invariant under specific transformations of the space. This results in a number of subareas and generalizations of geometry that include: Projective geometry, introduced in the 17th century by Girard Desargues, which extends Euclidean geometry by adding points at infinity at which parallel lines intersect. This simplifies many aspects of classical geometry by removing the need for a different treatment of intersecting and parallel lines. 
Affine geometry, the study of properties relative to parallelism and independent of the concept of length. Differential geometry, the study of curves, surfaces, and their generalizations, which are defined using differentiable functions. Manifold theory, the study of shapes that are not necessarily embedded in a larger space. Riemannian geometry, the study of distance properties in curved spaces. Algebraic geometry, the study of curves, surfaces, and their generalizations, which are defined using polynomials. Topology, the study of properties that are preserved under continuous deformations. Algebraic topology, the use in topology of algebraic methods, mainly homological algebra. Discrete geometry, the study of finite configurations in geometry. Convex geometry, the study of convex sets, which takes its importance from its applications in optimization. Complex geometry, the geometry obtained by replacing real numbers with complex numbers. Algebra Algebra may be viewed as the art of manipulating equations and formulas. Diophantus (3rd century) and Al-Khwarizmi (9th century) were two main precursors of algebra. The first solved some relations between unknown natural numbers (that is, equations) by deducing new relations until getting the solution. The second introduced systematic methods for transforming equations (such as moving a term from one side of an equation to the other). The term algebra is derived from the Arabic word that he used for naming one of these methods in the title of his main treatise. Algebra began to be a specific area only with François Viète (1540–1603), who introduced the use of letters (variables) for representing unknown or unspecified numbers. This allows the concise description of the operations that have to be performed on the numbers represented by the variables. Until the 19th century, algebra consisted mainly of the study of linear equations, presently called linear algebra, and polynomial equations in a single unknown, which were called algebraic equations (a term that is still in use, although it may be ambiguous). During the 19th century, variables began to represent things other than numbers (such as matrices, modular integers, and geometric transformations), on which some operations, often generalizations of arithmetic operations, can act. To deal with this, the concept of an algebraic structure was introduced, which consists of a set whose elements are unspecified, operations acting on the elements of the set, and rules that these operations must follow. So the scope of algebra evolved to become essentially the study of algebraic structures. This object of algebra was called modern algebra or abstract algebra, the latter term still being used, mainly in an educational context, in opposition to elementary algebra, which is concerned with the older way of manipulating formulas. Some types of algebraic structures have properties that are useful, and often fundamental, in many areas of mathematics. Their study now forms autonomous parts of algebra, which include (a small computational illustration of one such structure follows the list): group theory; field theory; vector spaces, whose study is essentially the same as linear algebra; ring theory; commutative algebra, which is the study of commutative rings, includes the study of polynomials, and is a foundational part of algebraic geometry; homological algebra; Lie algebra and Lie group theory; Boolean algebra, which is widely used for the study of the logical structure of computers. 
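The idea of an algebraic structure, a set together with operations and the rules those operations must satisfy, can be illustrated with a very small concrete case. The Python sketch below checks the group axioms for the integers modulo 5 under addition; the choice of example and the brute-force style of checking are assumptions made purely for illustration, not a general treatment of group theory.

# The set: integers modulo 5; the operation: addition modulo 5.
elements = range(5)

def op(a, b):
    return (a + b) % 5

# Closure: the result of the operation stays in the set.
assert all(op(a, b) in elements for a in elements for b in elements)
# Associativity: (a + b) + c equals a + (b + c).
assert all(op(op(a, b), c) == op(a, op(b, c))
           for a in elements for b in elements for c in elements)
# Identity: 0 leaves every element unchanged.
assert all(op(a, 0) == a and op(0, a) == a for a in elements)
# Inverses: every element can be combined with some element to give 0.
assert all(any(op(a, b) == 0 for b in elements) for a in elements)
print("the integers modulo 5 under addition satisfy the group axioms")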
The study of types of algebraic structures as mathematical objects is the object of universal algebra and category theory. The latter applies to every mathematical structure (not only the algebraic ones). At its origin, it was introduced, together with homological algebra, to allow the algebraic study of non-algebraic objects such as topological spaces; this particular area of application is called algebraic topology. Calculus and analysis Calculus, formerly called infinitesimal calculus, was introduced in the 17th century by Newton and Leibniz, independently and simultaneously. It is fundamentally the study of the relationship between two changing quantities, called variables, such that one depends on the other. Calculus was greatly expanded in the 18th century by Euler, with the introduction of the concept of a function, and many other results. Presently "calculus" refers mainly to the elementary part of this theory, and "analysis" is commonly used for advanced parts. Analysis is further subdivided into real analysis, where variables represent real numbers, and complex analysis, where variables represent complex numbers. Presently there are many subareas of analysis, some being shared with other areas of mathematics; they include: Multivariable calculus Functional analysis, where variables represent varying functions; Integration, measure theory and potential theory, all strongly related to probability theory; Ordinary differential equations; Partial differential equations; Numerical analysis, mainly devoted to the computation on computers of solutions of ordinary and partial differential equations that arise in many applications of mathematics. Discrete mathematics Mathematical logic and set theory These subjects have belonged to mathematics only since the end of the 19th century. Before this period, sets were not considered as mathematical objects, and logic, although used for mathematical proofs, belonged to philosophy, and was not specifically studied by mathematicians. Before the study of infinite sets by Georg Cantor, mathematicians were reluctant to consider collections that are actually infinite, and considered infinity as the result of an endless enumeration. Cantor's work offended many mathematicians not only by considering actually infinite sets, but also by showing that this implies different sizes of infinity (see Cantor's diagonal argument) and the existence of mathematical objects that cannot be computed, or even explicitly described (for example, Hamel bases of the real numbers over the rational numbers). This led to the controversy over Cantor's set theory. In the same period, it appeared in various areas of mathematics that the former intuitive definitions of the basic mathematical objects were insufficient for ensuring mathematical rigour. Examples of such intuitive definitions are "a set is a collection of objects", "a natural number is what is used for counting", "a point is a shape with a zero length in every direction", "a curve is a trace left by a moving point", etc. This is the origin of the foundational crisis of mathematics. It was eventually solved in mainstream mathematics by systematizing the axiomatic method inside a formalized set theory. Roughly speaking, each mathematical object is defined by the set of all similar objects and the properties that these objects must have. 
For example, in Peano arithmetic, the natural numbers are defined by "zero is a number", "each number has a unique successor", "each number but zero has a unique predecessor", and some rules of reasoning. The "nature" of the objects defined this way is a philosophical problem that mathematicians leave to philosophers, even if many mathematicians have opinions on this nature, and use their opinion—sometimes called "intuition"—to guide their study and the search for proofs. This approach allows one to consider "logics" (that is, sets of allowed deducing rules), theorems, proofs, etc. as mathematical objects, and to prove theorems about them. For example, Gödel's incompleteness theorems assert, roughly speaking, that in every theory that contains the natural numbers, there are theorems that are true (that is, provable in a larger theory), but not provable inside the theory. This approach to the foundations of mathematics was challenged during the first half of the 20th century by mathematicians led by L. E. J. Brouwer, who promoted an intuitionistic logic that excludes the law of excluded middle. These problems and debates led to a wide expansion of mathematical logic, with subareas such as model theory (modeling some logical theories inside other theories), proof theory, type theory, computability theory and computational complexity theory. Although these aspects of mathematical logic were introduced before the rise of computers, their use in compiler design, program certification, proof assistants and other aspects of computer science contributed in turn to the expansion of these logical theories. Applied mathematics Applied mathematics concerns itself with mathematical methods that are typically used in science, engineering, business, and industry. Thus, "applied mathematics" is a mathematical science with specialized knowledge. The term applied mathematics also describes the professional specialty in which mathematicians work on practical problems; as a profession focused on practical problems, applied mathematics focuses on the "formulation, study, and use of mathematical models" in science, engineering, and other areas of mathematical practice. In the past, practical applications have motivated the development of mathematical theories, which then became the subject of study in pure mathematics, where mathematics is developed primarily for its own sake. Thus, the activity of applied mathematics is vitally connected with research in pure mathematics. Statistics and other decision sciences Applied mathematics has significant overlap with the discipline of statistics, whose theory is formulated mathematically, especially with probability theory. Statisticians (working as part of a research project) "create data that makes sense" with random sampling and with randomized experiments; the design of a statistical sample or experiment specifies the analysis of the data (before the data becomes available). When reconsidering data from experiments and samples or when analyzing data from observational studies, statisticians "make sense of the data" using the art of modelling and the theory of inference—with model selection and estimation; the estimated models and consequential predictions should be tested on new data. Statistical theory studies decision problems such as minimizing the risk (expected loss) of a statistical action, such as using a procedure in, for example, parameter estimation, hypothesis testing, and selecting the best. 
In these traditional areas of mathematical statistics, a statistical-decision problem is formulated by minimizing an objective function, like expected loss or cost, under specific constraints: For example, designing a survey often involves minimizing the cost of estimating a population mean with a given level of confidence. Because of its use of optimization, the mathematical theory of statistics shares concerns with other decision sciences, such as operations research, control theory, and mathematical economics. Computational mathematics Computational mathematics proposes and studies methods for solving mathematical problems that are typically too large for human numerical capacity. Numerical analysis studies methods for problems in analysis using functional analysis and approximation theory; numerical analysis broadly includes the study of approximation and discretisation with special focus on rounding errors. Numerical analysis and, more broadly, scientific computing also study non-analytic topics of mathematical science, especially algorithmic-matrix-and-graph theory. Other areas of computational mathematics include computer algebra and symbolic computation. History The history of mathematics can be seen as an ever-increasing series of abstractions. Evolutionarily speaking, the first abstraction to ever take place, which is shared by many animals, was probably that of numbers: the realization that a collection of two apples and a collection of two oranges (for example) have something in common, namely the quantity of their members. As evidenced by tallies found on bone, in addition to recognizing how to count physical objects, prehistoric peoples may have also recognized how to count abstract quantities, like time—days, seasons, or years. Evidence for more complex mathematics does not appear until around 3000 , when the Babylonians and Egyptians began using arithmetic, algebra, and geometry for taxation and other financial calculations, for building and construction, and for astronomy. The oldest mathematical texts from Mesopotamia and Egypt are from 2000 to 1800 BC. Many early texts mention Pythagorean triples and so, by inference, the Pythagorean theorem seems to be the most ancient and widespread mathematical concept after basic arithmetic and geometry. It is in Babylonian mathematics that elementary arithmetic (addition, subtraction, multiplication, and division) first appear in the archaeological record. The Babylonians also possessed a place-value system and used a sexagesimal numeral system which is still in use today for measuring angles and time. Beginning in the 6th century BC with the Pythagoreans, with Greek mathematics the Ancient Greeks began a systematic study of mathematics as a subject in its own right. Around 300 BC, Euclid introduced the axiomatic method still used in mathematics today, consisting of definition, axiom, theorem, and proof. His book, Elements, is widely considered the most successful and influential textbook of all time. The greatest mathematician of antiquity is often held to be Archimedes (c. 287–212 BC) of Syracuse. He developed formulas for calculating the surface area and volume of solids of revolution and used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, in a manner not too dissimilar from modern calculus. 
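Archimedes' result can be mirrored with elementary arithmetic: his quadrature of the parabola shows that the area of a parabolic segment is 4/3 times the area of an inscribed triangle, obtained by summing the series 1 + 1/4 + 1/16 + ... . The Python sketch below only sums that geometric series and compares the partial sums with 4/3; it is an arithmetic illustration, not a reconstruction of Archimedes' geometric argument.

from fractions import Fraction

def partial_sum(terms):
    """Sum of the first `terms` terms of the series 1 + 1/4 + 1/16 + ..."""
    return sum(Fraction(1, 4) ** k for k in range(terms))

target = Fraction(4, 3)            # the value the series converges to
for n in (1, 2, 5, 10):
    s = partial_sum(n)
    # The remaining gap shrinks by a factor of 4 with every extra term.
    print(n, s, float(target - s))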
Other notable achievements of Greek mathematics are conic sections (Apollonius of Perga, 3rd century BC), trigonometry (Hipparchus of Nicaea, 2nd century BC), and the beginnings of algebra (Diophantus, 3rd century AD). The Hindu–Arabic numeral system and the rules for the use of its operations, in use throughout the world today, evolved over the course of the first millennium AD in India and were transmitted to the Western world via Islamic mathematics. Other notable developments of Indian mathematics include the modern definition and approximation of sine and cosine, and an early form of infinite series. During the Golden Age of Islam, especially during the 9th and 10th centuries, mathematics saw many important innovations building on Greek mathematics. The most notable achievement of Islamic mathematics was the development of algebra. Other achievements of the Islamic period include advances in spherical trigonometry and the addition of the decimal point to the Arabic numeral system. Many notable mathematicians from this period were Persian, such as Al-Khwarismi, Omar Khayyam and Sharaf al-Dīn al-Ṭūsī. During the early modern period, mathematics began to develop at an accelerating pace in Western Europe. The development of calculus by Isaac Newton and Gottfried Leibniz in the 17th century revolutionized mathematics. Leonhard Euler was the most notable mathematician of the 18th century, contributing numerous theorems and discoveries. Perhaps the foremost mathematician of the 19th century was the German mathematician Carl Gauss, who made numerous contributions to fields such as algebra, analysis, differential geometry, matrix theory, number theory, and statistics. In the early 20th century, Kurt Gödel transformed mathematics by publishing his incompleteness theorems, which show in part that any consistent axiomatic system—if powerful enough to describe arithmetic—will contain true propositions that cannot be proved. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made to this very day. According to Mikhail B. Sevryuk, in the January 2006 issue of the Bulletin of the American Mathematical Society, "The number of papers and books included in the Mathematical Reviews database since 1940 (the first year of operation of MR) is now more than 1.9 million, and more than 75 thousand items are added to the database each year. The overwhelming majority of works in this ocean contain new mathematical theorems and their proofs." Etymology The word mathematics comes from Ancient Greek máthēma (), meaning "that which is learnt," "what one gets to know," hence also "study" and "science". The word for "mathematics" came to have the narrower and more technical meaning "mathematical study" even in Classical times. Its adjective is mathēmatikós (), meaning "related to learning" or "studious," which likewise further came to mean "mathematical." In particular, mathēmatikḗ tékhnē (; ) meant "the mathematical art." Similarly, one of the two main schools of thought in Pythagoreanism was known as the mathēmatikoi (μαθηματικοί)—which at the time meant "learners" rather than "mathematicians" in the modern sense. In Latin, and in English until around 1700, the term mathematics more commonly meant "astrology" (or sometimes "astronomy") rather than "mathematics"; the meaning gradually changed to its present one from about 1500 to 1800. This has resulted in several mistranslations. 
For example, Saint Augustine's warning that Christians should beware of mathematici, meaning astrologers, is sometimes mistranslated as a condemnation of mathematicians. The apparent plural form in English, like the French plural form les mathématiques (and the less commonly used singular derivative la mathématique), goes back to the Latin neuter plural mathematica (Cicero), based on the Greek plural ta mathēmatiká (τὰ μαθηματικά), used by Aristotle (384–322 BC), and meaning roughly "all things mathematical", although it is plausible that English borrowed only the adjective mathematic(al) and formed the noun mathematics anew, after the pattern of physics and metaphysics, which were inherited from Greek. In English, the noun mathematics takes a singular verb. It is often shortened to maths or, in North America, math. Philosophy of mathematics There is no general consensus about the exact definition or epistemological status of mathematics. Aristotle defined mathematics as "the science of quantity" and this definition prevailed until the 18th century. However, Aristotle also noted that a focus on quantity alone may not distinguish mathematics from sciences like physics; in his view, abstraction and studying quantity as a property "separable in thought" from real instances set mathematics apart. In the 19th century, when the study of mathematics increased in rigor and began to address abstract topics such as group theory and projective geometry, which have no clear-cut relation to quantity and measurement, mathematicians and philosophers began to propose a variety of new definitions. A great many professional mathematicians take no interest in a definition of mathematics, or consider it undefinable. There is not even consensus on whether mathematics is an art or a science. Some just say, "Mathematics is what mathematicians do." Three leading types Three leading types of definition of mathematics today are called logicist, intuitionist, and formalist, each reflecting a different philosophical school of thought. All have severe flaws, none has widespread acceptance, and no reconciliation seems possible. Logicist definitions An early definition of mathematics in terms of logic was that of Benjamin Peirce (1870): "the science that draws necessary conclusions." In the Principia Mathematica, Bertrand Russell and Alfred North Whitehead advanced the philosophical program known as logicism, and attempted to prove that all mathematical concepts, statements, and principles can be defined and proved entirely in terms of symbolic logic. An example of a logicist definition of mathematics is Russell's (1903) "All Mathematics is Symbolic Logic." Intuitionist definitions Intuitionist definitions, developing from the philosophy of mathematician L. E. J. Brouwer, identify mathematics with certain mental phenomena. An example of an intuitionist definition is "Mathematics is the mental activity which consists in carrying out constructs one after the other." A peculiarity of intuitionism is that it rejects some mathematical ideas considered valid according to other definitions. In particular, while other philosophies of mathematics allow objects that can be proved to exist even though they cannot be constructed, intuitionism allows only mathematical objects that one can actually construct. Intuitionists also reject the law of excluded middle (i.e., P ∨ ¬P). While this stance does force them to reject one common version of proof by contradiction as a viable proof method, namely the inference of P from the fact that ¬P leads to a contradiction, they are still able to infer ¬P from the fact that P leads to a contradiction. 
For them, ¬(¬P) is a strictly weaker statement than P. Formalist definitions Formalist definitions identify mathematics with its symbols and the rules for operating on them. Haskell Curry defined mathematics simply as "the science of formal systems". A formal system is a set of symbols, or tokens, and some rules on how the tokens are to be combined into formulas. In formal systems, the word axiom has a special meaning different from the ordinary meaning of "a self-evident truth", and is used to refer to a combination of tokens that is included in a given formal system without needing to be derived using the rules of the system. Mathematics as science The German mathematician Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences". More recently, Marcus du Sautoy has called mathematics "the Queen of Science ... the main driving force behind scientific discovery". The philosopher Karl Popper observed that "most mathematical theories are, like those of physics and biology, hypothetico-deductive: pure mathematics therefore turns out to be much closer to the natural sciences whose hypotheses are conjectures, than it seemed even recently." Popper also noted that "I shall certainly admit a system as empirical or scientific only if it is capable of being tested by experience." Mathematics has much in common with many fields in the physical sciences, notably the exploration of the logical consequences of assumptions. Intuition and experimentation also play a role in the formulation of conjectures in both mathematics and the (other) sciences. Experimental mathematics continues to grow in importance within mathematics, and computation and simulation are playing an increasing role in both the sciences and mathematics. Several authors consider that mathematics is not a science because it does not rely on empirical evidence. The opinions of mathematicians on this matter are varied. Many mathematicians feel that to call their area a science is to downplay the importance of its aesthetic side, and its history in the traditional seven liberal arts; others feel that to ignore its connection to the sciences is to turn a blind eye to the fact that the interface between mathematics and its applications in science and engineering has driven much development in mathematics. One way this difference of viewpoint plays out is in the philosophical debate as to whether mathematics is created (as in art) or discovered (as in science). In practice, mathematicians are typically grouped with scientists at the gross level but separated at finer levels. This is one of many issues considered in philosophy of mathematics. Inspiration, pure and applied mathematics, and aesthetics Mathematics arises from many different kinds of problems. At first these were found in commerce, land measurement, architecture and later astronomy; today, all sciences pose problems studied by mathematicians, and many problems arise within mathematics itself. For example, the physicist Richard Feynman invented the path integral formulation of quantum mechanics using a combination of mathematical reasoning and physical insight, and today's string theory, a still-developing scientific theory which attempts to unify the four fundamental forces of nature, continues to inspire new mathematics. Some mathematics is relevant only in the area that inspired it, and is applied to solve further problems in that area. But often mathematics inspired by one area proves useful in many areas, and joins the general stock of mathematical concepts. 
A distinction is often made between pure mathematics and applied mathematics. However pure mathematics topics often turn out to have applications, e.g. number theory in cryptography. This remarkable fact, that even the "purest" mathematics often turns out to have practical applications, is what the physicist Eugene Wigner has named "the unreasonable effectiveness of mathematics". The philosopher of mathematics Mark Steiner has written extensively on this matter and acknowledges that the applicability of mathematics constitutes “a challenge to naturalism.” For the philosopher of mathematics Mary Leng, the fact that the physical world acts in accordance with the dictates of non-causal mathematical entities existing beyond the universe is "a happy coincidence". On the other hand, for some anti-realists, connections, which are acquired among mathematical things, just mirror the connections acquiring among objects in the universe, so there is no "happy coincidence". As in most areas of study, the explosion of knowledge in the scientific age has led to specialization: there are now hundreds of specialized areas in mathematics and the latest Mathematics Subject Classification runs to 46 pages. Several areas of applied mathematics have merged with related traditions outside of mathematics and become disciplines in their own right, including statistics, operations research, and computer science. For those who are mathematically inclined, there is often a definite aesthetic aspect to much of mathematics. Many mathematicians talk about the elegance of mathematics, its intrinsic aesthetics and inner beauty. Simplicity and generality are valued. There is beauty in a simple and elegant proof, such as Euclid's proof that there are infinitely many prime numbers, and in an elegant numerical method that speeds up calculation, such as the fast Fourier transform. G. H. Hardy in A Mathematician's Apology expressed the belief that these aesthetic considerations are, in themselves, sufficient to justify the study of pure mathematics. He identified criteria such as significance, unexpectedness, inevitability, and economy as factors that contribute to a mathematical aesthetic. Mathematical research often seeks critical features of a mathematical object. A theorem expressed as a characterization of an object by these features is the prize. Examples of particularly succinct and revelatory mathematical arguments have been published in Proofs from THE BOOK. The popularity of recreational mathematics is another sign of the pleasure many find in solving mathematical questions. At the other social extreme, philosophers continue to find problems in philosophy of mathematics, such as the nature of mathematical proof. Notation, language, and rigor Most of the mathematical notation in use today was invented after the 15th century. Before that, mathematics was written out in words, limiting mathematical discovery. Euler (1707–1783) was responsible for many of these notations. Modern notation makes mathematics efficient for the professional, while beginners often find it daunting. Mathematical language supplies a more precise meaning for ordinary words such as or and only than they have in everyday speech. Other terms such as open and field are at once precise and also refer to specific concepts present only in mathematics. Mathematical language also includes many technical terms such as homeomorphism and integrable that have no meaning outside of mathematics. 
Additionally, shorthand phrases such as iff for "if and only if" belong to mathematical jargon. This special notation and technical vocabulary is both precise and concise, making it possible to work on ideas of inordinate complexity. Mathematicians refer to this precision of language and logic as "rigor". The validity of mathematical proofs is fundamentally a matter of rigor. Mathematicians want their theorems to follow from axioms by means of systematic reasoning. This is to avoid mistaken "theorems", based on fallible intuitions, which have arisen many times in mathematics' history. The rigor expected in mathematics has varied over time: the Greeks expected detailed arguments, but in Isaac Newton's heyday, the methods employed were less rigorous. Problems inherent in the definitions used by Newton led to a resurgence of careful analysis and formal proof in the 19th century. Misunderstanding rigor is a notable cause for some of the common misconceptions of mathematics. Despite mathematics' concision, many proofs require hundreds of pages to express. The emergence of computer-assisted proofs has allowed proof lengths to further expand. Assisted proofs may be erroneous if the proving software has flaws and if they are lengthy, difficult to check. On the other hand, proof assistants allow for the verification of details that cannot be given in a hand-written proof, and provide certainty of the correctness of long proofs such as that of the 255-page Feit–Thompson theorem. Traditionally, axioms were thought of as "self-evident truths". However, at a formal level, an axiom is just a string of symbols, which has an intrinsic meaning only in the context of the derivable formulas of an axiomatic system. Hilbert's program attempted to put mathematics on a firm axiomatic basis, but Gödel's incompleteness theorem upended it, showing that every (sufficiently powerful) axiomatic system has undecidable formulas; and so the axiomatization of mathematics is impossible. Nonetheless, mathematics is often imagined to be (as far as its formal content) nothing but set theory in some axiomatization, in the sense that every mathematical statement or proof could be cast into formulas within set theory. Awards Arguably the most prestigious award in mathematics is the Fields Medal, established in 1936 and awarded every four years (except around World War II) to as many as four individuals. The Fields Medal is often considered a mathematical equivalent to the Nobel Prize. The Wolf Prize in Mathematics, instituted in 1978, recognizes lifetime achievement. Another major international award, the Abel Prize, was instituted in 2002 and first awarded in 2003. The Chern Medal was introduced in 2010 to recognize lifetime achievement. These accolades are awarded in recognition of a particular body of work, which may be innovational, or provide a solution to an outstanding problem in an established field. A famous list of 23 open problems, called "Hilbert's problems", was compiled in 1900 by German mathematician David Hilbert. This list achieved great celebrity among mathematicians, and at least thirteen of the problems have now been solved. A new list of seven important problems, titled the "Millennium Prize Problems", was published in 2000. Only one of them, the Riemann hypothesis, duplicates one of Hilbert's problems. A solution to any of these problems carries a 1 million dollar reward. Currently, only one of these problems, the Poincaré conjecture, has been solved. 
See also International Mathematical Olympiad List of mathematical jargon Outline of mathematics Lists of mathematics topics Mathematical sciences Mathematics and art Mathematics education National Museum of Mathematics Philosophy of mathematics Relationship between mathematics and physics Science, technology, engineering, and mathematics Notes References Formal sciences Main topic articles
8860
https://en.wikipedia.org/wiki/Dubbing%20%28filmmaking%29
Dubbing (filmmaking)
Dubbing, mixing or re-recording, is a post-production process used in filmmaking and video production in which additional or supplementary recordings are lip-synced and "mixed" with original production sound to create the finished soundtrack. The process usually takes place on a dub stage. After sound editors edit and prepare all the necessary tracks – dialogue, automated dialogue replacement (ADR), effects, Foley, music – the dubbing mixers proceed to balance all of the elements and record the finished soundtrack. Dubbing is sometimes confused with ADR, also known as "additional dialogue replacement", "automated dialogue recording" and "looping", in which the original actors re-record and synchronize audio segments. Outside the film industry, the term "dubbing" commonly refers to the replacement of the actor's voices with those of different performers speaking another language, which is called "revoicing" in the film industry. The term "dubbing" is only used when talking about replacing a previous voice, usually in another language. When a voice is created from scratch for animations, the term "original voice" is always used because, in some cases, this media is partially finished before the voice is implemented. The voice work would still be part of the creation process, thus being considered the official voice. Origins Films, videos, and sometimes video games are often dubbed into the local language of a foreign market. In foreign distribution, dubbing is common in theatrically released films, television films, television series, cartoons, and anime. In many countries dubbing was adopted, at least in part, for political reasons. In authoritarian states such as Fascist Italy and Francoist Spain, dubbing could be used to enforce particular ideological agendas, excising negative references to the nation and its leaders and promoting standardised national languages at the expense of local dialects and minority languages. In post-Nazi Germany, dubbing was used to downplay events in the country's recent past, as in the case of the dub of Alfred Hitchcock's Notorious, where the Nazi organisation upon which the film's plot centres was changed to a drug smuggling enterprise. First post-WWII movie dub was Konstantin Zaslonov (1949) dubbed from Russian to the Czech language. In Western Europe after World War II, dubbing was attractive to many film producers as it helped to enable co-production between companies in different countries, in turn allowing them to pool resources and benefit from financial support from multiple governments. Use of dubbing meant that multi-national casts could be assembled and were able to use their preferred language for their performances, with appropriate post-production dubs being carried out before distributing versions of the film in the appropriate language for each territory. Methods ADR/post-sync Automated dialogue replacement (ADR) is the process of re-recording dialogue by the original actor (or a replacement actor) after the filming process to improve audio quality or make changes to the originally scripted dialog. In the early days of talkies, a loop of film would be cut and spliced together for each of the scenes that needed to be rerecorded, then one-by-one the loops would be loaded onto a projector. For each scene the loop would be played over and over while the voice actor performed the lines trying to synchronize them to the filmed performance. This was known as "looping" or a "looping session". 
Loading and reloading the film loops while the talent and recording crew stood by was a tedious process. Later, video tape and then digital technology replaced the film loops and the process became known as automated dialogue replacement (ADR). In conventional film production, a production sound mixer records dialogue during filming. During post-production, a supervising sound editor, or ADR supervisor, reviews all of the dialogue in the film and decides which lines must be re-recorded. ADR is recorded during an ADR session, which takes place in a specialized sound studio. Multiple takes are recorded and the most suitable take becomes the final version, or portions of multiple takes may be edited together. The ADR process does not always take place in a post-production studio. The process may be recorded on location, with mobile equipment. ADR can also be recorded without showing the actor the image they must match, but by having them listen to the performance, since some actors believe that watching themselves act can degrade subsequent performances. The director may be present during ADR, or alternatively, they may leave it up to a trusted sound editor, an ADR specialist, and the performers. the automated process includes sophisticated techniques including automatically displaying lines on-screen for the talent, automated cues, shifting the audio track for accurate synchronization, and time-fitting algorithms for stretching or compressing portions of a spoken line. There is even software that can sort out spoken words from ambient sounds in the original filmed soundtrack and detect the peaks of the dialog and automatically time-fit the new dubbed performance to the original to create perfect synchronization. Sometimes, an actor other than the original actor is used during ADR. One famous example is the Star Wars character Darth Vader, portrayed by David Prowse; in post-production, James Earl Jones dubbed the voice of Vader. In India, the process is simply known as "dubbing", while in the UK, it is also called "post-synchronization" or "post-sync". The insertion of voice actor performances for animation, such as computer generated imagery or animated cartoons, is often referred to as ADR although it generally does not replace existing dialogue. The ADR process may be used to: remove extraneous sounds such as production equipment noise, traffic, wind, or other undesirable sounds from the environment change the original lines recorded on set to clarify context improve diction or modify an accent improve comedic timing or dramatic timing correct technical issues with synchronization use a studio-quality singing performance or provide a voice-double for actors who are poor vocalists add or remove content for legal purposes (such as removing an unauthorized trademarked name) add or remove a product placement correct a misspoken line not caught during filming. replace "foul language" for TV broadcasts of the media or if the scene in question has a young actor involved. Other examples include: Jean Hagen provided Debbie Reynolds' voice in two scenes of Singin' in the Rain (1952). Ironically, the film's story has Reynolds' character, Kathy Seldon, dubbing the voice for Hagen's character, Lina Lamont, due to Lina's grating voice and strong New York accent. Hagen used her own normal melodious voice to portray Kathy dubbing for Lina. 
The film, which takes place in Hollywood as talking pictures are taking over from silent films, also portrays another character, Cosmo Brown, played by Donald O'Connor, as inventing the idea of using one actor to provide the voice for another.
Marni Nixon provided the singing voice for the character Eliza Doolittle, otherwise played by Audrey Hepburn, in the 1964 musical film My Fair Lady. Nixon also provided the singing voices for Deborah Kerr in The King and I and Natalie Wood in West Side Story, among many others.
Ray Park, who acted as Darth Maul in Star Wars: Episode I – The Phantom Menace, had his voice dubbed over by Peter Serafinowicz.
Frenchmen Philippe Noiret and Jacques Perrin, who were dubbed into Italian for Cinema Paradiso.
Austrian bodybuilder Arnold Schwarzenegger, dubbed for Hercules in New York.
Argentine boxer Carlos Monzón, dubbed by a professional actor for the lead in the drama La Mary.
Gert Fröbe, who played Auric Goldfinger in the James Bond film Goldfinger, dubbed by Michael Collins.
George Lazenby's James Bond in On Her Majesty's Secret Service, dubbed for a portion of the film by George Baker, since Bond was undercover and impersonating Baker's own character.
Andie MacDowell's Jane, in Greystoke: The Legend of Tarzan, Lord of the Apes, who was dubbed by Glenn Close.
Tom Hardy, who portrayed Bane in The Dark Knight Rises, re-dubbed half of his own lines for ease of viewer comprehension.
Harvey Keitel, who was dubbed by Roy Dotrice in post-production for Saturn 3.
Dave Coulier, who dubbed the replacement of swear words for Richard Pryor in multiple TV versions of his movies.
Doug Jones, who was dubbed by Laurence Fishburne in post-production for Fantastic Four: Rise of the Silver Surfer.

Rythmo band
An alternative method to dubbing, called "rythmo band" (or "lip-sync band"), has historically been used in Canada and France. It provides a more precise guide for the actors, directors, and technicians, and can be used to complement the traditional ADR method. The "band" is actually a clear 35 mm film leader on which the dialogue is hand-written in India ink, together with numerous additional indications for the actor—including laughs, cries, length of syllables, mouth sounds, breaths, and mouth openings and closings. The rythmo band is projected in the studio and scrolls in perfect synchronization with the picture. Studio time is used more efficiently, since with the aid of scrolling text, picture, and audio cues, actors can read more lines per hour than with ADR alone (only picture and audio). With ADR, actors can average 10–12 lines per hour, while rythmo band can facilitate the reading of 35–50 lines per hour. However, the preparation of a rythmo band is a time-consuming process involving a series of specialists organized in a production line. This has prevented the technique from being more widely adopted, but software emulations of rythmo band technology overcome the disadvantages of the traditional rythmo band process and significantly reduce the time needed to prepare a dubbing session.

Translation process
For dubs into a language other than the original language, the dubbing process includes the following tasks:
Translation
Dialog writing:
Take segmentation
Insertion of dubbing symbols
Dialogue writing and the emulation of natural discourse
Lip-sync
Sometimes the translator performs all five tasks. In other cases, the translator just submits a rough translation and a dialogue writer does the rest.
However, the language expertise of the translator and the dialogue writer is different; translators must be proficient in the source language, while dialogue writers must be proficient in the target language.

Dialog writing
The dialogue writer's role is to make the translation sound natural in the target language, and to make the translation sound like a credible dialogue instead of merely a translated text. Another task of dialogue writers is to check whether a translation matches an on-screen character's mouth movements or not, by reading aloud simultaneously with the character. The dialogue writer often stays in the recording setting with the actors or the voice talents, to ensure that the dialogue is being spoken in the way that it was written to be, and to avoid any ambiguity in the way the dialogue is to be read (focusing on emphasis, intonation, pronunciation, articulation, pronouncing foreign words correctly, etc.). The overall goal is to make sure the script creates the illusion of authenticity of the spoken language. A successful localization product is one that feels like the original character is speaking the target language. Therefore, in the localization process, the position of the dialogue writing or song writing is extremely important.

Global use
Localization
Localization is the practice of adapting a film or television series from one region of the world for another. In contrast to pure translation, localization encompasses adapting the content to suit the target audience. For example, culture-specific references may be replaced and footage may be removed or added. Dub localization is a contentious issue in cinephilia amongst aficionados of foreign filmmaking and television programs, particularly anime fans. While some localization is virtually inevitable in translation, the controversy surrounding how much localization is "too much" is often discussed in such communities, especially when the final dub product is significantly different from the original. Some fans frown on any extensive localization, while others expect it, and to varying degrees, appreciate it. The new voice track is usually spoken by a voice actor. In many countries, actors who regularly perform this duty remain little-known, with the exception of particular circles (such as anime fandom) or when their voices have become synonymous with roles or actors whose voices they usually dub. In the United States, many of these voice artists may employ pseudonyms or go uncredited due to Screen Actors Guild regulations or the desire to dissociate themselves from the role.

Europe
Kids/family films and programming
In North-West Europe (the UK, Republic of Ireland and Estonia), Poland, Portugal, the Balkan countries (except Bulgaria) and the Nordic countries, generally only movies and TV shows intended for children are dubbed, while TV shows and movies for older audiences are subtitled (although animated productions have a tradition of being dubbed). For movies in cinemas with clear target audiences (both below and above 10–11 years of age), both a dubbed and a subtitled version are usually available.

Albania
The first movie dubbed in Albanian was The Great Warrior Skanderbeg in 1954 and since then, there have been thousands of popular titles dubbed in Albanian by different dubbing studios. All animated movies and children's programs are dubbed into Albanian (though typically, songs are left in English or the original language of the program with Albanian subtitles). Many live-action movies are dubbed as well.
TV series, nevertheless, are usually not dubbed; they are subtitled, except for a few Mexican, Brazilian and Turkish soap operas, such as Por Ti, Celebridade, A Casa das Sete Mulheres and Paramparça. As for documentaries, Albania usually uses voice-over.

Belgium
In the Dutch-speaking part of Belgium (Flanders), movies and TV series are shown in their original language with subtitles, with the exception of most movies made for a young audience. In the latter case, sometimes separate versions are recorded in the Netherlands and in Flanders (for instance, several Walt Disney films and Harry Potter films). These dubbed versions only differ from each other in their use of different voice actors and different pronunciation, while the text is almost the same. In the French-speaking part of Belgium (Wallonia), the range of French-dubbed versions is approximately as wide as the German range, where nearly all movies and TV series are dubbed.

Bosnia and Herzegovina
Bosnia and Herzegovina usually uses Serbian and Croatian dubs, but it has also dubbed some cartoons into Bosnian itself, for example My Little Pony: Friendship Is Magic. Children's programs (both animated and live-action) are aired dubbed (in Serbian, Croatian or Bosnian), while every other program is subtitled (in Bosnian).

Croatia
In Croatia, foreign films and TV series are always subtitled, while most children's programs and animated movies are dubbed into Croatian. The practice of dubbing began in the 1980s in some animated shows and continued through the 1990s, 2000s and onward in other shows and films, the latter being released on home media. Recently, more efforts have been made to introduce dubbing, but public reception has been poor, with some exceptions. Regardless of language, Croatian audiences prefer subtitling to dubbing; however, dubbing remains popular in animated films. Some previously popular shows (such as Sailor Moon) lost their appeal completely after the practice of dubbing began, and the dubbing was eventually removed from the programs, even though most animated shows shown on television and some on home media have been well received by people watching dubbed versions of them. This situation is similar for theatrical movies, with only those intended for children being dubbed. Also, Nova TV made an effort to impose dubbing with the Mexican telenovela La Fea Más Bella, translated as Ružna ljepotica (literally, "The Ugly Beauty"), but it failed. Some Croatian dubbing is also broadcast in Bosnia and Herzegovina.

Estonia
In Estonia, only children's animated films are dubbed in cinemas, while live-action films are shown in the original English and Russian languages with subtitles. Subtitles are usually presented in both the Estonian and Russian languages. Cartoons and animated series are voiced by dubbing or voice-over, while live-action films and television series are shown with Estonian subtitles only. Animated films are commonly shown in both the original and Russian languages and dubbed into Estonian (or Russian in many cinemas). Most Estonian-language television channels use subtitles with the English or Russian audio for foreign-language films and TV shows. However, Russian-language channels tend to use dubbing more often, especially for Russian channels broadcast from Russia (as opposed to Russian channels broadcast from Estonia).

Greece
In Greece, most cartoon films have dubs. Usually when a movie has a Greek dub, the dub is shown in cinemas, but subtitled versions are shown as well.
Foreign TV shows for adults are shown in their original versions with subtitles. Most cartoons, for example The Flintstones and The Jetsons, were always dubbed, while Family Guy and American Dad! are always subtitled and contain the original English dialogue, since they are mostly for adults rather than children. Some Japanese anime series are also dubbed in Greek (such as Pokémon, Dragon Ball, Digimon, Pichi Pichi Pitch, Sailor Moon, Candy Candy, etc.). The only television programs dubbed in Greek include Mexican TV series (like Rubí and La usurpadora). However, when Skai TV was re-launched in April 2006, the network opted for dubbing almost all foreign shows in Greek, unlike other Greek channels which had always broadcast most of the programs in their original language with subtitles.

Ireland
Ireland usually receives the same film versions as the UK. However, some films have been dubbed into Irish by TG4. Children's cartoons on TV are also occasionally dubbed into Irish.

Netherlands
In the Netherlands, for the most part, Dutch versions are only made for children's and family films. Animated movies are shown in theaters with Dutch subtitles or dubbing, but usually those cinemas with more screening rooms also provide the original subtitled version.

North Macedonia
North Macedonia has dubbed many cartoons into Macedonian, but it also airs some Serbian dubs. Children's programs are aired dubbed (in Macedonian or Serbian), while every other program is subtitled (in Macedonian). Serbian dubs are used for Disney movies, because there are no Macedonian Disney dubs.

Poland
In Poland, cinema releases for general audiences are almost exclusively subtitled, with the exception of children's movies. Television screenings of movies, as well as made-for-TV shows, are usually shown with voice-over, where a voice talent reads a translation over the original soundtrack. This method, called "juxtareading," is similar to the so-called Gavrilov translation in Russia, with one difference—all dialogues are voiced by one off-screen reader (lektor), preferably with a deep and neutral voice which does not interfere with the pitch of voice of the original speakers in the background. To some extent, it resembles live translation. Certain highly qualified voice talents are traditionally assigned to particular kinds of production, such as action or drama. Standard dubbing is not widely popular with most audiences, with the exception of cartoons and children's shows, which are dubbed also for TV releases. It is claimed that, until around 1951, there were no revoiced foreign movies available in Poland. Instead, they were exclusively subtitled in Polish. Poland's dubbing traditions began between the two world wars. In 1931, among the first movies dubbed into Polish were Dangerous Curves (1929), The Dance of Life (1929), Paramount on Parade (1930), and Darling of the Gods (1930). In 1949, the first dubbing studio opened in Łódź. The first film dubbed that year was Russkiy Vopros (filmed 1948). Polish dubbing in the first post-war years suffered from poor synchronization. Polish dialogues were not always audible and the cinema equipment of that time often made films sound less clear than they were. In the 1950s, Polish publicists discussed the quality of Polish versions of foreign movies. The number of dubbed movies and the quality improved. Polish dubbing had a golden age between the 1960s and the 1980s. Approximately a third of foreign movies screened in cinemas were dubbed.
The "Polish dubbing school" was known for its high quality. In that time, Poland had some of the best dubbing in the world. The person who initiated high-quality dubbing versions was director Zofia Dybowska-Aleksandrowicz. In that time, dubbing in Poland was very popular. Polish television dubbed popular films and TV series such as Rich Man, Poor Man; Fawlty Towers, Forsyte Saga, Elizabeth R, I, Claudius, I'll Take Manhattan, and Peter the Great. In the 1980s, due to budget cuts, state-run TV saved on tapes by voicing films over live during transmission. Overall, during 1948–1998, almost 1,000 films were dubbed in Polish. In the 1990s, the dubbing of films and TV series continued, although often also for one emission only. In 1995, Canal+ was launched in Poland. In its first years, it dubbed 30% of its schedule, including popular films and TV series; one of the best-known and most popular dubbings was that of Friends, but this proved unsuccessful. It stopped dubbing films in 1999, although many people supported the idea of dubbing and bought subscriptions only for the dubbed versions of foreign productions. In the 1990s, dubbing was also done by the television channel Wizja Jeden. It mainly dubbed BBC productions such as The League of Gentlemen, Absolutely Fabulous and Men Behaving Badly. Wizja Jeden was closed in 2001. In the same year, TVP stopped dubbing the TV series Frasier, although that dubbing was very popular. Currently, dubbing of films and TV series for teenagers is made by Nickelodeon and Disney Channel. One of the major breakthroughs in dubbing was the Polish release of Shrek, which contained many references to local culture and Polish humor. Since then, people seem to have grown to like dubbed versions more, and pay more attention to the dubbing actors. However, this seems to be the case only with animated films, as live-action dubbing is still considered a bad practice. In the case of DVD releases, most discs contain both the original soundtrack and subtitles, and either a voice-over or a dubbed Polish track. The dubbed version is, in most cases, the one from the theater release, while voice-over is provided for movies that were only subtitled in theaters. Since the theatrical release of The Avengers in May 2012, Walt Disney Company Polska has dubbed all films for cinema releases. Also in 2012, United International Pictures Polska dubbed The Amazing Spider-Man, while Forum Film Polska – former distributor of Disney's films – decided to dub The Hobbit: An Unexpected Journey, along with its two sequels. However, when a dub is produced but the film's target audience is not exclusively children, both dubbed and subtitled versions are usually available in movie theaters. The dubbed versions are more commonly shown in morning and early afternoon hours, with the subtitled version dominating in the evening. Both can be available in parallel at similar hours in multiplexes.

Portugal
In Portugal, dubbing was banned under a 1948 law as a way of protecting the domestic film industry and reducing access to culture, as most of the population was illiterate. Until 1994, animated movies, as well as other TV series for children, were shown subtitled in Portugal along with imported Brazilian Portuguese dubs, due to the lack of interest from Portuguese companies in the dubbing industry. This lack of interest was justified, since there were already quality dubbed copies of shows and movies in Portuguese made by Brazilians.
The Lion King was the first feature film to be dubbed in European Portuguese rather than strictly Brazilian Portuguese. Currently, all movies for children are dubbed in European Portuguese. Subtitles are preferred in Portugal, used in every foreign-language documentary, TV series and film. The exception to this preference is when children are the target audience. While on TV, children's shows and movies are always dubbed, in cinemas, films with a clear juvenile target can be found in two versions, one dubbed (identified by the letters V.P. for versão portuguesa - "Portuguese version") and another subtitled version (V.O. for versão original - "original version"). This duality applies only to juvenile films. Others use subtitles only. While the quality of these dubs is recognized (some have already received international recognition and prizes), original versions with subtitles are usually preferred by adults. Most cinemas showed both versions (V.O. and V.P.), but in some small cities, cinemas decided to offer only the Portuguese version, a decision that led to public protest. Presently, live-action series and movies are always shown in their original language format with Portuguese subtitles, while some content is also offered dubbed in European Portuguese, with an option to select the original language. There are also a few examples of anime that were dubbed in European Portuguese (i.e. Dragon Ball and Naruto). Netflix now offers foreign-language films aimed at older audiences and TV series (M/12, M/14 and M/16) dubbed into European Portuguese, in addition to offering the original version with subtitles.

Romania
In Romania, virtually all programs intended for children are dubbed in Romanian. Animated movies are shown in theaters with Romanian dubbing. However, cinemas with more screening rooms usually also provide the original subtitled version. Other foreign TV shows and movies are shown in the original language with Romanian subtitles. Subtitles are usually preferred in the Romanian market. According to "Special Eurobarometer 243" (graph QA11.8) of the European Commission (research carried out in November and December 2005), 62% of Romanians prefer to watch foreign films and programs with subtitles (rather than dubbed), 22% prefer dubbing, and 16% declined to answer. This is led by the assumption that watching movies in their original versions is very useful for learning foreign languages. However, according to the same Eurobarometer, virtually no Romanian found this method—watching movies in their original version—to be the most efficient way to learn foreign languages, compared to 53 percent who preferred language lessons at school. Some programmes that are broadcast on The Fishing & Hunting Channel are subtitled. TV Paprika used to broadcast voice-overed programmes, but it was replaced with subtitles. Some promos for films shown on TV 1000 use voice-overs, but the films are subtitled.

Serbia
Serbian-language dubs are made mainly for Serbia, but they are also broadcast in Montenegro and Bosnia and Herzegovina. Children's animated and some live-action movies and TV series are dubbed into Serbian, while live-action films and TV series for adults are always aired subtitled, because in this region people prefer subtitling for live-action formats. The Turkish soap opera Lale Devri started airing dubbed in 2011, on RTV Pink, but because of bad reception, the dub failed and the rest of the TV series was aired subtitled.
The dubbing of cartoon series in former Yugoslavia during the 1980s had a twist of its own: famous Serbian actors, such as Nikola Simić, Mića Tatić, Nada Blam and others provided the voices for characters of Disney, Warner Bros., MGM and other companies, frequently using region-specific phrases and sentences and, thus, adding a dose of local humor to the translation of the original lines. These phrases became immensely popular and are still being used for tongue-in-cheek comments in specific situations. These dubs are today considered cult dubs. The only dub made after 1980s and 1990s ones that's considered cult is SpongeBob SquarePants dub, made by B92 in period 2002–2017, because of a great popularity and memorable translation with local humor phrases, such as 1980s dubs translation. Some Serbian dubs are also broadcast in North Macedonia, while cult dubs made during Yugoslavia were aired all over the country (today's Croatia, Bosnia and Herzegovina, Montenegro, Slovenia, North Macedonia and Serbia). In the 21st-century, prominent dubbing/voice actors in Serbia include actors Marko Marković, Vladislava Đorđević, Jelena Gavrilović, Dragan Vujić, Milan Antonić, Boris Milivojević, Radovan Vujović, Goran Jevtić, Ivan Bosiljčić, Gordan Kičić, Slobodan Stefanović, Dubravko Jovanović, Dragan Mićanović, Slobodan Ninković, Branislav Lečić, Jakov Jevtović, Ivan Jevtović, Katarina Žutić, Anica Dobra, Voja Brajović, Nebojša Glogovac and Dejan Lutkić. Slovenia In Slovenia, all foreign films and television programs are subtitled with the exception of children's movies and TV shows (both animated or live-action). While dubbed versions are always shown in cinemas and later on TV channels, cinemas will sometimes play subtitled versions of children's movies as well. United Kingdom In the United Kingdom, the vast majority of foreign language films are subtitled, although mostly animated films are dubbed in English. These usually originate from North America, as opposed to being dubbed locally. Foreign language serials shown on BBC Four are subtitled into English (although open subtitles are dropped during dialogues with English language segments already). There have, however, been notable examples of films and TV programs successfully dubbed in the UK, such as the Japanese Monkey and French Magic Roundabout series. When airing films on television, channels in the UK often choose subtitling over dubbing, even if a dubbing in English exists. It is also a fairly common practice for animation aimed at preschool children to be re-dubbed with British voice actors replacing the original voices, such as Spin Master Entertainment's PAW Patrol series, although this is not done with shows aimed at older audiences. The off-screen narrated portions of some programs and reality shows that originate from North America are also redone with British English voices. The 2020 Bavarian show on Netflix, Freud, has also been dubbed to English. Some animated films and TV programs are also dubbed into Welsh and Scottish Gaelic. Hinterland displays a not so common example of a bilingual production. Each scene is filmed twice, in the English and Welsh languages, apart from a few scenes where Welsh with subtitles is used for the English version. Nordic countries In the Nordic countries, dubbing is used only in animated features (except adult animated features which only use subtitles) and other films for younger audiences. 
Some cinemas in the major cities may also screen the original version, usually as the last showing of the day, or in a smaller auditorium in a multiplex. In television programs with off-screen narration, both the original audio and on-screen voices are usually subtitled in their native languages. The Nordic countries are often treated as a common market, issuing DVD and Blu-ray releases with the original audio and user-selectable subtitle options in Danish, Finnish, Norwegian and Swedish. The covers often have text in all four languages as well, but are sometimes unique for each country. Some releases may include other European-language audio and/or subtitles (i.e. German, Greek, Hungarian or Italian), as well as the original audio in most cases. In Finland, the dubbed version from Sweden may also be available at certain cinemas for children of the 5% Swedish-speaking minority, but only in cities or towns with a significant percentage of Swedish speakers. Most DVD and Blu-ray releases usually only have the original audio, except for animated television series and telenovelas, which have both Finnish and Swedish language tracks, in addition to the original audio and subtitles in both languages. In Finnish movie theaters, films for adult audiences have both Finnish and Swedish subtitles, the Finnish printed in basic font and the Swedish printed below the Finnish in a cursive font. In the early days of television, foreign TV shows and movies were voiced by a narrator in Finland. Later, Finnish subtitles became the practice on Finnish television, as in many other countries. In one well-known case, while the original version of a film was well-received, the Finnish-dubbed version received poor reviews, with some critics even calling it a disaster. On the other hand, many dubs of Disney animated television series have been well-received, both critically and by the public. In Iceland, the dubbed version of films and TV is usually Danish, with some translated into Icelandic. LazyTown, an Icelandic TV show originally broadcast in English, was dubbed into Icelandic, amongst thirty-two other languages.

General films and programming
In the Turkish, French, Italian, Spanish, German, Czech, Slovak, Hungarian, Bulgarian, Polish, Russian and Ukrainian language-speaking markets of Europe, almost all foreign films and television shows are dubbed (the exception being the majority of theatrical releases of adult-audience movies in the Czech Republic, Slovakia, Poland and Turkey, and high-profile videos in Russia). There are few opportunities to watch foreign movies in their original versions. In Spain, Italy, Germany and Austria, even in the largest cities, there are few cinemas that screen original versions with subtitles, or without any translation. However, digital pay-TV programming is often available in the original language, including the latest movies. Prior to the rise of DVDs, which in these countries are mostly issued with multi-language audio tracks, original-language films (those in languages other than the country's official language) were rare, whether in theaters, on TV, or on home video, and subtitled versions were considered a product for small niche markets such as intellectual or art films.

France
In France, dubbing is the norm. Most movies with a theatrical release, including all those from major distributors, are dubbed. Those that are not are foreign independent films whose budget for international distribution is limited, or foreign art films with a niche audience.
Almost all theaters show movies with their French dubbing ("VF", short for version française). Some of them also offer screenings in the original language ("VO", short for version originale), generally accompanied with French subtitles ("VOST", short for version originale sous-titrée). A minority of theaters (usually small ones) screen exclusively in the original language. According to the CNC (National Centre for Cinematography), VOST screenings accounted for 16.4% of tickets sold in France. In addition, dubbing is required for home entertainment and television screenings. However, since the advent of digital television, foreign programs are broadcast to television viewers in both languages (sometimes, French with audio description is also aired); while the French-language track is selected by default, viewers can switch to the original-language track and enable French subtitles. As a special case, the binational television channel Arte broadcasts both the French and German dubbing, in addition to the original-language version. Some voice actors that have dubbed for celebrities in the European French language are listed below.

Italy
Dubbing is systematic in Italy, with a tradition going back to 1930. In Mussolini's fascist Italy, the release of movies in foreign languages was banned in 1938 for political reasons. Rome is the main base of the dubbing industry, where major productions, such as movies, drama, documentaries, and some animation films, are dubbed. However, most animated works are dubbed in Milan, as well as other minor productions. Virtually every foreign film of every genre and target audience—as well as TV shows—is dubbed into Italian. Some theatres in the bigger cities include original-language shows in their schedules, even if this is an uncommon practice. Subtitles may be available on late-night programs on mainstream TV channels. Pay-TV and streaming services provide films in the dubbed version as well as in their original language. Early in their careers, actors such as Alberto Sordi or Nino Manfredi worked extensively as dubbing actors. At a certain point, shooting scenes in MOS (motor only sync or motor only shot) was a common practice in Italian cinema; all dialogue was dubbed in post-production. A notable instance is The Good, the Bad, and the Ugly, in which all actors had to dub in their own voices. Because many films would feature multinational casts, dubbing became necessary to ensure dialogue would be comprehensible regardless of the dub language. The presence of foreign actors also meant that some directors would have actors recite gibberish or otherwise unrelated words, since the end goal was simply to have general lip movements over which to add dialogue. A typical example of this practice was La Strada, which starred two Americans, Anthony Quinn and Richard Basehart, in leading roles. Rather than have dialogue spoken phonetically or have multiple languages at the same time (which would require lines to be translated multiple times), actors would instead count numbers corresponding to the number of lines. Liliana Betti, assistant to director Federico Fellini, described the system as such: "Instead of lines, the actor has to count off numbers in their normal order. For instance, a line of fifteen words equals an enumeration of up to thirty. The actor merely counts till thirty: 1-2-3-4-5-6-7. etc." Fellini used this system, which he coined "numerological diction," in many of his films. Other directors adopted similar systems. Dubbing may also be used for artistic purposes.
It was common for even Italian-speaking performers to have their dialogue dubbed by separate voice actors, if their actual voice was thought to be unfitting or otherwise unsuitable. For example, in Django, lead actor Franco Nero was dubbed by Nando Gazzolo because he was thought to sound too youthful for the grizzled character he portrayed. Claudia Cardinale, one of the major actresses of the 1960s and 70s, had a heavy accent from her Tunisian background, and was likewise dubbed for the first decade of her career. This practice was generally phased out in the 1990s, with the widespread adoption of sync sound. Video games are generally either dubbed into Italian (for instance, the Assassin's Creed, Halo, and Harry Potter series) or released with the original audio tracks providing Italian subtitles. The most important Italian voice actors and actresses, as well as the main celebrities dubbed in their career, are listed below.

Spain
In Spain, practically all foreign television programs are shown dubbed in European Spanish, as are most films. Some dubbing actors have achieved popularity for their voices, such as Constantino Romero (who dubs Clint Eastwood, Darth Vader and Arnold Schwarzenegger's Terminator, among others) and Óscar Muñoz (the official European Spanish dub-over voice artist for Elijah Wood and Hayden Christensen). Currently, with the spread of digital terrestrial television, viewers can choose between the original and the dubbed soundtracks for most movies and television. In some communities such as Catalonia, Galicia and the Basque Country, some foreign programs are also dubbed into their own languages, different from European Spanish. Films from Spanish-speaking America shown in these communities are shown in their original language, while strong regional accents (from Spanish-speaking America or from Spain) may be dubbed in news and documentaries.

Germany, Austria and Switzerland
The Germanophone dubbing market is the largest in Europe. Germany has the most foreign-movie-dubbing studios per capita and per given area in the world, and according to the German newspaper Die Welt, 52% of all voice actors currently work in the German dubbing industry. In Germany and Austria, practically all films, shows, television series and foreign soap operas are shown in dubbed versions created for the German market. Dubbing films is a traditional and common practice in German-speaking Europe, since subtitles are not accepted and used as much as in other European countries. According to a European study, Austria is the country with the highest rejection rate (more than 70 percent) of subtitles, followed by Italy, Spain and Germany. In German-speaking markets, computer and video games feature German text menus and are dubbed into the German language if speaking parts exist. Unlike in Austria and Germany, cinemas in German-speaking Switzerland historically strongly preferred subtitled versions of foreign-language films. Swiss film distributors commissioned dual-language prints with both German and French subtitles as the primary version, with the dubbed version also shown. In recent years, however, there has been a shift towards dubbed versions, which now account for the majority of showings. Television broadcasts of foreign films and programming have historically been dubbed. Swiss and Austrian television stations have increasingly been broadcasting foreign-language movies and TV programs with multiple soundtracks, allowing the viewer to choose between the original language (e.g.
English) and the channel's local language (German, French, or Italian, according to the location). Although German-speaking voice actors play only a secondary role, they are still notable for providing familiar voices to well-known actors. Famous foreign actors are known and recognized for their German voice, and the German audience is used to them, so dubbing is also a matter of authenticity. However, in larger cities, there are theaters where movies can be seen in their original versions, as English has become somewhat more popular among young educated viewers. On German mainstream television, films are never broadcast with subtitles, but pay-per-view programming is often available in the original language. Subtitled niche and art films are sometimes aired on smaller networks. German-dubbed versions sometimes diverge greatly from the original, especially in adding humorous elements absent from the original. In extreme cases, such as The Persuaders!, the German-dubbed version was more successful than the English original. Often, translation adds sexually explicit gags the U.S. versions might not be allowed to use. For example, in Bewitched, the translators changed "The Do Not Disturb sign will hang on the door tonight" to "The only hanging thing tonight will be the Do Not Disturb sign". Some movies dubbed in Austria diverge from the German Standard version in addressing other people but only when the movies are dubbed into certain Austrian dialect versions. (Mr. and Mrs. are translated into Herr and Frau which is usually not translated in order to be in lip-sync). Sometimes even English pronounced first names are translated and are pronounced into the correct German equivalent (English name "Bert" became Southern German pronounced name "Bertl" which is an abbreviation for any name either beginning or even ending with "bert", e.g. "Berthold" or "Albert".) Some movies dubbed before reunification exist in different versions for the east and the west. They use different translations, and often differ in the style of dubbing. Some of the well-known German dubbing voice artists are listed below. Russia Russian television is generally dubbed, but some cases use the voice-over dub technique with only a couple of voice actors, with the original speech still audible underneath. In the Soviet Union, most foreign movies to be officially released were dubbed. Voice-over dub was invented in the Soviet Union in the 1980s when with the fall of the regime, many popular foreign movies, previously forbidden, or at least questionable under communist rule, started to flood in, in the form of low-quality home-copied videos. Being unofficial releases, they were dubbed in a very primitive way. For example, the translator spoke the text directly over the audio of a video being copied, using primitive equipment. The quality of the resulting dub was very low, the translated phrases were off-sync, interfering with the original voices, background sounds leaked into the track, translation was inaccurate and, most importantly, all dub voices were made by a single person who usually lacked the intonation of the original, making comprehension of some scenes quite difficult. This method of translation exerted a strong influence on Russian pop culture. Voices of translators became recognizable for generations. In modern Russia, the overdubbing technique is still used in many cases, although with vastly improved quality, and now with multiple voice actors dubbing different original voices. 
Video games are generally either dubbed into Russian (such as the Legend of Spyro trilogy, the Skylanders series, the Assassin's Creed saga, the Halo series, the Harry Potter series, etc.) or released with the original-language audio but with all of the text translated into Russian. The technique of non-voiceover dubbing, without the original speech still audible underneath, has also gained traction in Russia in the 21st century. Releases of films in cinemas are almost always dubbed in the Russian language. Television series are typically shown as a dubbed or voiceovered translation. Subtitles are not used at all. Some of the well-known Russian dubbing voice artists are listed below.

Slovakia
In Slovakia's home media market, Czech-dubbed versions are widely used; only children's films and a few exceptions (for example, Independence Day) that were dubbed for cinema are released with Slovak dubbing. Czech dubbing was also extensively used in the broadcasts of Slovak television channels, but since 2008 Slovak language laws require any newer shows (understood as the first television broadcast in Slovakia) to be provided with Slovak localization (dubbing or subtitles); since then, television broadcasts of films, TV series and cartoons have been dubbed into Slovak. Theatrical releases are generally subtitled, except for films with a young target audience.

Hungary
In Hungary, dubbing is almost universal. Almost every foreign movie or TV show released in Hungary is dubbed into Hungarian. The history of dubbing dates back to the 1950s, when the country was still under communist rule. One of the most iconic Hungarian dubs was of the American cartoon The Flintstones, with a local translation by József Romhányi. The Internetes Szinkron Adatbázis (ISzDB) is the largest Hungarian database for film dubs, with information for many live-action and animated films. On page 59 of the Eurobarometer, 84% of Hungarians said that they prefer dubbing over subtitles. In the socialist era, every film was dubbed with professional and mostly popular actors. Care was taken to make sure the same voice actor would lend his voice to the same original actor. In the early 1990s, as cinemas tried to keep up with showing newly released films, subtitling became dominant in the cinema. This, in turn, forced TV channels to make their own cheap versions of dubbed soundtracks for the movies they presented, resulting in a constant degrading of dubbing quality. Once this became customary, cinema distributors resumed the habit of dubbing for popular productions, presenting them in a below-average quality. However, every feature is presented with the original soundtrack in at least one cinema in large towns and cities. However, in Hungary, most documentary films and series (for example, those on Discovery Channel, National Geographic Channel) are made with voiceovers. Some old movies and series, or ones that provide non-translatable jokes and conversations (for example, the Mr. Bean television series), are shown only with subtitles. There is a more recent problem arising from dubbing included on DVD releases. Many generations have grown up with an original (and, by current technological standards, outdated) soundtrack, which is either technologically (mono or bad quality stereo sound) or legally (expired soundtrack license) unsuitable for a DVD release.
Many original features are released on DVD with a new soundtrack, which in some cases proves to be extremely unpopular, thus forcing DVD producers to include the original soundtrack. In some rare cases, the Hungarian soundtrack is left out altogether. This happens notably with Warner Home Video Hungary, which ignored the existence of Hungarian soundtracks completely, as they did not want to pay the licenses for the soundtracks to be included on their new DVD releases, which appear with improved picture quality, but very poor subtitling.
Ukraine
In Ukraine, since 2006, cinema releases are almost always dubbed into Ukrainian with the overdubbing technique and multiple voice actors dubbing different original voices, with a small percentage of art-house films and documentaries shown in the original language with Ukrainian subtitles. For television, TV channels usually release movies and TV shows with a Ukrainian voice-over, although certain high-profile films and TV shows are dubbed rather than voice-overed. In the past, Russian-language films, TV series, cartoons, animated series and TV programs were usually not dubbed but were shown with the original audio and Ukrainian subtitles. However, this practice has been slowly abandoned since the late 2010s: all children's films and cartoons, regardless of the original language (including Russian), are always dubbed into Ukrainian; examples of the first Russian cartoons dubbed into Ukrainian for cinematic release include The Snow Queen 2 (2015), A Warrior's Tail (2015), Volki i Ovtsy: Be-e-e-zumnoe prevrashenie (2016), Ivan Tsarevich i Seryy Volk 3 (2016), Bremenskie razboyniki (2016), The Snow Queen 3: Fire and Ice (2017), Fantastic Journey to OZ (2017) and Fixies: Top Secret (2017); the same trend is seen among Russian-language feature films for adults, with the first such films dubbed into Ukrainian including Battle for Sevastopol (2015), Hardcore Henry (2016) and The Duelist (2016).

Latvia and Lithuania
In Latvia and Lithuania, only children's movies get dubbed in the cinema, while many live-action movies for an older audience use voice-over.
In recent years, however, many cartoons have been dubbed into Latvian and Lithuanian for TV. But some other kids' shows, like SpongeBob SquarePants, use voice-over.

North America
United States and English-speaking Canada
In the United States and English-speaking Canada, live-action foreign films are usually shown in theaters with their original languages and English subtitles. This is because dubbed live-action movies have rarely done well at the United States box office since the 1980s. The 1982 United States theatrical release of Wolfgang Petersen's Das Boot was the last major release to go out in both original and English-dubbed versions, and the film's original version actually grossed much higher than the English-dubbed version. Later on, English-dubbed versions of international hits like Un indien dans la ville, Godzilla 2000, Anatomy, Pinocchio, The Return of Godzilla and High Tension flopped at United States box offices. When Miramax planned to release the English-dubbed versions of Shaolin Soccer and Hero in United States cinemas, their English-dubbed versions scored badly in test screenings in the United States, so Miramax finally released the films in United States cinemas with their original language. Still, English-dubbed movies have much better commercial potential in the ancillary market; therefore, more distributors would release live-action foreign films in theaters with their original languages (with English subtitles), then release both original versions and English-dubbed versions in the ancillary market. On the other hand, anime is almost always released in English-dubbed format, regardless of its content or target age group. The exceptions to this practice are either when an English dub has not been produced for the program (usually in the case of feature films) or when the program is being presented by a network that places importance on presenting it in its original format (as was the case when Turner Classic Movies aired several of Hayao Miyazaki's works, which were presented both dubbed and subtitled). Most anime DVDs contain options for original Japanese, Japanese with subtitles, and English-dubbed, except for a handful of series that have been heavily edited or Americanized. In addition, Disney has a policy that makes its directors undergo stages to perfect alignment of certain lip movements so the movie looks believable. A small number of British films have also been re-dubbed when released in the United States, due to the usage of dialects which Americans are not familiar with (for example, Kes and Trainspotting). However, British children's shows (such as Thomas and Friends and Bob the Builder) have historically always been re-dubbed with American voice actors in order to make the series more understandable for American children. This practice has slowly fallen out of use since the late 2000s. With the rising popularity of British children's shows such as Peppa Pig, which airs undubbed on Nick Jr., fewer and fewer British children's shows have been broadcast with American re-dubs. The most recent of such re-dubs is season 9 of Fireman Sam, whose dub is currently an Amazon Prime exclusive; on linear TV, the show airs undubbed. Conversely, British programs shown in Canada are not re-dubbed. Some live-action television shows shown in the US have Spanish dubs. These are accessible through the SAP (secondary audio program) function of the television unit.
French-speaking Canada In Quebec, Canada, most films and TV programs in English are dubbed into Standard French, occasionally with Quebec French idiosyncrasies. They speak with a mixed accent, they pronounce /ɛ̃/ with a Parisian accent, but they pronounce "â" and "ê" with a Quebec accent: grâce [ɡʁɑːs] and être [ɛːtʁ̥]. Occasionally, the dubbing of a series or a movie, such as The Simpsons, is made using the more widely spoken joual variety of Quebec French. Dubbing has the advantage of making children's films and TV series more comprehensible to younger audiences. However, many bilingual Québécois prefer subtitling, since they would understand some or all of the original audio. In addition, all films are shown in English, as well in certain theaters (especially in major cities and English-speaking areas such as the West Island), and some theatres, such as the Scotiabank Cinema Montreal, show only movies in English. Most American television series are only available in English on DVD, or on English-language channels, but some of the more popular ones have French dubs shown on mainstream networks, and are released in French on DVD as well, sometimes separately from an English-only version. Formerly, all French-language dubbed films in Quebec were imported from France and some still are. Such a practice was criticized by former politician Mario Dumont after he took his children to see the Parisian French dub of Shrek the Third, which Dumont found incomprehensible. After his complaints and a proposed bill, Bee Movie, the film from DreamWorks Animation, was dubbed in Quebec, making it the studio's first animated film to have a Quebec French dub, as all DreamWorks Animation films had previously been dubbed in France. In terms of Disney, the first Disney animated film to be dubbed in Quebec was Oliver and Company. The Disney Renaissance films were also dubbed in Quebec except for The Rescuers Down Under, Beauty and the Beast, and The Lion King. In addition, because Canadian viewers usually find Quebec French more comprehensible than other dialects of the language, some older film series that had the French-language versions of previous installments dubbed in France have had later ones dubbed in Quebec, often creating inconsistencies within the French version of the series' canon. Lucasfilm's Star Wars and Indiana Jones series are examples. Both series had films released in the 1970s and 1980s, with no Québécois French dubbed versions; instead, the Parisian French versions, with altered character and object names and terms, were distributed in the province. However, later films in both series released 1999 and later were dubbed in Quebec, using different voice actors and "reversing" name changes made in France's dubbings due to the change in studio. Latin America Spanish-speaking countries For Spanish-speaking countries, all foreign-language programs, films, cartoons and documentaries shown on free-to-air TV networks (i.e. Discovery Kids) are dubbed into Standard Spanish, while broadcasts on cable and satellite pan-regional channels are either dubbed or subtitled. In theaters, children's movies and most blockbuster films are dubbed into Standard Spanish also known as Mexican Spanish, and are sometimes further dubbed into regional dialects of Spanish where they are released. Mexico In Mexico, by law, films shown in theaters must be shown in their original version. Films in languages other than Spanish are usually subtitled. 
Only educational documentaries and movies rated for children (some shows aired on PBS or PBS Kids), as well as some movies that are expected to have a wide audience (for example, The Lord of the Rings: The Return of the King or The Avengers), may be dubbed, but this is not compulsory, and some animated films are shown in theaters in both dubbed and subtitled versions (for instance, some DreamWorks productions). Nonetheless, a recent trend in several cinemas is to offer the dubbed versions only, with a stark decrease in the showing of the original ones. Dubbing must be made in Mexico by Mexican nationals or foreigners residing in Mexico. Still, several programs that are shown on pay TV are dubbed in other countries like Argentina, Chile, Colombia and Venezuela. Most movies released on DVD feature neutral Spanish as a language option, and sometimes feature a specific dub for Mexican audiences (for example, Rio). Foreign programs are dubbed on broadcast TV, while on pay TV most shows and movies are subtitled. In a similar way to cinemas, in the last few years many channels on pay TV have begun to broadcast programs and films only in their dubbed version. Dubbing became very popular in the 1990s with the rise in popularity of anime in Mexico. Some voice actors have become celebrities and are always identified with specific characters, such as Mario Castañeda (who became popular by dubbing Goku in Dragon Ball Z) or Humberto Vélez (who dubbed Homer Simpson in the first 15 seasons of The Simpsons). The popularity of pay TV has allowed people to view several series in their original language rather than dubbed. Dubbing has been criticized for the use of TV or movie stars as voice actors (such as Ricky Martin in Disney's Hercules, or Eugenio Derbez in DreamWorks' Shrek), or for the incorrect use of local popular culture that sometimes creates unintentional jokes or breaks the feeling of the original work (such as translating Sheldon Cooper's "Bazinga!" to "¡Vacilón!"). Several video games have been dubbed into neutral Spanish, rather than European Spanish, in Mexico (such as the Gears of War series, Halo 3, Infamous 2 and others). Sony recently announced that more games (such as God of War: Ascension) will be dubbed into neutral Spanish.

Peru
In Peru, all foreign series, movies, and animated programming are shown dubbed in Latin American Spanish, with dubs imported from Argentina, Mexico, Chile, Colombia and Venezuela on terrestrial and pay television. Most movies intended for kids are offered as dub-only movies, while most films aimed at older audiences are offered both dubbed and subtitled in Spanish. At most theaters, subtitled versions of kids' films are, on rare occasions, shown at nighttime. Most pay-TV channels show both a dubbed and a subtitled version of every film they broadcast, offering a separate subtitle track and a second audio track in English. Since the late 2000s, there has been an increase in the number of people preferring subtitled films and series rather than dubbed ones, as Peruvian viewers tend to get used to the original versions. Peru did not use to produce its own dubs, since dubbing studios never existed in the country until 2016, when the company "Big Bang Films" started to dub movies and series; since 2014, however, a group of dubbing actors called "Torre A Doblaje" has provided dubbing and voice-over services.
Brazil
In Brazil, foreign programs are invariably dubbed into Brazilian Portuguese on free-to-air TV, with only a few exceptions. Films shown at cinemas are generally offered with both subtitled and dubbed versions, with dubbing frequently being the only choice for children's movies. Subtitling was primarily for adult-audience movies until 2012. Since then, dubbed versions have also become available for all ages. As a result, in recent years, more cinemas have opened in Brazil, attracting new audiences to the cinema who prefer dubbing. According to a Datafolha survey, 56% of Brazilian movie theater audiences prefer to watch dubbed movies. Most of the dubbing studios in Brazil are in the cities of Rio de Janeiro and São Paulo. The first film to be dubbed in Brazil was the Disney animation "Snow White and the Seven Dwarfs" in 1938. By the end of the 1950s, most of the movies, TV series and cartoons on television in Brazil were shown with their original sound and subtitles. However, in 1961, a decree of President Jânio Quadros ruled that all foreign productions on television should be dubbed. This measure boosted the growth of dubbing in Brazil and has led to several dubbing studios since then. The biggest dubbing studio in Brazil was Herbert Richers, headquartered in Rio de Janeiro and closed in 2009. At its peak in the 1980s and 1990s, the Herbert Richers studios dubbed about 70% of the productions shown in Brazilian cinemas. In the 1990s, with Saint Seiya, Dragon Ball and other anime shows becoming popular on Brazilian TV, voice actors and the dubbing career gained greater prominence in Brazilian culture. Actors like Hermes Baroli (Brazilian dubber of Pegasus Seiya in Saint Seiya and of actors like Ashton Kutcher), Marco Ribeiro (Brazilian dubber of many actors like Tom Hanks, Jim Carrey and Robert Downey Jr., and of Yusuke Urameshi from the anime Yu Yu Hakusho) and Wendel Bezerra (Brazilian dubber of Goku in Dragon Ball Z and SpongeBob in SpongeBob SquarePants) are recognized for their most notable roles. Pay TV commonly offers both dubbed and subtitled movies, with statistics showing that dubbed versions are becoming predominant. Most DVD and Blu-ray releases usually feature Portuguese, Spanish, and the original audio along with subtitles in native languages. Most video games are dubbed in Brazilian Portuguese rather than having European Portuguese dubs alone. Games such as Halo 3, God of War: Ascension, inFamous 2, Assassin's Creed III, Skylanders: Spyro's Adventure, World of Warcraft and others are dubbed in Brazilian Portuguese. This is because, despite the dropping of the dubbing law in Portugal in 1994, most companies in that country use Brazilian Portuguese because of traditional usage during the days of the dubbing rule, along with these dubbings being more marketable than European Portuguese. A list showcasing Brazilian Portuguese voice artists that dub for actors and actresses is displayed here. However, there can also be different official dub artists for certain regions within Brazil. For unknown (probably technical) reasons, the Brazilian Portuguese dub credits of some shows and cartoons from Viacom or Turner/Time Warner channels are shown in Latin America on Spanish-dubbed series.
Asia
China
China has a long tradition of dubbing foreign films into Mandarin Chinese, starting in the 1930s.
While during the Republic of China era Western motion pictures may have been imported and dubbed into Chinese, since 1950 Soviet movies, dubbed primarily in Shanghai, became the main import. Beginning in the late 1970s, in addition to films, popular TV series from the United States, Japan, Brazil, and Mexico were also dubbed. The Shanghai Film Dubbing Studio has been the most well-known studio in the film dubbing industry in China. In order to generate high-quality products, they divide each film into short segments, each one lasting only a few minutes, and then work on the segments one by one. In addition to conveying the correct meaning in translation, they make a tremendous effort to match the lips of the actors to the dialogue. As a result, the dubbing in these films generally is not readily detected. The cast of dubbers is acknowledged at the end of a dubbed film. Several dubbing actors and actresses of the Shanghai Film Dubbing Studio have become well-known celebrities, such as Qiu Yuefeng, Bi Ke, Li Zi, and Liu Guangning. In recent years, however, especially in the larger cities on the east and south coasts, it has become increasingly common for movie theaters to show subtitled versions with the original soundtracks intact. Motion pictures are also dubbed into the languages of some of China's autonomous regions. Notably, the Translation Department of the Tibetan Autonomous Region Movie Company (西藏自治区电影公司译制科) has been dubbing movies into the Tibetan language since the 1960s. In the early decades, it would dub 25 to 30 movies each year, the number rising to 60–75 by the early 2010s. Motion pictures are dubbed for China's Mongol- and Uyghur-speaking markets as well. Chinese television dramas are often dubbed into Standard Mandarin by professional voice actors for a number of reasons.
Taiwan
Taiwan dubs some foreign films and TV series in Mandarin Chinese. Until the mid-1990s, the major national terrestrial channels both dubbed and subtitled all foreign programs and films and, for some popular programs, the original voices were offered in a second audio program. Gradually, however, both terrestrial and cable channels stopped dubbing prime-time U.S. shows and films, while subtitling continued. Since the 2000s, dubbing practice has differed depending on the nature and origin of the program. Animations, children's shows and some educational programs on PTS are mostly dubbed. English live-action movies and shows are not dubbed in theaters or on television. Japanese TV dramas are no longer dubbed, while Korean dramas, Hong Kong dramas and dramas from other Asian countries are still often dubbed. Korean variety shows are not dubbed. Japanese and Korean films on Asian movie channels are still dubbed. In theaters, most foreign films are not dubbed, while animated films and some films meant for children offer a dubbed version. Hong Kong live-action films have a long tradition of being dubbed into Mandarin, while more famous films offer a Cantonese version.
Hong Kong
In Hong Kong, foreign television programs, except for English-language and Mandarin television programs, are dubbed in Cantonese. English-language and Mandarin programs are generally shown in their original version with subtitles. Foreign films, such as most live-action and animated films (such as anime and Disney), are usually dubbed in Cantonese. However, most cinemas also offer subtitled versions of English-language films.
For the most part, foreign films and TV programs, both live-action and animated, are generally dubbed in both Mandarin and Cantonese. For example, in The Lord of the Rings film series, Elijah Wood's character Frodo Baggins was dubbed into Mandarin by Jiang Guangtao for China and Taiwan. For the Cantonese localization, there were actually two dubs for Hong Kong and Macau: in the first Cantonese dub, he was voiced by Leung Wai Tak, while in a second Cantonese dub he was voiced by Bosco Tang. A list of Mandarin and Cantonese voice artists that dub for actors is shown here.
Indonesia
Unlike movie theaters in most Asian countries, those in Indonesia show foreign movies with subtitles. Then, a few months or years later, those movies appear on TV either dubbed in Indonesian or subtitled. Kids' shows are mostly dubbed, though even in cartoon series, songs typically are not dubbed; in big movies such as Disney films, however, both speaking and singing voices are cast for the Indonesian dub, even though it may take a few months or even years for the movie to come out. Adult films are mostly subtitled, but they can sometimes be dubbed as well, and because there are not many Indonesian voice actors, especially in dubbed movies, three characters can have the exact same voice. Reality shows, including Malay-language TV series (like Upin & Ipin), are not dubbed in Indonesian, because they are not scripted in the way movies and TV shows are, so if they appear on TV, they appear with subtitles.
Israel
In Israel, only children's movies and TV programming are dubbed in Hebrew. In programs aimed at teenagers and adults, dubbing is never considered for translation, not only because of its high costs, but also because the audience is mainly multi-lingual. Most viewers in Israel speak at least one European language in addition to Hebrew, and a large part of the audience also speaks Arabic. Therefore, most viewers prefer to hear the original soundtrack, aided by Hebrew subtitles. Another problem is that dubbing does not allow for translation into two different languages simultaneously, as is often the case of Israeli television channels that use subtitles in Hebrew and another language (like Russian) simultaneously.
Japan
In Japan, many television programs appear on television subtitled, or dubbed if they are intended for children. When the American film Morocco was released in Japan in 1931, subtitles became the mainstream method of translating TV programs and films in Japan. Later, around the 1950s, foreign television programs and films began to be shown dubbed in Japanese on television. The first ones to be dubbed into Japanese were the 1940s Superman cartoons, in 1955. Due to the lack of domestically produced video software for television, video software was imported from abroad. When a television program was shown on television, it was mostly dubbed. Subtitles were constrained by the character limits of small, low-resolution TV screens, and the method was unsuitable for elderly viewers and those with poor eyesight or literacy, unlike audio dubbing. Presently, TV shows and movies (both those aimed at all ages and at adults only) are shown dubbed with the original language and Japanese subtitles, while the original-language option is provided when the same film is released on VHS, DVD and Blu-ray. Laserdisc releases of Hollywood films were almost always subtitled, films like Godzilla: King of the Monsters among them. Adult cartoons such as South Park and The Simpsons are shown dubbed in Japanese on the WOWOW TV channel.
South Park: Bigger, Longer and Uncut was dubbed in Japanese by different actors instead of the same Japanese dubbing actors from the cartoon, because it was handled by a different Japanese dubbing studio and was marketed for the Kansai market. In Japanese theaters, foreign-language movies, except those intended for children, are usually shown in their original version with Japanese subtitles. Foreign films usually have multiple Japanese-dubbed versions, with several different Japanese voice actors, depending upon which TV station airs them. NHK, Nippon TV, Fuji TV, TV Asahi, and TBS usually follow this practice, as do software releases on VHS, Laserdisc, DVD and Blu-ray. As for recent foreign films being released, there are now some film theaters in Japan that show both dubbed and subtitled editions. On 22 June 2009, 20th Century Fox's Japanese division opened up a Blu-ray lineup known as "Emperor of Dubbing", dedicated to having multiple Japanese dubs of popular English-language films (mostly Hollywood films) as well as retaining the original scripts, releasing them altogether in special Blu-ray releases. These also feature a new dub created exclusively for that release as a director's cut, or a new dub made with a better surround sound mix to match that of the original English mix (as most older Japanese dubbings were made on mono mixes to be aired on TV). Other companies have followed this practice, such as Universal Pictures's Japanese division NBCUniversal Entertainment Japan, which opened up "Reprint of Memories", and Warner Bros Japan, with "Power of Dubbing"; these act in a similar way by re-packaging all the multiple Japanese dubs of popular films and putting them out as special Blu-ray releases. "Japanese dub-over artists" provide the voices for certain performers, such as those listed in the following table:
South Korea
In South Korea, anime that are imported from Japan are generally shown dubbed in Korean on television. However, some anime are censored, with Japanese lettering or content edited for a suitable Korean audience. Western cartoons are dubbed in Korean as well, such as Nickelodeon cartoons like SpongeBob SquarePants and Danny Phantom. Several English-language (mostly American) live-action films are dubbed in Korean, but they are not shown in theaters. Instead, they are only broadcast on South Korean television networks (KBS, MBC, SBS, EBS), while DVD import releases of these films are shown with Korean subtitles, such as The Wizard of Oz, Mary Poppins, the Star Wars films, and Avatar. This may be due to the fact that the six major American film studios may not own any rights to the Korean dubs of their live-action films that the Korean television networks have dubbed and aired. Even if they don't own the rights, Korean or non-Korean viewers can record Korean-dubbed live-action films from television broadcasts onto DVDs with DVRs. Sometimes, video games are dubbed in Korean. Examples would be the Halo series, the Jak & Daxter series, and the God of War series. For the Halo games, Lee Jeong Gu provides the Korean voice of the main protagonist Master Chief (replacing Steve Downes's voice), while Kim So Hyeong voices Chieftain Tartarus, one of the main antagonists (replacing Kevin Michael Richardson's voice).
The following South Korean voice-over artists are usually identified with the following actors:
Thailand
In Thailand, foreign television programs are dubbed in Thai, but the original soundtrack is often simultaneously carried on a NICAM audio track on terrestrial broadcasts, and on alternate audio tracks on satellite broadcasts. Previously, terrestrial stations simulcasted the original soundtrack on the radio. On pay-TV, many channels carry foreign-language movies and television programs with subtitles. Movie theaters in Bangkok and some larger cities show both the subtitled version and the dubbed version of English-language movies. In big cities like Bangkok, Thai-language movies have English subtitles. This list features a collection of Thai voice actors and actresses that have dubbed for these featured performers.
Philippines
In the Philippines, media practitioners generally have mixed practices regarding whether to dub television programs or films, even within the same kind of medium. In general, the decision whether to dub a video production depends on a variety of factors, such as the target audience of the channel or programming bloc on which the feature will be aired, its genre, and/or outlet of transmission (e.g. TV or film, free or pay-TV).
Free-to-air TV
The prevalence of media needing to be dubbed has resulted in a talent pool that is very capable of syncing voice to lip, especially for shows broadcast by the country's three largest networks. It is not uncommon in the Filipino dub industry to have most of the voices in a series dubbed by only a handful of voice talents. Programs originally in English used to usually air in their original language on free-to-air television. Since the late 1990s/early 2000s, however, more originally English-language programs that air on major free-to-air networks (i.e. 5, ABS-CBN, GMA) have been dubbed into Filipino. Even the former Studio 23 (now S+A), once known for airing programs in English, adopted Filipino-language dubbing for some of its foreign programs. Children's programs from the cable networks Nickelodeon, Cartoon Network, and Disney Channel (though not all PBS Kids shows) shown on 5, GMA, or ABS-CBN have long been dubbed into Filipino or another Philippine regional language. Animated Disney films are often dubbed in Filipino except for the singing scenes, which are shown in their original language (though in recent years there has been an increase in the number of Disney musicals, such as Frozen, having their songs translated as well). GMA News TV airs some documentaries, movies, and reality series originally shown in English dubbed into Filipino. Dubbing is less common on smaller free-to-air networks such as ETC and the former RPN 9 (now CNN Philippines), where the original-language version of the program is aired. Dramas from Asia (particularly Greater China and Korea) and Latin America (called Asianovelas and Mexicanovelas, respectively) have always been dubbed into Filipino or another Philippine regional language, and each program from these genres features its own unique set of Filipino-speaking voice actors.
Pay TV
The original-language version of TV programs is also usually available on cable/satellite channels such as Fox Life, Fox, and AXN. However, some pay-TV channels specialize in showing foreign shows and films dubbed into Filipino. Cinema One, ABS-CBN's cable movie channel, shows some films originally in languages other than English dubbed into Filipino.
Nat Geo Wild airs most programs dubbed into Filipino for Philippine audiences, being one of the few cable channels to do so. Tagalized Movie Channel & Tag air Hollywood and Asian movies dubbed in Filipino. Fox Filipino airs some English, Latin, and Asian series dubbed in Filipino, such as The Walking Dead, Devious Maids, La Teniente, Kdabra, and some selected programs from Channel M. The defunct channel HERO TV, which focused on anime and tokusatsu shows and is now a web portal, dubbed all its foreign programs into Filipino. This is in contrast to Animax, whose anime programs are dubbed in English.
Cinema
Foreign films, especially English films shown in local cinemas, are almost always shown in their original language. Non-English foreign films make use of English subtitles. Unlike in other countries, children's films originally in English are not dubbed in cinemas. A list of voice actors, together with the performers they dub into Filipino, is given here.
India
In India, where "foreign films" are synonymous with "Hollywood films", dubbing is done mostly in Hindi, Tamil and Telugu. Dubbing is rarely done in the other major Indian languages, namely Malayalam and Bengali, due to the lack of a significant market. Despite this, some Kannada and Malayalam dubs of children's television programs can be seen on the Sun TV channel. The dubbed versions are released into the towns and lower-tier settlements of the respective states (where English penetration is low), often with the English-language originals released in the metropolitan areas. In all other states, the English originals are released along with the dubbed versions, where the box-office collections of the dubbed versions are often better than those of the originals. Spider-Man 3 was also dubbed into the Bhojpuri language, which is popular in eastern India, in addition to Hindi, Tamil and Telugu. A Good Day to Die Hard, the most recent installment in the Die Hard franchise, was the first ever Hollywood film to receive a Punjabi-language dub as well. Most TV channels mention neither the Indian-language dubbing credits nor the dubbing staff at the end of the original ending credits, since changing the credits to list the voice actors in place of the original actors involves a huge budget for modification, making it somewhat difficult to find information on the dubbed versions. The same situation is encountered for films. Sometimes foreign programs and films receive more than one dub; for example, Jumanji, Dragonheart and Van Helsing each have two Hindi dubs. Information on the Hindi, Tamil and Telugu voice actors who have done the voices for specific actors and for their roles in foreign films and television programs is published in local Indian data magazines, for those that are involved in the dubbing industry in India. On a few occasions, however, some foreign productions do credit the dubbing cast, such as animated films like the Barbie films and some Disney films. Disney Channel original series released on DVD with their Hindi dubs show a list of the artists in the Hindi dub credits, after the original ending credits. Theatrical releases and VCD releases of foreign films do not credit the dubbing cast or staff. The DVD releases, however, do have credits for the dubbing staff, if they are released in multilingual versions. Recently, information on the dubbing staff of foreign productions has been expanding due to high demand from people wanting to know the voice actors behind characters in foreign works.
Large dubbing studios in India include Sound & Vision India, Main Frame Software Communications, Visual Reality, ZamZam Productions, Treasure Tower International, Blue Whale Entertainment, Jai Hand Entertainment, Sugar Mediaz, Rudra Sound Solutionz and voxcom.
Pakistan
In Pakistan, foreign films and cartoons are not normally dubbed locally. Instead, foreign films, anime and cartoons, such as those shown on Nickelodeon Pakistan and Cartoon Network Pakistan, are dubbed in Hindi in India, as Hindi and Urdu, the national language of Pakistan, are mutually intelligible. However, soap operas from Turkey are now dubbed in Urdu and have gained increased popularity at the expense of Indian soap operas in Hindi. This has led to protests from local producers that these are a threat to Pakistan's television industry, with local productions being moved out of peak viewing time or dropped altogether. Similarly, political leaders have expressed concerns over their content, given Turkey's less conservative culture.
Vietnam
In Vietnam, foreign-language films and programs are subtitled on television in Vietnamese. They were not dubbed until 1985, but were briefly translated by a speaker before commercial breaks. Rio was considered to be the very first American Hollywood film to be entirely dubbed in Vietnamese. Since then, children's films have been released dubbed in theaters. HTV3 has dubbed television programs for children, including Ben 10 and Ned's Declassified School Survival Guide, using various voice actors to dub over the character roles. Soon afterwards, more programs started to get dubbed. HTV3 also offers anime dubbed into Vietnamese. Pokémon got a Vietnamese dub in early 2014 on HTV3, starting with the Best Wishes series. However, due to a controversy regarding the Pokémon cries being re-dubbed even though all characters kept their Japanese names, it was switched to VTV2 in September 2015, when the XY series debuted. Sailor Moon was also dubbed for HTV3 in early 2015.
Singapore
In multilingual Singapore, dubbing is rare for Western programs. English-language programs on the free-to-air terrestrial channels are usually subtitled in Chinese or Malay. Chinese, Malay and Tamil programs (except for news bulletins) usually have subtitles in English and the original language during prime time hours. Dual-sound programs, such as Korean and Japanese dramas, offer sound in the original languages with subtitles, Mandarin-dubbed and subtitled, or English-dubbed. The deliberate policy to encourage Mandarin among citizens made it required by law for programs in other Chinese dialects (Hokkien, Cantonese and Teochew) to be dubbed into Mandarin, with the exception of traditional operas. Cantonese and Hokkien shows from Hong Kong and Taiwan, respectively, are available on VCD and DVD. In a recent development, news bulletins are subtitled.
Iran
In Iran, international films and television programs are dubbed in Persian. Dubbing began in 1946 with the advent of movies and cinemas in the country. Since then, foreign movies have always been dubbed for the cinema and TV, and foreign films and television programs are also subtitled in Persian. Using various voice actors and adding local hints and witticisms to the original contents, dubbing played a major role in attracting people to the cinemas and developing an interest in other cultures. The dubbing art in Iran reached its apex during the 1960s and 1970s with the inflow of American, European and Hindi movies.
The most famous musicals of the time, such as My Fair Lady and The Sound of Music, were translated, adjusted and performed in Persian by the voice artists. Since the 1990s, for political reasons and under pressure from the state, the dubbing industry has declined, with movies dubbed only for the state TV channels. During recent years, DVDs with Persian subtitles have found a market among viewers for the same reason, but most people still prefer the Persian-speaking dubbed versions. Recently, privately operated companies started dubbing TV series by hiring famous dubbers. However, the dubs which these companies make are often unauthorized and vary greatly in terms of quality. A list of Persian voice actors and their actor counterparts is given here.
Georgia
In Georgia, original soundtracks are kept in films and TV series, but with voice-over translation. There are exceptions, such as some children's cartoons.
Azerbaijan
In Azerbaijan, dubbing is rare, as most Azerbaijani channels, such as ARB Günəş, air voice-overs or Azerbaijani originals.
Western Asia
See below.
Africa
North Africa, Western Asia
In Algeria, Morocco, and Tunisia, most foreign movies (especially Hollywood productions) are shown dubbed in French. These movies are usually imported directly from French film distributors. The choice of movies dubbed into French can be explained by the widespread use of the French language. Another important factor is that local theaters and private media companies do not dub in local languages, both to avoid the high costs and because of the lack of expertise and demand. Beginning in the 1980s, dubbed series and movies for children in Modern Standard Arabic became a popular choice among most TV channels, cinemas and VHS/DVD stores. However, dubbed films are still imported, and dubbing is performed in the Levant countries with a strong tradition of dubbing (mainly Syria, Lebanon and Jordan). Egypt was the first Arab country to dub Disney movies, beginning in 1975, and did so exclusively in Egyptian Arabic rather than Modern Standard Arabic until 2011; since then, many other companies have started dubbing their productions in this dialect. In the Arabic-speaking countries, children's shows (mainly cartoons and kids' sitcoms) are dubbed in Arabic; otherwise, Arabic subtitles are used. The main exceptions were telenovelas, dubbed in Standard Arabic or in dialects, and Turkish series, most notably Gümüş, dubbed in Syrian Arabic. An example of Arabic voice actors that dub for certain performers is Safi Mohammed for Elijah Wood. In Tunisia, the Tunisia National Television (TNT), the public broadcaster of Tunisia, is not allowed to show any content in any language other than Arabic, which forces it to broadcast only dubbed content (this restriction was recently removed for commercials). During the 1970s and 1980s, TNT (known as ERTT at the time) started dubbing famous cartoons in Tunisian and Standard Arabic. However, in the private sector, television channels are not subject to the language rule.
South Africa
In South Africa, many television programs were dubbed in Afrikaans, with the original soundtrack (usually in English, but sometimes Dutch or German) "simulcast" in FM stereo on Radio 2000. These included US series such as The Six Million Dollar Man (Steve Austin: Die Man van Staal), Miami Vice (Misdaad in Miami), Beverly Hills 90210, and the German detective series Derrick.
As a result of the boycott by the British actors' union Equity, which banned the sale of most British television programs, the puppet series The Adventures of Rupert Bear was dubbed into South African English, as the original voices had been recorded by Equity voice artists. This practice has declined as a result of the reduction of airtime for the language on SABC TV, and the increase of locally produced material in Afrikaans on other channels like KykNet. Similarly, many programs, such as The Jeffersons, were dubbed into Zulu, but this has also declined as local drama production has increased. However, some animated films, such as Maya the Bee, have been dubbed in both Afrikaans and Zulu by local artists. In 2018, eExtra began showing the Turkish drama series Paramparça dubbed in Afrikaans as Gebroke Harte or "Broken Hearts", the first foreign drama to be dubbed in the language for twenty years. eExtra has since aired several other Turkish series dubbed in Afrikaans, including Kara Sevda (as Bittersoet), Istanbullu Gelin (as Deur dik en deun), Yasak Elma (as Doodsondes), and Elif.
Uganda
Uganda's own film industry is fairly small, and foreign movies are commonly watched. The English soundtrack is often accompanied by a Luganda translation and comments, provided by a Ugandan "video jockey" (VJ). The VJ's interpreting and narration may be available in recorded form or live.
Oceania
In common with other English-speaking countries, there has traditionally been little dubbing in Australia, with foreign-language television programs and films being shown (usually on SBS) with subtitles or with English dubs produced in other countries. This has also been the case in New Zealand, but the Māori Television Service, launched in 2004, has dubbed animated films into Māori. However, some TV commercials from foreign countries are dubbed, even if the original commercial came from another English-speaking country. Moreover, the off-screen narration portions of some non-fiction programs originating from the UK or North America are re-dubbed by Australian voice talents to relay information in expressions that Australians can understand more easily.
Alternatives
Subtitles
Subtitles can be used instead of dubbing, as different countries have different traditions regarding the choice between dubbing and subtitling. On DVDs with higher translation budgets, the option for both types will often be provided to account for individual preferences; purists often demand subtitles. For small markets (small language area or films for a select audience), subtitling is more suitable, because it is cheaper. In the case of films for small children who cannot yet read, or do not read fast enough, dubbing is necessary. In most English-speaking countries, dubbing is comparatively rare. In Israel, some programs need to be comprehensible to speakers of both Russian and Hebrew. This cannot be accomplished with dubbing, so subtitling is much more commonplace, sometimes even with subtitles in multiple languages, with the soundtrack remaining in the original language, usually English. The same applies to certain television shows in Finland, where Swedish and Finnish are both official languages. In the Netherlands, Flanders, the Nordic countries, Estonia and Portugal, films and television programs are shown in the original language (usually English) with subtitles, and only cartoons and children's movies and programs are dubbed, such as the Harry Potter series, Finding Nemo, Shrek, Charlie and the Chocolate Factory and others.
Cinemas usually show both a dubbed version and one with subtitles for this kind of movie, with the subtitled version shown later in the evening. In Portugal, one terrestrial channel, TVI, dubbed U.S. series like Dawson's Creek into Portuguese. RTP also transmitted Friends in a dubbed version, but it was poorly received and later re-aired in a subtitled version. Cartoons, on the other hand, are usually dubbed, sometimes by well-known actors, even on TV. Animated movies are usually released to the cinemas in both subtitled and dubbed versions. In Argentina and Venezuela, terrestrial channels air films and TV series in a dubbed version, as demanded by law. However, those same series can be seen on cable channels at more accessible time-slots in their subtitled version, and usually before they are shown on open TV. In contrast, the series The Simpsons is aired in its Mexican Spanish-dubbed version both on terrestrial television and on the cable station Fox, which broadcasts the series for the area. Although the first season of the series appeared with subtitles, this was not continued for the following seasons.
Dubbing and subtitling
In Bulgaria, television series are dubbed, but most television channels use subtitles for action and drama movies. AXN uses subtitles for its series, but as of 2008 emphasizes dubbing. Only Diema channels dub all programs. Movies in theaters, with the exception of films for children, use dubbing and subtitles. Dubbing of television programs is usually done using voiceovers, usually voiced by professional actors, who try to give each character a different voice by using appropriate intonations. Dubbing with synchronized voices is rarely used, mostly for animated films. Mrs. Doubtfire is a rare example of a feature film dubbed this way on BNT Channel 1, though a subtitled version is currently shown on other channels. Walt Disney Television's animated series (such as DuckTales, Darkwing Duck, and Timon & Pumbaa) were only aired with synchronized Bulgarian voices on BNT Channel 1 until 2005, but then the Disney shows were canceled. When airing of Disney series resumed on Nova Television and Jetix in 2008, voiceovers were used, but Disney animated-movie translations still use synchronized voices. Voiceover dubbing is not used in theatrical releases. The Bulgarian film industry law requires all children's films to be dubbed, not subtitled. Nova Television dubbed and aired the Pokémon anime with synchronized voices. Now, the show is airing on Disney Channel, also in a synchronized form. Netflix provides both subtitles and dubbed audio with its foreign-language shows, including Brazil's dystopian "3%" and the German thriller "Dark". Viewer testing indicates that its audience is more likely to finish watching a series if they choose to view it with dubbed audio rather than translated subtitles. Netflix now streams its foreign-language content with dubbed audio as the default in an effort to increase viewer retention.
General use
Dubbing is also used in applications and genres other than traditional film, including video games, television, and pornographic films.
Video games
Many video games originally produced in North America, Japan, and PAL countries are dubbed into foreign languages for release in areas such as Europe and Australia, especially for video games that place a heavy emphasis on dialogue.
Because characters' mouth movements can be part of the game's code, lip sync is sometimes achieved by re-coding the mouth movements to match the dialogue in the new language. The Source engine automatically generates lip-sync data, making it easier for games to be localized. To achieve synchronization when animations are intended only for the source language, localized content is mostly recorded using techniques borrowed from movie dubbing (such as rythmo band) or, when images are not available, localized dubbing is done using the source audio as a reference. Sound-synch is a method in which localized audio is recorded to match the length and internal pauses of the source content. For the European version of a video game, the on-screen text of the game is available in various languages and, in many cases, the dialogue is dubbed into each respective language as well. The North American version of any game is always available in English, with translated text and dubbed dialogue, if necessary, in other languages, especially if the North American version of the game contains the same data as the European version. Several Japanese games, such as those in the Dynasty Warriors and Soul series, are released with both the original Japanese audio and the English dub included.
Television
Dubbing is occasionally used on network television broadcasts of films that contain dialogue that the network executives or censors have decided to replace. This is usually done to remove profanity. In most cases, the original actor does not perform this duty, but an actor with a similar voice reads the changes. The results are sometimes seamless, but, in many cases, the voice of the replacement actor sounds nothing like the original performer, which becomes particularly noticeable when extensive dialogue must be replaced. Also often easy to notice is the sudden absence of background sounds in the movie during the dubbed dialogue. Among the films considered notorious for using substitute actors that sound very different from their theatrical counterparts are the Smokey and the Bandit and the Die Hard film series, as shown on broadcasters such as TBS. In the case of Smokey and the Bandit, extensive dubbing was done for the first network airing on ABC Television in 1978, especially for Jackie Gleason's character, Buford T. Justice. The dubbing of his phrase "sombitch" (son of a bitch) became "scum bum," which became a catchphrase of the time. Dubbing is commonly used in science fiction television as well. Sound generated by effects equipment such as animatronic puppets, or by actors' movements on elaborate multi-level plywood sets (for example, starship bridges or other command centers), will quite often make the original character dialogue unusable. Stargate and Farscape are two prime examples where ADR is used heavily to produce usable audio. Since some anime series contain profanity, the studios recording the English dubs often re-record certain lines if a series or movie is going to be broadcast on Cartoon Network, removing references to death and hell as well. Some companies will offer both an edited and an uncut version of the series on DVD, so that there is an edited script available in case the series is broadcast. Other companies also edit the full-length version of a series, meaning that even on the uncut DVD characters say things like "Blast!" and "Darn!" in place of the original dialogue's profanity.
Bandai Entertainment's English dub of G Gundam is infamous for this, among many other things, with such lines as "Bartender, more milk". Dubbing has also been used for comedic purposes, replacing lines of dialogue to create comedies from footage that was originally another genre. Examples include the American television show Kung Faux, comedically re-dubbed from 1970s kung fu films originally produced in Hong Kong, the Australian television shows The Olden Days and Bargearse, re-dubbed from 1970s Australian drama and action series, respectively, the Irish show Soupy Norman, re-dubbed from Pierwsza miłość, a Polish soap opera, and Most Extreme Elimination Challenge, a comedic dub of the Japanese game show Takeshi's Castle. Dubbing into a foreign language does not always entail the deletion of the original language. In some countries, a performer may read the translated dialogue as a voice-over. This often occurs in Russia and Poland, where "lektors" read the translated dialogue in Russian and Polish. In Poland, a single announcer reads all the text. However, this is done almost exclusively for the television and home video markets, while theatrical releases are usually subtitled. Recently, however, the number of high-quality, fully dubbed films has increased, especially for children's movies. If a quality dubbed version exists for a film, it is shown in theaters. However, some films, such as Harry Potter or Star Wars, are shown in both dubbed and subtitled versions, varying with the time of the show. Such films are also shown on TV (although some channels drop them and do a standard one-narrator translation) and on VHS/DVD. In Russia, the reading of all lines by a single person is referred to as a Gavrilov translation, and is generally found only in illegal copies of films and on cable television. Professional copies always include at least two actors of opposite gender translating the dialogue. Some titles in Poland have been dubbed this way, too, but this method lacks public appeal, so it is very rare now. On special occasions, such as film festivals, live interpreting is often done by professionals.
Pornography
As budgets for pornographic films are often small compared to films made by major studios, and there is an inherent need to film without interrupting shooting, it is common for sex scenes to be over-dubbed. The audio for such over-dubbing is generally referred to as the Ms and Gs, or the moans and groans.
Dubbing into varieties
In the case of languages with large communities (such as English, Chinese, Hindi, Portuguese, Italian, German, Spanish, or French), a single translation may sound foreign to native speakers in a given region. Therefore, a film may be translated into a certain variety of a certain language. For example, the animated movie The Incredibles was translated into European Spanish, Mexican Spanish, Neutral Spanish (which is Mexican Spanish but avoids colloquialisms), and Rioplatense Spanish (although people from Chile and Uruguay noticed a strong porteño accent in most of the characters of the Rioplatense Spanish translation). In Spanish-speaking regions, most media is dubbed twice: into European Spanish and Neutral Spanish. Another example is the French dubbing of The Simpsons, which has two entirely different versions for Quebec and for France. The humor is very different for each audience (see Non-English versions of The Simpsons). Audiences in Quebec are generally critical of France's dubbing of The Simpsons, which they often do not find amusing.
Quebec-French dubbing of films is generally made in accent-free Standard French, but may sound peculiar to audiences in France because of the persistence of some regionally neutral expressions and because Quebec-French performers pronounce Anglo-Saxon names with an American accent, unlike French performers. Occasionally, budget restraints cause American direct-to-video films, such as the 1995 film When the Bullet Hits the Bone, to be released in France with a Quebec-French dubbing, sometimes resulting in what some members of French audiences perceive as unintentional humor. Portugal and Brazil also use different versions of dubbed films and series. Because dubbing has never been very popular in Portugal, for decades children's films were distributed using the higher-quality Brazilian dub (unlike children's TV series, which are traditionally dubbed in European Portuguese). Only in the 1990s did dubbing begin to gain popularity in Portugal. The Lion King became the first Disney feature film to be completely dubbed into European Portuguese, and subsequently all major animation films gained European-Portuguese versions. In recent DVD releases, most Brazilian-Portuguese-dubbed classics were released with new European-Portuguese dubs, eliminating the predominance of Brazilian-Portuguese dubs in Portugal. Similarly, in Flanders, the Dutch-speaking region of Belgium, cartoons are often dubbed locally by Flemish artists rather than using soundtracks produced in the Netherlands. The German-speaking region, which includes Germany, Austria, part of Switzerland, and Liechtenstein, shares a common German-dubbed version of films and shows. Although there are some differences in the three major German varieties, all films, shows, and series are dubbed into a single Standard German version that avoids regional variations in the German-speaking audience. Most voice actors are primarily German or Austrian. Switzerland, which has four official languages (German, French, Italian, and Romansh), generally uses dubbed versions made in each respective country (except for Romansh). Liechtenstein uses German-dubbed versions only. Sometimes, films are also dubbed into several German dialects (Berlinerisch, Kölsch, Saxonian, Austro-Bavarian or Swiss German), especially animated films and Disney films. These are included as an additional "special feature" to entice the audience into buying them. Popular animated films dubbed into a German variety include the Asterix films (in addition to its Standard German version, every film has a particular variety version), The Little Mermaid, Shrek 2, Cars (+ Austrian German) and Up (+ Austrian German). Some live-action films or TV series have an additional German variety dubbing: Babe and its sequel, Babe: Pig in the City (German German, Austrian German, Swiss German); and Rehearsal for Murder, Framed (+ Austrian German); The Munsters, Serpico, Rumpole (+ Austrian German), and The Thorn Birds (only an Austrian German dubbing). Before German reunification, East Germany also made its own particular German versions. For example, the Olsen Gang films and the Hungarian animated series The Mézga Family were dubbed in West Germany as well as in East Germany. Usually, two dubbings are produced in Serbo-Croatian: Serbian and Croatian. The Serbian dubbing is for Serbia, Montenegro and Bosnia and Herzegovina; the Croatian dubbing is for Croatia and parts of Bosnia and Herzegovina.
References
Further reading
(a cura di), "La Questione Doppiaggio - barriere linguistiche e circolazione delle opere audiovisive", Roma, AIDAC, 1996 - (available on website: www.aidac.it) Castellano A. (a cura di), "Il Doppiaggio, profilo, storia e analisi di un'arte negata", Roma, AIDAC-ARLEM, 2001 Di Fortunato E. e Paolinelli M., "Tradurre per il doppiaggio - la trasposizione linguistica dell'audiovisivo: teoria e pratica di un'arte imperfetta", Milano, Hoepli, 2005 ASINC online magazine on criticism of the art of dubbing Rose, Jay, Producing Great Sound for Film and Video. Focal Press, fourth edition 2014 Book info.
41384059
https://en.wikipedia.org/wiki/Yuval%20Ben-Itzhak
Yuval Ben-Itzhak
Yuval Ben-Itzhak is an executive and entrepreneur. He has received a number of honors and public recognition for his work as a Chief Technology Officer throughout his career. Ben-Itzhak has been named one of the Top 25 Most Influential CTOs by InfoWorld, one of the 40 Innovative IT People To Watch by Computerworld, and the 2017 Chief Technology Officer of the Year by GeekTime. He was the Chief Technology Officer at AVG and was part of the leadership team that took AVG through its initial public offering on the New York Stock Exchange in 2012. He was the Chief Technology Officer at Outbrain until 2017, and, as Chief Executive Officer, led the acquisition of Socialbakers by Astute in 2020. Ben-Itzhak is the inventor of 25 US patents.
Career
2017–2020: Chief Executive Officer, Socialbakers
2015–2017: Chief Technology Officer, Outbrain
2009–2015: Chief Technology Officer, AVG Technologies (NYSE: AVG)
2005–2009: Chief Technology Officer, Finjan Inc.
2000–2004: Chief Technology Officer, Founder, KaVaDo Inc.
1998–2000: Chief Technology Officer, Ness Technologies (Nasdaq: NSTC)
1994–1996: Software development & management, Intel Corp. (Nasdaq: INTC)
1988–1992: Intelligence services, IDF
Awards
2017 – Chief Technology Officer of the Year by GeekTime
2007 – 40 Innovative IT People To Watch by Computerworld
2004 – Top 25 Most Influential CTOs by InfoWorld
Patents
US 10,417,270 – Systems and methods for extraction of policy information
US 9,813,873 – Mobile device tracking prevention method and system
US 9,800,553 – Splitting an SSL connection between gateways
US 9,798,802 – Systems and methods for extraction of policy information
US 9,697,009 – Method for improving the performance of computers
US 9,525,680 – Splitting an SSL connection between gateways
US 9,514,477 – Systems and methods for providing user-specific content on an electronic device
US 9,424,422 – Detection of rogue software applications
US 9,294,493 – Computer security method and system with input parameter validation
US 9,288,226 – Detection of rogue software applications
US 9,280,391 – Systems and methods for improving performance of computer systems
US 9,110,595 – Systems and methods for enhancing performance of software applications
US 9,058,612 – Systems and methods for recommending software applications
US 8,769,690 – Protection from malicious web content
US 8,732,831 – Detection of rogue software applications
US 8,566,580 – Splitting an SSL connection between gateways
US 8,141,154 – System and method for inspecting dynamically generated executable code
US 8,087,079 – Byte-distribution analysis of file security
US 8,015,182 – System and method for appending security information to search engine results
US 7,930,299 – System and method for appending security information to search engine results
US 7,882,555 – Application layer security method and system
US 7,757,289 – System and method for inspecting dynamically generated executable code
US 7,614,085 – Method for the automatic setting and updating of a security policy
US 7,613,918 – System and method for enforcing a security context on a downloadable
US 7,313,822 – Application-layer security method and system
References
Living people
Israeli businesspeople
Chief technology officers
Ben-Gurion University of the Negev alumni
Year of birth missing (living people)
34132927
https://en.wikipedia.org/wiki/Extensible%20Host%20Controller%20Interface
Extensible Host Controller Interface
eXtensible Host Controller Interface (xHCI) is a computer interface specification that defines a register-level description of a host controller for Universal Serial Bus (USB), which is capable of interfacing with USB 1.x, 2.0, and 3.x compatible devices. The specification is also referred to as the USB 3.0 host controller specification. xHCI improves on the pre-existing Open Host Controller Interface (OHCI) and the Universal Host Controller Interface (UHCI) architectures most prominently in handling a wider range of speeds within a single standard, in managing resources more efficiently for the benefit of mobile hosts with limited power resources (such as tablets and cell phones), and in simplifying support for mixing of low-speed and high-speed devices.
Architectural goals
The xHCI is a radical break from the previous generations of USB host controller interface architectures (i.e. the Open Host Controller Interface (OHCI), the Universal Host Controller Interface (UHCI), and the Enhanced Host Controller Interface (EHCI)) on many counts. Following are the key goals of the xHCI architecture:
Efficient operation – idle power and performance better than legacy USB host controller architectures
A device-level programming model that is fully consistent with the existing USB software model
Decouple the host controller interface presented to software from the underlying USB protocols
Minimize host memory accesses, fully eliminating them when USB devices are idle
Eliminate register writes and minimize register reads for normal data transfers
Eliminate the "Companion Controller" model
Enable hardware "fail-over" modes in system resource constrained situations so devices are still accessible, but perhaps at a less optimal power/performance point
Provide the ability for different markets to differentiate hardware capabilities, e.g. target host controller power, performance and cost trade-offs for specific markets
Define an extensible architecture that provides an easy path for new USB specifications and technologies, such as higher bandwidth interfaces, optical transmission media, etc., without requiring the definition of yet another USB host controller interface
Architectural details
Support for all speeds
The OHCI and UHCI controllers support only USB 1 speed devices (1.5 Mbit/s and 12 Mbit/s), and the EHCI only supports USB 2 devices (480 Mbit/s). The xHCI architecture was designed to support all USB speeds, including SuperSpeed (5 Gbit/s) and future speeds, under a single driver stack.
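To make the register-level interface that this single driver stack programs a little more concrete, the following minimal C sketch decodes a few of the read-only xHCI capability registers that a driver typically examines before anything else. This is an illustrative sketch rather than text from the specification: instead of touching real hardware, it fills in a small fake register block with arbitrary values and then decodes it the way a driver would decode the memory-mapped capability registers; the offsets and bit fields used (CAPLENGTH at 0x00, HCIVERSION at 0x02, HCSPARAMS1 at 0x04 with its MaxSlots, MaxIntrs and MaxPorts fields) follow the public xHCI specification.

```c
/* Illustrative only: decoding a few xHCI capability registers.
 * A real driver would read these from the controller's memory-mapped
 * register space (e.g. PCI BAR0); here a small fake register block is
 * used so the sketch is self-contained and runnable. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t read32(const uint8_t *base, unsigned off)
{
    uint32_t v;
    memcpy(&v, base + off, sizeof v);
    return v;
}

int main(void)
{
    /* Fake capability registers: CAPLENGTH=0x20, HCIVERSION=0x0100 (1.0.0),
     * HCSPARAMS1 with MaxSlots=32, MaxIntrs=8, MaxPorts=4.  The byte layout
     * assumes a little-endian host, as on x86. */
    uint8_t cap[16] = {0};
    cap[0x00] = 0x20;                      /* CAPLENGTH */
    cap[0x02] = 0x00; cap[0x03] = 0x01;    /* HCIVERSION = 0x0100 */
    uint32_t hcs1 = (4u << 24) | (8u << 8) | 32u;
    memcpy(cap + 0x04, &hcs1, sizeof hcs1);

    uint8_t  caplength  = cap[0x00];
    uint16_t hciversion = (uint16_t)(read32(cap, 0x00) >> 16);
    uint32_t hcsparams1 = read32(cap, 0x04);

    unsigned max_slots = hcsparams1 & 0xFF;          /* device slots supported */
    unsigned max_intrs = (hcsparams1 >> 8) & 0x7FF;  /* interrupters supported */
    unsigned max_ports = (hcsparams1 >> 24) & 0xFF;  /* root hub ports         */

    printf("xHCI version %x.%02x: %u slots, %u interrupters, %u ports\n",
           hciversion >> 8, hciversion & 0xFF, max_slots, max_intrs, max_ports);
    printf("Operational registers begin at capability base + 0x%02x\n", caplength);
    return 0;
}
```

A real driver would go on from values like these to size its data structures (device slots, interrupter event rings) and to locate the operational and runtime register sets, but those steps are beyond the scope of this sketch.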
Power efficiency
When USB was originally developed in 1995, it was targeted at desktop platforms to stem the proliferation of connectors that were appearing on PCs, e.g. PS/2, serial port, parallel port, game port, etc., and host power consumption was not an important consideration at the time. Since then, mobile platforms have become the platform of choice, and their batteries have made power consumption a key consideration. The architectures of the legacy USB host controllers (OHCI, UHCI, and EHCI) were very similar in that the "schedule" for the transactions to be performed on the USB was built by software in host memory, and the host controller hardware would continuously read the schedules to determine what transactions needed to be driven on the USB, and when, even if no data was moved. Additionally, in the case of reads from the device, the device was polled each schedule interval, even if there was no data to read. The xHCI eliminates host-memory-based USB transaction schedules, enabling zero host memory activity when there is no USB data movement. The xHCI reduces the need for periodic device polling by allowing a USB 3.0 or later device to notify the host controller when it has data available to read, and moves the management of polling USB 2.0 and 1.1 devices that use interrupt transactions from the CPU-driven USB driver to the USB host controller. EHCI, OHCI, and UHCI host controllers would automatically handle polling for the CPU if there are no changes that need to be made and if no device has any interrupts to send, but they all rely on the CPU to set up the schedule for the controllers. If any USB device using interrupt transactions does have data to send, then an xHCI host controller will send an interrupt to notify the CPU that there is a USB interrupt transaction that needs handling. Since the CPU no longer has to manage the polling of the USB bus, it can spend more time in low power states. The xHCI does not require that implementations provide support for all advanced USB 2 and 3 power management features, including USB 2 LPM, USB 3 U1 and U2 states, HERD, LTM, Function Wake, etc.; but these features are required to realize all of the advantages of xHCI.
Virtualization support
Legacy USB host-controller architectures exhibit some serious shortcomings when applied to virtualized environments. Legacy USB host-controller interfaces define a relatively simple hardware data-pump, where critical state related to overall bus management (bandwidth allocation, address assignment, etc.) resides in the software of the host-controller driver (HCD). Trying to apply the standard hardware I/O virtualization technique (replicating I/O interface registers) to the legacy USB host controller interface is problematic because critical state that must be managed across virtual machines (VMs) is not available to hardware. The xHCI architecture moves the control of this critical state into hardware, enabling USB resource management across VMs. The xHCI virtualization features also provide for:
direct assignment of individual USB devices (irrespective of their location in the bus topology) to any VM
minimizing run-time inter-VM communications
support for native USB device-sharing
support of PCIe SR-IOV (single root I/O virtualization)
Simplified driver architecture
The EHCI utilizes OHCI or UHCI controllers as "companion controllers", where USB 2 devices are managed through the EHCI stack, and the port logic of the EHCI allows a low-speed or full-speed USB device to be routed to a port of a "companion" UHCI or OHCI controller, where the low-speed or full-speed USB devices are managed through the respective UHCI or OHCI stack. For example, a USB 2 PCIe host controller card that presents 4 USB "Standard A" connectors typically presents one 4-port EHCI and two 2-port OHCI controllers to system software. When a high-speed USB device is attached to any of the 4 connectors, the device is managed through one of the 4 root hub ports of the EHCI controller. If a low-speed or full-speed USB device is attached to connectors 1 or 2, it will be routed to the root hub ports of one of the OHCI controllers for management, and low-speed and full-speed USB devices attached to connectors 3 or 4 will be routed to the root hub ports of the other OHCI controller.
The EHCI's dependence on separate host controllers for high-speed USB devices and the group of low-speed and full-speed USB devices results in complex interactions and dependencies between the EHCI and OHCI/UHCI drivers. The xHCI architecture eliminates the need for companion controllers and their separate driver stacks. The incorporation into the xHCI hardware of the schedule, bandwidth management, and USB device address assignment functions that were previously performed by the driver enables a simpler, leaner, lower-latency software stack for the xHCI.
Stream support
Support for Streams was added to the USB 3.0 SuperSpeed specification, primarily to enable high-performance storage operations over USB. Classically there has been a 1:1 relationship between a USB endpoint and a buffer in system memory, with the host controller solely responsible for directing all data transfers. Streams changed this paradigm by providing a 1-to-many "endpoint to buffer" association and allowing the device to direct the host controller as to which buffer to move. The USB data transfers associated with a USB Stream endpoint are scheduled by the xHCI the same as any other bulk endpoint is; however, the data buffer associated with a transfer is determined by the device. The xHCI USB Stream support allows up to 64K buffers to be associated with a single endpoint. The xHCI Streams protocol support allows a USB device to select which buffer the xHCI will transfer when the endpoint is scheduled.
Scalability
The xHCI architecture was designed to be highly scalable, capable of supporting 1 to 255 USB devices and 1 to 255 root hub ports. Since each USB device is allowed to define up to 31 endpoints, an xHCI that supported 255 devices would have to support 7,906 separate total endpoints. Classically, each memory buffer associated with an endpoint is described by a queue of physical memory blocks, where the queue requires a head pointer, tail pointer, length and other registers to define its state. There are many ways to define queue state; however, if one were to assume 32 bytes of register space for each queue, then almost 256 KB of register space would be required to support 7,906 queues. Typically only a small number of USB devices are attached to a system at one time, and on average a USB device supports 3–4 endpoints, of which only a subset are active at the same time. The xHCI maintains queue state in system memory as Endpoint Context data structures. The contexts are designed so that they can be cached by the xHCI, and "paged" in and out as a function of endpoint activity. Thus a vendor can scale its internal xHCI Endpoint Context cache space and resources to match the practical usage models expected for its products, rather than the architectural limits that they support. Ideally the internal cache space is selected so that, under normal usage conditions, there is no context paging by the xHCI. USB endpoint activity also tends to be bursty. That is, at any point in time a large number of endpoints may be ready to move data; however, only a subset are actively moving data. For instance, the interrupt IN endpoint of a mouse may not transfer data for hours if the user is away from their desk. Vendor-specific xHCI algorithms could detect this condition and make that endpoint a candidate for paging out if other endpoints become busy. The xHCI architecture allows large maximum values for the number of USB devices, ports, interrupt vectors, etc.
however, an implementation only needs to define the number necessary to meet its marketing requirements. For instance, a vendor could choose to limit the number of USB devices supported by a tablet xHCI implementation to 16 devices. A vendor can further take advantage of xHCI architectural features to scale its internal resources to match its target usage models. For instance, if through usability testing a vendor determines that 95% of tablet users will never connect more than 4 USB devices, and each USB device typically defines 4 endpoints (or fewer), then internal caching for 16 Endpoint Contexts will ensure that under normal conditions there will be no system memory activity due to Endpoint Context paging.
History
The Open Host Controller Interface (OHCI) specification was defined by a consortium of companies (Compaq, Microsoft, and National Semiconductor) as an open specification to support USB 1.0 devices. The Universal Host Controller Interface (UHCI) refers to a specification that Intel originally defined as a proprietary interface to support USB 1.0 devices. The UHCI specification was eventually made public, but only after the rest of the industry had adopted the OHCI specification. The EHCI specification was defined by Intel to support USB 2.0 devices. The EHCI architecture was modeled after the UHCI and OHCI controllers, which required software to build the USB transaction schedules in memory and to manage bandwidth and address allocation. To eliminate a redundant industry effort of defining an open version of a USB 2.0 host controller interface, Intel made the EHCI specification available to the industry with no licensing fees. The EHCI licensing model was continued for Intel's xHCI specification, although with a greatly expanded industry contribution. Over 100 companies have contributed to the xHCI specification. The USB Implementers Forum (USB-IF) has also funded a set of xHCI Compliance Tests to maximize the compatibility of the various xHCI implementations. xHCI 1.0 controllers have been shipping since December 2009. Linux kernels since 2009 contain xHCI drivers, but for older kernels there are drivers available online. Windows drivers for XP, Vista, and Windows 7 are available from the respective xHCI vendors. xHCI drivers for embedded systems are available from MCCI, Jungo, and other software vendors. xHCI IP blocks are also available from several vendors for customization in SoC environments. xHCI 1.1 controllers and devices began shipping in 2015.
Version history
The xHCI specification uses "errata" files to define updates and clarifications to a specific release. The changes in the errata files are accumulated in each release. Refer to the associated errata files for the details of specific changes. Most changes defined in the xHCI errata files are clarifications, grammatical or spelling corrections, additional cross-references, etc., which do not affect a driver implementation. Changes that are determined to be architectural utilize a Capability flag to determine whether a particular feature is supported by an xHCI implementation, and an Enable flag to turn on the feature.
Prereleases
The xHCI specification evolved through several versions before its official release in 2010:
xHCI 0.9: Released in August 2008.
xHCI 0.95: Released in December 2008.
xHCI 0.96: Released in August 2009.
xHCI 0.96a: 1.0 Release Candidate, released in April 2010. The first shipping devices were based on this version.
xHCI 1.0
xHCI 1.0: First public release, May 21, 2010.
Specified USB data rates of 1.5 Mbit/s (Low-speed), 12 Mbit/s (Full-speed), 480 Mbit/s (High-speed) and 5 Gbit/s (SuperSpeed).
xHCI 1.0, errata files 1-4: Released on January 17, 2011. Incorporated initial review feedback from the larger 1.0 public audience, Save-Restore clarifications, and Hardware LPM support.
xHCI 1.0, errata files 1-6: Released on March 18, 2011. Clarifications.
xHCI 1.0, errata files 1-7: Released on June 13, 2011. Clarifications.
xHCI 1.1
xHCI 1.1: Released on December 21, 2013. Specified the USB 3.1 data rate of 10 Gbit/s (SuperSpeed+). This incorporates xHCI 1.0 errata files 1-21. Allows the controller to require a larger number of scratchpad buffers (up to 1023) in the HCSPARAMS2 capability register.
xHCI 1.2
xHCI 1.2: Dated May 2019. Specified USB 3.2 data rates of 10 Gbit/s (SuperSpeedPlus Gen1x2) and 20 Gbit/s (SuperSpeedPlus Gen2x2).
References
External links
USB official website (USB Implementers Forum, Inc.)
Open Host Controller Interface (OHCI)
Intel Universal Host Controller Interface (UHCI)
Intel Enhanced Host Controller Interface (EHCI)
Intel eXtensible Host Controller Interface (xHCI)
USB
2772397
https://en.wikipedia.org/wiki/Windows%20Marketplace
Windows Marketplace
Windows Marketplace was a Microsoft platform for the delivery of software electronically that was secured by use of Windows Live ID (now Microsoft account). The digital locker platform was composed of four major components: Windows Marketplace catalog Multi merchant download cart Digital Locker Assistant (a client-side application that facilitates the download of purchased applications) Digital Locker For consumers, the digital locker and Windows Marketplace could be used for purchasing and downloading third party software titles compatible with Microsoft Windows, and then using that purchased software on any computer the software license allows. For software developers, the digital locker and Windows Marketplace was for the distribution of their software titles. A Windows Vista Ultimate Extra, called Secure Online Key Backup allowed backing up EFS recovery certificates, as well as the BitLocker recovery password in the Digital Locker. Windows Marketplace was only available for residents of the United States and some other countries. Launch Windows Marketplace was announced on July 12, 2004; and launched on October 12, 2004. It provided over 93,000 products split between hardware, boxed software, and downloadable software. About 20,000 downloads were available and 5,000 were free. The site was designed to allow product comparison, purchase and discussion from one central location. Research had shown the average customer used over 20 sites when making purchases and Marketplace was an attempt to centralize that. An emphasis was placed on software designed to Microsoft's API requirements and user experience recommendations with certified software carrying the Designed-for-Windows logo surpassing 10,000 at store launch. CNET was a major initial partner, supplying Microsoft with the majority of the pricing and description information. 2006 Redesign The site was redesigned on August 28, 2006. During the redesign a greater emphasis was placed on downloadable software instead of boxed software and hardware. In less than two years since the original launch, Microsoft tripled the software purchasable by download to 60,000 and expanded all products offered to over 150,000. The Digital Locker initiative also attempted to unify all software licenses into a central location tied to a single Windows Live ID. This version of the site was Microsoft's first offering of a full operating system, Windows Vista as a downloadable purchase; in addition to the first for Microsoft Office. Expansion to mobile On February 16, 2009 at the Mobile World Congress, Microsoft announced Windows Marketplace for Mobile which deployed a similar concept to Windows Mobile devices. Marketplace for Mobile was shut down on May 22, 2012. A year after Microsoft introduced Windows Marketplace for Mobile, they announced their next generation mobile platform named Windows Phone. Microsoft introduced Windows Phone Marketplace in 2010, and renamed the service to Windows Phone Store in 2012. Discontinuation and replacement On November 14, 2008, Microsoft announced their intention to discontinue the Digital Locker in 2009. The company phased-out Windows Marketplace, and replaced it with the Microsoft Store. At the Build conference on September 13, 2011, Microsoft announced Windows Store, a new software distribution platform for Windows 8, WinRT, and subsequent Windows versions. The Windows Store was accessible via WinRT client or web browser. References External links Defunct_websites Microsoft websites Software distribution platforms
3429879
https://en.wikipedia.org/wiki/Storm%20Water%20Management%20Model
Storm Water Management Model
The United States Environmental Protection Agency (EPA) Storm Water Management Model (SWMM) is a dynamic rainfall–runoff–subsurface runoff simulation model used for single-event to long-term (continuous) simulation of surface/subsurface hydrology quantity and quality from primarily urban/suburban areas. It can simulate rainfall-runoff, evaporation, infiltration and the groundwater connection for roofs, streets, grassed areas, rain gardens, ditches and pipes, for example. The hydrology component of SWMM operates on a collection of subcatchment areas divided into impervious and pervious areas, with and without depression storage, to predict runoff and pollutant loads from precipitation, evaporation and infiltration losses for each of the subcatchments. In addition, low impact development (LID) and best management practice areas on the subcatchments can be modeled to reduce the impervious and pervious runoff. The routing or hydraulics section of SWMM transports this water and any associated water quality constituents through a system of closed pipes, open channels, storage/treatment devices, ponds, storage units, pumps, orifices, weirs, outlets, outfalls and other regulators. SWMM tracks the quantity and quality of the flow generated within each subcatchment, and the flow rate, flow depth, and quality of water in each pipe and channel during a simulation period composed of multiple fixed or variable time steps. Water quality constituents can be simulated from buildup on the subcatchments through washoff into the hydraulic network, with optional first-order decay and linked pollutant removal; best management practice and low-impact development (LID) removal and treatment can be simulated at selected storage nodes. SWMM is one of the hydrology transport models which the EPA and other agencies have applied widely throughout North America and, through consultants and universities, throughout the world. The latest update notes and new features can be found on the EPA website in the download section. The EPA SWMM 5.1 Hydrology Manual (Volume I) was added in November 2015, followed in 2016 by the EPA SWMM 5.1 Hydraulic Manual (Volume II) and the EPA SWMM 5.1 Water Quality (including LID Modules) Volume (III) plus errata.
Program description
The EPA storm water management model (SWMM) is a dynamic rainfall-runoff-routing simulation model used for single-event or long-term (continuous) simulation of runoff quantity and quality from primarily urban areas. The runoff component of SWMM operates on a collection of subcatchment areas that receive precipitation and generate runoff and pollutant loads. The routing portion of SWMM transports this runoff through a system of pipes, channels, storage/treatment devices, pumps, and regulators. SWMM tracks the quantity and quality of runoff generated within each subcatchment, and the flow rate, flow depth, and quality of water in each pipe and channel during a simulation period divided into multiple time steps. SWMM accounts for various hydrologic processes that produce runoff from urban areas.
These include:
time-varying rainfall
evaporation of standing surface water
snow accumulation and melting
rainfall interception from depression storage
infiltration of rainfall into unsaturated soil layers
percolation of infiltrated water into groundwater layers
interflow between groundwater and the drainage system
nonlinear reservoir routing of overland flow
capture and retention of rainfall/runoff with various types of low impact development (LID) practices.
SWMM also contains a flexible set of hydraulic modeling capabilities used to route runoff and external inflows through the drainage system network of pipes, channels, storage/treatment units and diversion structures. These include the ability to:
handle networks of unlimited size
use a wide variety of standard closed and open conduit shapes as well as natural channels
model special elements such as storage/treatment units, flow dividers, pumps, weirs, and orifices
apply external flows and water quality inputs from surface runoff, groundwater interflow, rainfall-dependent infiltration/inflow, dry weather sanitary flow, and user-defined inflows
utilize either kinematic wave or full dynamic wave flow routing methods
model various flow regimes, such as backwater, surcharging, reverse flow, and surface ponding
apply user-defined dynamic control rules to simulate the operation of pumps, orifice openings, and weir crest levels.
Spatial variability in all of these processes is achieved by dividing a study area into a collection of smaller, homogeneous subcatchment areas, each containing its own fraction of pervious and impervious sub-areas. Overland flow can be routed between sub-areas, between subcatchments, or between entry points of a drainage system. Since its inception, SWMM has been used in thousands of sewer and stormwater studies throughout the world. Typical applications include:
design and sizing of drainage system components for flood control
sizing of detention facilities and their appurtenances for flood control and water quality protection
flood plain mapping of natural channel systems, by modeling the river hydraulics and associated flooding problems using prismatic channels
designing control strategies for minimizing Combined Sewer Overflow (CSO) and Sanitary Sewer Overflow (SSO)
evaluating the impact of inflow and infiltration on sanitary sewer overflows
generating non-point source pollutant loadings for waste load allocation studies
evaluating the effectiveness of BMPs and subcatchment LIDs for reducing wet weather pollutant loadings
rainfall-runoff modeling of urban and rural watersheds
hydraulic and water quality analysis of storm, sanitary, and combined sewer systems
master planning of sewer collection systems and urban watersheds
system evaluations associated with USEPA's regulations including NPDES permits, CMOM, and TMDL
1D and 2D (surface ponding) predictions of flood levels and flooding volume
EPA SWMM is public domain software that may be freely copied and distributed. The SWMM 5 public domain consists of C engine code and Delphi SWMM 5 graphical user interface code. The C code and Delphi code are easily edited and can be recompiled by students and professionals for custom features or extra output features.
History
SWMM was first developed between 1969 and 1971 and has undergone four major upgrades since those years. The major upgrades were: (1) Version 2 in 1973-1975, (2) Version 3 in 1979-1981, (3) Version 4 in 1985-1988 and (4) Version 5 in 2001-2004.
A list of the major changes and post-2004 changes is shown in Table 1. The current SWMM edition, Version 5/5.1.012, is a complete rewrite of the previous Fortran releases in the programming language C, and it can be run under Windows XP, Windows Vista, Windows 7, Windows 8 and Windows 10, and also, with a recompilation, under Unix. The code for SWMM 5 is open-source, public domain code that can be downloaded from the EPA website. EPA SWMM 5 provides an integrated graphical environment for editing watershed input data, running hydrologic, hydraulic, real-time control and water quality simulations, and viewing the results in a variety of graphical formats. These include color-coded thematic drainage area maps, time series graphs and tables, profile plots, scatter plots and statistical frequency analyses. The last rewrite of EPA SWMM was produced by the Water Supply and Water Resources Division of the U.S. Environmental Protection Agency's National Risk Management Research Laboratory, with assistance from the consulting firm CDM Inc. under a Cooperative Research and Development Agreement (CRADA). SWMM 5 is used as the computational engine for many modeling packages, and components of SWMM 5 are incorporated in other modeling packages (see the SWMM platforms section of this article). The update history of SWMM 5 from the original SWMM 5.0.001 to the current version SWMM 5.1.012 can be found at the EPA website. SWMM 5 was approved by FEMA for NFIP modeling in May 2005; the FEMA model approval page notes that SWMM 5 Version 5.0.005 (May 2005) and later are approved.
SWMM conceptual model
SWMM conceptualizes a drainage system as a series of water and material flows between several major environmental compartments. These compartments and the SWMM objects they contain include:
The Atmosphere compartment, from which precipitation falls and pollutants are deposited onto the land surface compartment. SWMM uses Rain Gage objects to represent rainfall inputs to the system. Rain Gage objects can use time series, external text files or NOAA rainfall data files, and can supply precipitation records spanning thousands of years. Using the SWMM-CAT add-on to SWMM 5, climate change can now be simulated using modified temperature, evaporation or rainfall.
The Land Surface compartment, which is represented by one or more Subcatchment objects. It receives precipitation from the Atmosphere compartment in the form of rain or snow; it sends outflow in the form of infiltration to the Groundwater compartment, and also as surface runoff and pollutant loadings to the Transport compartment. Low impact development (LID) controls are part of the subcatchments and store, infiltrate or evaporate the runoff.
The Groundwater compartment, which receives infiltration from the Land Surface compartment and transfers a portion of this inflow to the Transport compartment. This compartment is modeled using Aquifer objects. The connection to the Transport compartment can be either a static boundary or a dynamic depth in the channels. The links in the Transport compartment now also have seepage and evaporation.
The Transport compartment, which contains a network of conveyance elements (channels, pipes, pumps, and regulators) and storage/treatment units that transport water to outfalls or to treatment facilities. Inflows to this compartment can come from surface runoff, groundwater interflow, sanitary dry weather flow, or from user-defined hydrographs. The components of the Transport compartment are modeled with Node and Link objects.
Not all compartments need to appear in a particular SWMM model. For example, one could model just the Transport compartment, using pre-defined hydrographs as inputs. If kinematic wave routing is used, then the nodes do not need to contain an outfall.
Model parameters
The simulated model parameters are:
for subcatchments: surface roughness, depression storage, slope, flow path length
for infiltration: Horton: max/min rates and decay constant; Green-Ampt: hydraulic conductivity, initial moisture deficit and suction head; Curve Number: NRCS (SCS) curve number; All: time for saturated soil to fully drain
for conduits: Manning's roughness
for water quality: buildup/washoff function coefficients, first-order decay coefficients, removal equations
A study area can be divided into any number of individual subcatchments, each of which drains to a single point. Study areas can range in size from a small portion of a single lot up to thousands of acres. SWMM uses hourly or more frequent rainfall data as input and can be run for single events or in a continuous fashion for any number of years.
Hydrology and hydraulics capabilities
SWMM 5 accounts for various hydrologic processes that produce surface and subsurface runoff from urban areas. These include:
time-varying rainfall for an unlimited number of rain gages, for both design and continuous hyetographs
evaporation of standing surface water on watersheds and surface ponds
snowfall accumulation, plowing, and melting
rainfall interception from depression storage in both impervious and pervious areas
infiltration of precipitation into unsaturated soil layers
percolation of infiltrated water into groundwater layers
interflow between groundwater and pipes and ditches
nonlinear reservoir routing of watershed overland flow.
Spatial variability in all of these processes is achieved by dividing a study area into a collection of smaller, homogeneous watershed or subcatchment areas, each containing its fraction of pervious and impervious sub-areas. Overland flow can be routed between sub-areas, between subcatchments, or between entry points of a drainage system. SWMM also contains a flexible set of hydraulic modeling capabilities used to route runoff and external inflows through the drainage system network of pipes, channels, storage/treatment units and diversion structures.
These include the ability to:
simulate drainage networks of unlimited size
use a wide variety of standard closed and open conduit shapes as well as natural or irregular channels
model special elements such as storage/treatment units, outlets, flow dividers, pumps, weirs, and orifices
apply external flows and water quality inputs from surface runoff, groundwater interflow, rainfall-dependent infiltration/inflow, dry weather sanitary flow, and user-defined inflows
utilize either steady, kinematic wave or full dynamic wave flow routing methods
model various flow regimes, such as backwater, surcharging, pressure, reverse flow, and surface ponding
apply user-defined dynamic control rules to simulate the operation of pumps, orifice openings, and weir crest levels
Infiltration is the process of rainfall penetrating the ground surface into the unsaturated soil zone of pervious subcatchment areas. SWMM 5 offers four choices for modeling infiltration:
Classical infiltration method
This method is based on empirical observations showing that infiltration decreases exponentially from an initial maximum rate to some minimum rate over the course of a long rainfall event. Input parameters required by this method include the maximum and minimum infiltration rates, a decay coefficient that describes how fast the rate decreases over time, and the time it takes a fully saturated soil to completely dry (used to compute the recovery of infiltration rate during dry periods).
Modified Horton method
This is a modified version of the classical Horton method that uses the cumulative infiltration in excess of the minimum rate as its state variable (instead of time along the Horton curve), providing a more accurate infiltration estimate when low rainfall intensities occur. It uses the same input parameters as the traditional Horton method.
Green–Ampt method
This method for modeling infiltration assumes that a sharp wetting front exists in the soil column, separating soil with some initial moisture content below from saturated soil above. The input parameters required are the initial moisture deficit of the soil, the soil's hydraulic conductivity, and the suction head at the wetting front. The recovery rate of moisture deficit during dry periods is empirically related to the hydraulic conductivity.
Curve number method
This approach is adopted from the NRCS (SCS) curve number method for estimating runoff. It assumes that the total infiltration capacity of a soil can be found from the soil's tabulated curve number. During a rain event this capacity is depleted as a function of cumulative rainfall and remaining capacity. The input parameters for this method are the curve number and the time it takes a fully saturated soil to completely dry (used to compute the recovery of infiltration capacity during dry periods).
SWMM also allows the infiltration recovery rate to be adjusted by a fixed amount on a monthly basis to account for seasonal variation in such factors as evaporation rates and groundwater levels. This optional monthly soil recovery pattern is specified as part of a project's evaporation data. In addition to modeling the generation and transport of runoff flows, SWMM can also estimate the production of pollutant loads associated with this runoff.
The following processes can be modeled for any number of user-defined water quality constituents:
dry-weather pollutant buildup over different land uses
pollutant washoff from specific land uses during storm events
direct contribution of wet and dry rainfall deposition
reduction in dry-weather buildup due to street cleaning
reduction in washoff load due to BMPs and LIDs
entry of dry weather sanitary flows and user-specified external inflows at any point in the drainage system
routing of water quality constituents through the drainage system
reduction in constituent concentration through treatment in storage units or by natural processes in pipes and channels.
Rain Gages in SWMM 5 supply precipitation data for one or more subcatchment areas in a study region. The rainfall data can be either a user-defined time series or come from an external file. Several different popular rainfall file formats currently in use are supported, as well as a standard user-defined format. The principal input properties of rain gages include:
rainfall data type (e.g., intensity, volume, or cumulative volume)
recording time interval (e.g., hourly, 15-minute, etc.)
source of rainfall data (input time series or external file)
name of rainfall data source
The other principal input parameters for the subcatchments include:
assigned rain gage
outlet node or subcatchment and routing fraction
assigned land uses
tributary surface area
imperviousness and zero percent imperviousness
slope
characteristic width of overland flow
Manning's n for overland flow on both pervious and impervious areas
depression storage in both pervious and impervious areas
percent of impervious area with no depression storage
infiltration parameters
snowpack
groundwater parameters
LID parameters for each LID control used
Routing options
Steady-flow routing represents the simplest type of routing possible (actually no routing) by assuming that within each computational time step flow is uniform and steady. Thus it simply translates inflow hydrographs at the upstream end of the conduit to the downstream end, with no delay or change in shape. The normal flow equation is used to relate flow rate to flow area (or depth). This type of routing cannot account for channel storage, backwater effects, entrance/exit losses, flow reversal or pressurized flow. It can only be used with dendritic conveyance networks, where each node has only a single outflow link (unless the node is a divider, in which case two outflow links are required). This form of routing is insensitive to the time step employed and is really only appropriate for preliminary analysis using long-term continuous simulations.
Kinematic wave routing solves the continuity equation along with a simplified form of the momentum equation in each conduit. The latter requires that the slope of the water surface equal the slope of the conduit. The maximum flow that can be conveyed through a conduit is the full normal flow value. Any flow in excess of this entering the inlet node is either lost from the system or can pond atop the inlet node and be re-introduced into the conduit as capacity becomes available. Kinematic wave routing allows flow and area to vary both spatially and temporally within a conduit. This can result in attenuated and delayed outflow hydrographs as inflow is routed through the channel. However, this form of routing cannot account for backwater effects, entrance/exit losses, flow reversal, or pressurized flow, and is also restricted to dendritic network layouts.
It can usually maintain numerical stability with moderately large time steps, on the order of 1 to 5 minutes. If the aforementioned effects are not expected to be significant then this alternative can be an accurate and efficient routing method, especially for long-term simulations. Dynamic wave routing solves the complete one-dimensional Saint Venant flow equations and therefore produces the most theoretically accurate results. These equations consist of the continuity and momentum equations for conduits and a volume continuity equation at nodes. With this form of routing it is possible to represent pressurized flow when a closed conduit becomes full, such that flows can exceed the full normal flow value. Flooding occurs when the water depth at a node exceeds the maximum available depth, and the excess flow is either lost from the system or can pond atop the node and re-enter the drainage system. Dynamic wave routing can account for channel storage, backwater, entrance/exit losses, flow reversal, and pressurized flow. Because it couples together the solution for both water levels at nodes and flow in conduits it can be applied to any general network layout, even those containing multiple downstream diversions and loops. It is the method of choice for systems subjected to significant backwater effects due to downstream flow restrictions and with flow regulation via weirs and orifices. This generality comes at a price of having to use much smaller time steps, on the order of a minute or less (SWMM can automatically reduce the user-defined maximum time step as needed to maintain numerical stability). Integrated hydrology/hydraulics One of the great advances in SWMM 5 was the integration of urban/suburban subsurface flow with the hydraulic computations of the drainage network. This advance is a tremendous improvement over the separate subsurface hydrologic and hydraulic computations of the previous versions of SWMM because it allows the modeler to conceptually model the same interactions that occur physically in the real open channel/shallow aquifer environment. The SWMM 5 numerical engine calculates the surface runoff, subsurface hydrology and assigns the current climate data at either the wet or dry hydrologic time step. The hydraulic calculations for the links, nodes, control rules and boundary conditions of the network are then computed at either a fixed or variable time step within the hydrologic time step by using interpolation routines and the simulated hydrologic starting and ending values. The versions of SWMM 5 greater than SWMM 5.1.007 allow the modeler to simulate climate changes by globally changing the rainfall, temperature, and evaporation using monthly adjustments. An example of this integration was the collection of the different SWMM 4 link types in the runoff, transport and Extran blocks to one unified group of closed conduit and open channel link types in SWMM 5 and a collection of node types (Figure 2). SWMM contains a flexible set of hydraulic modeling capabilities used to route runoff and external inflows through the drainage system network of pipes, channels, storage/treatment units, and diversion structures. These include the ability to do the following: Handle drainage networks of unlimited size. Use a wide variety of standard closed and open conduit shapes as well as natural channels. Model special elements, such as storage/treatment units, flow dividers, pumps, weirs, and orifices. 
Apply external flows and water quality inputs from surface runoff, groundwater interflow, rainfall-dependent infiltration/inflow, dry weather sanitary flow, and user-defined inflows.
Utilize either kinematic wave or full dynamic wave flow routing methods.
Model various flow regimes, such as backwater, surcharging, reverse flow, and surface ponding.
Apply user-defined dynamic control rules to simulate the operation of pumps, orifice openings, and weir crest levels.
Percolation of infiltrated water into groundwater layers.
Interflow between groundwater and the drainage system.
Nonlinear reservoir routing of overland flow.
Runoff reduction via LID controls.
Low-impact development components
The low-impact development (LID) function was new in SWMM 5.0.019/20/21/22 and SWMM 5.1+. It is integrated within the subcatchment and allows further refinement of the overflows, infiltration and evaporation in rain barrels, swales, permeable paving, green roofs, rain gardens, bioretention cells and infiltration trenches. The term low-impact development is used in Canada and the United States to describe a land planning and engineering design approach to managing stormwater runoff. In recent years many states in the US have adopted LID concepts and standards to enhance their approach to reducing the harmful potential for storm water pollution in new construction projects. LID takes many forms but can generally be thought of as an effort to minimize or prevent concentrated flows of storm water leaving a site. To do this, the LID practice suggests that when impervious surfaces (concrete, etc.) are used, they are periodically interrupted by pervious areas which allow the storm water to infiltrate (soak into the earth). A variety of sub-processes can be defined for each LID in SWMM 5, such as: surface, pavement, soil, storage, drainmat and drain. Each type of LID has limitations on the type of sub-process allowed by SWMM 5. A LID summary report can be written to the rpt file and to an external report file, showing the surface depth, soil moisture, storage depth, surface inflow, evaporation, surface infiltration, soil percolation, storage infiltration, surface outflow and the LID continuity error. There can be multiple LIDs per subcatchment; complicated LID sub-networks and processes inside the subcatchments of SWMM 5 generally do not cause problems, and any continuity issues can usually be resolved with a smaller wet-weather hydrology time step. The types of SWMM 5 LID compartments are: storage, underdrain, surface, pavement and soil. A bio-retention cell has storage, underdrain and surface compartments. An infiltration trench LID has storage, underdrain and surface compartments. A porous pavement LID has storage, underdrain and pavement compartments. A rain barrel has only storage and underdrain compartments, and a vegetative swale LID has a single surface compartment. Each type of LID shares a different set of underlying compartment objects in SWMM 5, which are called layers. The water balance equations for these layers can be solved numerically at each runoff time step to determine how an inflow hydrograph to the LID unit is converted into some combination of runoff hydrograph, sub-surface storage, sub-surface drainage, and infiltration into the surrounding native soil. In addition to street planters and green roofs, the bio-retention model just described can be used to represent rain gardens by eliminating the storage layer, and also porous pavement systems by replacing the soil layer with a pavement layer.
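As a rough illustration of that per-time-step water balance, the following much-simplified single-bucket sketch splits an inflow hydrograph into infiltration, stored depth and overflow. It is not SWMM's actual multi-layer LID formulation; all depths, rates and values here are hypothetical.

# Simplified single-layer "bucket" LID water balance (illustrative only;
# SWMM solves a multi-layer formulation). Depths in mm, rates in mm per step.
def route_lid(inflow, storage_depth=25.0, infil_rate=5.0):
    """Split an inflow hydrograph into (infiltration, stored depth, overflow) per step."""
    stored = 0.0
    results = []
    for q_in in inflow:
        stored += q_in                               # add rain/run-on to the bucket
        f = min(infil_rate, stored)                  # infiltration into native soil
        stored -= f
        overflow = max(0.0, stored - storage_depth)  # excess becomes surface runoff
        stored -= overflow
        results.append((f, stored, overflow))
    return results

if __name__ == "__main__":
    hydrograph = [0.0, 12.0, 30.0, 45.0, 20.0, 5.0, 0.0]
    for step, (f, s, q) in enumerate(route_lid(hydrograph)):
        print(f"step {step}: infiltration={f:.1f} stored={s:.1f} overflow={q:.1f}")

Even this toy version shows the qualitative behaviour described above: small events are absorbed entirely, while large events fill the storage and spill as overflow.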
The surface layer of the LID receives both direct rainfall and runon from other areas. It loses water through infiltration into the soil layer below it, by evapotranspiration (ET) of any water stored in depression storage and vegetative capture, and by any surface runoff that might occur. The soil layer contains an amended soil mix that can support vegetative growth. It receives infiltration from the surface layer and loses water through ET and by percolation into the storage layer below it. The storage layer consists of coarse crushed stone or gravel. It receives percolation from the soil zone above it and loses water by either infiltration into the underlying natural soil or by outflow through a perforated pipe underdrain system. New , the EPA's National Stormwater Calculator is a Windows desktop application that estimates the annual amount of rainwater and frequency of runoff from a specific site anywhere in the United States. Estimates are based on local soil conditions, land cover, and historic rainfall records. The Calculator accesses several national databases that provide soil, topography, rainfall, and evaporation information for the chosen site. The user supplies information about the site's land cover and selects the types of low impact development (LID) controls they would like to use on-site. The LID Control features in SWMM 5.1.013 include the following among types of Green infrastructure: StreetPlanter: Bioretention cells are depressions that contain vegetation grown in an engineered soil mixture placed above a gravel drainage bed. They provide storage, infiltration and evaporation of both direct rainfall and runoff captured from surrounding areas. Street planters consist of concrete boxes filled with an engineered soil that supports vegetative growth. Beneath the soil is a gravel bed that provides additional storage. The walls of a planter extend 3 to 12 inches above the soil bed to allow for ponding within the unit. The thickness of the soil growing medium ranges from 6 to 24 inches while gravel beds are 6 to 18 inches in depth. The planter's capture ratio is the ratio of its area to the impervious area whose runoff it captures. Raingarden: Rain gardens are a type of bio-retention cell consisting of just the engineered soil layer with no gravel bed below it. Rain Gardens are shallow depressions filled with an engineered soil mix that supports vegetative growth. They are usually used on individual home lots to capture roof runoff. Typical soil depths range from 6 to 18 inches. The capture ratio is the ratio of the rain garden's area to the impervious area that drains onto it. GreenRoof: Green roofs are another variation of a bio-retention cell that have a soil layer laying atop a special drainage mat material that conveys excess percolated rainfall off of the roof. Green Roofs (also known as Vegetated Roofs) are bio-retention systems placed on roof surfaces that capture and temporarily store rainwater in a soil growing medium. They consist of a layered system of roofing designed to support plant growth and retain water for plant uptake while preventing ponding on the roof surface. The thickness used for the growing medium typically ranges from 3 to 6 inches. InfilTrench: infiltration trenches are narrow ditches filled with gravel that intercept runoff from upslope impervious areas. They provide storage volume and additional time for captured runoff to infiltrate the native soil below. 
PermPave or Permeable Pavements: Continuous permeable pavement systems are excavated areas filled with gravel and paved over with a porous concrete or asphalt mix. Modular block systems are similar except that permeable block pavers are used instead. Normally all rainfall will immediately pass through the pavement into the gravel storage layer below it, where it can infiltrate at natural rates into the site's native soil. Pavement layers are usually 4 to 6 inches in height while the gravel storage layer is typically 6 to 18 inches high. The capture ratio is the percent of the treated area (street or parking lot) that is replaced with permeable pavement.
Cistern: Rain barrels (or cisterns) are containers that collect roof runoff during storm events and can either release or re-use the rainwater during dry periods. Rain harvesting systems collect runoff from rooftops and convey it to a cistern tank where it can be used for non-potable water uses and on-site infiltration. The harvesting system is assumed to consist of a given number of fixed-sized cisterns per 1000 square feet of rooftop area captured. The water from each cistern is withdrawn at a constant rate and is assumed to be consumed or infiltrated entirely on-site.
VegSwale: Vegetative swales are channels or depressed areas with sloping sides covered with grass and other vegetation. They slow down the conveyance of collected runoff and allow it more time to infiltrate the native soil beneath it.
Infiltration basins are shallow depressions filled with grass or other natural vegetation that capture runoff from adjoining areas and allow it to infiltrate into the soil. Wet ponds are frequently used for water quality improvement, groundwater recharge, flood protection, aesthetic improvement or any combination of these. Sometimes they act as a replacement for the natural absorption of a forest or other natural process that was lost when an area is developed. As such, these structures are designed to blend into neighborhoods and are viewed as an amenity. Dry ponds temporarily store water after a storm, but eventually empty out at a controlled rate to a downstream water body. Sand filters generally control runoff water quality, providing very limited flow rate control. A typical sand filter system consists of two or three chambers or basins. The first is the sedimentation chamber, which removes floatables and heavy sediments. The second is the filtration chamber, which removes additional pollutants by filtering the runoff through a sand bed. The third is the discharge chamber. An infiltration trench is a type of best management practice (BMP) that is used to manage stormwater runoff, prevent flooding and downstream erosion, and improve water quality in an adjacent river, stream, lake or bay. It is a shallow excavated trench filled with gravel or crushed stone that is designed to infiltrate stormwater through permeable soils into the groundwater aquifer. A vegetated filter strip is a type of buffer strip: an area of vegetation, generally narrow and long, that slows the rate of runoff, allowing sediments, organic matter, and other pollutants that are being conveyed by the water to be removed by settling out. Filter strips reduce erosion and the accompanying stream pollution, and can be a best management practice. Other LID-like concepts around the world include sustainable drainage systems (SUDS).
The idea behind SUDS is to try to replicate natural systems that use cost effective solutions with low environmental impact to drain away dirty and surface water run-off through collection, storage, and cleaning before allowing it to be released slowly back into the environment, such as into watercourses. In addition the following features can also be simulated using the features of SWMM 5 (storage ponds, seepage, orifices, Weirs, seepage and evaporation from natural channels): constructed wetlands, wet ponds, dry ponds, infiltration basin, non-surface sand filters, vegetated filterstrips, vegetated filterstrip and infiltration basin. A WetPark would be a combination of wet and dry ponds and LID features. A WetPark is also considered a constructed wetland. SWMM5 components The SWMM 5.0.001 to 5.1.015 main components are rain gages, watersheds, LID controls or BMP features such as Wet and Dry Ponds, nodes, links, pollutants, landuses, time patterns, curves, time series, controls, transects, aquifers, unit hydrographs, snowmelt and shapes (Table 3). Other related objects are the types of Nodes and the Link Shapes. The purpose of the objects is to simulate the major components of the hydrologic cycle, the hydraulic components of the drainage, sewer or stormwater network, and the buildup/washoff functions that allow the simulation of water quality constituents. A watershed simulation starts with a precipitation time history. SWMM 5 has many types of open and closed pipes and channels: dummy, circular, filled circular, rectangular closed, rectangular open, trapezoidal, triangular, parabolic, power function, rectangular triangle, rectangle round, modified baskethandle, horizontal ellipse, vertical ellipse, arch, eggshaped, horseshoe, gothic, catenary, semielliptical, baskethandle, semicircular, irregular, custom and force main. The major objects or hydrology and hydraulic components in SWMM 5 are: GAGE rain gage SUBCATCH subcatchment NODE conveyance system node LINK conveyance system link POLLUT pollutant LANDUSE land use category TIMEPATTERN, dry weather flow time pattern CURVE generic table of values TSERIES generic time series of values CONTROL conveyance system control rules TRANSECT irregular channel cross-section AQUIFER groundwater aquifer UNITHYD RDII unit hydrograph SNOWMELT snowmelt parameter set SHAPE custom conduit shape LID LID treatment units The major overall components are called in the SWMM 5 input file and C code of the simulation engine: gage, subcatch, node, link, pollute, landuse, timepattern, curve, tseries, control, transect, aquifer, unithyd, snowmelt, shape and lid. The subsets of possible nodes are: junction, outfall, storage and divider. Storage Nodes are either tabular with a depth/area table or a functional relationship between area and depth. Possible node inflows include: external_inflow, dry_weather_inflow, wet_weather_inflow, groundwater_inflow, rdii_inflow, flow_inflow, concen_inflow, and mass_inflow. The dry weather inflows can include the possible patterns: monthly_pattern, daily_pattern, hourly_pattern, and weekend_pattern. 
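These objects (rain gages, subcatchments, nodes, links and their inflows) are also exposed programmatically by third-party wrappers such as PySWMM, which is listed under SWMM platforms below. The following is a minimal, hedged sketch of stepping a model with that wrapper; it assumes an existing SWMM input file named model.inp containing a node called J1, and both names are placeholders.

# Minimal sketch using the PySWMM wrapper (pip install pyswmm); file and node
# names are placeholders for an actual SWMM 5 model.
from pyswmm import Simulation, Nodes

with Simulation("model.inp") as sim:
    node_j1 = Nodes(sim)["J1"]
    for _ in sim:                                  # advance the engine one routing step at a time
        print(sim.current_time, node_j1.depth)     # report flow depth at the node each step

The same engine and input file can of course be run unmodified from the EPA SWMM 5 graphical interface; the wrapper simply drives the C engine step by step from Python.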
The SWMM 5 component structure allows the user to choose which major hydrology and hydraulic components are used during the simulation:
Rainfall/runoff, with infiltration options: Horton, Modified Horton, Green-Ampt and Curve Number
RDII
Water Quality
Groundwater
Snowmelt
Flow Routing, with routing options: Steady State, Kinematic Wave and Dynamic Wave
SWMM 3 and 4 to 5 converter
The SWMM 3 and SWMM 4 converter can convert up to two files from the earlier SWMM 3 and 4 versions at one time to SWMM 5. Typically one would convert a Runoff and Transport file to SWMM 5, or a Runoff and Extran file to SWMM 5. If there is a combination of a SWMM 4 Runoff, Transport and Extran network then it will have to be converted in pieces, and the two data sets will have to be copied and pasted together to make one SWMM 5 data set. The x,y coordinate file is only necessary if there are no existing x,y coordinates on the D1 line of the SWMM 4 Extran input data set. The command File=>Define Ini File can be used to define the location of the ini file. The ini file will save the conversion project input data files and directories. The SWMM 3 and SWMM 3.5 files are fixed format. The SWMM 4 files are free format. The converter will detect which version of SWMM is being used. The converted files can be combined using a text editor to merge the created inp files.
SWMM-CAT Climate Change Add-On
The Storm Water Management Model Climate Adjustment Tool (SWMM-CAT) is an addition to SWMM 5 released in December 2014. It is a simple-to-use software utility that allows future climate change projections to be incorporated into the Storm Water Management Model (SWMM). SWMM was updated to accept a set of monthly adjustment factors for its temperature, evaporation and rainfall time series that can represent the impact of future changes in climatic conditions. SWMM-CAT provides a set of location-specific adjustments that are derived from global climate change models run as part of the World Climate Research Programme (WCRP) Coupled Model Intercomparison Project Phase 3 (CMIP3) archive. Adjustments can be applied on a monthly basis to air temperature, evaporation rates, and precipitation, as well as to the 24-hour design storm at different recurrence intervals. Downscaled results from this archive were generated and converted into changes with respect to historical values by USEPA's CREAT project. The following steps are used to select a set of adjustments to apply to SWMM 5:
1) Enter the latitude and longitude coordinates of the location if available, or its 5-digit zip code. SWMM-CAT will display a range of climate change outcomes for the CMIP3 results closest to the location.
2) Select whether to use climate change projections based on either a near-term or far-term projection period. The displayed climate change outcomes will be updated to reflect the selected projection period.
3) Select a climate change outcome to save to SWMM. There are three choices that span the range of outcomes produced by the different global climate models used in the CMIP3 project. The Hot/Dry outcome represents a model whose average temperature change was on the high end and whose average rainfall change was on the lower end of all model projections.
The Warm/Wet outcome represents a model whose average temperature change was on the lower end and whose average rainfall change was on the wetter end of the spectrum. The Median outcome is for a model whose temperature and rainfall changes were closest to the median of all models. 4) Click the Save Adjustments to SWMM link to bring up a dialog form that will allow the selection of an existing SWMM project file to save the adjustments to. The form will also allow the selection of which type of adjustments (monthly temperature, evaporation, rainfall, or 24-hour design storm) to save. Conversion of temperature and evaporation units is automatically handled depending on the unit system (US or SI) detected in the SWMM file. EPA stormwater calculator based on SWMM5 Other external programs that aid in the generation of data for the EPA SWMM 5 model include: SUSTAIN, BASINS, SSOAP, and the EPA’s National Stormwater Calculator (SWC) which is a desktop application that estimates the annual amount of rainwater and frequency of runoff from a specific site anywhere in the United States (including Puerto Rico). The estimates are based on local soil conditions, land cover, and historic rainfall records (Figure 5). SWMM platforms The SWMM5 engine is used by a variety of software packages, including many commercial software packages. Some of these software packages include: EPA-SWMM InfoWorks ICM SWMM, InfoDrainage, and InfoSWMM Developed by Innovyze An Autodesk Company InfoWorks ICM which has RDII, Water Quality, and Hydrology Components from SWMM5. Developed by Innovyze An Autodesk Company XPSWMM now part of Innovyze An Autodesk Company. Autodesk Storm and Sanitary Analysis PCSWMM MIKE URBAN SewerGEMS and CivilStorm from Bentley Systems, Inc. Fluidit Sewer and Fluidit Storm Flood Modeller by Jacobs GeoSWMM Giswater GISpipe GIS-based EPANET and SWMM integration software. PySWMM by OpenWaterAnalytics See also SWAT model Stochastic empirical loading and dilution model WAFLEX Hydrology Hydraulics Surface runoff Precipitation (meteorology) Antecedent moisture Evapotranspiration EPANET Rainfall Hydrological transport model Computer simulation Water pollution Water quality Surface-water hydrology References External links EPA SWMM 5.1.013 Download EPA National Stormwater Calculator - SWMM 5 Based United States Environmental Protection Agency Water resource management in the United States Stormwater management Public-domain software with source code Hydrology models
190219
https://en.wikipedia.org/wiki/Open%20Software%20Foundation
Open Software Foundation
The Open Software Foundation (OSF) was a not-for-profit industry consortium for creating an open standard for an implementation of the operating system Unix. It was formed in 1988 and merged with X/Open in 1996, to become The Open Group. Despite the similarities in name, OSF was unrelated to the Free Software Foundation (FSF, also based in Cambridge, Massachusetts), or the Open Source Initiative (OSI). History The organization was first proposed by Armando Stettner of Digital Equipment Corporation (DEC) at an invitation-only meeting hosted by DEC for several Unix system vendors in January 1988 (called the "Hamilton Group", since the meeting was held at DEC's offices on Palo Alto's Hamilton Avenue). It was intended as an organization for joint development, mostly in response to a perceived threat of "merged UNIX system" efforts by AT&T Corporation and Sun Microsystems. After discussion during the meeting, the proposal was tabled so that members of the Hamilton Group could broach the idea of a joint development effort with Sun and AT&T. In the meantime, Stettner was asked to write an organization charter. That charter was formally presented to Apollo, HP, IBM and others after Sun and AT&T rejected the overture by the Hamilton Group members. The foundation's original sponsoring members were Apollo Computer, Groupe Bull, Digital Equipment Corporation, Hewlett-Packard, IBM, Nixdorf Computer, and Siemens AG, sometimes called the "Gang of Seven". Later sponsor members included Philips and Hitachi with the broader general membership growing to more than a hundred companies. It was registered under the U.S. National Cooperative Research Act of 1984, which reduces potential antitrust liabilities of research joint ventures and standards development organizations. The sponsors gave OSF significant funding, a broad mandate (the so-called "Seven Principles"), substantial independence, and support from sponsor senior management. Senior operating executives from the sponsoring companies served on OSF's initial Board of Directors. One of the Seven Principles was declaration of an "Open Process" whereby OSF staff would create Request for Proposals for source technologies to be selected by OSF, in a vendor neutral process. The selected technology would be licensed by the OSF to the public. Membership in the organization gave member companies a voice in the process for requirements. At the founding, five Open Process projects were named. The organization was seen as a response to the collaboration between AT&T and Sun on UNIX System V Release 4, and a fear that other vendors would be locked out of the standardization process. This led Scott McNealy of Sun to quip that "OSF" really stood for "Oppose Sun Forever". The competition between the opposing versions of Unix systems became known as the Unix wars. AT&T founded the Unix International (UI) project management organization later that year as a counter-response to the OSF. UI was led by Peter Cunningham, formerly of International Computers Limited (ICL), as its president. UI had many of the same characteristics of OSF, with the exception of a software development staff. Unix System Laboratories (USL) filled the software development role, and UI was based in Parsippany-Troy Hills, New Jersey to be close to USL. 
The executive staff of the Open Software Foundation included David Tory, President, formerly of Computer Associates; Norma Clarke, Vice-President Human Resources formerly of Mitre; Marty Ford, Vice-President Finance, formerly of DEC; Ira Goldstein, Vice-President Research Institute, formerly of Hewlett-Packard; Roger Gourd, Vice-President Engineering, formerly of DEC; Alex Morrow, Vice-President Strategy, formerly of IBM; Donal O'Shea, Vice-President of Operations, formerly of UniSoft. This staff added more than 300 employees in less than two years. The organization's headquarters were at 11 Cambridge Center in Cambridge, Massachusetts, intentionally located in the neighborhood of the Massachusetts Institute of Technology along with remote development offices in Munich, Germany and Grenoble, France and field offices in Brussels and Tokyo. To the public, the organization appeared to be nothing more than an advocacy group; in reality it included a distributed software development organization. An independent security software company - Addamax, filed suit in 1990 against OSF and its sponsors charging that OSF was engaged in anticompetitive practices. The court delivered a grant of summary judgment to OSF (152 F.3d 48, 50 (1st Cir.1998). In a related action in 1991, the Federal Trade Commission investigated OSF for allegedly using "unfair trade practices" in its "process for acquiring technology." Products OSF's Unix reference implementation was named OSF/1. It was first released in December 1990 and adopted by Digital a month later. As part of the founding of the organization, the AIX operating system was provided by IBM and was intended to be passed-through to the member companies of OSF. However, delays and portability concerns caused the OSF staff to cancel the original plan. Instead, a new Unix reference operating system using components from across the industry would be released on a wide range of platforms to demonstrate its portability and vendor neutrality. This new OS was produced in a little more than one year. It incorporated technology from Carnegie Mellon University: the Mach 2.5 microkernel; from IBM, the journaled file system and commands and libraries; from SecureWare secure core components; from Berkeley Software Distribution (BSD) the computer networking stack; and a new virtual memory management system invented at OSF. By the time OSF stopped development of OSF/1 in 1996, the only major Unix system vendor using the complete OSF/1 package was Digital (DEC), which rebranded it Digital UNIX (later renamed Tru64 UNIX after Digital's acquisition by Compaq). However, other Unix vendors licensed the operating system to include various components of OSF/1 in their products. Other software vendors also licensed OSF/1 including Apple. Parts of OSF/1 were contained in so many versions of Unix that it may have been the most widely deployed Unix product ever produced. Other technologies developed by OSF include Motif and Distributed Computing Environment (DCE), respectively a widget toolkit and package of distributed network computing technologies. The Motif toolkit was adopted as a formal standard within the Institute of Electrical and Electronics Engineers (IEEE) as P1295 in 1994. Filling out the initial (and what turned out to be final) five technologies from OSF were DME, the Distributed Management Environment and ANDF, the Architecturally Neutral Distribution Format. 
Technologies which were produced primarily by OSF included ODE, the Open Development Environment - a flexible development, build and source control environment; TET, the Test Environment Toolkit - an open framework for building and executing automated test cases; and the operating system OSF/1 MK from the OSF Research Institute based on the Mach3.0 microkernel. ODE and TET were made available as open source. TET was produced as a result of collaboration between OSF, UNIX International and the X/Open Consortium. All the OSF technologies had corresponding manuals and supporting publications produced almost exclusively by the staff at OSF and published by Prentice-Hall. IBM has published its version of ODE on GitHub. Merger By 1993, it had become clear that the greater threat to UNIX system vendors was not each other as much as the increasing presence of Microsoft in enterprise computing. In May, the Common Open Software Environment (COSE) initiative was announced by the major players in the UNIX world from both the UI and OSF camps: Hewlett-Packard, IBM, Sun, Unix System Laboratories, and the Santa Cruz Operation. As part of this agreement, Sun and AT&T became OSF sponsor members, OSF submitted Motif to the X/Open Consortium for certification and branding and Novell passed control and licensing of the UNIX trademark to the X/Open Consortium. In March 1994, OSF announced its new organizational model and introduced the COSE technology model as its Pre-Structured Technology (PST) process, which marked the end of OSF as a significant software development company. It also assumed responsibility for future work on the COSE initiative's Common Desktop Environment (CDE). In September 1995, the merger of OSF/Motif and CDE into a single project, CDE/Motif, was announced. In February 1996 OSF merged with X/Open to become The Open Group. References Free software project foundations in the United States Standards organizations in the United States Unix history Unix standards X Window System
31966269
https://en.wikipedia.org/wiki/LulzSec
LulzSec
Lulz Security, commonly abbreviated as LulzSec, was a black hat computer hacking group that claimed responsibility for several high-profile attacks, including the compromise of user accounts from PlayStation Network in 2011. The group also claimed responsibility for taking the CIA website offline. Some security professionals have commented that LulzSec drew attention to insecure systems and the dangers of password reuse. It gained attention due to its high-profile targets and the sarcastic messages it posted in the aftermath of its attacks. One of the founders of LulzSec was computer security specialist Hector Monsegur, who used the online moniker Sabu. He later helped law enforcement track down other members of the organization as part of a plea deal. At least four associates of LulzSec were arrested in March 2012 as part of this investigation. Before that, British authorities had announced the arrests of two teenagers they alleged were LulzSec members, going by the pseudonyms T-flow and Topiary. At just after midnight (BST, UTC+01) on 26 June 2011, LulzSec released a "50 days of lulz" statement, which they claimed to be their final release, confirming that LulzSec consisted of six members, and that their website was to be shut down. The sudden disbandment of the group was unexpected. Their final release included accounts and passwords from many different sources. Despite claims of retirement, the group committed another hack against newspapers owned by News Corporation on 18 July, defacing them with false reports regarding the death of Rupert Murdoch. The group had also helped launch Operation AntiSec, a joint effort involving LulzSec, Anonymous, and other hackers. Background and history A federal indictment against members contends that, prior to forming the hacking collective known as LulzSec, the six members were all part of another collective called Internet Feds, a group in rivalry with Anonymous. Under this name, the group attacked websites belonging to Fine Gael, HBGary, and Fox Broadcasting Company. This included the alleged incident in which e-mail messages were stolen from HBGary accounts. In May 2011, following the publicity surrounding the HBGary hacks, six members of Internet Feds founded the group LulzSec. The group's first recorded attack was against Fox.com's website, though they still may have been using the name Internet Feds at the time. It claimed responsibility for leaking information, including passwords, altering several employees' LinkedIn profiles, and leaking a database of X Factor contestants containing contact information for 73,000 contestants. They claimed to do so because the rapper Common had been referred to as "vile" on air. LulzSec drew its name from the neologism "lulz" (from "lol", "laughing out loud"), which represents laughter, and "Sec", short for "Security". The Wall Street Journal characterized its attacks as closer to Internet pranks than serious cyber-warfare, while the group itself claimed to possess the capability of stronger attacks. It gained attention in part due to its brazen claims of responsibility and lighthearted taunting of corporations that were hacked. It frequently referred to Internet memes when defacing websites. The group emerged in May 2011 and successfully attacked the websites of several major corporations. It specialized in finding websites with poor security, then stealing information from them and posting it online. It used well-known, straightforward methods, such as SQL injection, to attack its target websites. 
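SQL injection, the technique named above, abuses web applications that build database queries by pasting user-supplied text directly into SQL statements. The short Python sketch below is a generic, self-contained illustration of that class of flaw and of the standard defence (parameterized queries); it is not code from, or related to, any system LulzSec attacked, and the table, column, and account values are invented for the example.

import sqlite3

# Toy in-memory database standing in for a website's user table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

def login_vulnerable(username, password):
    # Building the query by string concatenation lets attacker-supplied
    # input change the structure of the SQL statement itself.
    query = ("SELECT * FROM users WHERE username = '" + username +
             "' AND password = '" + password + "'")
    return conn.execute(query).fetchall()

def login_safe(username, password):
    # Parameterized queries keep user input as data, never as SQL syntax.
    query = "SELECT * FROM users WHERE username = ? AND password = ?"
    return conn.execute(query, (username, password)).fetchall()

# The textbook "' OR '1'='1" payload defeats the concatenated query but
# matches nothing when the same input is passed as a bound parameter.
payload = "' OR '1'='1"
print(login_vulnerable("alice", payload))  # returns the row despite a wrong password
print(login_safe("alice", payload))        # returns []

The vulnerable version succeeds because the injected quote characters turn the password check into a condition that is always true; the parameterized version treats the whole payload as an ordinary (and wrong) password string.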
Several media sources have described their tactics as grey hat hacking. Members of the group may have been involved in a previous attack against the security firm HBGary. The group used the motto "Laughing at your security since 2011!" and its website, created in June 2011, played the theme from The Love Boat. It announced its exploits via Twitter and its own website, often accompanied with lighthearted ASCII art drawings of boats. Its website also included a bitcoin donation link to help fund its activities. Ian Paul of PC World wrote that, "As its name suggests, LulzSec claims to be interested in mocking and embarrassing companies by exposing security flaws rather than stealing data for criminal purposes." The group was also critical of white hat hackers, claiming that many of them have been corrupted by their employers. Some in the security community contended that the group raised awareness of the widespread lack of effective security against hackers. They were credited with inspiring LulzRaft, a group implicated in several high-profile website hacks in Canada. In June 2011 the group took suggestions for sites to hit with denial-of-service attacks. The group redirected telephone numbers to different customer support lines, including the line for World of Warcraft, magnets.com, and the FBI Detroit office. The group claimed this sent five to 20 calls per second to these sources, overwhelming their support officers. On 24 June 2011, The Guardian released leaked logs of one of the group's IRC chats, revealing that the core group was a small group of hackers with a leader Sabu who exercised large control over the group's activities. It also revealed that the group had connections with Anonymous, though was not formally affiliated with it. Some LulzSec members had once been prominent Anonymous members, including member Topiary. At just after midnight (UTC) on 26 June 2011, LulzSec released a "50 days of lulz" statement, which they claimed to be their final release, confirming that LulzSec consisted of six members, and that their website was to be taken down. The group claimed that they had planned to be active for only fifty days from the beginning. "We're not quitting because we're afraid of law enforcement. The press are getting bored of us, and we're getting bored of us," a group member said in an interview to the Associated Press. Members of the group were reported to have joined with Anonymous members to continue the AntiSec operation. However, despite claiming to retire, the group remained in communication as it attacked the websites of British newspapers The Times and The Sun on 18 July, leaving a false story on the death of owner Rupert Murdoch. Former members and associates LulzSec consisted of seven core members. The online handles of these seven were established through various attempts by other hacking groups to release personal information of group members on the internet, leaked IRC logs published by The Guardian, and through confirmation from the group itself. Sabu – One of the group's founders, who seemed to act as a kind of leader for the group, Sabu would often decide what targets to attack next and who could participate in these attacks. He may have been part of the Anonymous group that hacked HBGary. Various attempts to release his real identity have claimed that he is an information technology consultant with the strongest hacking skills of the group and knowledge of the Python programming language. 
Sabu was also thought to have been involved in the media outrage of 2010, using the Skype handle "anonymous.sabu". Sabu was arrested in June 2011 and identified as a 29-year-old unemployed man from New York’s Lower East Side. On 15 August, he pleaded guilty to several hacking charges and agreed to cooperate with the FBI. Over the following seven months he successfully unmasked the other members of the group. Sabu had been identified by Backtrace Security as Hector Monsegur on 11 March 2011 in a PDF publication named "Namshub." Topiary – Topiary was also a suspected former member of Anonymous, where he used to perform media relations, including hacking the website of the Westboro Baptist Church during a live interview. Topiary ran the LulzSec Twitter account on a daily basis; following the announcement of LulzSec's dissolution, he deleted all the posts on his Twitter page, except for one, which stated: "You cannot arrest an idea". Police arrested a man from Shetland, United Kingdom, suspected of being Topiary on 27 July 2011. The man was later identified as Jake Davis and was charged with five counts, including unauthorized access of a computer and conspiracy. He was indicted on conspiracy charges on 6 March 2012. Kayla/KMS – Ryan Ackroyd of London, and another unidentified individual known as "lol" or "Shock.ofgod" in LulzSec chat logs. Kayla owned a botnet used by the group in their distributed denial-of-service attacks. The botnet is reported to have consisted of about 800,000 infected computer servers. Kayla was involved in several high-profile attacks under the group "gn0sis". Kayla also may have participated in the Anonymous operation against HBGary. Kayla reportedly wiretapped two CIA agents in an Anonymous operation. Kayla was also involved in the 2010 media outrage under the Skype handle "Pastorhoudaille". Kayla is suspected of having been something of a deputy to Sabu and to have found the vulnerabilities that allowed LulzSec access to the United States Senate systems. One of the men behind the handle Kayla was identified as Ryan Ackroyd of London, arrested, and indicted on conspiracy charges on 6 March 2012. Tflow – (Real name: Mustafa Al-Bassam) The fourth founding member of the group identified in chat logs; attempts to identify him have labelled him a PHP coder, web developer, and performer of scams on PayPal. The group placed him in charge of maintenance and security of the group's website lulzsecurity.com. London Metropolitan Police announced the arrest of a 16-year-old hacker going by the handle Tflow on 19 July 2011. Avunit – He is one of the core seven members of the group, but not a founding member. He left the group after their self-labelled "Fuck the FBI Friday". He was also affiliated with Anonymous AnonOps HQ. Avunit is the only one of the core seven members who has not been identified. Pwnsauce – Pwnsauce joined the group around the same time as Avunit and became one of its core members. He was identified as Darren Martyn of Ireland and was indicted on conspiracy charges on 6 March 2012. The Irish national worked as a local chapter leader for the Open Web Application Security Project, resigning one week before his arrest. Palladium – Identified as Donncha O'Cearbhaill of Ireland, he was indicted on conspiracy charges on 6 March 2012. Anarchaos – Identified as Jeremy Hammond of Chicago, he was arrested on access device fraud and hacking charges. He was also charged with a hacking attack on the U.S. security company Stratfor in December 2011. He is said to be a member of Anonymous. 
Ryan Cleary, who sometimes used the handle ViraL. Cleary faced a sentence of 32 months in relation to attacks against the US Air Force and others. Ideology LulzSec did not appear to hack for financial profit, claiming their main motivation was to have fun by causing mayhem. They did things "for the lulz" and focused on the possible comedic and entertainment value of attacking targets. The group occasionally claimed a political message. When they hacked PBS, they stated they did so in retaliation for what they perceived as unfair treatment of WikiLeaks in a Frontline documentary entitled WikiSecrets. A page they inserted on the PBS website included the title "FREE BRADLEY MANNING. FUCK FRONTLINE!" The 20 June announcement of "Operation Anti-Security" contained justification for attacks on government targets, citing supposed government efforts to "dominate and control our Internet ocean" and accusing them of corruption and breaching privacy. The news media most often described them as grey hat hackers. Karim Hijazi, CEO of security company Unveillance, accused the group of blackmailing him by offering not to attack his company or its affiliates in exchange for money. LulzSec responded by claiming that Hijazi offered to pay them to attack his business opponents and that they never intended to take any money from him. LulzSec denied responsibility for misuse of any of the data they breached and released. Instead, they placed the blame on users who reused passwords on multiple websites and on companies with inadequate security in place. In June 2011, the group released a manifesto outlining why they performed hacks and website takedowns, reiterating that "we do things just because we find it entertaining" and that watching the results can be "priceless". They also claimed to be drawing attention to computer security flaws and holes. They contended that many other hackers exploit and steal user information without releasing the names publicly or telling people they may possibly have been hacked. LulzSec said that by releasing lists of hacked usernames or informing the public of vulnerable websites, it gave users the opportunity to change names and passwords elsewhere that might otherwise have been exploited, and businesses would be alarmed and would upgrade their security. The group's later attacks had a more political tone. They claimed to want to expose the "racist and corrupt nature" of the military and law enforcement. They also expressed opposition to the War on Drugs. LulzSec's Operation Anti-Security was characterized as a protest against government censorship and monitoring of the internet. In a question and answer session with BBC Newsnight, LulzSec member Whirlpool (AKA: Topiary) said, "Politically motivated ethical hacking is more fulfilling". He cited the loosening of copyright laws and the rollback of what he saw as corrupt racial profiling practices as some of the group's goals. Initial targets The group's first attacks came in May 2011. Their first recorded target was Fox.com, which they attacked in retaliation after Common, a rapper and entertainer, was called "vile" on the Fox News Channel. They leaked several passwords, LinkedIn profiles, and the names of 73,000 X Factor contestants. Soon after, on 15 May, they released the transaction logs of 3,100 Automated Teller Machines in the United Kingdom. In May 2011, members of Lulz Security gained international attention for hacking the American Public Broadcasting System (PBS) website. 
They stole user data and posted a fake story on the site which claimed that Tupac Shakur and Biggie Smalls were still alive and living in New Zealand. In the aftermath of the attack, CNN referred to the responsible group as the "Lulz Boat". Lulz Security claimed that some of its hacks, including its attack on PBS, were motivated by a desire to defend WikiLeaks and Chelsea Manning. A Fox News report on the group quoted one commentator, Brandon Pike, who claimed that Lulz Security was affiliated with the hacktivist group Anonymous. Lulz Security claimed that Pike had actually hired it to hack PBS. Pike denied the accusation and claimed it was leveled against him because he said Lulz Security was a splinter of Anonymous. In June 2011, members of the group claimed responsibility for an attack against Sony Pictures that took data that included "names, passwords, e-mail addresses, home addresses and dates of birth for thousands of people." The group claimed that it used a SQL injection attack, and was motivated by Sony's legal action against George Hotz for jailbreaking the PlayStation 3. The group claimed it would launch an attack that would be the "beginning of the end" for Sony. Some of the compromised user information was subsequently used in scams. The group claimed to have compromised over 1,000,000 accounts, though Sony claimed the real number was around 37,500. Corporate attacks Lulz Security attempted to hack into Nintendo, but both the group and Nintendo itself report that no particularly valuable information was found by the hackers. LulzSec claimed that it did not mean to harm Nintendo, declaring: "We're not targeting Nintendo. We like the N64 too much — we sincerely hope Nintendo plugs the gap." On 11 June, reports emerged that LulzSec hacked into and stole user information from the pornography website www.pron.com. They obtained and published around 26,000 e-mail addresses and passwords. Among the information stolen were records of two users who subscribed using email addresses associated with the Malaysian government, three users who subscribed using United States military email addresses and 55 users who LulzSec claimed were administrators of other adult-oriented websites. Following the breach, Facebook locked the accounts of all users who had used the published e-mail addresses, and also blocked new Facebook accounts opened using the leaked e-mail addresses, fearing that users of the site would get hacked after LulzSec encouraged people to try and see if these people used identical user name and password combinations on Facebook as well. LulzSec hacked into the Bethesda Game Studios network and posted information taken from the network onto the Internet, though they refrained from publishing 200,000 compromised accounts. LulzSec posted to Twitter regarding the attack, "Bethesda, we broke into your site over two months ago. We've had all of your Brink users for weeks, Please fix your junk, thanks!" On 14 June 2011, LulzSec took down four websites by request of fans as part of their "Titanic Take-down Tuesday". These websites were Minecraft, League of Legends, The Escapist, and IT security company FinFisher. They also attacked the login servers of the massively multiplayer online game EVE Online, which also disabled the game's front-facing website, and the League of Legends login servers. Most of the takedowns were performed with distributed denial-of-service attacks. On 15 June, LulzSec took down the main server of S2 Games' Heroes of Newerth as another phone request. 
They claimed, "Heroes of Newerth master login server is down. They need some treatment. Also, DotA is better." On 16 June, LulzSec posted a random assortment of 62,000 emails and passwords to MediaFire. LulzSec stated they released this in return for supporters flooding the 4chan /b/ board. The group did not say what websites the combinations were for and encouraged followers to plug them into various sites until they gained access to an account. Some reported gaining access to Facebook accounts and changing images to sexual content and others to using the Amazon.com accounts of others to purchase several books. Writerspace.com, a literary website, later admitted that the addresses and passwords came from users of their site. Government-focused activities LulzSec claimed to have hacked local InfraGard chapter sites, a non-profit organization affiliated with the FBI. The group leaked some of InfraGard member e-mails and a database of local users. The group defaced the website posting the following message, "LET IT FLOW YOU STUPID FBI BATTLESHIPS", accompanied with a video. LulzSec posted: On 9 June, LulzSec sent an email to the administrators of the British National Health Service, informing them of a security vulnerability discovered in NHS systems. LulzSec stated that they did not intend to exploit this vulnerability, saying in the email that "We mean you no harm and only want to help you fix your tech issues." On 13 June, LulzSec released the e-mails and passwords of a number of users of senate.gov, the website of the United States Senate. The information released also included the root directory of parts of the website. LulzSec stated, "This is a small, just-for-kicks release of some internal data from senate.gov – is this an act of war, gentlemen? Problem?" referencing a recent statement by the Pentagon that some cyberattacks could be considered an act of war. No highly sensitive information appears in the release. On 15 June, LulzSec launched an attack on CIA.gov, the public website of the United States Central Intelligence Agency, taking the website offline with a distributed denial-of-service attack. The website was down from 5:48 pm to 8:00 pm eastern time. On 2 December, an offshoot of LulzSec calling itself LulzSec Portugal, attacked several sites related to the government of Portugal. The websites for the Bank of Portugal, the Assembly of the Republic, and the Ministry of Economy, Innovation and Development all became unavailable for a few hours. Operation Anti-Security On 20 June, the group announced it had teamed up with Anonymous for "Operation Anti-Security". They encouraged supporters to steal and publish classified government information from any source while leaving the term "AntiSec" as evidence of their intrusion. Also listed as potential targets were major banks. USA Today characterized the operation as an open declaration of cyberwarfare against big government and corporations. Their first target of the operation was the Serious Organised Crime Agency (SOCA), a national law enforcement agency of the United Kingdom. LulzSec claimed to have taken the website offline at about 11 am EST on 20 June 2011, though it only remained down for a few minutes. While the attack appeared to be a DDoS attack, LulzSec tweeted that actual hacking was taking place "behind the scenes". At about 6:10 pm EST on 20 June, SOCA's website went down yet again. SOCA's website was back online sometime between 20 and 21 June. 
The website of the local district government of Jianhua District in Qiqihar, China, was also knocked offline. Early in the morning on 22 June, it was revealed that LulzSec's "Brazilian unit" had taken down two Brazilian government websites, brasil.gov.br and presidencia.gov.br. They also brought down the website of Brazilian energy company Petrobras. On 20 June, two members on the "Lulz Boat" reportedly leaked logs that LulzSec was going to leak on 21 June. They also claimed that the two had leaked information that aided authorities in locating and arresting Ryan Cleary, a man loosely affiliated with the group. LulzSec posted various personal information about the two on Pastebin including IP addresses and physical addresses. Both had been involved with cyber-crimes in the past, and one had been involved with hacking the game Deus Ex. After LulzSec encouragement, some began tagging public locations with physical graffiti reading "Antisec" as part of the operation. Numerous beachfronts in Mission Beach, San Diego were vandalized with the phrase. Some local news organizations mistook the graffiti in Mission Beach as signs of the Antisec Movement. Many commenters on the local news websites corrected this. On 23 June, LulzSec released a number of documents pertaining to the Arizona Department of Public Safety, which they titled "chinga la migra", which roughly translates to "fuck the border patrol". The leaked items included email addresses and passwords, as well as hundreds of documents marked "sensitive" or "for official use only". LulzSec claimed that this was in protest of the law passed in Arizona requiring some aliens to carry registration documents at all times. Arizona officials have confirmed the intrusion. Arizona police have complained that the release of officer identities and the method used to combat gangs could endanger the lives of police officers. On 24 June 2011, LulzSecBrazil published what they claimed were access codes and passwords that they used to access the Petrobras website and employee profile data they had taken using the information. Petrobras denied that any data had been stolen, and LulzSecBrazil removed the information from their Twitter feed a few hours later. The group also released personal information regarding President of Brazil Dilma Rousseff and Mayor of São Paulo Gilberto Kassab. On 25 June 2011, LulzSec released what they described as their last data dump. The release contained an enormous amount of information from various sources. The files contained a half gigabyte of internal information from telecommunication company AT&T, including information relating to its release of 4G LTE and details pertaining to over 90,000 personal phones used by IBM. The IP addresses of several large corporations including Sony, Viacom, and Disney, EMI, and NBC Universal were included. It also contained over 750,000 username and password combinations from several websites, including 200,000 email addresses, usernames, and encrypted passwords from hackforums.net; 12,000 names, usernames, and passwords of the NATO online bookshop; half a million usernames and encrypted passwords of players of the online game Battlefield Heroes; 50,000 usernames, email addresses, and encrypted passwords of various video game forum users; and 29 users of Priority Investigations, an Irish private investigation company. Also included were an internal manual for AOL engineering staff and a screencapture of a vandalized page from navy.mil, the website of the United States Navy. 
Members of the group continued the operation with members of Anonymous after disbanding. Despite claiming to have retired, on 18 July LulzSec hacked into the website of British newspaper The Sun. The group redirected the newspaper's website to an also-hacked redesign website of another newspaper, The Times, altering the site to resemble The Sun and posting a fake story claiming that Rupert Murdoch had died after ingesting a fatal dose of palladium. They objected to the involvement of News Corporation, the Murdoch-owned company that publishes The Sun and The Times, in a large phone hacking scandal. The hacked website also contained a webcomic depicting LulzSec deciding on and carrying out the attack. The group later redirected The Sun website to their Twitter feed. News International released a statement regarding the attacks before having the page the statement appeared on also redirected to the LulzSec Twitter page and eventually taken offline. The group also released the names and phone numbers of a reporter for The Sun and two others associated with the newspaper and encouraged their supporters to call them. They further included an old email address and password of former News International executive Rebekah Brooks. News Corporation took the websites offline as a precaution later in the day. NovaCygni of AntiSec later claimed that the news channel Russian Television (RT) had openly stated support for the Anonymous movement and that at least one reporter for the channel was an active member of Anonymous. Denied attacks The media reported a number of attacks, originally attributed to LulzSec, that the group later denied involvement in. On 21 June, someone claiming to be from the group posted on Pastebin that they had stolen the entire database of the United Kingdom Census 2011. LulzSec responded by saying that they had obtained no such data and that whoever posted the notice was not from the group. British officials said they were investigating the incident, but found no evidence that any databases had been compromised or any information taken. The British government, upon concluding their investigation, called the claims that any information on the census was taken a hoax. In June 2011, assets belonging to newspaper publisher News International were attacked, apparently in retaliation for reporting by The Sun of the arrest of Ryan Cleary, an associate of the group. The newspaper's website and a computer used in the publishing process of The Times were attacked. However, LulzSec denied any involvement, stating "we didn't attack The Sun or The Times in any way with any kind of DDoS attack". Members of AntiSec based in Essex, England, claimed responsibility for the attack. Hacker actions against LulzSec A number of different hackers targeted LulzSec and its members in response to their activities. On 23 June 2011, Fox News reported that rival hacker group TeaMp0isoN were responsible for outing web designer Sven Slootweg, who they said used the online nickname Joepie91, and that they intended to do the same with every member. A Pastebin post in June 2011 from hacker KillerCube identified LulzSec leader Sabu as Hector Xavier Monsegur, an identification later shown to be accurate. A group calling themselves Team Web Ninjas appeared in June 2011 saying they were angry over the LulzSec release of the e-mail addresses and passwords of thousands of normal Internet users. 
They attempted to publicly identify the online and real world identities of LulzSec leadership and claimed to do so on behalf of the group's victims. The group claimed to have identified and given to law enforcement the names of a number of the group's members, including someone they claimed to be a United States Marine. The Jester, a hacker who generally went by the leetspeak handle th3j35t3r, vowed to find and expose members of LulzSec. Claiming to perform hacks out of a sense of American patriotism, he attempted to obtain and publish the real world personally identifiable information of key members, whom he described as "childish". On 24 June 2011, he claimed to have revealed the identity of LulzSec leader Sabu as an information technology consultant possibly from New York City. On 24 June 2011, a hacker allegedly going by the name Oneiroi briefly took down the LulzSec website in what he labelled "Operation Supernova". The Twitter page for the group also briefly became unavailable. On 24 June 2011, The Guardian published leaked logs from one of the group's IRC channels. The logs were originally assumed to have been leaked by a disillusioned former member of the group who went by the nickname m_nerva, yet fellow hacker Michael Major, known by his handle 'hann', later claimed responsibility. After confirming that the leaked logs were indeed theirs, and that the logs revealed personal information on two members who had recently left the group due to the implications of attacking the FBI website, LulzSec went on to threaten m_nerva on their Twitter feed. LulzSec claimed the logs were not from one of their core chatting channels, but rather a secondary channel used to screen potential backups and gather research. A short time before LulzSec claimed to be disbanding, a group calling itself the A-Team posted what they claimed was a full list of LulzSec members online along with numerous chat logs of the group communicating with each other. A rival hacker going by the name of TriCk also claimed to be working to reveal the group's identities and claimed that efforts on the part of rival hackers had pushed the group to disband for fear of being caught. Law enforcement response On 21 June 2011, the London Metropolitan Police announced that they had arrested a 19-year-old man from Wickford, Essex, named by LulzSec and locally as Ryan Cleary, as part of an operation carried out in cooperation with the FBI. The suspect was arrested on charges of computer misuse and fraud, and later charged with five counts of computer hacking under the Criminal Law Act and the Computer Misuse Act. News reports described him as an alleged member of LulzSec. LulzSec denied the man arrested was a member. A member of LulzSec claimed that the suspect was not part of the group, but did host one of its IRC channels on his server. British police confirmed that he was being questioned regarding alleged involvement in LulzSec attacks against the Serious Organized Crime Agency (SOCA) and other targets. They also questioned him regarding an attack on the International Federation of the Phonographic Industry in November 2010. On 25 June 2011 the court released Cleary under the bail conditions that he not leave his house without his mother and not use any device connected to the internet. He was diagnosed the previous week with Asperger syndrome. In June 2012 Cleary, together with another suspected LulzSec member, 19-year-old Jake Davis, pleaded guilty conspiring to attack government, law enforcement and media websites in 2011. 
At around the same time as Cleary's arrest, Federal Bureau of Investigation agents raided the Reston, Virginia, facility of Swiss web hosting service DigitalOne. The raid took several legitimate websites offline for hours as the agency looked for information on an undisclosed target. Media reports speculated the raid may have been related to the LulzSec investigation. A few days before LulzSec disbanded, the FBI executed a search warrant on an Iowa home rented by Laurelai Bailey. Authorities interviewed her for five hours and confiscated her hard drives, camera, and other electronic equipment, but no charges were filed. Bailey denied being a member of the group, but admitted chatting with members of LulzSec online and later leaking those chats. The FBI was interested in having her infiltrate the group, but Bailey claimed the members hated her and would never let her in. The questioning by the FBI led a local technical support company to fire Bailey, claiming she embarrassed the company. On 27 June 2011, the FBI executed another search warrant in Hamilton, Ohio. The local media connected the raid to the LulzSec investigation; however, the warrant was sealed, the name of the target was not revealed, and the FBI office in Cincinnati refused to comment on any possible connection between the group and the raid. No one was charged with a crime after the FBI served the warrant. Some reports suggested the house may have belonged to former LulzSec member m_nerva, who was originally suspected of leaking a number of the group's logs to the press, with information leading to the warrant supplied by Ryan Cleary. On 19 July 2011, the London Metropolitan Police announced the arrest of LulzSec member Tflow. A 16-year-old male was arrested in South London on charges of violating the Computer Misuse Act, as part of an operation involving the arrest of several other hackers affiliated with Anonymous in the United States and United Kingdom. LulzSec once again denied that any of their membership had been arrested, stating "there are seven of us, and we're all still here." On the same day the FBI arrested 21-year-old Lance Moore in Las Cruces, New Mexico, accusing him of stealing thousands of documents and applications from AT&T that LulzSec published as part of their so-called "final release". The Police Central E-Crime Unit arrested an 18-year-old man from Shetland on 27 July 2011 on suspicion of being LulzSec member Topiary. They also searched the house of a 17-year-old from Lincolnshire possibly connected to the investigation, interviewing him. Scotland Yard later identified the man arrested as Yell, Shetland resident Jake Davis. He was charged with unauthorized access of a computer under the Computer Misuse Act 1990, encouraging or assisting criminal activity under the Serious Crime Act 2007, conspiracy to launch a denial-of-service attack against the Serious Organised Crime Unit contrary to the Criminal Law Act 1977, and criminal conspiracy also under the Criminal Law Act 1977. Police confiscated a Dell laptop and a 100-gigabyte hard drive that ran 16 different virtual machines. Details relating to an attack on Sony and hundreds of thousands of email addresses and passwords were found on the computer. A London court released Davis on bail under the conditions that he live under curfew with his parents and have no access to the internet. 
His lawyer Gideon Cammerman stated that, while his client did help publicize LulzSec and Anonymous attacks, he lacked the technical skills to have been anything but a sympathizer. In early September 2011, Scotland Yard made two further arrests relating to LulzSec. Police arrested a 24-year-old male in Mexborough, South Yorkshire and a 20-year-old male in Warminster, Wiltshire. The two were accused of conspiring to commit offenses under the Computer Misuse Act of 1990; police said that the arrests related to investigations into LulzSec member Kayla. On 22 September 2011, the FBI arrested Cody Kretsinger, a 23-year-old from Phoenix, Arizona who was indicted on charges of conspiracy and the unauthorized impairment of a protected computer. He is suspected of using the name "recursion" and assisting LulzSec in their early hack against Sony Pictures Entertainment, though he allegedly erased the hard drives he used to carry out the attack. Kretsinger was released on his own recognizance under the conditions that he not access the internet except while at work and that he not travel to any states other than Arizona, California, or Illinois. The case against him was filed in Los Angeles, where Sony Pictures is located. Kretsinger pleaded guilty on 5 April 2012 to one count of conspiracy and one count of unauthorized impairment of a protected computer. On 19 April 2013, Kretsinger was sentenced for the "unauthorized impairment of protected computers" to one year in federal prison, one year of home detention following the completion of his prison sentence, a fine of $605,663 in restitution to Sony Pictures and 1000 hours of community service. On 8 August 2013, Raynaldo Rivera, age 21, known by the online moniker "neuron", of Chandler, Arizona, was sentenced to one year and one day in federal prison by United States District Judge John A. Kronstadt. In addition to the prison sentence, Judge Kronstadt ordered Rivera to serve 13 months of home detention, to perform 1,000 hours of community service and to pay $605,663 in restitution to Sony Pictures. On 6 March 2012, two men from Great Britain, one from the United States, and two from Ireland were charged in connection to their alleged involvement with LulzSec. The FBI revealed that supposed LulzSec leader Hector Xavier Monsegur, who went by the username Sabu, had been aiding law enforcement since pleading guilty to twelve counts, including conspiracy and computer hacking, on 15 August 2011 as part of a plea deal. In exchange for his cooperation, federal prosecutors agreed not to prosecute Monsegur for his computer hacking, and also not to prosecute him for two attempts to sell marijuana, possession of an illegal handgun, purchasing stolen property, charging $15,000 to his former employer's credit card in a case of identity theft, and directing people to buy prescription drugs from illegal sources. He still faces a misdemeanor charge of impersonating a federal agent. Five suspects were charged with conspiracy: Jake Davis, accused of being the hacker "Topiary" (who had been previously arrested); Ryan Ackroyd of London, accused of being "Kayla"; Darren Martyn of Ireland, accused of being "pwnsauce"; Donncha O’Cearrbhail of Ireland, accused of being "palladium"; and Jeremy Hammond of Chicago, accused of being "Anarchaos". 
While not a member of LulzSec, Hammond was suspected by authorities of being a member of Anonymous and was charged with access device fraud and hacking in relation to his supposed involvement in the December 2011 attack on intelligence company Stratfor as part of Operation AntiSec. On 8 April 2013, Jake 'Topiary' Davis and three other LulzSec members pleaded guilty to charges of computer hacking at Southwark Crown Court in London. On 24 April 2013, Australian Federal Police arrested 24-year-old Matthew Flannery of Point Clare, who boasted on Facebook "I’m the leader of LulzSec". Flannery, who went by the username Aush0k, was arrested for the alleged hacking of the Narrabri Shire Council website, on whose homepage sexually explicit text and an image were left. On 27 August 2014, Flannery entered guilty pleas to five charges of making unauthorised modification of data to cause impairment, and dishonestly obtaining the Commonwealth Bank details of a woman. Flannery, who said the reference to LulzSec was a joke, lost his job as a computer technician at a security company. On 16 October 2014, he was sentenced to 15 months of house arrest, running until mid-April 2016, alongside a 12-month good behaviour bond. See also Anonymous (group) Hacktivism LulzRaft MalSec Operation AntiSec Operation Payback 2011 PlayStation Network outage Securax References External links Lulzsecurity.org Current website referencing the latest attacks of the group LuLzSecReborn Wikipedia articles with ASCII art Hacker groups Anonymous (hacker group) Hacktivists
4937124
https://en.wikipedia.org/wiki/Society%20for%20the%20History%20of%20Technology
Society for the History of Technology
The Society for the History of Technology (SHOT) is the primary professional society for historians of technology. SHOT was founded in 1958 in the United States, and it has since become an international society with members "from some thirty-five countries throughout the Americas, Europe, Asia, and Africa." SHOT owes its existence largely to the efforts of Professor Melvin Kranzberg (1917–1995) and an active network of engineering educators. SHOT co-founders include John B. Rae, Carl W. Condit, Thomas P. Hughes, and Eugene S. Ferguson. SHOT's flagship publication is the journal Technology and Culture, published by the Johns Hopkins University Press. Kranzberg served as editor of Technology and Culture until 1981, and was succeeded as editor by Robert C. Post until 1995, and John M. Staudenmaier from 1996 until 2010. The current editor of Technology and Culture is Suzanne Moon at the University of Oklahoma. SHOT is an affiliate of the American Council of Learned Societies and the American Historical Association and publishes a book series with the Johns Hopkins University Press entitled "Historical Perspectives on Technology, Society, and Culture," under the co-editorship of Pamela O. Long and Asif Azam Siddiqi. Pamela O. Long is the recipient of a MacArthur Foundation "Genius Grant" for 2014. The history of technology was traditionally linked to economic history and history of science, but its interactions are now equally strong with environmental history, gender history, business history, and labor history. SHOT annually awards two book prizes, the Edelstein Prize and the Sally Hacker Prize, as well as the Kranzberg Dissertation Fellowship and the Brooke Hindle Postdoctoral Fellowship. Its highest award is the Leonardo da Vinci Medal. Recipients of the medal include Kranzberg, Ferguson, Post, Staudenmaier, Bart Hacker, and Brooke Hindle. In 1968 Kranzberg was also instrumental in the founding of a sister society, the International Committee for the History of Technology (ICOHTEC). The two societies complement each other. The Society for the History of Technology is dedicated to the historical study of technology and its relations with politics, economic, labor, business, the environment, public policy, science, and the arts. The society now numbers around 1500 members, and regularly holds annual meetings at non-North-American venues. SHOT also sponsors smaller conferences focused on specialized topics, often jointly with other scholarly societies and organizations. Special Interest Groups The Albatrosses (technology of flight) SIGCIS: Computers, Information and Society Early Career Interest Group (ECIG) EDITH: Exploring Diversity in Technology's History Envirotech (technology and the natural environment) The Jovians (electrical technology) The Lynn White, Jr. Society: Prior to the "Industrial Revolution" The Mercurians (communications technology) SMiTInG (military technology) The Pelicans (chemical technology) The Prometheans (engineering) SHOT Asia Network TEMSIG: Technology Museums Special Interest Group WITH: Women in Technological History Annual meetings 2007 − Washington, D.C. 
− October 17–21 2008 − Lisbon, Portugal − October 11–14 2009 − Pittsburgh, Pennsylvania − October 15–19 2010 − Tacoma, Washington − September 29 - October 4 2011 − Cleveland, Ohio − November 2–6 2012 − Copenhagen, Denmark − October 4–7 2013 − Portland, Maine - October 10–13 2014 − Dearborn, Michigan - November 6–9 2015 − Albuquerque, New Mexico - October 7–11 2016 − Singapore - June 22–26 2017 − Philadelphia, Pennsylvania - October 26–29 2018 − St. Louis, Missouri - October 10–14 2019 − Milan, Italy - October 24–27 2020 − New Orleans, Louisiana - originally scheduled October 7–11 References Further reading David A. Hounshell, "Eugene S. Ferguson, 1916-2004," Technology and Culture 45 (2004): 911–21. DOI Robert C. Post, "Back at the Start: History and the History of Technology," Technology and Culture 51 (2010): 961-94. (muse.jhu.edu) Robert C. Post, "Chance and Contingency: Putting Mel Kranzberg in Context," Technology and Culture 50 (2009): 839-72. (DOI) Robert C. Post, "'A Very Special Relationship': SHOT and the Smithsonian's Museum of History and Technology," Technology and Culture 42 (2001): 401-35. (DOI) External links Sally Hacker Prize Edelstein Prize History of science organizations History of technology Historical societies of the United States Organizations established in 1958 Science and technology studies associations 1958 establishments in the United States
53990333
https://en.wikipedia.org/wiki/MB-Lab
MB-Lab
MB-Lab (previously ManuelbastioniLAB) is a free and open-source plug-in for Blender for the parametric 3D modeling of photorealistic humanoid characters. It was developed by the artist and programmer Manuel Bastioni, and was based on his more than 15 years of experience with 3D graphics projects. Bastioni withdrew support for the project, but it has continued as a community project under the MB-Lab name. Graphical interface and usability The plugin is completely integrated into Blender. The GUI is designed to be self-explanatory and intuitive, and when possible the features are designed to work with one click. Over 90% of the character is defined with only three sliders that control age (from 18 to 80 years old), body mass, and body tone. The character is finished with other lab tools for body and face details, skin and eye shaders, animation, poses, proxies, etc. Technology The software is designed as a laboratory in constant evolution and includes both consolidated algorithms, such as 3D morphing, and experimental technologies, such as the fuzzy mathematics used to handle the relations between human parameters, the non-linear interpolation used to define age, mass and tone, the auto-modelling engine based on body proportions, and the expert system used to recognize the bones in motion-capture skeletons. The software is written in Python and works on all the platforms supported by Blender: Windows, macOS and Linux. All the characters use the same standard skeleton, so poses and animation can easily be moved from one character to another. Most of the data distributed in the package is stored using standard JSON syntax. License ManuelbastioniLAB is completely open source, released under standard licenses of the Free Software Foundation. Code: All files written in Python are released under GNU General Public License 3. Data: All data files released in the ManuelbastioniLAB package are released under GNU Affero General Public License 3. The characters generated with ManuelbastioniLAB are released under the GNU Affero General Public License 3 (as derivatives of AGPL-licensed data, meshes, textures, etc.). Anatomy and mesh topology of 3D human models ManuelbastioniLAB provides two different base meshes for male and female models. Each model meets the fundamental requirements of a professional mesh, as defined by the author: Optimization for subdivision surfaces. No triangles. Edge loops designed for deformation during poses and animation. A topology that permits modeling of the main features of bodies and faces. Minimal use of poles. Human-readable topology. Sculpting-friendly topology. The base humans are modelled after accurate studies of anatomy and anthropology. Version 1.5.0 of the lab provides about 470 morphs for each human character, designed to parametrically describe most of the anatomical range in human bodies, faces and expressions. Genitalia are not present. Anthropology and phenotypes In ManuelbastioniLAB, the word phenotype is intended with the following meaning: a "phenotype" defines merely the physical appearance of a class of characters; it is not related to politics, culture, language or history. It is used to describe the variations of human traits in relation to evolution in a specific geographical area. ManuelbastioniLAB supports most of the common human phenotypes to the extent of its volumetric modelling features. The lab provides three main classes of humans: Caucasian, Asian and Afro. For each class there is a specific set of phenotypes. 
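Both the high-level sliders described in the Technology section and the phenotype mixing described below ultimately come down to weighted blending of stored morph targets. The following minimal, self-contained Python sketch illustrates that general idea only; the JSON layout, key names, and non-linear response curve are invented for illustration and do not reflect MB-Lab's actual data format or API.

import json

# Hypothetical morph file: a tiny base mesh plus named per-vertex offsets.
# Real MB-Lab data files are larger and organized differently.
MORPHS_JSON = """
{
  "base_vertices": [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
  "morphs": {
    "body_mass_max": [[0.0, 0.0, 0.1], [0.2, 0.0, 0.1], [0.0, 0.1, 0.0]],
    "age_old":       [[0.0, -0.05, 0.0], [0.0, -0.05, 0.0], [0.0, -0.1, 0.0]]
  }
}
"""

def slider_to_weight(value, exponent=2.0):
    # A simple non-linear response: slider values in [0, 1] ramp up slowly
    # at first and faster near the top (purely illustrative).
    return max(0.0, min(1.0, value)) ** exponent

def apply_morphs(data, weights):
    # Blend the per-vertex offsets of each active morph into the base mesh.
    verts = [list(v) for v in data["base_vertices"]]
    for name, weight in weights.items():
        for vert, offset in zip(verts, data["morphs"][name]):
            for axis in range(3):
                vert[axis] += weight * offset[axis]
    return verts

data = json.loads(MORPHS_JSON)
weights = {"body_mass_max": slider_to_weight(0.7), "age_old": slider_to_weight(0.3)}
print(apply_morphs(data, weights))

Mixing two phenotypes can be seen as the same operation with the weights of one set of morph targets cross-faded against another; inside Blender, the resulting offsets would be written to the mesh's shape keys rather than printed.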
Each phenotype can be loaded from the library and used as a base for a custom character, or mixed with another phenotype. The available phenotypes are: Afro phenotypes: Afromediterranean, Afroasian, Aboriginal, African. Asian phenotypes: Central Asian, North Asian, East Asian, South Asian, Central American, North American. Caucasian phenotypes: Central European, Afrocaucasian, East European, North European, Euromediterranean, Euroartic, North West European, West Asian. Non-human models: Anime, Elves, etc. While the lab is aimed at creating realistic 3D human beings based on a scientific description of their parameters, the same technology can be successfully applied to non-human characters, such as fantasy creatures. Version 1.5.0 of the lab supports three varieties of anime characters: classic shojo, modern shojo and "realistic style" anime. There are also male and female elves and a male dwarf. Each model has a separate set of morphs to create millions of variations. Concerning the creation of fantasy characters, the lab supports some extra parameters for humans too, such as pointed ears, special teeth, etc. Comparisons While MakeHuman has similar characteristics to MB-Lab, the former is a stand-alone application and requires export and import to Blender, which is not necessary with MB-Lab. Current stage of development The project was discontinued abruptly by Bastioni after release 1.6.1a, which was not compatible with Blender 2.80. Bart Veldhuizen indicated that Bastioni attempted unsuccessfully to raise funds and then chose to move on, quoting Bastioni as saying: "I’m sorry, I did my best, but I cannot continue the development of the lab. I will use Blender as artist, since Blender and its community are part of my life."; and "I realized that the lab community size is not enough to support a so expensive project". In December 2018, a new repository, based on Bastioni's last version (1.6.1a) and aiming at Blender 2.80 compatibility, was opened on GitHub under the project name MB-Lab. New community-based versions are available on GitHub supporting Blender 2.79 and 2.80. See also Blender (software), the underlying base 3D software MakeHuman, related software for the creation of 3D characters Notes References External links Free 3D graphics software 3D computer graphics software for Linux Free software programmed in Python Video game development software 3D modeling software for Linux Anatomical simulation Software using the GNU AGPL license
38310506
https://en.wikipedia.org/wiki/Jay%20Leiderman
Jay Leiderman
Jason Scott "Jay" Leiderman (April 12, 1971 - September 7, 2021) was an American criminal defense lawyer based in Ventura, California. The Atlantic Magazine called Leiderman the "Hacktivist's Advocate" for his work defending hacker-activists accused of computer crimes, or so-called "Hacktivism" especially people associated with Anonymous. Buzzfeed called Leiderman "The Maserati-Driving Deadhead Lawyer Who Stands Between Hackers And Prison" and stated he was "A medical marijuana and criminal defense lawyer from Southern California [who] has made himself into the country's leading defender of hackers." He was named to the top 100 criminal defense lawyers by the National Trial Lawyers. Leiderman was also featured in a video about his life and work on CNN's Great Big Story and appeared in the movie "The Hacker Wars." Leiderman was certified as a criminal law specialist by the California Board of Legal Specialization in 2006. Leiderman spends much of his time defending the kinds of clients Matlock might turn down on a good day and keeping Ventura's marijuana professionals out of trouble. Other noteworthy cases Leiderman defended include People v. Diaz, which went to the California Supreme Court and made law on the ability of police to search a cell phone, Louis Gonzalez, who was falsely accused of rape, attempted murder and torture by the mother of his child and was jailed for 83 days before he was released and ultimately found factually innocent, the Andrew Luster or so-called Max Factor heir habeas corpus proceeding, wherein his sentence was reduced by 74 years the first-ever trial of medical marijuana defendants in San Luis Obispo County, California County, and Leiderman represented the lead defendant in Ventura County, California's first concentrated Mexican Mafia prosecution. Leiderman represented journalist Matthew Keys who was found guilty on all charges against him in 2015. Leiderman was the lead trial attorney for Jonathan Koppenhaver, also known as War Machine, who was convicted of savagely beating and raping his girlfriend, porn star Christy Mack. Within a year at the Ventura County Public Defender's Office, Leiderman had graduated from misdemeanors to murders and three-strike cases. He sought out a series of cases defending the homeless, successfully challenging an open-container law that was frequently used to round up Ventura's large indigent population and getting a raft of misdemeanor illegal camping charges — also used as a weapon against the homeless — thrown out so decisively it led to an internal city review. "It pissed me off, it was a horrific injustice, and it was the right thing to do," Leiderman said." "Leiderman's years at the VCPDO coincided with the passage of California's medical marijuana statute, and the young lawyer started taking possession for sales and illegal cultivation cases. By the time he opened a private practice, in 2007, Leiderman had become something of an expert in state and county compliance laws. In addition to defending clients from marijuana-related criminal charges, Leiderman also advises medical marijuana collectives, teaching them the law, writing up their contracts and articles of association, and waiting on retainer for run-ins with the police." Leiderman co-authored a book on the legal defense of California medical marijuana crimes, which was published by NORML, the National Organization For the Reform of Marijuana Laws. Leiderman has "some pretty deep connections with Ventura County's medicinal cannabis community". 
Leiderman is also a founding member of the Whistleblower's Defense League, "formed to combat what they describe as the FBI and Justice Department's use of harassment and over-prosecution to chill and silence those who engage in journalism, Internet activism or dissent." Leiderman used the phrase "tin foil as reality" when describing the ever encroaching surveillance state. Leiderman frequently comments in diverse areas of the media about criminal and social justice issues. He also lectures around the state and nation on various criminal defense topics. According to Tor Ekeland, another prominent hacker attorney and sometimes co-counsel to Leiderman, Leiderman is "a "street-smart trial lawyer" who was "extraordinarily quick on his feet." "Leiderman is very much a defense attorney's defense attorney. "It is my duty under the constitution to represent these clients," he wrote in an email, continuing on to say, "I am what stands between the police state and the tyranny of ever encroaching government. If we abandon the ugliest of the cases, before we know it we're back to sending people to prison for a joint."" Leiderman graduated from the University of Michigan in 1993 and University of San Francisco School of Law in 1999. "He showed up for classes at the University of San Francisco Law School ... with hair halfway down his back, and left in 1999 as the class president. He applied to public defender jobs around the country, and picked Ventura because of the weather." Leiderman died aged 50 from a heart attack. References External links Jay Leiderman official site Jay Leiderman Blog Whistleblower Defense League 1971 births 2021 deaths Criminal defense lawyers American civil rights lawyers California lawyers University of San Francisco School of Law alumni University of Michigan alumni People from Ventura, California Activists from California People from Queens, New York New York (state) lawyers Activists from New York (state) 21st-century American lawyers
12685352
https://en.wikipedia.org/wiki/Research%20Institute%20for%20Advanced%20Computer%20Science
Research Institute for Advanced Computer Science
The Research Institute for Advanced Computer Science (RIACS) was founded June 1, 1983 as a joint collaboration between the Universities Space Research Association (USRA) and the NASA Ames Research Center. The Institute was created to conduct basic and applied research in computer science, covering a broad range of research topics of interest to the aerospace community including supercomputing, computational fluid dynamics, computational chemistry, high performance networking, and artificial intelligence. Since its inception, a goal of the Institute’s research has been to support scientific research and engineering from problem formulation to results dissemination, combining concurrent processing systems with intelligent systems to allow users to interact in the language of their discipline. This goal has since expanded to support a broad range of activities associated with space exploration and science, including mission operations and innovative information systems for technology research and development. An underlying philosophy and approach of the Institute is that successful research is interdisciplinary, and that challenging applications associated with NASA’s mission provide a driving force for developing innovative information systems and advancing computer science. To implement this approach, research staff undertakes collaborative projects with research groups at NASA, integrating computer science with other disciplines to support NASA’s mission. Over its nearly twenty five-year history, RIACS has acted as a bridge between academia and government research, engaging talented researchers from around the world to collaborate with NASA on challenging research topics. RIACS has also acted as a bridge between industry and the government to mature information technologies for infusion into NASA operations, enabling broader public benefit from research results. For NASA, RIACS has collaborated most closely with the Intelligent Systems Division and the NASA Advanced Supercomputing Division (NAS) at the NASA Ames Research Center – NASA’s Center for Excellence in Information Technology. RIACS, which was formed the same year as the NAS, worked closely with the division in its early years to develop a strong competency in supercomputing and computational fluid dynamics at NASA. RIACS helped establish the Intelligent Systems Division, and has since collaborated closely with the division to develop and infuse a number of software innovations in the areas of autonomous systems; intelligent information management and data understanding; and human-centered computing. 
Historical Contributions to NASA Examples of RIACS contributions to NASA Ames as part of the "Intelligent Systems Division" include leadership roles in developing and infusing: Autoclass Bayesian discovery system – used probabilistic techniques to discover new classes of infra-red stars in the Low Resolution Spectral catalogue from the NASA IRAS mission as the first artificial intelligence program to make an astronomical discovery; also used to discover new classes of proteins, introns, and other patterns in DNA/protein sequence data, and others; Livingstone model-based diagnostic system – flown on Deep Space One as part of Remote Agent and on Earth Observing Mission EO-1 as the first artificial intelligence control system to control a spacecraft without human supervision; MAPGEN tactical activity planner – a constraint-based planning system used every day during the Mars Exploration Rover mission resulting in an estimated 20% increase in scientific return; Clarissa voice-operated procedure browser – used on the International Space Station as the first spoken dialogue system used in space; and Program Management Tool – used to manage billions of dollars of NASA programs and projects with case studies showing 85% reduction in financial reporting time, elimination of 40% discrepancy rates in non-advocate review milestone data, and elimination of 30% error rates in baseline project plans. Purpose and Charter The stated purpose of RIACS since its formation is to: Provide an interface between the NASA Ames Research Center and the academic community and serve as a center of cooperation for activities conducted in areas of advanced computer science and engineering, applied mathematics, and the application of computers to NASA’s scientific and engineering challenges. Conduct independent research activities with the objective of developing concepts, techniques, or prototypes in computationally related disciplines to enhance problem-solving capabilities in scientific and engineering fields of interest to the aerospace community. Improve cooperative research efforts of government, industry, and academia toward the solution of problems requiring advanced computational facilities. Enhance technology transfer between universities, industry and other government agencies by conventional means, including encouragement of the rapid dissemination of preprint reports by institute personnel, presentations at symposia and publications in appropriate journals. Strengthen ties between the general academic and industrial communities and NASA’s Ames Research Center to further develop in-house programmatic activities in computer science and technology. Carry out a variety of additional activities, including lecture programs and visiting scientist programs, as well as provide consultation and collaboration with NASA Ames on topics in the fields of computer science and applied mathematics. RIACS personnel may lend support to local universities through the direction of dissertations, service on research committees, and participation in research seminars. Engage and leverage an eminent Science Council appointed by USRA to review progress of RIACS, including its research, reports, publications, and other matters that would affect or influence the purpose of the Institute. The Science Council also advises the RIACS Director on research projects, priorities, and resource requirements, and reports to USRA on the scientific activity of the Institute. 
References External links USRA NASA Ames Research Center NASA Intelligent Systems Division NASA Supercomputing Division NASA Computer science institutes Artificial intelligence laboratories Space organizations
3480937
https://en.wikipedia.org/wiki/Hardware%20security%20module
Hardware security module
A hardware security module (HSM) is a physical computing device that safeguards and manages digital keys, performs encryption and decryption functions for digital signatures, strong authentication and other cryptographic functions. These modules traditionally come in the form of a plug-in card or an external device that attaches directly to a computer or network server. A hardware security module contains one or more secure cryptoprocessor chips. Design HSMs may have features that provide tamper evidence such as visible signs of tampering or logging and alerting, or tamper resistance which makes tampering difficult without making the HSM inoperable, or tamper responsiveness such as deleting keys upon tamper detection. Each module contains one or more secure cryptoprocessor chips to prevent tampering and bus probing, or a combination of chips in a module that is protected by the tamper-evident, tamper-resistant, or tamper-responsive packaging. The vast majority of existing HSMs are designed mainly to manage secret keys. Many HSM systems have means to securely back up the keys they handle outside of the HSM. Keys may be backed up in wrapped form and stored on a computer disk or other media, or externally using a secure portable device like a smartcard or some other security token (a minimal illustrative sketch of key wrapping appears below). HSMs are used for real-time authorization and authentication in critical infrastructure and thus are typically engineered to support standard high availability models including clustering, automated failover, and redundant field-replaceable components. A few of the HSMs available in the market have the capability to execute specially developed modules within the HSM's secure enclosure. Such an ability is useful, for example, in cases where special algorithms or business logic has to be executed in a secured and controlled environment. The modules can be developed in native C language, .NET, Java, or other programming languages. Further, upcoming next-generation HSMs can handle more complex tasks such as loading and running full operating systems and COTS software without requiring customization and reprogramming. Such unconventional designs overcome existing design and performance limitations of traditional HSMs. While providing the benefit of securing application-specific code, these execution engines protect the status of an HSM's FIPS or Common Criteria validation. Security Due to the critical role they play in securing applications and infrastructure, HSMs and/or the cryptographic modules are typically certified to internationally recognized standards such as Common Criteria or FIPS 140 to provide users with independent assurance that the design and implementation of the product and cryptographic algorithms are sound. The highest level of FIPS 140 security certification attainable is Security Level 4 (Overall). When used in financial payments applications, the security of an HSM is often validated against the HSM requirements defined by the Payment Card Industry Security Standards Council. Uses A hardware security module can be employed in any application that uses digital keys. Typically the keys would be of high value, meaning there would be a significant, negative impact to the owner of the key if it were compromised. 
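The key backup in wrapped form described in the Design section above can be illustrated with a short, self-contained sketch. This is not any particular vendor's HSM API; it only demonstrates the general AES key wrap technique (RFC 3394) using the Python cryptography library, and both keys are generated in software purely for illustration, whereas a real HSM would generate and hold the wrapping key inside its secure boundary.

# Illustrative sketch of key wrapping for backup/export (RFC 3394 AES key wrap).
# In a real HSM these operations happen inside the secure boundary and the
# wrapping key never leaves the device; the keys below are purely illustrative.
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

kek = os.urandom(32)        # 256-bit key-encryption (wrapping) key
app_key = os.urandom(32)    # 256-bit application key to be backed up

wrapped = aes_key_wrap(kek, app_key)      # ciphertext safe to store on disk or a smart card
restored = aes_key_unwrap(kek, wrapped)   # recovery path, again performed inside the HSM
assert restored == app_key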
The functions of an HSM are: onboard secure cryptographic key generation; onboard secure cryptographic key storage, at least for the top level and most sensitive keys, which are often called master keys; key management; use of cryptographic and sensitive data material, for example, performing encryption or digital signature functions; and offloading application servers for complete asymmetric and symmetric cryptography. HSMs are also deployed to manage transparent data encryption keys for databases and keys for storage devices such as disk or tape. HSMs provide both logical and physical protection of these materials, including cryptographic keys, from disclosure, non-authorized use, and potential adversaries. HSMs support both symmetric and asymmetric (public-key) cryptography. For some applications, such as certificate authorities and digital signing, the cryptographic material is asymmetric key pairs (and certificates) used in public-key cryptography. With other applications, such as data encryption or financial payment systems, the cryptographic material consists mainly of symmetric keys. Some HSM systems are also hardware cryptographic accelerators. They usually cannot beat the performance of hardware-only solutions for symmetric key operations. However, with performance ranging from 1 to 10,000 1024-bit RSA signs per second, HSMs can provide significant CPU offload for asymmetric key operations. Because the National Institute of Standards and Technology (NIST) has recommended the use of 2,048-bit RSA keys since 2010, performance at longer key sizes is becoming increasingly important. To address this issue, most HSMs now support elliptic curve cryptography (ECC), which delivers equivalent security with shorter key lengths. PKI environment (CA HSMs) In PKI environments, the HSMs may be used by certification authorities (CAs) and registration authorities (RAs) to generate, store, and handle asymmetric key pairs. In these cases, there are some fundamental features a device must have, namely: logical and physical high-level protection; a multi-part user authorization schema (see Blakley-Shamir secret sharing); full audit and log traces; and secure key backup. On the other hand, device performance in a PKI environment is generally less important, in both online and offline operations, as Registration Authority procedures represent the performance bottleneck of the infrastructure. Card payment system HSMs (bank HSMs) Specialized HSMs are used in the payment card industry. HSMs support both general-purpose functions and specialized functions required to process transactions and comply with industry standards. They normally do not feature a standard API. 
Typical applications are transaction authorization and payment card personalization, requiring functions such as: verify that a user-entered PIN matches the reference PIN known to the card issuer; verify credit/debit card transactions by checking card security codes or by performing host processing components of an EMV-based transaction in conjunction with an ATM controller or POS terminal; support a crypto-API with a smart card (such as an EMV); re-encrypt a PIN block to send it to another authorization host; perform secure key management; support a protocol of POS ATM network management; support de facto standards of host-host key/data exchange APIs; generate and print a "PIN mailer"; generate data for a magnetic stripe card (PVV, CVV); and generate a card keyset and support the personalization process for smart cards. The major organizations that produce and maintain standards for HSMs on the banking market are the Payment Card Industry Security Standards Council, ANS X9, and ISO. SSL connection establishment Performance-critical applications that have to use HTTPS (SSL/TLS) can benefit from the use of an SSL acceleration HSM by moving the RSA operations, which typically require several large-integer multiplications, from the host CPU to the HSM device. Typical HSM devices can perform about 1 to 10,000 1024-bit RSA operations/second. As performance at longer key sizes becomes increasingly important, some HSMs now support ECC. Specialized HSM devices can reach numbers as high as 20,000 operations per second. DNSSEC An increasing number of registries use HSMs to store the key material that is used to sign large zonefiles. OpenDNSSEC is an open-source tool that manages the signing of DNS zone files. On January 27, 2007, ICANN and Verisign, with support from the U.S. Department of Commerce, started deploying DNSSEC for DNS root zones. Root signature details can be found on the Root DNSSEC website. Cryptocurrency wallet Cryptocurrency can be stored in a cryptocurrency wallet on an HSM. See also Electronic funds transfer FIPS 140 Public key infrastructure PKCS 11 Secure cryptoprocessor Security token Transparent data encryption Security switch Trusted Platform Module Notes and references External links Current NIST FIPS-140 certificates A Review of Hardware Security Modules Cryptographic hardware Banking technology Cryptanalytic devices
16297028
https://en.wikipedia.org/wiki/GenerativeComponents
GenerativeComponents
GenerativeComponents is parametric CAD software developed by Bentley Systems. It was first introduced in 2003, became increasingly used in practice (especially by the London architectural community) by early 2005, and was commercially released in November 2007. GenerativeComponents has a strong traditional base of users in academia and at technologically advanced design firms. GenerativeComponents is often referred to by the nickname of 'GC'. GC epitomizes the quest to bring parametric modeling capabilities of 3D solid modeling into architectural design, seeking to provide greater fluidity and fluency than mechanical 3D solid modeling. Users can interact with the software by either dynamically modeling and directly manipulating geometry, or by applying rules and capturing relationships among model elements, or by defining complex forms and systems through concisely expressed algorithms. The software supports many industry-standard file inputs and outputs including DGN by Bentley Systems, DWG by Autodesk, STL (Stereo Lithography), Rhino, and others. The software can also integrate with Building Information Modeling systems, specifically as an installed extension/Companion Feature to Bentley's AECOsim Building Designer. The software has a published API and uses a simple scripting language, allowing both integration with many different software tools and the creation of custom programs by users. This software is primarily used by architects and engineers in the design of buildings, but has also been used to model natural and biological structures and mathematical systems. GenerativeComponents currently runs exclusively on Microsoft Windows operating systems, and in English. Bentley Systems Incorporated offers GC as a free download. This version of GC does not time out and is not feature-limited. It requires registration with an email address. This is a standalone version of GC that includes the underlying Bentley MicroStation software that is required for it to run. SmartGeometry Group The SmartGeometry Group has been instrumental in the formation of GenerativeComponents. GenerativeComponents was brought to market after a multi-year testing cycle with a dedicated user community in the SmartGeometry group. This community was responsible for shaping the product very early in its life and continues to play an important role in defining it. The SmartGeometry Group is an independent non-profit organization; it is not a Bentley user group. The SmartGeometry Group organizes an annual multi-day workshop and accompanying conference highlighting advanced design practices and technology. Recent workshops and conferences have been held in: IAAC - Barcelona, Spain (2010); CITA - Copenhagen, Denmark (2011); RPI - Troy, New York (2012); UCL - London, UK (2013). See also Architecture Architectural engineering Design computing Comparison of CAD Software References External links GenerativeComponents official free software download site GenerativeComponents Forum at Bentley Communities SmartGeometry Group website Official SmartGeometry Conferences website, with links to 2007-2010 sessions as webcasts CAD Insider, June 2007 AEC Magazine, January 2005 Parametric Design Repository, Canadian Design Research Network Computer-aided design Building engineering software Computer-aided design software Data modeling
181718
https://en.wikipedia.org/wiki/Mimer%20SQL
Mimer SQL
Mimer SQL is an SQL-based relational database management system produced by the Swedish company Mimer Information Technology AB (Mimer AB), formerly known as Upright Database Technology AB. It was originally developed as a research project at Uppsala University, Uppsala, Sweden, in the 1970s before being developed into a commercial product. The database has been deployed in a wide range of application situations, including the National Health Service Pulse blood transfusion service in the UK, the Volvo Cars production line in Sweden and automotive dealers in Australia. It has sometimes been one of the few options available for real-time critical applications and resource-restricted situations such as mobile devices. History Mimer SQL originated in a project at the ITC service center supporting Uppsala University and some other institutions to leverage the relational database capabilities proposed by Codd and others. The initial release in about 1975 was designated RAPID and was written in IBM assembler language. The name was changed to Mimer in 1977 to avoid a trademark issue. Other universities were interested in running the project on a number of machine architectures, so Mimer was rewritten in Fortran to achieve portability. Further components were developed for Mimer, with Mimer/QL implementing the QUEL query language. The emergence of SQL in the 1980s as the standard query language resulted in Mimer's developers choosing to adopt it, with the product becoming Mimer SQL. In 1984 Mimer was transferred to the newly established company Mimer Information Systems. Versions The Mimer SQL database server is currently supported on the main platforms of Windows, macOS, Linux, and OpenVMS (Alpha and Integrity). Previous versions of the database engine were supported on other operating systems including Solaris, AIX, HP-UX, Tru64, SCO and DNIX. Versions of Mimer SQL are available for download and free for development. The Enterprise product is a standards-based SQL database server based upon the Mimer SQL Experience database server. This product is highly configurable and components can be added, removed or replaced in the foundation product to achieve a derived product suitable for embedded, real-time or small-footprint applications. The Mimer SQL Realtime database server is a replacement database engine specifically designed for applications where real-time aspects are paramount. This is sometimes marketed as the Automotive approach. For resource-limited environments, the Mimer SQL Mobile database server is a replacement runtime environment without a SQL compiler. This is used for portable and certain custom devices and is termed the Mobile Approach. Custom embedded approaches can be applied to multiple hardware and operating system combinations. These options enable Mimer SQL to be deployed to a wide variety of additional target platforms, such as Android, and real-time operating systems including VxWorks. The database is available in real-time, embedded and automotive specialist versions requiring no maintenance, with the intention of making the product suitable for mission-critical automotive, process automation and telecommunication systems. Features Mimer SQL provides support for multiple database application programming interfaces (APIs): ODBC, JDBC, ADO.NET, Embedded SQL (C/C++, Cobol and Fortran), Module SQL (C/C++, Cobol, Fortran and Pascal), and the native APIs Mimer SQL C API, Mimer SQL Real-Time API, and Mimer SQL Micro C API. MimerPy is an adapter for Mimer SQL in Python. 
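As a rough illustration of the MimerPy adapter mentioned above, the sketch below assumes that MimerPy follows the standard Python DB-API 2.0 (PEP 249) connect/cursor/execute pattern; the connection parameters, table definition and question-mark parameter style are assumptions for illustration and should be checked against the MimerPy documentation.

# Minimal MimerPy sketch, assuming a PEP 249 (DB-API 2.0) style interface.
# The connection parameters and the qmark parameter style are assumptions.
import mimerpy

con = mimerpy.connect(dsn="testdb", user="SYSADM", password="secret")
cur = con.cursor()
cur.execute("CREATE TABLE parts (id INTEGER PRIMARY KEY, name VARCHAR(64))")
cur.execute("INSERT INTO parts VALUES (?, ?)", (1, "widget"))
con.commit()
cur.execute("SELECT id, name FROM parts")
print(cur.fetchall())
con.close()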
The Mimer Provider Manager is an ADO.NET provider dispatcher that uses different plugins to access different underlying ADO.NET providers. The Mimer Provider Manager makes it possible to write database-independent ADO.NET applications. Mimer SQL mainly uses optimistic concurrency control (OCC) to manage concurrent transactions. This makes the database lock-free and enables real-time predictability. Mimer SQL is assigned port 1360 in the Internet Assigned Numbers Authority (IANA) registry. Etymology The name "Mimer" is taken from Norse mythology, where Mimer was the giant guarding the well of wisdom, also known as "Mímisbrunnr". Metaphorically, this is what a database system does when managing data. See also Werner Schneider, the professor who started the development section for the relational database that became Mimer SQL (Swedish article) References External links Mimer SQL Official developer website Proprietary database management systems Relational database management systems Real-time databases Embedded databases OpenVMS software
37014054
https://en.wikipedia.org/wiki/Acrobits
Acrobits
Acrobits is a privately owned software development company creating VoIP clients for mobile platforms, based in Prague, Czech Republic. Company history Acrobits was founded in November 2008 and builds mobile VoIP software with a polished user interface, supporting encrypted calls using SRTP/SDES and ZRTP, Google Voice integration, and the G.729 Annex A audio codec. In 2009 Acrobits Softphone was released on the iTunes App Store. The following year Acrobits released its SIP client with business features, Groundwire. In early 2011 Acrobits Softphone was released on the Android Market. In 2010 Acrobits also launched a service allowing SIP providers to appear on the list of pre-configured providers in Acrobits Softphone. In 2012 Acrobits added support for video calls over Wi-Fi to the iOS version of its softphone. Acrobits Softphone Acrobits Softphone is a VoIP client which uses the Session Initiation Protocol. Acrobits Softphone is the leading SIP client on the App Store, featuring push notifications and the G.729 Annex A audio codec, backgrounding, Google Voice integration and encrypted calls through ZRTP. History of Softphone The first version of Acrobits Softphone was released on the App Store in April 2009. Version 1.0 supported only a single SIP account and the G.711 and GSM codecs. During the following months new updates were released rapidly, adding new features, and the app quickly became the most downloaded paid SIP app for iOS worldwide. Support for push notifications for incoming calls was added to Softphone in September 2009, shortly after push notifications were introduced in iOS 3. The G.729 codec was added in April 2010. In August 2010, a business-caliber version of Softphone called Groundwire was released on the App Store, adding support for conferencing, voicemail, call transfers, call forwarding and other advanced features of business-grade phones. With the release of Groundwire, the app reached a new level of maturity and completeness and attracted much interest from VoIP providers, who asked for white-label versions of the app, optimized and fine-tuned for their network only. To date, around 50 different white-label versions have been created. Later, the following features were added to Softphone: ZRTP support (December 2010), NAT Bridge to help NAT traversal in difficult networking conditions (July 2011), support for video calls (December 2011), and support for ICE (March 2012). Acrobits Softphone for Android was released in February 2011, followed by Android Groundwire in April 2012. The Android apps are now on par with their iOS counterparts, with the exception of video calls, which are not yet supported on Android. Features Acrobits Softphone and especially Groundwire support all features and technologies expected of a modern SIP client, plus some unique features described below. Push notifications for incoming calls The challenge with VoIP on mobile devices is to make sure that the device is ready to receive incoming calls while keeping the power consumption as low as possible. Due to the inherent mobility of mobile devices, the network conditions change often, and frequent SIP re-registrations and keep-alive traffic are needed to make sure the mobile client is properly registered and will receive incoming calls at all times. This has a significant impact on battery life. Acrobits Softphone uses a proprietary SIP Instance Server (SIPIS) to register on behalf of the user when the mobile app is not running in the foreground on the mobile device. 
As soon as the app is suspended to the background or exited completely, the SIPIS server takes over, registers the account and starts listening for incoming calls. When a call arrives, the mobile app is woken up using the Apple Push Notification Service (APNS) and the call is handed over to the mobile app. The advantage of this solution is that the mobile app does not need to run at all on the device, consuming no additional battery power, and is still able to receive incoming calls. The media of the call (audio and video) are still transferred directly to the mobile app, for the lowest latency and for security; no extra relaying is done. Using push notifications does not require any support on the SIP server side and relies only on standard SIP protocol features. An important point and a potential drawback of this solution is the need to transfer full SIP account credentials to the SIPIS server, as it needs them to be able to register, which is an obvious security risk. One way to avoid it is to install the SIPIS server on the premises of the VoIP service provider, in which case the security risk is eliminated, since the provider already knows the passwords anyway. Secure Calls Acrobits Softphone supports encrypted voice and video calls using the standard SRTP protocol. It is able to encrypt media packets with the AES-128, AES-192 or AES-256 ciphers and authenticate them using either the 32-bit or 80-bit HMAC-SHA1 algorithm. For key exchange, Acrobits Softphone offers support for the SDES and ZRTP protocols. The SDES protocol transmits the encryption keys in plain text inside SIP+SDP messages. This key exchange protocol is therefore of little use for most users, unless they have complete control over the SIP signalling system to ensure that the TLS transport protocol is used all the way from the originating to the receiving device. Even if a SIP provider guarantees usage of TLS everywhere in its infrastructure, the provider itself is still able to see the encryption keys in plain text, because its SIP proxies must decrypt the SIP+SDP messages in order to route them forward. To address the above shortcomings of the SDES protocol, Phil Zimmermann devised a military-grade key exchange protocol, ZRTP, which is built on ideas from public-key cryptography. Using ZRTP, two devices can securely exchange encryption keys even over an inherently insecure communication channel. Moreover, by employing human brains to compare short authentication strings (SAS) spoken by the other party, ZRTP severely reduces the probability of a successful man-in-the-middle attack, which requires a single-shot guess of the correct SAS out of 65,536 possibilities. The whole point of SAS is that one human being compares and confirms the spoken words of another human being whom the first recognizes (e.g. by voice) as the intended remote party. Any other usage of SAS is meaningless. 
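The DH2k and DH3k key agreement modes listed below are finite-field Diffie-Hellman exchanges. The sketch below, using the Python cryptography library with a 2048-bit group, shows only that underlying key agreement; it is not Acrobits' implementation and omits the rest of ZRTP, such as the hash commitment, SAS rendering and SRTP key derivation.

# Finite-field Diffie-Hellman agreement of the kind used by ZRTP's DH2k mode.
# Note that generating parameters for a 2048-bit group can take noticeable time.
from cryptography.hazmat.primitives.asymmetric import dh

params = dh.generate_parameters(generator=2, key_size=2048)
alice_priv = params.generate_private_key()
bob_priv = params.generate_private_key()

# Each side combines its own private key with the peer's public key and obtains
# the same shared secret; the secret itself never travels over the wire.
alice_secret = alice_priv.exchange(bob_priv.public_key())
bob_secret = bob_priv.exchange(alice_priv.public_key())
assert alice_secret == bob_secret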
Acrobits Softphone supports the following algorithms employed by ZRTP: SRTP Cipher: AES1 (AES with 128-bit key) AES2 (AES with 192-bit key) AES3 (AES with 256-bit key) SRTP Authentication: HS32 (HMAC-SHA1 32-bit) HS80 (HMAC-SHA1 80-bit) ZRTP Hash: S256 (SHA-2 256-bit) Key Agreement: DH3k (Finite Field Diffie-Hellman with 3072-bit Prime) DH2k (Finite Field Diffie-Hellman with 2048-bit Prime) Prsh (Pre Shared Mode) Mult (Multi Stream Mode) Short Authentication Strings: B32 (Base32, Four Letters and Digits) B256 (Base256, Two English Words) Other products Acrobits Groundwire Customers In addition to its flagship products, Acrobits creates white-label SIP solutions for VoIP providers around the world. See also Comparison of VoIP software References External links Acrobits website Review from brighthub.com Review from onSIP.com Review from bestappsite.com Review from osnews.com VoIP software
29347908
https://en.wikipedia.org/wiki/Video%20plankton%20recorder
Video plankton recorder
A video plankton recorder (VPR) is a towed underwater video microscope system that photographs the small- to fine-scale structure of plankton, from 50 micrometers up to a few centimeters in size. A VPR consists of five general components: cameras (with magnifying optics), a strobe, additional sensors and flight control, an underwater platform, and interface software for plankton identification. Technical aspects In order to obtain high-quality and low-noise images, charge-coupled device (CCD) sensors are used in the camera system. In the early design, the CCD cameras were mounted on one of the arms of the platform. Developments in recent years have made it possible to mount the camera system in the platform body along with the other sensors and flight control. The magnification of the cameras can vary: with high magnification, detailed observations of the plankton sample can be obtained, such as protozoans at resolutions below 1 µm. High magnification also allows plankton to be identified to the genus level, while low magnification captures the rarer and larger taxa. The xenon strobe (red-filtered, 80 W) provides the VPR system with the lighting to support the work of the video camera. It is placed on the other side of the platform arm. This design is intended to keep the area between the camera and the strobe an undisturbed water volume for continuous observation in the VPR system. As a complex system, a VPR can also carry several oceanographic sensors at the same time, such as a CTD, transmissometer, fluorometer and flowmeter. These sensors enable the system to measure temperature, conductivity, depth, flow, the fraction of light in the water and fluorescence. The housing or platform for the instrument varies depending on the purpose of the survey. The following platforms have been tested and used to mount the VPR: towed devices, remotely operated underwater vehicles (ROV), autonomous underwater vehicles (AUV) and autonomous profiling moorings. Improvements in imaging systems for observing, counting and measuring plankton in the ocean enable detailed observation of specific plankton communities. Early development was conducted by enumerating and counting silhouette photographs of plankton, with the results then processed in a software package such as Matlab. Imaging software The most important part of a VPR is the plankton identification software. Any development of the software should improve the tasks required of the VPR. In a nutshell, the software should be able to: import the plankton image database into the system; validate objects and evaluate them against the background of the sample, including the capability to discriminate unknown objects from plankton (a minimal illustrative sketch of this segmentation step is given below); identify and classify the samples so that plankton can be distinguished from each other; and present the results in the form of abundance, size distribution and biomass. Observation result A study conducted by Benfield, M.C., et al. found that the VPR provided data on the taxonomic composition of the plankton comparable to a physical plankton survey taken by MOCNESS, showing general trends in abundance and fine-scale patchiness along the observation area. 
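As a rough illustration of the background-segmentation requirement noted in the imaging software section above, the sketch below uses OpenCV to separate candidate plankton regions from a single VPR frame; the file name and the minimum-area threshold are assumptions for illustration and are not part of any published VPR software.

# Illustrative background segmentation of a VPR frame: keep bright regions of
# interest and drop tiny noise blobs before handing candidates to a classifier.
import cv2

frame = cv2.imread("vpr_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input frame

# Otsu thresholding separates bright targets from the darker background.
_, mask = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
candidates = [c for c in contours if cv2.contourArea(c) > 50]  # area threshold is illustrative
print(len(candidates), "candidate regions extracted for classification")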
Advantages and disadvantages Although a VPR has the drawback of not being able to identify plankton to the species level, the advantages of using this instrument go beyond that limitation and provide convenient and accurate results. The high-resolution observations of plankton taxa, together with synchronized measurements of environmental variables from the other oceanographic sensors attached to the VPR body, can be considered the key strength of this device. In addition, since the observation is conducted visually by photographing the sample, delicate plankton and gelatinous species can be observed accurately without being destroyed in a net. References Gallery Source of Picture Gallery Planktology
5954250
https://en.wikipedia.org/wiki/Washington%20Huskies%20football
Washington Huskies football
The Washington Huskies football team represents the University of Washington in college football. Washington competes in the NCAA Division I Football Bowl Subdivision (FBS) as a member of the North Division of the Pac-12 Conference. Husky Stadium, located on campus, has served as the home field for Washington since 1920. Washington has won 17 conference championships, seven Rose Bowls, and claims two national championships recognized by NCAA-designated major selectors. Washington's only consensus national championship, however, was in 1991 when the team finished No. 1 in the Coaches' Poll. The school's all-time record ranks 20th by win percentage and 19th by total victories among FBS schools as of 2018. Washington holds the FBS record for the longest unbeaten streak at 64 consecutive games, as well as the second-longest winning streak at 40 wins in a row. There have been a total of 12 unbeaten seasons in school history, including seven perfect seasons. Washington is one of four charter members of what became the Pac-12 Conference and, along with California, is one of only two schools with uninterrupted membership. From 1977 through 2003, Washington had 27 consecutive non-losing seasons—the most of any team in the Pac-12 and the 14th longest streak by an NCAA Division I-A team. Through the 2017 season, its 390 conference victories rank second in conference history. Washington is often referred to as one of the top Quarterback U's due to the long history of quarterbacks playing in the National Football League (NFL), including the second-most QB starts in NFL history. Dating back to Warren Moon in 1976, 14 of the last 19 quarterbacks who have led the team in passing for at least one season have gone on to play in the NFL. History Early history (1889–1907) Ten different men served as Washington head coaches during the first 18 seasons. While still an independent, the team progressed from playing 1 to 2 games per season to 10 matches per season as the sport grew in popularity. The school initially used a variety of locations for its home field. Home attendance grew from a few hundred to a few thousand per home game, with on-campus Denny Field becoming home from 1895 onward. The 1900 team played in-state rival Washington State College to a 5–5 tie, in the first game in the annual contest later known as the Apple Cup. Gil Dobie era (1908–1916) Gil Dobie left North Dakota Agricultural and became Washington's head coach in 1908. Dobie coached for nine remarkable seasons at Washington, posting a 58–0–3 record. Dobie's career comprised virtually all of Washington's NCAA all-time longest 64-game unbeaten streak (outscoring opponents 1930–118) and included a 40-game winning streak, second longest in NCAA Division I-A/FBS history. In 1916, Washington and three other schools formed the Pacific Coast Conference, predecessor to the modern Pac-12 Conference. In Dobie's final season at Washington, his 1916 team won the PCC's inaugural conference championship. Dobie was inducted into the College Football Hall of Fame in 1951 as a charter member. Hunt-Savage-Allison era (1917–1920) Following Dobie's tenure, Washington turned to a succession of coaches with mixed results. Claude J. Hunt (1917, 1919) went a cumulative 6–3–1 highlighted by the school's second PCC championship in 1919, Tony Savage (1918) 1–1, and Stub Allison (1920) 1–5. This era concluded with the team's move from Denny Field to its permanent home field of Husky Stadium in 1920. 
Washington athletics adopted the initial nickname of Sun Dodgers in 1919, which was used until 1922, before becoming the Huskies from 1923 onward. Enoch Bagshaw era (1921–1929) Enoch Bagshaw graduated from Washington in 1907 as the school's first five-year letterman in football history. After leading Everett High School from 1909 to 1920, including consecutive national championships in 1919 and 1920, Bagshaw returned to Washington as the first former player turned head coach in 1921, ultimately overseeing the program's second period of sustained success. Bagshaw's tenure was marked by a 63–22–6 record and the school's first two Rose Bowl berths, resulting in a 14–14 tie against Navy in the 1924 Rose Bowl and a 19–20 loss to Alabama in the 1926 Rose Bowl. His 1925 team won the school's third PCC championship. Bagshaw left the program after his 1929 team had a losing season, only the second such season in his tenure. Bagshaw died the following year at the age of 46. James Phelan era (1930–1941) James Phelan succeeded Bagshaw for the 1930 season. The Notre Dame graduate guided the Huskies to a 65–37–8 record over 12 seasons. His 1936 team won the school's fourth PCC championship, but lost in the 1937 Rose Bowl to Pittsburgh 0–21. Phelan guided the Huskies to their first bowl game victory, beating Hawaii 53–13 in the 1938 Poi Bowl. In later years, he became the first former Husky head coach to take the same role in professional football. Phelan was inducted into the College Football Hall of Fame in 1973. Welch-Odell-Cherberg-Royal era (1942–1956) Following Phelan, Washington fielded a succession of teams under four coaches without either great success or failure. Washington participated in one bowl game and tallied no conference championships during this period with an overall record of 65–68–7. Ralph Welch played at Purdue under head coach James Phelan, whom he followed to Washington to become an assistant coach in 1930. In 1942, Welch was promoted to succeed Phelan as Washington's head coach and served until 1947, compiling a record of 27–20–3. World War II limited both the 1943 and 1944 seasons of the PCC, reducing team participation from ten teams down to just four. Welch's 1943 team accepted the school's third Rose Bowl bid, but lost to PCC champion USC 0–29 in the 1944 Rose Bowl. Welch's first five teams all fielded winning records, but his final 1947 team did not. Howard Odell joined Washington in 1948 from Yale. In his five seasons from 1948 to 1952, he compiled a record of 23–25–2 with two winning seasons. John Cherberg, a Washington player and then assistant from 1946 to 1952, became head coach in 1953. He compiled a 10–18–2 record from 1953 to 1955, before being removed due to a payoff scandal. Cherberg went on to become Washington state's longest-serving Lieutenant Governor, from 1957 until his death in 1989. Darrell Royal was retained and led the 1956 team to a 5–5 record, before leaving to coach at Texas where he won three national championships, was inducted into the College Football Hall of Fame in 1983, and had the school's football stadium renamed in his honor as Darrell K Royal–Texas Memorial Stadium. Jim Owens era (1957–1974) In 1957, Jim Owens came to Washington after stints as an assistant with Paul "Bear" Bryant at Kentucky and Texas A&M. According to legend, after the 1956 season, when the Huskies were looking for a head coach, Bryant indicated to reporters that Owens "will make a great coach for somebody some day." Over 18 seasons, Owens would compile a 99–82–6 record. 
After a pair of unremarkable initial seasons, Owens led his 1959, 1960, and 1963 teams to three AAWU championships and associated Rose Bowl berths: a 1960 Rose Bowl 44–8 win over Wisconsin, a 1961 Rose Bowl 17–7 win over Minnesota, and a 7–17 loss to Illinois in the 1964 Rose Bowl. The Helms Athletic Foundation named the 1960 team the national champions, the school's first such title in football. Owens' later teams would never match this level of success, partly owing to a conference prevention of a second bowl team representative until 1975. Owens concurrently served as the athletic director at Washington from 1960 to 1969. Owens resigned as head coach of the Huskies following the 1974 season, as the Pac-8's third winningest coach of all-time. He was elected to the College Football Hall of Fame as a player in 1982. Don James era (1975–1992) Don James came to Washington from Kent State. During his 18-year tenure, James' Huskies won four Rose Bowls and one Orange Bowl. His dominating 1991 Washington Huskies finished a perfect 12-0 season and shared the national championship with Miami. The Huskies won 22 consecutive games from 1990–1992. James' record with the Huskies was 153–57–2. James won national coach of the year honors in 1977, 1984 and 1991 and was inducted into the College Football Hall of Fame in 1997. Sports columnists and football experts have recognized the 1991 Washington Huskies among the top 10 college football teams of all time. During the 1992 season, it was revealed that several of James' players received improper benefits from boosters. The Huskies received sanctions from both the NCAA and then Pacific-10 Conference. Although James and his staff were not personally implicated in any violation, James resigned on August 22, 1993 in protest of the harsh sanctions the Pac-10 imposed on top of the NCAA's sanctions against his team. Though then University President William Gerberding and then Athletic Director Barbara Hedges had presented James the final list of penalties that all Pac-10 parties had agreed best for the football program and athletics, Gerberding argued in favor of altering the penalties against the program from a two-year TV revenue ban and one-year bowl ban, to a one-year TV revenue ban and two-year bowl ban. In a 2006 interview with columnist Blaine Newnham of The Seattle Times, Don James said his resignation from head coaching "probably saved his life". According to those who knew him, Don James was a great leader, a coach of character, a man of honor and integrity. Don James died on October 20, 2013, at the age of 80. A week later, the Huskies honored James during the game against California, which they won 41-17. On October 27, 2017, when the University of Washington unveiled a bronze statue of the legendary coach in the northwest plaza of Husky Stadium, "the Dawgfather" finally returned home. Jim Lambright era (1993–1998) Jim Lambright was promoted from defensive coordinator to head coach following the sudden resignation by Don James. Lambright led the Huskies to four bowl appearances in his six seasons. Despite these bowl appearances and a 44–25–1 overall record, Lambright was fired by athletic director Barbara Hedges following the 1998 season after going 6–6. Neuheisel and Gilbertson era (1999–2004) Rick Neuheisel was hired away from Colorado to take over as the Huskies' head football coach. During his tenure, the Huskies went 33–16, highlighted by a victory in the Rose Bowl in January 2001 over Purdue. 
Neuheisel also led the Huskies to two berths in the Holiday Bowl and to the Sun Bowl during his four-year tenure. Neuheisel was reprimanded by the NCAA for numerous recruiting violations. Neuheisel was fired in June 2003 after he admitted to taking part in a calcutta pool for the 2003 Men's NCAA basketball tournament. Neuheisel sued for wrongful termination, ultimately settling the case in March 2005 for $4.5 million, paid by the NCAA and Washington athletics department. Keith Gilbertson was promoted from offensive coordinator to head coach following Neuheisel's termination. The 2003 season, Gilbertson's first, ended with a 6–6 record but no bowl appearance. A 1–10 record the next year resulted in his firing. The 1–10 mark in 2004 was only Washington's second since the end of World War II. In two seasons, Gilbertson's record was 7–16. Tyrone Willingham era (2005–2008) Former Stanford and Notre Dame head coach Tyrone Willingham was hired as the next head football coach of the Washington Huskies in order to clean up the program's off-the-field reputation. The Huskies failed to post a winning record in any of Willingham's four seasons, the best being 5–7 in 2006. Willingham's record at Washington was a dismal 11–37 (.229). Willingham was fired after a winless (0-12) 2008 season. Steve Sarkisian era (2009–2013) USC offensive coordinator Steve Sarkisian was named the 23rd head football coach at Washington following the firing of Willingham. Sarkisian, known as an offensive mind and quarterbacks coach, led the Huskies to a 34–29 record over five seasons, never winning more than eight games in a year but recording just one losing season. Sarkisian departed after the 2013 regular season to return to USC as the head football coach, becoming the first head coach to voluntarily leave Washington for another program since Darrell Royal in 1956. Chris Petersen era (2014–2019) Washington hired Chris Petersen as head football coach on December 6, 2013. Petersen previously spent eight seasons as the head coach at Boise State. Petersen led Washington to a Pac-12 title and a College Football Playoff appearance in 2016. On April 11, 2017, the Washington Huskies Athletic Department extended Petersen's coaching contract through 2023, with a reported annual salary of $4.875 million, paid entirely from Washington Athletic Department revenue, such as ticket sales and television rights or gifts. Washington finished the 2017 season with an invitation to participate in the 2017 Fiesta Bowl. In the 2018 season, Petersen led the Huskies to their second Pac-12 title in three years and Washington's 15th Rose Bowl appearance. On December 2, 2019, Petersen announced he would step down as head coach and move into an advisory role. Jimmy Lake era (2020–2021) Defensive coordinator Jimmy Lake was named Petersen's successor following his departure. He coached the team to a 3-1 record and a Pac-12 North division title during the COVID-19 shortened 2020 season. The team was unable to play in the 2020 Pac-12 Football Championship Game due to numerous COVID-related absences. During the 2021 season, Lake was suspended without pay for shoving a Washington player during a loss to Oregon. Lake was later fired, finishing his tenure with a 7-6 record. Defensive coordinator Bob Gregory served as interim coach for the final three games of the season. Kalen DeBoer era (2022–Present) Washington hired Kalen DeBoer as head football coach on November 29, 2021. DeBoer spent the previous two seasons as head coach at Fresno State. 
Conference affiliations Washington played its first 26 seasons of college football from 1889 to 1915 as an independent. In 1916, Washington became one of the four charter members of the Pacific Coast Conference (PCC), which later evolved into the modern day Pac-12 Conference after going through several iterations: the PCC (1916–1958), Athletic Association of Western Universities (1959–1967), Pacific-8 (1968–1977), Pacific-10 (1978–2010), and Pac-12 (2011–present). The Pac-12 claims the history of each of these preceding conferences as its own. Washington and California are the only founding and continuous members in each of these successive conferences. Independent (1889–1915) Pac-12 Conference (1916–present) Pacific Coast Conference (1916–1958) Athletic Association of Western Universities (1959–1967) Pacific-8 Conference (1968–1977) Pacific-10 Conference (1978–2010) Pac-12 Conference (2011–present) Championships National championships Washington has won five national championships, including four from NCAA-designated major selectors. In addition, sports and celebrity biographer Bill Libby chose the 1910 team as national champions in his book Champions of College Football. Washington claims both the 1960 and 1991 championships. Washington's only consensus national championship was in 1991, when the team finished #1 in the Coaches' Poll. Claimed national championships Unclaimed national championships 1960 season The 1960 team took an improbable road to the Rose Bowl and national championship. After suffering a 1-point setback to Navy in the third week of the season, the team reeled off eight straight wins capped by a triumph over No. 1 Minnesota in the Rose Bowl. Because the final Associated Press and United Press International polls were conducted after the final game of the regular season, Minnesota was named the AP and UPI national champion for 1960. In its poll conducted following bowl games, the Football Writers Association of America recognized Ole Miss as its national champion. The postseason poll conducted by the Helms Athletic Foundation recognizes Washington as national champions. 1984 season The 1984 team opened the 1984 college football season with a 9–0 record which included a 20–11 win at No. 4 Michigan in Michigan Stadium. While ranked No. 1 in the AP poll, the Huskies dropped a 16–7 game to eventual Pac-10 champion USC, which cost Washington a chance at the Rose Bowl. The Huskies instead were invited to play in the Orange Bowl against the No. 2 Oklahoma Sooners. The game is famous for the Sooner Schooner incident. After Oklahoma kicked a field goal to take a 17–14 lead in the fourth quarter, a penalty was called on the Sooners that nullified the play. The Sooner Schooner driver, who didn't see the flag, drove the wagon on the field and was immediately flagged for unsportsmanlike conduct. The ensuing field goal attempt was blocked and led a momentum shift that saw Washington score two touchdowns in less than a minute en route to a 28–17 victory. Senior Jacque Robinson rushed for 135 yards and was named MVP, the first player in history to be named MVP of both the Orange and Rose Bowls. In winning, the Huskies became the first team from the Pac-10 to play in and win the Orange Bowl. The Huskies finished the year ranked No. 2 in the polls, behind the WAC champion BYU (13–0–0) who were 24–17 victors over the unranked Michigan Wolverines (6–5–0) in the Holiday Bowl. 
BYU's title was notable for being the only time since the inception of the AP poll that a team was awarded the national title without beating an opponent ranked in the top 25 at the season's end. The Huskies were given the opportunity to play BYU in the Holiday Bowl but chose a larger bowl payout over playing a higher ranked opponent in BYU, who carried a 22-game win streak into the bowl season. The B (QPRS), FN, and NCF polls awarded Washington the national championship, which the school does not claim. 1990 season The 1990 Huskies started out the season with wins against San Jose State and Purdue, then beat No. 5 USC by a score of 31–0. The next week fell to eventual AP national champion Colorado. After the loss, Washington went on to finish the season averaging over 40 points a game while only giving up 14. During this run, Washington would end up beating two more ranked teams on their way to the Rose Bowl. However, in the second to last game Washington lost to UCLA. Washington subsequently entered the Rose Bowl with a record of 9–2 against Iowa. The Huskies won by a final score of 46–34 to secure their fifth Rose Bowl title, displaying its trademark NCAA-best run-defense which allowed 66.8 yards per game. The AP awarded the national championship to Colorado, while the UPI chose undefeated Georgia Tech. Washington was ranked No. 5 in the AP poll, receiving no first place votes. The Rothman/FACT, active from 1968 to 2006, stated that the Washington Huskies were National Champions for 1990, sharing the honor with Colorado, Georgia Tech, and Miami. The school does not claim this championship. 1991 season The 1991 Huskies opened the 1991 season on the road, with a 42–7 victory over the Stanford Cardinal. Following a bye week, Washington traveled to Lincoln, Nebraska for a showdown with No. 9 Nebraska. Trailing 21–9 late in the third quarter, Washington rallied to score 27 unanswered points and claim a 36–21 victory. The following week saw the return of QB Mark Brunell, the 1991 Rose Bowl MVP who had suffered a knee injury in the spring, as the Huskies beat Kansas State 56–3 while holding the Wildcats to -17 yards on the ground. The Huskies followed with back-to-back shutouts of Arizona and Toledo. The Huskies then traveled to Berkeley to face No. 7 California. Washington won a wild game that was decided on the final play when Walter Bailey broke up a pass on the goal line to preserve a 24–17 win. Oregon and Arizona State visited Husky Stadium next and each left with a loss. The Huskies went on their final road trip of the season, first to USC, where they won in the Los Angeles Memorial Coliseum for the first time since 1980. Needing a win over Oregon State to clinch a Rose Bowl berth, Washington rolled to a 58–6 victory. Washington State visited Seattle for the Apple Cup but were no match for the Huskies, as Washington won 56–21, setting up a showdown with Michigan in the Rose Bowl on January 1, 1992. The Washington defense, led by Lombardi Award and Outland Trophy winner Steve Emtman, held Michigan to only 205 total yards and limited 1991 Heisman Trophy winner Desmond Howard to only one catch. The Husky offense, led by quarterbacks Mark Brunell and Billy Joe Hobert, racked up 404 yards of total offense in leading the Huskies to a 34–14 Rose Bowl victory. Hobert and Emtman shared MVP honors. Steve Emtman (DT) and Mario Bailey (WR) were consensus All-American picks. Dave Hoffmann (LB) and Lincoln Kennedy (OT) were All-American selections. 
Don James was voted Pac-10 and National Coach of the Year. Steve Emtman was the Pac-10 Defensive Player of the Year and Mario Bailey was the Pac-10 Offensive Player of the Year. Mario Bailey (WR), Ed Cunningham (C), Steve Emtman (DT), Chico Fraley (LB), Dana Hall (CB), Dave Hoffmann (LB), Donald Jones (LB) and Lincoln Kennedy (OL) were First Team All-Pac-10. The Huskies led the NCAA in total defense for most of the year, allowing only 237.1 yards per game. The Huskies were voted national champions by the USA Today/CNN Coaches Poll, while the Miami Hurricanes topped the AP Poll. The 1991 team averaged over 41 points per game, only once scoring fewer than 20 points, and held opponents to an average of less than 10 points per game, including two shutouts. Rose Bowl championships Washington has 7 Rose Bowl championships. The program been continuously affiliated with the Pac-12 Conference and its predecessors, which historically agreed to send a representative (typically the conference champion) to participate in the Rose Bowl. The Big Ten Conference was similarly contracted following World War II. This pairing made the Rose Bowl the most prestigious Bowl Game available to Pac-12 teams prior to the BCS era. Conference championships Washington has won 17 conference championships, including the inaugural PCC championship in 1916. This total includes four PCC, three AAWU, one Pac-8, seven Pac-10, and two Pac-12 titles, and at least one in every decade except the 1940s. Washington's 17 conference championships is tied for second in league history, level with UCLA and behind USC's 38 as of 2018. † Co-champions Division championships Through the 2020 season, Washington has won four Pac-12 North Division titles. † Co-champions Head coaches † College Football Hall of Fame inductee * Includes loss to Arizona State during Head Coach Jimmy Lake's suspension. Bowl games Washington has a bowl game record of 19–20–1 through the 2019 season, though the Poi Bowl game was not sanctioned by the NCAA. The Huskies' 15 Rose Bowl appearances are second only to USC in the Pac-12 while their seven victories are tied for third-most. In addition, Washington is also in an elite group of only seven schools to make three consecutive appearances in the Rose Bowl, a feat they accomplished in 1990–1992. The Pacific-8 did not allow a second bowl team from the conference until 1975. Program records Playoffs Washington has made one appearance in the College Football Playoff. All-time record vs. Pac-12 opponents As of November 2020, Washington's records against conference opponents are as follows. Rivalries Washington State Washington and Washington State first played each other in 1900. Traditionally, the Apple Cup is the final game of the regular season for both teams. The Apple Cup trophy has been presented to the winner of the game by the state's governor since 1962. Washington leads the series 74–33–6 as of the 2021 season. Oregon Washington and Oregon first met in 1900. Washington leads the series 60–47–5. Facilities Husky Stadium Husky Stadium has served as the home football stadium for Washington since 1920, with renovations in 1950, 1987 and 2012. Located on campus and set next to Lake Washington, it is the largest stadium in the Pacific Northwest with a seating capacity of 70,183. The stadium is one of a few football stadiums in the United States accessible through water. Washington has led the modern Pac-10 Conference in game attendance 13 times, including nine consecutive seasons from 1989 to 1997. 
With nearly 70 percent of the seats located between the end zones and the grandstands covered by cantilevered metal roofs, Husky Stadium is one of the loudest stadiums in the country and is the loudest recorded stadium in college football. During the 1992 night game against the Nebraska Cornhuskers, ESPN measured the noise level at over 130 decibels, peaking at 133.6, the loudest mark recorded in NCAA history. In 1968 the Huskies became the first major collegiate team to install an AstroTurf field, following the lead of the Astrodome. Prior to the 2000 season, the school was among the earliest adopters of FieldTurf, installing it one season after Memorial Stadium. A $280 million renovation of Husky Stadium began on November 7, 2011. Home games were moved to CenturyLink Field for the 2012 season while construction took place. The newly renovated Husky Stadium reopened on August 31, 2013, in a game in which the Huskies defeated Boise State by a score of 38–6. Dempsey Indoor The Dempsey Indoor opened in September 2001. The building is used as an indoor practice facility for Washington's football, softball, baseball and men's and women's soccer teams. Traditions Logos and uniforms Washington has worn variations of uniforms over the years but is most recognized for its traditional home uniform of gold helmets, purple jerseys, and gold pants. Since Don James' first year as head coach in 1975, the Huskies have worn metallic gold helmets with a purple block "W" on both sides and white and purple center striping; he patterned the new helmet and uniforms after the San Francisco 49ers of the NFL. The exception was from 1995 to 1998 under Jim Lambright, when Washington wore solid purple helmets with a gold "W." During Jim Owens' tenure, an outstanding defensive player was awarded the honor of wearing a purple helmet. Rick Redman, an All-American linebacker in the 1960s, wore one. It was rather intimidating for the opposing quarterback to stand behind his center and see this lone purple-helmeted player staring him down before each play. In 1973 and 1974, Owens' last two seasons, the entire team wore purple helmets. For the 2010 home finale against UCLA, the Huskies unveiled a "blackout" theme. The end zones of Husky Stadium were painted black, while the team debuted black jerseys and pants and encouraged the home crowd to dress in black as well. Two weeks later, for the Apple Cup in Pullman, UW wore the black pants with the usual white road jersey. Black jerseys and pants were worn again the next month for the 2010 Holiday Bowl. All three games were Washington victories. In 2013, the Huskies debuted chrome gold helmets, worn with purple tops and bottoms in a rain-soaked game against Arizona. Later that season against Oregon, Washington debuted matte black helmets featuring a purple "W" and two truncated purple stripes. Prior to the 2014 season, Washington revealed a new uniform set that featured three jersey, four pant, and three helmet color options to allow for a myriad of combinations on the field. The set included matte gold, matte black, and "frosted" white helmets; purple, white, and black jerseys; and gold, purple, white, and black pants. The chrome gold helmets that had been introduced the previous season returned in the 2014 game against Arizona State. In 2017, chrome purple helmets were added to the uniform set.
In April 2018, the school agreed to a new 10-year, $119 million apparel deal with Adidas set to begin in summer 2019, ending a 20-year partnership with Nike. The deal with Adidas will rank among the 10 most valuable in college athletics. Marching Band The University of Washington Husky Marching Band (HMB) is the marching band of the University of Washington, consisting of 240 members. The 2017 season was the 88th for the HMB. Broadcasting The Huskies broadcast locally on KJR-950 AM under the IMG Sports Network, with Tony Castricone as the play-by-play announcer and former UW quarterback Damon Huard on color commentary. Bob Rondeau, known as the "Voice of the Huskies," announced Washington football for over 30 years until his retirement in 2017. Individual awards and accomplishments Individual national award winners Players Coach Individual conference award winners Players † Warren Moon shared Pac-8 Player of the Year with Guy Benjamin in 1977, before separate Offensive and Defensive Player awards were introduced in 1983 Coach Heisman Trophy voting As of July 2017, seven Washington players have placed among the top finishers in Heisman Trophy voting. † College Football Hall of Fame inductee Consensus All-Americans 22 different Washington players have been recognized on 23 occasions as consensus All-Americans by the National Collegiate Athletic Association (NCAA), by virtue of receiving a majority of votes at their respective positions from the selectors. 1925 – George Wilson 1928 – Chuck Carroll 1936 – Max Starcevich 1940 – Rudy Mucha 1941 – Ray Frankowski 1963, 1964 – Rick Redman 1966 – Tom Greenlee 1968 – Al Worley 1982 – Chuck Nelson † 1984 – Ron Holmes 1986 – Jeff Jaeger and Reggie Rogers 1991 – Steve Emtman † and Mario Bailey 1992 – Lincoln Kennedy † 1995 – Lawyer Milloy † 1996 – Benji Olson † 1997 – Olin Kreutz 2002 – Reggie Williams 2014 – Hau'oli Kikaha † 2016 – Budda Baker 2017 – Dante Pettis † Unanimous selection Retired numbers The program has retired three jersey numbers, though some have been re-issued for use. † College Football Hall of Fame inductee Hall of Fame inductees College Football Hall of Fame Fifteen former Washington players and coaches have been inducted into the College Football Hall of Fame, located in Atlanta, Georgia. Pro Football Hall of Fame Three former Washington players have been inducted into the Pro Football Hall of Fame, located in Canton, Ohio. Canadian Football Hall of Fame As of 2010, Warren Moon (Edmonton Eskimos 1978–83) is the only player to be a member of both the Canadian Football Hall of Fame and the Pro Football Hall of Fame (NFL). Rose Bowl Hall of Fame The Rose Bowl has inducted eight Washington coaches and players into the Rose Bowl Game Hall of Fame. Memorable games 1975 Apple Cup In the 1975 Apple Cup, Washington State led 27–14 with three minutes left in the game. WSU attempted a 4th-and-1 conversion at the UW 14-yard line rather than try for a field goal. The resulting pass was intercepted by Al Burleson and returned 93 yards for a touchdown. After a WSU three-and-out, Warren Moon's tipped pass was caught by Spider Gaines for a 78-yard touchdown that sealed a dramatic 28–27 win for Washington. WSU head coach Jim Sweeney resigned a week later, leaving with a 26–59–1 record. 1981 Apple Cup When 14th-ranked Washington State and 17th-ranked Washington met in the 1981 Apple Cup, it was billed as the biggest meeting in the series since the 1936 game, when the winner was invited to the Rose Bowl.
Washington's defense was the best in the conference, while the Cougars ranked high in offensive categories. Along with a win over WSU, the Huskies needed USC to upset UCLA, in a game that kicked off 40 minutes before the Apple Cup, to clear the way for a Rose Bowl bid. With his team trailing 7–3 late in the second quarter, Husky quarterback Steve Pelluer fired a low pass towards wideout Paul Skansi. Washington State cornerback Nate Brady looked as if he would smother the ball when Skansi dove over the defender for a catch in the end zone. Washington State drove the ball 69 yards to open the second half and tie the score at 10. From that point Washington, behind the fine play of its offensive line, took control. Ron "Cookie" Jackson capped an 80-yard drive by running 23 yards to put the Huskies ahead 17–10. Following a Cougar turnover, All-American kicker Chuck Nelson kicked his second field goal of the game to increase the Huskies' lead to 10 points. The fate of the Cougars was sealed when the score of the USC-UCLA game was announced: the Trojans had engineered the upset. Nelson added a field goal with less than three minutes to play, and the Huskies were off to the Rose Bowl. 1990 – "All I Saw Was Purple" Heading into the 1990 season, the winner of the USC-Washington game had gone to the Rose Bowl in 10 of the previous 13 seasons. The 1990 game would continue that trend. Washington's All-Centennial team was introduced at halftime of the game, while two members of the historic team, Hugh McElhenny and Nesby Glasgow, delivered inspirational talks to the current players. On a bright, sunny day with the temperature reaching 92 degrees Fahrenheit, the crowd of 72,617 witnessed one of the most memorable games in program history. Washington shut out USC for just the third time in 23 seasons, handing the Trojans their worst conference defeat in 30 years. "Student Body Right" was held to only 28 rushing yards as the Husky defense dominated the line of scrimmage. Greg Lewis, the Doak Walker Award winner as the nation's top running back, gained 126 rushing yards and sophomore quarterback Mark Brunell threw for 197 yards as the Huskies rolled to a 24–0 halftime lead. The Husky defense, led by All-American lineman Steve Emtman, stopped everything the Trojans attempted. The defense would hold USC to 163 total yards and seven first downs for the game. They would record three sacks and put so much pressure on Todd Marinovich that after the game, weary and beaten, he famously said: "I just saw purple. That's all. No numbers, just purple." 1992 – "A Night To Remember" Playing in the first night game in stadium history, No. 2 Washington posted a victory against No. 12 Nebraska that provided the loudest recorded moment in the history of Husky Stadium and would be dubbed "A Night To Remember." Late in the first quarter, Husky punter John Werdel pinned Nebraska on its three-yard line. Crowd noise caused the Husker linemen to false start on consecutive plays, only adding to the frenzy of the crowd. When Nebraska quarterback Mike Grant dropped back to his own end zone to attempt a pass, Husky roverback Tommie Smith blitzed Grant from his blind side and tackled him for a safety. The deafening roar following the play reverberated off the twin roofs of the stadium. ESPN measured the noise level at over 130 decibels, well above the threshold of pain. The peak recorded level of 133.6 decibels has been the highest ever recorded at a college football stadium.
Holding a 9–7 lead, the Husky offense went into quick-strike mode at the close of the second quarter. Speedy running back Napoleon Kaufman ended an 80-yard drive with a 1-yard scoring run. Walter Bailey intercepted Grant to start the second half, and the Huskies extended their lead to 23–7 when quarterback Billy Joe Hobert threw a 24-yard touchdown pass to a diving Joe Kralik. Kicker Travis Hanson later made a pair of field goals in the second half to cinch a 29–14 win. The victory propelled Washington to the No. 1 ranking in the AP poll the following week. 1994 – The "Whammy in Miami" The "Whammy in Miami" was a college football game played between the Huskies and the Miami Hurricanes on September 24, 1994, in Miami's Orange Bowl. The game was the first football contest between the two schools. During the 1991 season, both teams finished the year with identical 12–0 records and both were crowned national champions by different polls. The teams were unable to settle the championship on the field, as both were locked into their respective bowl games (Washington in the Rose and Miami in the Orange). As a result, both schools agreed to schedule the other for a series of games. Entering the game, Miami had an NCAA-record home winning streak of 58 games and was ranked 5th in the nation with a 2–0 record. The Hurricanes had not lost at the Orange Bowl since 1985 and had not lost to a team from outside of Florida since 1984. The Huskies were 1–1, having lost to USC and beaten Ohio State. Oddsmakers placed the Huskies as a 14-point underdog. The Hurricanes appeared to be on their way to a 59th consecutive home victory in the first half, leading the Huskies 14–3 at halftime. After the half, the Huskies came out firing, scoring 22 points in five minutes. Key plays included a 75-yard touchdown pass, a 34-yard interception return, and a fumble recovery. The Huskies dominated the second half on the way to a 38–20 victory. Word got out among the Huskies that Miami coach Dennis Erickson had jokingly suggested the losers of this game relinquish their national championship rings from 1991. "Take the rings back," safety Lawyer Milloy shouted into the air as he walked off the field. 2002 Apple Cup Playing at home in Pullman, No. 3 Washington State entered the game poised for BCS National Championship game consideration, behind QB Jason Gesser. Gesser was injured by DT Terry "Tank" Johnson late in the game. The Cougars led 20–10 with less than four minutes left in the game, with Matt Kegel having replaced Gesser. UW used a timely interception from freshman cornerback Nate Robinson to force overtime. The teams traded field goals in the first two overtime periods, and John Anderson converted another kick to start the third overtime. During the Cougars' possession, umpire Gordon Riese controversially ruled that Kegel threw a backward pass, which was knocked down and recovered by defensive end Kai Ellis. The fumble recovery ended the game as a Washington victory. The Martin Stadium crowd erupted angrily in response, and some individuals threw bottles on the field as Washington players and fans celebrated. Then-UW athletic director Barbara Hedges said at the time that she "feared for her life." 2009 – "Miracle on Montlake" Entering the game, the No. 3 Trojans had the national spotlight after their defeat of Ohio State in Columbus the week before. Washington, meanwhile, had just won its first game in 16 contests with a victory over Idaho.
Southern California opened the game with 10 unanswered points, marching down the field with ease. USC was playing without starting quarterback Matt Barkley, who had injured his shoulder the week before at Ohio State, but despite playing with backup QB Aaron Corp, the Trojans were able to lean on an experienced running game and a veteran offensive line. Washington worked its way back into the game with a 4-yard touchdown run by quarterback Jake Locker, trimming the score to 10–7. Late in the second quarter, placekicker Erik Folk kicked a 46-yard field goal to tie the score at 10. The score remained tied as the game entered the fourth quarter. After swapping field goals, the Huskies took possession with four minutes left in the game. Locker maneuvered the Huskies down the field, converting on two key third downs, including a 3rd-and-15 from his own 28 on which Locker threw to the sideline to Jermaine Kearse for 21 yards. The Huskies eventually drove to the USC 4-yard line before Folk kicked the game-winning field goal for the 16–13 victory, Washington's first conference win since 2007. 2010 – "Deja Vu" On October 2, 2010, the Huskies went on the road to face No. 18 USC at the Los Angeles Memorial Coliseum, a place where they had not won since 1996. They had not won any road game since November 3, 2007, against Stanford, a winless streak of 13 consecutive road games. The Huskies led for parts of all four quarters but never put the game away, including a play in which Jake Locker had the ball stripped out of the end zone on what looked to be a sure touchdown run. Locker left the game for one play after taking a knee to the helmet on a quarterback sneak. Keith Price, a redshirt freshman from Compton, California, came in to make his Washington debut and completed a touchdown pass on his only play of the game, putting the Huskies ahead 29–28. The Trojans made a field goal on the following possession to retake the lead, 31–29. The Huskies' final drive started with two incomplete passes and a near fumble, but on a 4th-and-11 Jake Locker completed a pass to a leaping DeAndre Goodwin. The Huskies continued to push the ball into field goal range in a situation similar to the previous year's game against USC. With 3 seconds left, Erik Folk kicked the game-winning field goal as time expired, giving the Huskies their first road win in three years. 2016 – "70 In Eugene" Prior to this game, Oregon had beaten Washington 12 straight times, ten of those by a margin of 20 points or more. This was the longest winning streak by either team in the Oregon-Washington football rivalry. The Huskies, ranked No. 5 in the AP Poll after a 44–6 win against No. 7 Stanford at Husky Stadium the previous week, traveled to Autzen Stadium to face a 2–3 Oregon team. The Oregon winning streak was finally snapped with a 70–21 Washington rout. On the first play from scrimmage, Washington safety Budda Baker, a one-time commit to Oregon, intercepted a pass from Oregon true freshman quarterback Justin Herbert. The Huskies took the lead on a Jake Browning touchdown run with 13:23 left in the first quarter and never relinquished it. The Huskies led 35–7 by halftime, 42–7 after the first possession of the third quarter, and 70–21 with 9:58 left in the fourth quarter. The Washington offense racked up 682 yards of total offense, averaged 10.1 yards per play, amassed six passing touchdowns by quarterback Jake Browning, and scored 70 points, the most scored by either team in the rivalry.
This was also the second-most points an opponent has ever scored on Oregon in Eugene. Future opponents Non-division conference opponents Washington plays each of the other five schools in the North Division annually and four of the six schools from the South Division. Each season, Washington "misses" two schools from the South Division: either UCLA or USC and one of the four Arizona or Mountain schools. This cycle repeats after eight seasons. Non-conference opponents Announced schedules as of January 17, 2020. The 2029 and 2030 seasons do not have any scheduled non-conference opponents as of that date. See also List of Washington Huskies in the NFL Draft College football national championships in NCAA Division I FBS References External links American football in Seattle American football teams established in 1889 Huskies
21855450
https://en.wikipedia.org/wiki/MongoDB
MongoDB
MongoDB is a source-available cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with optional schemas. MongoDB is developed by MongoDB Inc. and licensed under the Server Side Public License (SSPL). History The software company 10gen began developing MongoDB in 2007 as a component of a planned platform as a service product. In 2009, the company shifted to an open-source development model, with the company offering commercial support and other services. In 2013, 10gen changed its name to MongoDB Inc. On October 20, 2017, MongoDB became a publicly traded company, listed on NASDAQ as MDB with an IPO price of $24 per share. MongoDB is a global company with US headquarters in New York City and international headquarters in Dublin. On October 30, 2019, MongoDB teamed up with Alibaba Cloud, which offers its customers a MongoDB-as-a-service solution. Customers can use the managed offering from Alibaba's global data centers. Main features Ad-hoc queries MongoDB supports field, range, and regular-expression queries. Queries can return specific fields of documents and also include user-defined JavaScript functions. Queries can also be configured to return a random sample of results of a given size. Indexing Fields in a MongoDB document can be indexed with primary and secondary indices. Replication MongoDB provides high availability with replica sets. A replica set consists of two or more copies of the data. Each replica-set member may act in the role of primary or secondary replica at any time. All writes and reads are done on the primary replica by default. Secondary replicas maintain a copy of the data of the primary using built-in replication. When a primary replica fails, the replica set automatically conducts an election process to determine which secondary should become the primary. Secondaries can optionally serve read operations, but that data is only eventually consistent by default. If the replicated MongoDB deployment only has a single secondary member, a separate daemon called an arbiter must be added to the set. It has a single responsibility, which is to resolve the election of the new primary. As a consequence, an idealized distributed MongoDB deployment requires at least three separate servers, even in the case of just one primary and one secondary. Load balancing MongoDB scales horizontally using sharding. The user chooses a shard key, which determines how the data in a collection will be distributed. The data is split into ranges (based on the shard key) and distributed across multiple shards. (A shard is a master with one or more replicas.) Alternatively, the shard key can be hashed to map to a shard – enabling an even data distribution. MongoDB can run over multiple servers, balancing the load or duplicating data to keep the system up and running in case of hardware failure. File storage MongoDB can be used as a file system, called GridFS, with load balancing and data replication features over multiple machines for storing files. This function, called grid file system, is included with MongoDB drivers. MongoDB exposes functions for file manipulation and content to developers. GridFS can be accessed using the mongofiles utility or plugins for Nginx and lighttpd. GridFS divides a file into parts, or chunks, and stores each of those chunks as a separate document.
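The query, indexing, and GridFS features described above can be illustrated with a short sketch using the official PyMongo driver. The collection, field names, and documents below are invented for illustration, and the example assumes a MongoDB server running locally with the pymongo package (which includes gridfs) installed.

```python
# Illustrative only: an ad-hoc query, a secondary index, and GridFS file storage.
from pymongo import MongoClient, ASCENDING
import gridfs

client = MongoClient("mongodb://localhost:27017/")
db = client["example_library"]  # hypothetical database name

# Insert a JSON-like document and run an ad-hoc range query,
# projecting only the title and year fields.
db.books.insert_one({"title": "Example Book", "year": 2005, "tags": ["demo"]})
recent = db.books.find({"year": {"$gt": 2000}}, {"title": 1, "year": 1, "_id": 0})
print(list(recent))

# Create a secondary index on the "year" field to support the range query.
db.books.create_index([("year", ASCENDING)])

# GridFS: the driver splits the file into chunk documents and stores them.
fs = gridfs.GridFS(db)
file_id = fs.put(b"hello world", filename="hello.txt")
print(fs.get(file_id).read())
```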
Aggregation MongoDB provides three ways to perform aggregation: the aggregation pipeline, the map-reduce function, and single-purpose aggregation methods. Map-reduce can be used for batch processing of data and aggregation operations, but according to MongoDB's documentation, the aggregation pipeline provides better performance for most aggregation operations. The aggregation framework enables users to obtain the kind of results for which the SQL GROUP BY clause is used. Aggregation operators can be strung together to form a pipeline – analogous to Unix pipes. The aggregation framework includes the $lookup operator, which can join documents from multiple collections, as well as statistical operators such as standard deviation. Server-side JavaScript execution JavaScript can be used in queries and aggregation functions (such as MapReduce), and can be sent directly to the database to be executed. Capped collections MongoDB supports fixed-size collections called capped collections. This type of collection maintains insertion order and, once the specified size has been reached, behaves like a circular queue. Transactions MongoDB has claimed to support multi-document ACID transactions since the 4.0 release in June 2018; this claim was found not to be true, as MongoDB violates snapshot isolation. Editions MongoDB Community Server The MongoDB Community Edition is free and available for Windows, Linux, and macOS. MongoDB Enterprise Server MongoDB Enterprise Server is the commercial edition of MongoDB, available as part of the MongoDB Enterprise Advanced subscription. MongoDB Atlas MongoDB is also available as an on-demand, fully managed service. MongoDB Atlas runs on AWS, Microsoft Azure, and Google Cloud Platform. Architecture Programming language accessibility MongoDB has official drivers for major programming languages and development environments. There are also a large number of unofficial or community-supported drivers for other programming languages and frameworks. Serverless access Management and graphical front-ends The primary interface to the database has been the mongo shell. Since MongoDB 3.2, MongoDB Compass has been offered as the native GUI. There are products and third-party projects that offer user interfaces for administration and data viewing. Licensing MongoDB Community Server As of October 2018, MongoDB is released under the Server Side Public License (SSPL), a license developed by the project. It replaces the GNU Affero General Public License, and is nearly identical to the GNU General Public License version 3, but requires that those making the software publicly available as part of a "service" must make the service's entire source code available under this license. The SSPL was submitted for certification to the Open Source Initiative but later withdrawn. The language drivers are available under an Apache License. In addition, MongoDB Inc. offers proprietary licenses for MongoDB. The last versions licensed as AGPL version 3 are 4.0.3 (stable) and 4.1.4. MongoDB has been removed from the Debian, Fedora and Red Hat Enterprise Linux distributions due to the licensing change. Fedora determined that SSPL version 1 is not a free software license because it is "intentionally crafted to be aggressively discriminatory" towards commercial users. Bug reports and criticisms Security Due to MongoDB's default security configuration, which allowed anyone full access to the database, data from tens of thousands of MongoDB installations has been stolen.
Furthermore, many MongoDB servers have been held for ransom. In an official response published in September 2017 and updated in January 2018, Davi Ottenheimer, lead of product security at MongoDB, stated that measures had been taken by MongoDB to defend against these risks. From the MongoDB 2.6 release onwards, the binaries from the official MongoDB RPM and DEB packages bind to localhost by default. From MongoDB 3.6, this default behavior was extended to all MongoDB packages across all platforms. As a result, all networked connections to the database will be denied unless explicitly configured by an administrator. Technical criticisms In some failure scenarios in which an application can access two distinct MongoDB processes that cannot access each other, it is possible for MongoDB to return stale reads. In this scenario it is also possible for MongoDB to roll back writes that have been acknowledged. This issue was addressed in version 3.4.0, released in November 2016 (and back-ported to v3.2.12). Before version 2.2, locks were implemented on a per-server-process basis. With version 2.2, locks were implemented at the database level. Version 3.0 introduced pluggable storage engines, and each storage engine may implement locks differently. With MongoDB 3.0, locks are implemented at the collection level for the MMAPv1 storage engine, while the WiredTiger storage engine uses an optimistic concurrency protocol that effectively provides document-level locking. Even with versions prior to 3.0, one approach to increase concurrency is to use sharding. In some situations, reads and writes will yield their locks. If MongoDB predicts a page is unlikely to be in memory, operations will yield their lock while the pages load. The use of lock yielding expanded greatly in 2.2. Up until version 3.3.11, MongoDB could not do collation-based sorting and was limited to byte-wise comparison via memcmp, which would not provide correct ordering for many non-English languages when used with a Unicode encoding. The issue was fixed on August 23, 2016. Prior to MongoDB 4.0, queries against an index were not atomic. Documents that were being updated while the query was running could be missed. The introduction of the snapshot read concern in MongoDB 4.0 eliminated this phenomenon. Although MongoDB claims in an undated article entitled "MongoDB and Jepsen" that its database passed tests by Jepsen, a distributed-systems safety research company, which it called "the industry's toughest data safety, correctness, and consistency tests", Jepsen published an article in May 2020 stating that MongoDB 3.6.4 had in fact failed its tests, and that the newer MongoDB 4.2.6 had more problems, including "retrocausal transactions", in which a transaction's order is reversed so that a read can see the result of a future write. Jepsen noted in its report that MongoDB omitted any mention of these findings on MongoDB's "MongoDB and Jepsen" page. MongoDB Conference MongoDB Inc. hosts an annual developer conference, which has been referred to as either MongoDB World or MongoDB.live.
See also Apache Cassandra BSON, the binary JSON format MongoDB uses for data storage and transfer List of server-side JavaScript implementations MEAN, a solution stack using MongoDB as the database Server-side scripting TokuMX, a fork of MongoDB with stronger consistency and new index structures Amazon DocumentDB, a proprietary database service designed for MongoDB compatibility References Bibliography External links 2009 software Database-related software for Linux Distributed computing architecture Document-oriented databases NoSQL Structured storage
194576
https://en.wikipedia.org/wiki/Centronics
Centronics
Centronics Data Computer Corporation was an American manufacturer of computer printers, now remembered primarily for the parallel interface that bears its name, the Centronics connector. History Foundations Centronics began as a division of Wang Laboratories. Founded and initially operated by Robert Howard (president) and Samuel Lang (vice president and owner of the well-known K & L Color Photo Service Lab in New York City), the group produced remote terminals and systems for the casino industry. Printers were developed to print receipts and transaction reports. Wang spun off the business in 1971 and Centronics was formed as a corporation in Hudson, New Hampshire, with Howard as president and chairman. The Centronics Model 101 was introduced at the National Computer Conference in May 1970. The print head used an innovative seven-wire solenoid impact system. Based on this design, Centronics later developed what is often credited as the first dot matrix impact printer, although OKI's Wiredot had preceded it in 1968. Howard developed a personal relationship with his neighbor, Max Hugel, the founder and president of Brother International, the United States arm of Brother Industries, Ltd., a manufacturer of sewing machines and typewriters. A business relationship developed when Centronics needed reliable manufacturing of the printer mechanisms, a relationship that would help propel Brother into the printer industry. Hugel would later become executive vice president of Centronics. Print heads and electronics were built in Centronics plants in New Hampshire and Ireland, mechanisms were built in Japan by Brother, and the printers were assembled in New Hampshire. In the 1970s, Centronics formed a relationship with Canon to develop non-impact printers. No products were ever produced, but Canon continued to work on laser printers, eventually developing a highly successful series of engines. In 1975, Centronics formed an OEM agreement with Tandy and produced DMP and LP series printers for several years. In 1977, Centronics sued competitor Mannesmann AG in a patent dispute regarding the return spring used in the print actuator. The 6000 series band printers were introduced in 1978. By 1979 company revenues were over $100 million. In 1980, the Mini-Printer Model 770 was introduced, a small, low-cost desktop serial matrix printer. This was the first printer built completely in-house, and there were problems. Flaws in the microprocessor led to a recall and a stoppage of manufacturing for a year. During this period, Epson, Brother and others began to gain market share, and Centronics never recovered. 1980 also saw the introduction of the E Series 900 and 1200 LPM band printers. Change of ownership In 1982, Control Data Corporation merged its printer business unit, CPI, into Centronics and at the same time invested $25 million in the company, effectively taking control from Howard. From 1980 to 1985 the company lost $80 million. Control Data controlled the company until 1986, when CDC's interest was acquired by a group of investors affiliated with Drexel Burnham Lambert. The Drexel interest was acquired by Centronics in 1987. The LineWriter 400 band printer was introduced in 1983, closely followed by the faster LineWriter 800 band printer in 1984. The LineWriter series would continue through 1995. The GLP (Great Little Printer) was a series of low-end serial matrix printers introduced in 1984. The relationship with Brother continued, with several of the PrintStation models being produced from rebadged Brother products.
Exclusive rights to market Trilog color matrix printers were acquired in 1984, and Trilog was purchased outright in 1985. Advanced Terminals (a manufacturer of sheet feeders) and BDS Computer Australia Pty Ltd were purchased in 1986. The PrintStation 350 series serial matrix printer was highly successful in the OEM market, sold with the logos of Data General, ITT Courier, NCR, CDC, Decision Data and ISI. Most profitable was the agreement to build the IBM 4214 based on a modified PS350. In 1985, company revenues were $126 million, with $65 million from IBM 4214 production. In 1986 the IBM 4214 production ended and revenue dropped. On June 23, 1986, Centronics announced a new corporate logo. The new logo never gained recognition before the sale to GENICOM, and GENICOM used the old logo in continued sales of printers and supplies. The only Centronics laser product was released in July 1986: the PagePrinter 8. The PP8 used a Sharp engine identical to that of an existing Sharp copier, with a 6800-based controller jointly developed by Sharp and Centronics. At $2,495, the PP8 was $500 less than the HP LaserJet. A faster version was announced, but never materialized. Printer division sale In 1987 the Centronics printer business was sold to GENICOM for $87 million. Centronics Data Computer Corporation continued as a New York Stock Exchange-listed company and changed its name to Centronics Corporation in 1987. After using the proceeds of the sale to purchase Ekco Housewares in 1988 for $125 million, Centronics changed its name to Ekco Group, Inc. Centronics 101 The Centronics 101 (introduced 1970) was highly innovative and affordable at its inception. Some selected specifications: Print speed: 165 characters per second Weight: 155 pounds (70.3 kg) Size: 27 ½ " W x 11 ¼ " H x 19 ¼ " D (~ 70 cm x 29 cm x 49 cm) Shipping: 200 pounds (ca 91 kg), wooden crate, unpacked by removal of 36 screws Characters: 62 in total: 10 numeric, 26 upper-case and 26 special characters (no lower case) Character size: 10 characters per inch Line spacing: 6 lines per inch Vertical control: punched tape reader for top of form and vertical tab Forms thickness: original plus four copies Interfaces: Centronics parallel, optional RS-232 serial Legacy The connectors developed for its parallel interface live on as the "Centronics connector", used in other computer hardware applications, notably as the printer end of the once ubiquitous parallel-printer cable.
45278781
https://en.wikipedia.org/wiki/LinnSequencer
LinnSequencer
The LinnSequencer is a rack-mount 32-track hardware MIDI sequencer manufactured by Linn Electronics and released in 1985 at a list price of US $1,250. An optional remote control was available. Like the LinnDrum Midistudio, the LinnSequencer ran the same flawed operating system as the ill-fated Linn 9000, released in 1984. As a result, both machines earned a reputation for being notoriously unreliable. In addition, the optional LinnSequencer SMPTE feature could not be deployed due to flawed circuit design. The last LinnSequencer operating system released by Linn Electronics was version 5.17. When Linn went out of business in 1986, Forat Electronics purchased Linn's remaining assets and completely revamped the Linn 9000 and LinnSequencer operating system, fixing the bugs and adding new features to the LinnSequencer. The Forat LinnSequencer was released in 1987 by Forat Electronics at a list price of $1,000 (including all fixes and upgrades). The Forat LinnSequencer was manufactured and sold as a new complete unit, and Forat also offered software and hardware upgrades to existing LinnSequencers. Forat discontinued manufacturing new complete Forat LinnSequencers in 1994; however, as of 2015, Forat still offers the software and hardware upgrades for stock LinnSequencers. Features The LinnSequencer was marketed as a powerful yet simple-to-learn composition and performance tool for the professional musician. Features added by Forat Electronics include: 40,000-note capacity (four times the original) MIDI clock MIDI song pointer Features of the original Linn Electronics LinnSequencer that are retained in the Forat LinnSequencer include: Operation is similar to a multi-track tape recorder with PLAY, STOP, RECORD, FAST FORWARD, REWIND, and LOCATE controls Each of the 100 sequences contains 32 simultaneous, polyphonic tracks. Each track may be assigned to one of 16 MIDI channels. Simultaneously plays up to 16 polyphonic synthesizers Ultra-fast 3.5-inch floppy disk drive stores complex songs in seconds and holds over 110,000 notes per disk One or all tracks may be TRANSPOSED at the touch of a key Exclusive real-time ERASE function makes editing FAST Exclusive REPEAT function automatically repeats any held notes at a pre-selected rhythmic value TIMING CORRECTION works during playback and operates without "chopping" notes Optional remote control Brochure (1985) References External links Official Roger Linn site MIDI controllers MIDI instruments Electronic musical instruments
41235756
https://en.wikipedia.org/wiki/MatterHackers
MatterHackers
MatterHackers is an Orange County-based company founded in 2012 that supplies 3D printing materials and tools. MatterHackers also develops its own 3D printer control software, MatterControl. History MatterHackers was founded in 2012 and provides both an online store and a physical retail presence for customers. MatterHackers was an exhibitor at the World Maker Faire New York 2013. In 2014, MatterHackers was a sponsor of the 2014 3D Printer World EXPO, held in Burbank, California. MatterControl MatterControl is MatterHackers' software for controlling 3D printers. "MatterControl is free software for organizing and managing 3D print jobs, with integrated slicing." MatterHackers has stated that while it may provide additional features as paid plug-ins, MatterControl at its core will remain free. MatterControl has stable builds for Windows, Mac and Linux that receive updates and patches from the development team, and a 2.0 beta is currently available for Windows. The beta includes several upgrades, including design capabilities, 64-bit software slicing and cloud storage for designs. MatterControl Touch, launched in 2015, is a 3D printer controller with onboard slicing, remote monitoring, and automatic print leveling. In 2016 MatterHackers launched an updated, larger version of MatterControl Touch called the MatterControl T10. Services MatterHackers also supplies customers with 3D printing goods. In June 2013, MatterHackers opened its own retail location in Lake Forest, California, where it sells 3D printing supplies, parts, and accessories. The company currently produces its own 3D printers, the Pulse and the Pulse XE (the latter designed specifically for Nylon and NylonX filament, though it can print all filaments). The shop also carries 3D printers made by: Ultimaker LulzBot SeeMeCNC BCN3D MakeIt MakerGear Peopoly Raise3D Robo3D Zortrax Intamsys FlashForge CraftBot Products MatterHackers offers a variety of materials and filaments, and is known for its own PRO Series brand. Materials carried include: PRO Series PLA PRO Series ABS PRO Series PETG NylonX PRO Series Nylon (available in multiple colors) ColorFabb Fillamentum NinjaTek Taulman 3D Kai Parthy's LAY Series Ultimaker Materials 3D Fuel PolyMaker Proto-Pasta 3DXTech Dupont Raise3D Filaments See also 3D Printing References External links MatterControl website Companies based in Orange County, California American companies established in 2012 3D printing
14957440
https://en.wikipedia.org/wiki/TableCurve%203D
TableCurve 3D
TableCurve 3D is a linear and non-linear surface-fitting software package for engineers and scientists that automates the surface-fitting process: in a single processing step it fits and ranks about 36,000 frequently encountered equations out of more than 450 million built-in equations, enabling users to find the model that best describes their 3D data. Once the user has selected the best-fit equation, they can output function and test programming code or generate reports and publication-quality graphs. TableCurve 3D was developed by Ron Brown of AISN Software. TableCurve 3D 1.0 was introduced to the scientific market in September 1993. Version 1.0 was a Windows-based 16-bit product. In February 1995, the 32-bit version 2.0 was released. It was initially distributed by Jandel Scientific Software; in January 2004, Systat Software acquired from SPSS, Inc. the exclusive worldwide rights to distribute SigmaPlot and the other Sigma Series products. SYSTAT Software is now based in San Jose, California. Related links SYSTAT PeakFit TableCurve 2D External links Systat webpage TableCurve 3D support webpage Plotting software
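The kind of automated surface fitting described above can be illustrated, in greatly simplified form, with a generic least-squares fit of a single candidate equation z = f(x, y) to 3D data. The sketch below uses NumPy and SciPy with invented sample data; it is not based on TableCurve 3D's own code, equation set, or ranking method, which the software applies across thousands of candidate equations automatically.

```python
# Simplified illustration: fit one candidate surface equation to noisy 3D data
# and report its goodness of fit. TableCurve 3D automates this over its whole
# built-in equation library and ranks the results.
import numpy as np
from scipy.optimize import curve_fit

def quadratic_surface(xy, a, b, c, d, e, f):
    # Candidate model: z = a + b*x + c*y + d*x*y + e*x^2 + f*y^2
    x, y = xy
    return a + b*x + c*y + d*x*y + e*x**2 + f*y**2

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 200)
y = rng.uniform(-2, 2, 200)
z = 1.5 + 0.8*x - 1.2*y + 0.3*x*y + 0.5*x**2 - 0.4*y**2 + rng.normal(0, 0.05, 200)

params, _ = curve_fit(quadratic_surface, (x, y), z)
residuals = z - quadratic_surface((x, y), *params)
r_squared = 1 - np.sum(residuals**2) / np.sum((z - z.mean())**2)
print(params)     # fitted coefficients
print(r_squared)  # one simple ranking criterion for candidate models
```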
10426636
https://en.wikipedia.org/wiki/National%20Association%20for%20the%20Support%20of%20Long%20Term%20Care
National Association for the Support of Long Term Care
The National Association for the Support of Long Term Care (NASL) is a United States trade association of ancillary providers of products and services to the post-acute care industry. This includes nursing homes, assisted living, home care, inpatient rehabilitation facilities, independent living, adult day care, hospice and long-term care hospitals. NASL represents and advocates for its members on legislative and regulatory issues that affect the quality of care for patients in long-term care settings. NASL is headquartered in Washington, D.C. NASL was formed in 1989 and represents over 100 companies providing medical products, medical services, and information technology to the post-acute care industry. NASL member companies employ over 25,000 therapy professionals and represent approximately 64% of the nation's long-term care health information technology providers. NASL works to influence national healthcare policy, particularly improved Medicare ancillary payments and the assurance that laws and regulations are in place to promote quality patient care. NASL has established a working relationship with the federal government and leads initiatives such as the extension of the therapy cap exceptions process, an alternative to the therapy cap, the competitive bidding process, and the adoption of health information technology. Every NASL member is active on one of four working committees: Medical Products, Medical Services, Information Technology and Diagnostic Testing. NASL committees and leadership participate in and manage national and industry-wide coalitions, host and sponsor national and state education sessions, and support the development of healthcare policy. NASL members receive daily communications regarding federal and state government healthcare decision-making; participate directly in the government healthcare policy-making process; influence issues by speaking directly to elected, appointed, and senior health policy officials; and receive discounts on meeting registration. External links NASL website Health industry trade groups based in the United States
12579300
https://en.wikipedia.org/wiki/Remedy%20Debugger
Remedy Debugger
The Remedy debugger was the first embedded system-level debugger in the world. At a time when even a source-level debugger was a luxury, it offered many features that users now take for granted. Some of these features include: Multiprocessor operation Heterogeneous Distributed Dynamic thread view of the system Synchronized debugging for multiple threads Trace functions Operating system resource displays Source and assembly level debugging It started as an academic research project (originally called Melody, for debugging the Harmony operating system). The results were published in one of the early papers on debugging multiprocessor systems. The current version of the Unison operating system continues to use both gdb and the Remedy debugger. References Debuggers
22812927
https://en.wikipedia.org/wiki/Netviewer
Netviewer
Netviewer AG is a German IT company headquartered in Karlsruhe. The company's software products provide web conferencing, desktop sharing, and remote maintenance capabilities. According to market research companies Frost & Sullivan and Gartner, Netviewer is among the leading European providers of web conferencing software. As of 2009, the firm employed more than 200 people at nine locations in Europe and served more than 15,000 corporate customers in 55 countries. History Netviewer was founded in 2001. In 2002 the firm won a competition for startup companies sponsored by German savings banks and McKinsey Consulting in Germany. In 2006 and 2007 it ranked in the top 10 of the Deloitte Technology Fast 50 in Germany. Since 2011 Netviewer has been part of Citrix Online, a division of Citrix Systems, Inc. Products The core functionality of Netviewer products is desktop sharing, which establishes a direct connection between two or more computers over the Internet and permits the exchange of screen contents. The firm's products include the web conferencing solution Netviewer Meet, the helpdesk and IT support software Netviewer Support, the remote access tool Netviewer Admin and the web-based webinar and webcast solution Netviewer Present. Netviewer Meet was available as freeware for private users and under software-as-a-service or on-premises licensing models for commercial use. The other products are available exclusively as software-as-a-service or on-premises licenses. As of 2014, the firm offered two different products, GoToMeeting for web conferencing and GoToAssist for remote access. No free versions are available, only trial versions. Security Netviewer products use 256-bit AES encryption and key-phrase-based authentication. Fraunhofer has certified the security of the software in an expert opinion. (The SIT opinion is dated April 2004 and covers Netviewer 2.0 (build 521); there is no opinion on newer versions.) See also Web conferencing Comparison of web conferencing software Collaborative software References External links Netviewer.co.uk English homepage Netviewer.de German homepage TechCrunch on investments into Netviewer Companies based in Baden-Württemberg Software companies of Germany Companies established in 2001 Internet Protocol based network software Remote desktop Virtual Network Computing Web conferencing
30291999
https://en.wikipedia.org/wiki/Vengeance%20of%20Orion
Vengeance of Orion
Vengeance of Orion is a 1988 science fiction novel by American writer Ben Bova. It is the sequel to Orion and follows Orion's adventures in the time of the Greek heroes Achilles and Odysseus during the siege of Troy. The story takes up many plot elements of Homer's "Iliad" but also includes elements not appearing in Homer, such as the presence of the Hittite empire to the east of Troy. Plot summary Orion comes into "being" as a rower on board a Greek ship headed to the city of Troy and makes friends with a talkative old man called Poletes. The Golden One, the "creator" from the previous novel in the series, soon appears to him, revealing himself as the Greek god Apollo and declaring that he intends for the Trojans to be victorious in that era so as to create a Euro-Asian empire. Little by little Orion remembers that he was traveling with the woman he loved on a starship that exploded in flight. The Golden One states that Orion was punished for defying him, and that the former goddess who had chosen to become mortal so that she and Orion could share their love was now dead as a result of that explosion. As part of that ongoing punishment, he has resurrected Orion to serve him yet again, intending that Orion suffer the pain of the loss of his love. Orion, angered at the petty arrogance of the murderous Golden One, decides to thwart whatever plans Apollo might have for the era and ends up saving the Greek camp from being overrun by the Trojans in a counterattack. His courageous acts earn the attention of Odysseus, who then adopts him as a member of his household. As a favor, Orion requests that Poletes become his servant, thereby elevating the man's station. Odysseus' first duty for Orion is to accompany him with Ajax and Nestor to Achilles' tent to persuade Achilles to return to the fray. Achilles had earlier withdrawn from the battle because Agamemnon had taken Briseis, whom Achilles had captured. Achilles, however, insists that he will re-enter the battle only when High King Agamemnon apologizes to him. Seeing this is an impossibility, Odysseus and his team withdraw, and he then asks Orion to go as a herald to the Trojans with an offer of peace. Orion, sensing this will enable the Trojans to win, changes the offer back to the insulting demands brought by earlier heralds. This offer is of course rejected, but while in the city Orion notices a weak point in the walls of Troy: the western section, which had been built by men. The Trojans then send Orion back to the Greek camp with their refusal and a warning that the Hatti are coming to their defense. This news upsets the Greeks, who had earlier been assured by the Hatti of their intention not to interfere in the Trojan War. Orion is then sent to the Hatti with a copy of the agreement, only to discover that their mighty empire has disintegrated and the soldiers are little more than bandits. He then encounters a band of Hatti soldiers led by Lukka, who agree to follow him back to his camp. Upon his return he discovers that Achilles' partner Patrocles is dead, which has prompted Achilles to enter the battle again. Achilles faces off with Hector and defeats him, cutting off his head. However, Achilles is wounded by a stray arrow piercing his heel and later commits suicide. Lukka and his group then help build a ramp over the western section of the wall, and by this means the Greeks enter Troy and raze the city to the ground, thus defeating Apollo's plans.
While the spoils are being shared, the High King takes half of all the goods, which leads to dissatisfaction. Poletes mocks Agamemnon for this, and Agamemnon decides to have him blinded. This upsets Orion, and he decides to leave, taking with him Helen, who has decided against going back to Sparta because of the barbaric way of life there. She suggests going to Egypt, where she believes she will be treated civilly. On the way, Orion meets Apollo, who instructs him to assist the Israelites in toppling the walls of Jericho before arriving in Egypt. On arrival, he is involved in palace intrigues again, as the Pharaoh's chief priest, Nekoptah, has been poisoning the Pharaoh and usurping his power. Menelaus has also arrived in Egypt, having been told by Nekoptah that his wife is there. Nekoptah tries to arrange for Orion to be killed, along with Nekoptah's own twin brother, Hetepamon. Orion, however, is able to turn the tables on Nekoptah with the help of the crown prince and Hetepamon. He also assists the other Creators in capturing Apollo, who has gone mad and has been trying to kill the others so that he alone will be worshiped as a god. Nekoptah tries one last time to set Orion against Menelaus, but he loses and is killed after kidnapping Helen. However, Orion is stabbed through the heart with a spear by Nekoptah before Nekoptah dies. Orion later wakes up in the Neolithic and sees Anya, who awakens after he kisses her. The novel ends with an epilogue that forms the beginning of the succeeding book. References Reader Jolted By Lightning' Bolt Of Suspense, Terror. Pittsburgh Press, March 5, 1988 1988 American novels 1988 science fiction novels American science fiction novels Novels by Ben Bova Tor Books books Novels set during the Trojan War Agamemnon
43359843
https://en.wikipedia.org/wiki/UniKey
UniKey
UniKey Technologies is an alternative access control company based in the United States that designs and licenses keyless entry technology worldwide. Its first product, in partnership with Kwikset, is Kēvo, a Bluetooth-enabled deadbolt door lock. History UniKey Technologies was founded in 2010 in Florida by Phil Dumas, who serves as president and chief executive officer. He has an electrical engineering degree from the University of Central Florida and a background in biometric security. Dumas was part of the team that launched SmartScan, the first mass-market biometric residential deadbolt lock. He started UniKey in an effort to create a more dependable and convenient way to access everything. UniKey Technologies first came to the public's attention in May 2012, when Dumas appeared on ABC's reality series Shark Tank. UniKey received investment offers from all five investors on the show, including an offer of $500,000 for a 40% equity stake from Mark Cuban and Kevin O'Leary. Dumas accepted the offer, but they ultimately were unable to come to terms after the show. By 2013, UniKey had raised approximately $2.6 million in funding, led by ff Venture Capital. That year, UniKey announced a licensing deal with Kwikset, the largest residential lock manufacturer in the United States and one of the largest in the world, to manufacture and distribute Kēvo, a residential smart lock using UniKey's technology. Products Launched in 2013, Kēvo is the first Bluetooth-enabled touch-to-open smart lock. Kēvo can detect a user's compatible smartphone or tablet via an app in order to lock and unlock the door. The deadbolt senses when the user's phone is nearby and whether it is inside or outside; the phone emits a low-energy Bluetooth signal that allows the door to be unlocked when the lock face is touched, so the user does not need to interact with the phone in order to open the door. Users can grant unrestricted or temporary access to other phones as well. Kēvo also comes with a keychain fob that provides the same touch-to-open function as an authorized smart phone. A prime security feature of the product is UniKey's Inside/Outside Intelligence, which detects whether a verified device is currently inside or outside the home. If an authorized device is known to be inside the house, unauthorized users are unable to activate Kēvo from outside. The system acts as a one-way filter that lets only authorized users pass through the entryway. Partnerships In June 2014, UniKey and MIWA Lock Company announced a partnership to offer keyless entry to hotels. UniKey has developed a touch-to-open passive keyless entry system to be integrated into MIWA's existing radio-frequency identification hospitality locks. The keys are activated through smartphone apps. When guests check in through the app, they are sent their room number and the phone is enabled to act as a virtual key. Following the MIWA partnership, UniKey is making its way into the commercial access control industry through partnerships with Grosvenor Technologies and Nortek Security and Control. The company has also expanded its international reach with a partnership with UK home security company ERA. Milestones References External links American companies established in 2010 Access control Technology companies of the United States 2010 establishments in Florida
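The touch-to-open flow and the Inside/Outside Intelligence check described above can be sketched as a simple decision rule. The following is purely a conceptual illustration and is not UniKey's actual implementation, protocol, or API; the device fields and the signal-strength threshold are invented for the example.

```python
# Conceptual sketch of a touch-to-open decision with an inside/outside check.
from dataclasses import dataclass

@dataclass
class Device:
    authorized: bool        # has this phone or fob been granted access?
    inside_home: bool       # estimated by the lock's inside/outside logic
    signal_strength: float  # received Bluetooth LE signal strength, in dBm

def should_unlock_on_touch(nearby_devices):
    for device in nearby_devices:
        if not device.authorized:
            continue
        # One-way filter: a phone that is inside the home must not let
        # someone outside unlock the door by touching the lock face.
        if device.inside_home:
            continue
        if device.signal_strength > -70:  # illustrative "close to the door" threshold
            return True
    return False

print(should_unlock_on_touch([Device(True, False, -55)]))  # True: owner is outside, near the door
print(should_unlock_on_touch([Device(True, True, -50)]))   # False: authorized phone is inside
```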
35892471
https://en.wikipedia.org/wiki/Arxan%20Technologies
Arxan Technologies
Digital.ai (formerly known as Arxan Technologies) is an American technology company specializing in anti-tamper and digital rights management (DRM) for Internet of Things (IoT), mobile, and other applications. Arxan's security products are used to prevent tampering or reverse engineering of software, thus preventing access to or modification of that software in ways its developer deems undesirable. The company reports that applications secured by it are running on over 500 million devices. Its products are used across a range of industries, including mobile payments & banking, automotive, healthcare and gaming. History Arxan is privately held and private equity-backed. In the fall of 2013, TA Associates, a private equity firm, completed a majority investment in Arxan Technologies. Previously, the company received Series B funding in 2003, followed by $13 million in Series C funding in 2007 and Series D funding of $4 million in 2009. Early investors included Trident Capital, EDF Ventures, Legend Ventures, Paladin Capital, Dunrath Capital, TDF Fund and Solstice Capital. Arxan was founded in 2001 by Eric Davis and Purdue University researchers Mikhail Atallah, Tim Korb, John Rice and Hoi Chang. The first funding came from Richard Early and Dunrath Capital. Early subsequently became Arxan's first CEO. The company's early intellectual property was licensed from Purdue University. The company's initial focus was on defense anti-tamper applications. Following the sale of its defense technology unit, Arxan Defense Systems, to Microsemi in 2010, Arxan focused on commercial applications. In April 2020, Arxan Technologies joined CollabNet VersionOne and XebiaLabs to form Digital.ai, a software company with the stated aim of 'pulling software development, business agility and application security into a single platform'. Products Arxan offers a number of anti-tamper software products for application and cryptographic key protection. These include: Arxan Code Protection to secure Mobile, IoT & Embedded, Desktop and Server applications Arxan Cryptographic Key & Data Protection to secure secret keys and data with white-box cryptography, which provides the major cryptographic algorithms and features required to protect sensitive keys and data in hostile or untrusted operational environments. Arxan Cryptographic Key & Data Protection is FIPS 140-2 validated. In May 2012, the company announced comprehensive support for Android application protection and hardening against tampering and piracy. In June 2014, Arxan announced that its mobile application protection offerings would be sold by IBM as part of IBM's portfolio of security products. Arxan's products are based on patented security techniques for code hardening, tamper-proofing, key security and node locking. The core technology consists of a multi-layered, interconnected network of Guards that each perform a specific security function and are embedded into application binaries to make programs tamper-aware, tamper-resistant, and self-healing. The company claims a three-layer protection paradigm of defend, detect and react as a differentiating approach. By detecting when an attack is being attempted and responding to detected attacks with alerts and repairs, this protection helps secure software against hacking attacks and threats such as: static reverse engineering or code analysis dynamic reverse engineering or debugging tampering to disable or circumvent security mechanisms (authentication, encryption, anti-virus, security policies, etc.)
tampering to modify program functionality
tampering for piracy or unauthorized use
insertion of malware into an application
counterfeiting and IP theft
stealing of cryptographic keys

IoT anti-tamper
Arxan's IoT products insert the anti-tamper protection into the firmware of the device itself, causing parts of the code to continually check each other for integrity. If any tamper attempt is detected, Arxan's product can attempt to restore the code to its original form, stop the firmware from running entirely, send a notification to the developer, or any combination of the three.

DRM
Its DRM solutions have been compared to those of its competitor Denuvo, with both providing a layer of anti-tamper security on top of existing copy protection mechanisms added by the developer. This results in a multi-layered approach in which the original DRM software protects the software from unauthorized copying, modification or use, while Arxan prevents any attempt to remove or alter that protection. However, much like Denuvo's application of it, this approach has also been criticised for increasing the use of system resources. Arxan has previously expressed strong confidence that its DRM solutions would not be cracked, but cracks or bypasses for Arxan products have been shown to exist; in one example, Zoo Tycoon Ultimate Animal Collection was successfully cracked in 2018 while using a five-layer approach featuring UWP, XbLA, MSStore, EAppX and Arxan protection simultaneously. Several more bypasses of Arxan's protection have since emerged in 2018 and 2019, with Arxan-protected Gears 5 being cracked by a scene group less than two weeks after its original release.

Media and awards
Deloitte 2014 Top 500 Fastest Growing Technology Company
CIOReview Magazine 2014 Top 50 Most Promising IoT Companies
2015 Mobile Innovations Award Winner for Best Management of Mobile Security Issues
Info Security Products Guide 2014 Winner for Best New Product: Mobile Application Integrity Protection™ Suite v 5.0

See also
Tamper resistance
Application Security
Encryption
Content Protection
Digital rights management
Cryptographic Key Types
Obfuscated Code
Cryptography

References

Companies based in San Francisco
Cryptography companies
Computer security software companies
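The guard-based self-checking described in the Products and IoT anti-tamper sections above can be illustrated with a small sketch. The code below is not Arxan's implementation; it is a minimal, hypothetical example of how an embedded integrity guard might hash a protected region of code and react when the checksum no longer matches a known-good value.

```python
import hashlib

# Hypothetical illustration of a self-checking integrity "guard".
# A real anti-tamper product hides such checks inside the binary itself;
# here the "code regions" are simply byte strings held in memory.

KNOWN_GOOD = {}  # region name -> expected SHA-256 digest, recorded at build time


def record_baseline(regions):
    """Store the expected digest of every protected region."""
    for name, data in regions.items():
        KNOWN_GOOD[name] = hashlib.sha256(data).hexdigest()


def guard_check(name, data, on_tamper):
    """Re-hash a region and invoke a reaction callback if it was modified."""
    digest = hashlib.sha256(data).hexdigest()
    if digest != KNOWN_GOOD[name]:
        on_tamper(name)
        return False
    return True


def react(name):
    # Possible reactions mirror those described above: repair, halt, or alert.
    print(f"tamper detected in region '{name}': alerting and refusing to run")


if __name__ == "__main__":
    regions = {"license_check": b"\x55\x48\x89\xe5 original bytes"}
    record_baseline(regions)
    guard_check("license_check", regions["license_check"], react)   # passes
    regions["license_check"] = b"\x90\x90 patched bytes"             # simulated patch
    guard_check("license_check", regions["license_check"], react)   # triggers react()
```

In a product of this kind many such guards would cross-check one another, so that disabling one check is itself detected by another.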
35487183
https://en.wikipedia.org/wiki/Adam%20C.%20Siepel
Adam C. Siepel
Adam C. Siepel (born 1972) is an American computational biologist known for his research in comparative genomics and population genetics, particularly the development of statistical methods and software tools for identifying evolutionarily conserved sequences. Siepel is currently Chair of the Simons Center for Quantitative Biology and Professor in the Watson School for Biological Sciences at Cold Spring Harbor Laboratory. Education and career Siepel completed a B.S. in Agricultural and Biological Engineering at Cornell University in 1994, then worked at Los Alamos National Laboratory until 1996. From 1996 to 2001, he worked as a software developer at the National Center for Genome Resources in Santa Fe, while completing an M.S. in Computer Science at the University of New Mexico. He obtained a Ph.D. in Computer Science from the University of California, Santa Cruz in 2005. He was on the faculty of Cornell University from 2006 to 2014 and moved to Cold Spring Harbor Laboratory in 2014. Research Siepel has worked on various problems at the intersection of computer science, statistics, evolutionary biology, and genomics. At Los Alamos National Laboratory, he developed phylogenetic methods for detecting recombinant strains of HIV, and at the National Center for Genome Resources, he led the development of ISYS, a technology for integrating heterogeneous bioinformatics databases, analysis tools, and visualization programs. Siepel also did theoretical work on algorithms for phylogeny reconstruction based on genome rearrangements, working with Bernard Moret at the University of New Mexico. When Siepel left software development to join David Haussler's laboratory at the University of California, Santa Cruz, he turned to computational problems in comparative genomics. In Haussler's group, he developed several analysis methods based on phylogenetic hidden Markov models, including a widely used program called phastCons for identifying evolutionarily conserved sequences in genomic sequences. At Cornell, Siepel's research group continued to work on the identification and characterization of conserved non-coding sequences. They also studied fast-evolving sequences in both coding and noncoding regions, including human accelerated regions. In recent years, the Siepel laboratory has increasingly focused on human population genetics, developing methods for estimating the times in early human history when major population groups first diverged, for measuring the influence of natural selection on transcription factor binding sites, and for estimating probabilities that mutations across the human genome will have fitness consequences. The group also has an active research program in transcriptional regulation, carried out in close collaboration with John T. Lis's laboratory. A common theme in Siepel's research is the development of precise mathematical models for the complex processes by which genomes evolve over time. His research group uses these models, together with techniques from computer science and statistics, both to peer into the past, and to address questions of practical importance for human health. Awards and honours Siepel was a recipient of a Guggenheim Fellowship in 2012. He was also awarded a David and Lucile Packard Fellowship for Science and Engineering in 2007, a Microsoft Research Faculty Fellowship in 2007, and a Sloan Research Fellowship in 2009. 
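The conserved-element detection described in the Research section above is based on phylogenetic hidden Markov models, as implemented in phastCons. The sketch below is a deliberately simplified, hypothetical two-state HMM (conserved versus neutral) decoded with the Viterbi algorithm over alignment columns; it is not the phylogenetic model used by the real tool, and the emission and transition probabilities are invented for illustration.

```python
import math

# Toy two-state HMM (conserved vs. neutral), loosely in the spirit of phastCons.
# Emissions here are just "is the alignment column identical across species";
# the real method evaluates each column under a phylogenetic tree model.

STATES = ("conserved", "neutral")
TRANS = {"conserved": {"conserved": 0.95, "neutral": 0.05},
         "neutral": {"conserved": 0.01, "neutral": 0.99}}
EMIT = {"conserved": 0.90, "neutral": 0.40}     # P(column identical | state)
START = {"conserved": 0.05, "neutral": 0.95}


def viterbi(columns):
    """Return the most likely state path for a list of alignment columns."""
    obs = [len(set(col)) == 1 for col in columns]          # identical column?
    emit = lambda s, o: EMIT[s] if o else 1.0 - EMIT[s]
    V = [{s: math.log(START[s]) + math.log(emit(s, obs[0])) for s in STATES}]
    back = []
    for o in obs[1:]:
        row, ptr = {}, {}
        for s in STATES:
            best = max(STATES, key=lambda p: V[-1][p] + math.log(TRANS[p][s]))
            row[s] = V[-1][best] + math.log(TRANS[best][s]) + math.log(emit(s, o))
            ptr[s] = best
        V.append(row)
        back.append(ptr)
    path = [max(STATES, key=lambda s: V[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))


if __name__ == "__main__":
    alignment_columns = ["AAA", "AAA", "ACA", "TTT", "TGC", "ACG", "GGG", "GGG"]
    print(viterbi(alignment_columns))
```

Runs of columns labelled "conserved" by such a decoder correspond to the candidate conserved elements that methods of this family report.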
References American bioinformaticians American geneticists Cornell University alumni University of New Mexico alumni University of California, Santa Cruz alumni Living people 1972 births
53345922
https://en.wikipedia.org/wiki/DNADynamo
DNADynamo
DNADynamo is a commercial DNA sequence analysis software package produced by Blue Tractor Software Ltd that runs on Microsoft Windows, Mac OS X and Linux. It is used by molecular biologists to analyze DNA and protein sequences. A free demo is available from the software developer's website.

Features
DNADynamo is a general-purpose DNA and protein sequence analysis package that can carry out most of the functions required by a standard research molecular biology laboratory:
DNA and protein sequence viewing, editing and annotating
Contig assembly and chromatogram editing, including comparison to a reference sequence to identify mutations
Global sequence alignment with ClustalW and MUSCLE, with select-and-drag alignment editing for hand-made DNA vs protein alignments
Restriction site analysis, for viewing restriction cut sites in tables and on linear and circular maps
A subcloning tool for the assembly of constructs using restriction sites or Gibson assembly, with agarose gel simulation
Online database searching of public databases at the NCBI such as GenBank and UniProt, and online BLAST searches
Protein analysis, including estimation of molecular weight, extinction coefficient and pI
PCR primer design, including an interface to Primer3
3D structure viewing via an interface to Jmol

History
DNADynamo has been developed since 2004 by Blue Tractor Software Ltd, a software development company based in North Wales, UK.

References

External links
DNADynamo homepage

Bioinformatics software
Computational science
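The restriction site analysis listed among the features above amounts to scanning a sequence for the recognition sequences of chosen enzymes. The sketch below illustrates that kind of scan in a few lines; it is not DNADynamo code, and only a small, illustrative set of common enzymes and their well-known recognition sequences is included.

```python
# Illustrative restriction-site scan of the kind a package like DNADynamo
# performs. Recognition sequences shown are the standard ones for each enzyme.

ENZYMES = {
    "EcoRI": "GAATTC",
    "BamHI": "GGATCC",
    "HindIII": "AAGCTT",
}


def find_sites(sequence, enzymes=ENZYMES):
    """Return {enzyme: [0-based positions of recognition sites]} for a DNA string."""
    seq = sequence.upper()
    hits = {}
    for name, site in enzymes.items():
        positions, start = [], 0
        while True:
            idx = seq.find(site, start)
            if idx == -1:
                break
            positions.append(idx)
            start = idx + 1          # allow overlapping matches
        hits[name] = positions
    return hits


if __name__ == "__main__":
    plasmid = "TTGAATTCAAGGATCCTTAAGCTTGGAATTC"
    for enzyme, positions in find_sites(plasmid).items():
        print(enzyme, positions)
```

A full implementation would also handle the reverse strand, ambiguous bases and circular sequences, which is where a dedicated package earns its keep.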
64733514
https://en.wikipedia.org/wiki/Michel%20Bercovier
Michel Bercovier
Michel Bercovier (Hebrew: מישל ברקוביאר; born 10 September 1941) is a French-Israeli professor (emeritus) of Scientific Computing and Computer Aided Design (CAD) in The Rachel and Selim Benin School of Computer Science and Engineering at the Hebrew University of Jerusalem. Bercovier is also the head of the School of Computer Science at the Hadassah Academic College, Jerusalem.

Early life and education
Michel Bercovier was born in Lyon, France. He received his B.Sc. in Mathematics from Paris University in 1964. From 1964 to 1965 he was vice president of the Union of French Jewish Students and co-principal editor of its magazine Kadima. During the years 1965–67 he served in the French Army. He earned his D. ès Sc. in 1976 at the Faculté des Sciences de Rouen, with the thesis Régularisation duale des problèmes variationnels mixtes (Dual regularization of mixed variational problems), written under the supervision of Jacques-Louis Lions. He belongs to the second generation of Lions' students.

Career
Bercovier was an assistant professor at the University of Rouen (1969–1972), where he created the Computation Center. He emigrated to Israel in 1973 and was director of applications and services at the Hebrew University of Jerusalem Computer Center (1973–1976). He joined the School of Applied Sciences of the Hebrew University of Jerusalem as a lecturer in 1977, became an associate professor in 1983, and moved to its Institute of Computer Science in 1986. From 1997 until 2006 he held the Bertold Badler Chair of Computer Science as a full professor. In 1996–1998 Bercovier set up the Computer Science department of a new university at Paris-La Défense (Pôle universitaire Léonard de Vinci). From 1999 to 2007 he was in charge of the W3C office in Israel, and was thus very active in the development of the Internet. He retired from the Hebrew University of Jerusalem in 2007 as an emeritus professor. Since 2010 he has also been a professor and head of the School of Computer Science at the Hadassah College, Jerusalem. Bercovier has advised more than 30 M.Sc. and 16 Ph.D. students.

Research
Bercovier's research focuses on computer-aided design and scientific computation. He developed new finite element methods for fluid flows and incompressible materials, based on penalty and reduced integration methods, that are widely implemented. Together with Olivier Pironneau he proved the optimality of the Hood–Taylor finite element for incompressible fluids. He has contributed to the integration of computer-aided design (CAD) and analysis, developed new methods in surface design, and integrated optimal control methods into CAD and cloth simulation for animation. He has been involved in multidisciplinary research, teaming with surgeons, biologists and pharmacologists (artificial heart valve modeling, Ca2+ discharge in axons, keratotomical surgery, drug release models). Bercovier carries out joint research with INRIA, Pierre and Marie Curie University, EPFL, IMATI-Pavia, the Institute of Applied Geometry (Johannes Kepler University Linz) and MIT, among others. He is currently involved in several aspects of isogeometric analysis, such as smooth surfaces on arbitrary meshes and domain decomposition methods on arbitrarily overlapping domains. On the former subject, his book with Tanya Matskevich is at the origin of numerous research efforts on smooth surfaces over arbitrary quadrilateral meshes.
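The penalty approach for incompressible flow mentioned above can be stated compactly. The display below is a standard textbook form of the penalized Stokes problem, given here only to illustrate the idea; it is not reproduced from Bercovier's papers.

```latex
\text{Stokes problem: } -\nu\,\Delta u + \nabla p = f, \qquad \nabla\!\cdot u = 0 .

\text{Penalty method: set } p_\varepsilon = -\tfrac{1}{\varepsilon}\,\nabla\!\cdot u_\varepsilon
\text{ and solve } -\nu\,\Delta u_\varepsilon - \tfrac{1}{\varepsilon}\,\nabla(\nabla\!\cdot u_\varepsilon) = f .
```

As the penalty parameter \(\varepsilon \to 0\) the velocity \(u_\varepsilon\) approaches the incompressible solution while the pressure is recovered from the divergence; in finite element practice the penalty term is evaluated with reduced integration to avoid locking, which is the combination studied in the work cited here.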
Professional experience
Parallel to his academic work, Bercovier has been involved in industrial research: he was chief consultant for Kleber and Michelin (1972–1985), Hutchinson (1990–2017), Pechiney (1992–2017) and L'Oréal (1996–2013). He also contributed to the creation of several high-tech companies: FDI (now part of Ansys, a US company) was based on his research, as was Bercom, a leading Israeli CAD/CAE firm. Bercovier was also the chairman of Aleph Yissum (now Ex Libris) in the years 1986–1996, and was active in turning the small start-up into the leader in computer systems for libraries. He contributed to the creation of the R&D team of Visiowave (Lausanne), which was acquired by General Electric. He is on the editorial board of several scientific journals. He is a member of SIAM, the European Mathematical Society and ACM, and is on the boards of SIA (the Automotive Engineering Society in France) and ECCOMAS. He was a visiting fellow for long periods at IBM, Digital Equipment Corporation and Matra. He was a member of the scientific council of AMIES (Agency for Interaction in Mathematics with Business and Society), a founding member of the Israel Association for Computational Methods in Mechanics, and co-founder and chairman of the World User Association in CFD (computational fluid dynamics).

Honors and awards
Chevalier des Palmes Académiques (1986)
Conseiller du Commerce Extérieur (fr) (1993, renewed 1995)

Publications
Bercovier is the author of over 80 papers and 3 books.

Books
Topics in computer aided geometric design. Barnhill R.F., Bercovier M., Boehm W., Capasso V., eds. Symposium on topics in computer aided geometric design held in Erice 1990, RAIRO MMAN 26, Duno, Paris, 1992.
Domain Decomposition Methods in Science and Engineering 18. Editor, with M. J. Gander, Kornhubler and O. Widlund, Springer, 2008.
(with Tanya Matskevich) Smooth Bézier Surfaces over Arbitrary Quadrilateral Meshes. Lecture Notes of the UMI, 22, Springer, 2017.

Selected articles
Isogeometric analysis with geometrically continuous functions on multi-patch geometries. Kapl, Mario and Buchegger, Florian and Bercovier, Michel and Jüttler, Bert. Computer Methods in Applied Mechanics and Engineering (Volume 316, Pages 209–234), April 2017
Overlapping non-matching meshes domain decomposition method in isogeometric analysis. Bercovier, Michel and Soloveichik, Ilya. February 2015
Efficient simulation of inextensible cloth. Goldenthal, Rony and Harmon, David and Fattal, Raanan and Bercovier, Michel and Grinspun, Eitan. ACM Transactions on Graphics (TOG) (Pages 49), 2007
Curve and surface fitting and design by optimal control methods. Alhanaty, Michal and Bercovier, Michel. Computer-Aided Design (Volume 33, Pages 167–182), 2001
Virtual topology operators for meshing. Sheffer, Alla and Bercovier, Michel and Blacker, Ted and Clements, Jan. International Journal of Computational Geometry & Applications (Volume 10, Pages 309–331), 2000
"Discrete" G1 assembly of patches over irregular meshes. Matskevich, T and Volpin, O and Bercovier, M. Proceedings of the international conference on Mathematical methods for curves and surfaces II, Lillehammer, 1997 (Pages 351–358), 1998
The development of a mechanical model for a tyre: a 15 years story. Bercovier, Michel and Jankovich, Etienne and Durand, Michel. Proceedings of the Second European Symposium on Mathematics in Industry: ESMI II, March 1–7, 1987, Oberwolfach (Pages 269), 1988
Computer simulation of lamellar keratectomy and laser myopic keratomileusis.
Hanna, Khalil D and Jouve, Francois and Bercovier, Michel H and Waring, George O. Journal of Refractive Surgery (Volume 4, Pages 222–231), 1988
Finite elements and characteristics for some parabolic-hyperbolic problems. Bercovier, Michel and Pironneau, Olivier and Sastri, Vedala. Applied Mathematical Modelling (Volume 7, Pages 89–96), 1983
The vortex method with finite elements. Bardos, Claude and Bercovier, Michel and Pironneau, Olivier. Mathematics of Computation (Volume 36, Pages 119–136), 1981
A finite-element method for incompressible non-Newtonian flows. Bercovier, Michel and Engelman, Michael. Journal of Computational Physics (Volume 36, Pages 313–326), 1980
Error estimates for finite element method solution of the Stokes problem in the primitive variables. Bercovier, Michel and Pironneau, Olivier. Numerische Mathematik (Volume 33, Pages 211–224), 1979
A finite element for the numerical solution of viscous incompressible flows. Bercovier, Michel and Engelman, Michael. Journal of Computational Physics (Volume 30, Pages 181–201), 1979
Perturbation of mixed variational problems. Application to mixed finite element methods. Bercovier, Michel. RAIRO. Analyse numérique (Volume 12, Pages 211–236), 1978

Personal life
Bercovier is divorced, has three sons and lives in Jerusalem. His brother, Herve Bercovier, is a professor (emeritus) in the Faculty of Medicine at the Hebrew University of Jerusalem. Michel Bercovier is a co-founder and Honorary President of the Association du Festival Lyrique de Montperreux (fr).

References

External links
Michel Bercovier, Hebrew University of Jerusalem
Michel Bercovier, Hadassah Academic College
Michel Bercovier, Mathematics Genealogy Project
My personal story of Aleph-Yissum (Ex Libris), Hebrew University of Jerusalem

1941 births
Living people
Hebrew University of Jerusalem faculty
Israeli computer scientists
Applied mathematicians
People associated with the finite element method
Numerical analysts
Computer graphics researchers
University of Paris alumni
University of Rouen alumni
French emigrants to Israel
55951
https://en.wikipedia.org/wiki/Instant%20messaging
Instant messaging
Instant messaging (IM) technology is a type of online chat allowing real-time text transmission over the Internet or another computer network. Messages are typically transmitted between two or more parties, when each user inputs text and triggers a transmission to the recipient(s), who are all connected on a common network. It differs from email in that conversations over instant messaging happen in real-time (hence "instant"). Most modern IM applications (sometimes called "social messengers", "messaging apps" or "chat apps") use push technology and also add other features such as emojis (or graphical smileys), file transfer, chatbots, Voice over IP, or video chat capabilities. Instant messaging systems tend to facilitate connections between specified known users (often using a contact list also known as a "buddy list" or "friend list"), and can be standalone applications or integrated into e.g. a wider social media platform, or a website where it can for instance be used for conversational commerce. IM can also consist of conversations in "chat rooms". Depending on the IM protocol, the technical architecture can be peer-to-peer (direct point-to-point transmission) or client–server (an IM service center retransmits messages from the sender to the communication device). It is usually distinguished from text messaging which is typically simpler and normally uses cellular phone networks. Instant messaging was pioneered in the early Internet era; the IRC protocol was the earliest to achieve wide adoption. Later in the 1990s, ICQ was among the first closed and commercialized instant messengers, and several rival services appeared afterwards as it became a popular use of the Internet. Beginning with its first introduction in 2005, BlackBerry Messenger, which initially had been available only on BlackBerry smartphones, soon became one of the most popular mobile instant messaging apps worldwide. BBM was for instance the most used mobile messaging app in the United Kingdom and Indonesia. Instant messaging remains very popular today; IM apps are the most widely used smartphone apps: in 2018 there were over 1.3 billion monthly users of WhatsApp and Facebook Messenger, and 980 million monthly active users of WeChat. Overview Instant messaging is a set of communication technologies used for text-based communication between two (private messaging) or more (chat room) participants over the Internet or other types of networks (see also LAN messenger). IM–chat happens in real-time. Of importance is that online chat and instant messaging differ from other technologies such as email due to the perceived quasi-synchrony of the communications by the users. Some systems permit messages to be sent to users not then 'logged on' (offline messages), thus removing some differences between IM and email (often done by sending the message to the associated email account). IM allows effective and efficient communication, allowing immediate receipt of acknowledgment or reply. However IM is basically not necessarily supported by transaction control. In many cases, instant messaging includes added features which can make it even more popular. For example, users may see each other via webcams, or talk directly for free over the Internet using a microphone and headphones or loudspeakers. Many applications allow file transfers, although they are usually limited in the permissible file-size. It is usually possible to save a text conversation for later reference. 
Instant messages are often logged in a local message history, making them similar to the persistent nature of emails. Major IM services are controlled by their corresponding companies. They usually follow the client–server model, in which all clients have to first connect to the central server. This requires users to trust this server, because messages can generally be accessed by the company. Companies can be compelled to reveal their users' communication. Companies can also suspend user accounts for any reason.

Non-IM types of chat include multicast transmission, usually referred to as "chat rooms", where participants might be anonymous or might be previously known to each other (for example collaborators on a project that is using chat to facilitate communication).

An Instant Message Service Center (IMSC) is a network element in the mobile telephone network which delivers instant messages. When a user sends an IM message to another user, the phone sends the message to the IMSC. The IMSC stores the message and delivers it to the destination user when they are available. The IMSC usually has a configurable time limit for how long it will store the message. A few of the companies that make many of the IMSCs in use in the GSM world are Miyowa, Followap and OZ. Other players include Acision, Colibria, Ericsson, Nokia, Comverse Technology, Now Wireless, Jinny Software, Feelingk and a few others.

The term "Instant Messenger" is a service mark of Time Warner and may not be used in software not affiliated with AOL in the United States. For this reason, in April 2007, the instant messaging client formerly named Gaim (or gaim) announced that it would be renamed "Pidgin".

Clients
Each modern IM service generally provides its own client, either a separately installed piece of software or a browser-based client. These are normally centralised networks run by the servers of the platform's operators, unlike peer-to-peer protocols like XMPP. They usually only work within the same IM network, although some allow limited functionality with other services. Third-party client software applications exist that will connect with most of the major IM services. There is also a class of instant messengers that uses a serverless model, in which the IM network consists only of clients and no servers are required. Serverless messengers include RetroShare, Tox, Bitmessage, Ricochet and Ring.

Some examples of popular IM services today include WhatsApp, Facebook Messenger, WeChat, QQ Messenger, Telegram, Viber, Line, and Snapchat. The popularity of certain apps differs greatly between countries. Certain apps emphasise certain uses - for example Skype focuses on video calling, Slack focuses on messaging and file sharing for work teams, and Snapchat focuses on image messages. Some social networking services offer messaging services as a component of their overall platform, such as Facebook's Facebook Messenger, while others have a direct messaging function as an additional adjunct component of their social networking platforms, like Instagram, Reddit, Tumblr, TikTok, Clubhouse and Twitter, either directly or through chat rooms.

Features

Private and group messaging
Private chat allows private conversation with another person or a group. The privacy aspect can also be enhanced by applications with a timer feature, like Snapchat, where messages or conversations are automatically deleted once the time limit is reached. Public and group chat features allow users to communicate with multiple people at a time.
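The store-and-forward behaviour attributed to an IMSC above (hold a message until the recipient is reachable, up to a configurable time limit) can be sketched in a few lines. This is a toy illustration of the idea, not the implementation of any carrier's message centre.

```python
import time

# Toy store-and-forward message centre in the spirit of the IMSC described
# above: messages wait until the recipient comes online, and expire after a
# configurable retention period. Purely illustrative.

RETENTION_SECONDS = 72 * 3600   # the "configurable time limit"


class MessageCenter:
    def __init__(self, retention=RETENTION_SECONDS):
        self.retention = retention
        self.online = set()
        self.pending = {}           # recipient -> list of (timestamp, text)

    def set_online(self, user, is_online):
        if is_online:
            self.online.add(user)
            self._flush(user)
        else:
            self.online.discard(user)

    def send(self, sender, recipient, text):
        message = (time.time(), f"{sender}: {text}")
        if recipient in self.online:
            self._deliver(recipient, message)
        else:
            self.pending.setdefault(recipient, []).append(message)

    def _flush(self, user):
        now = time.time()
        for message in self.pending.pop(user, []):
            if now - message[0] <= self.retention:   # drop expired messages
                self._deliver(user, message)

    def _deliver(self, user, message):
        print(f"deliver to {user} -> {message[1]}")


if __name__ == "__main__":
    imsc = MessageCenter()
    imsc.send("alice", "bob", "are you there?")   # bob offline, stored
    imsc.set_online("bob", True)                  # stored message delivered
    imsc.send("alice", "bob", "welcome back")     # delivered immediately
```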
Calling
Many major IM services and applications offer a call feature for user-to-user calls, conference calls, and voice messages. The call functionality is useful for professionals who use the application for work purposes and as a hands-free method. Videotelephony using a webcam is also supported by some services.

Games and entertainment
Some IM applications include in-app games for entertainment. Yahoo! Messenger, for example, introduced games that users could play while being watched by friends in real time. The Facebook Messenger application has a built-in option to play computer games with people in a chat, including games like Tetris and Blackjack.

Payments
Though a relatively new feature, peer-to-peer payments are available on major messaging platforms. This functionality allows individuals to use one application for both communication and financial tasks. The lack of a service fee also makes messaging apps advantageous compared to dedicated financial applications. Major platforms such as Facebook Messenger and WeChat already offer a payment feature, and this functionality is likely to become standard amongst IM apps competing in the market.

History
Though the term dates from the 1990s, instant messaging predates the Internet, first appearing on multi-user operating systems like the Compatible Time-Sharing System (CTSS) and Multiplexed Information and Computing Service (Multics) in the mid-1960s. Initially, some of these systems were used as notification systems for services like printing, but they were quickly used to facilitate communication with other users logged into the same machine. CTSS facilitated communication via text message for up to 30 people. Parallel to instant messaging were early online chat facilities, the earliest of which was Talkomatic (1973) on the PLATO system, which allowed 5 people to chat simultaneously on a 512x512 plasma display (5 lines of text + 1 status line per person). During the bulletin board system (BBS) phenomenon that peaked during the 1980s, some systems incorporated chat features which were similar to instant messaging; Freelancin' Roundtable was one prime example. The first such general-availability commercial online chat service (as opposed to PLATO, which was educational) was the CompuServe CB Simulator in 1980, created by CompuServe executive Alexander "Sandy" Trevor in Columbus, Ohio.

As networks developed, the protocols spread with the networks. Some of these used a peer-to-peer protocol (e.g. talk, ntalk and ytalk), while others required peers to connect to a server (see talker and IRC). The Zephyr Notification Service (still in use at some institutions) was invented at MIT's Project Athena in the 1980s to allow service providers to locate and send messages to users. Early instant messaging programs were primarily real-time text, where characters appeared as they were typed. This includes the Unix "talk" command line program, which was popular in the 1980s and early 1990s. Some BBS chat programs (e.g. Celerity BBS) also used a similar interface. Modern implementations of real-time text also exist in instant messengers, such as AOL's Real-Time IM as an optional feature.

In the latter half of the 1980s and into the early 1990s, the Quantum Link online service for Commodore 64 computers offered user-to-user messages between concurrently connected customers, which it called "On-Line Messages" (or OLM for short), and later "FlashMail." Quantum Link later became America Online and made AOL Instant Messenger (AIM, discussed later).
While the Quantum Link client software ran on a Commodore 64, using only the Commodore's PETSCII text-graphics, the screen was visually divided into sections and OLMs would appear as a yellow bar saying "Message From:" and the name of the sender along with the message across the top of whatever the user was already doing, and presented a list of options for responding. As such, it could be considered a type of graphical user interface (GUI), albeit much more primitive than the later Unix, Windows and Macintosh based GUI IM software. OLMs were what Q-Link called "Plus Services" meaning they charged an extra per-minute fee on top of the monthly Q-Link access costs. Modern, Internet-wide, GUI-based messaging clients as they are known today, began to take off in the mid-1990s with PowWow, ICQ, and AOL Instant Messenger. Similar functionality was offered by CU-SeeMe in 1992; though primarily an audio/video chat link, users could also send textual messages to each other. AOL later acquired Mirabilis, the authors of ICQ; establishing dominance in the instant messaging market. A few years later ICQ (then owned by AOL) was awarded two patents for instant messaging by the U.S. patent office. Meanwhile, other companies developed their own software; (Excite, MSN, Ubique, and Yahoo!), each with its own proprietary protocol and client; users therefore had to run multiple client applications if they wished to use more than one of these networks. In 1998, IBM released IBM Lotus Sametime, a product based on technology acquired when IBM bought Haifa-based Ubique and Lexington-based Databeam. In 2000, an open-source application and open standards-based protocol called Jabber was launched. The protocol was standardized under the name Extensible Messaging and Presence Protocol (XMPP). XMPP servers could act as gateways to other IM protocols, reducing the need to run multiple clients. Multi-protocol clients can use any of the popular IM protocols by using additional local libraries for each protocol. IBM Lotus Sametime's November 2007 release added IBM Lotus Sametime Gateway support for XMPP. Video calling using a webcam also started taking off during this time. Microsoft NetMeeting was one of the earliest, but Skype released in 2003 was one of the first that focused on this features and brought it to a wider audience. By 2006, AIM controlled 52 percent of the instant messaging market, but rapidly declined shortly thereafter as the company struggled to compete with other services. By 2010, instant messaging over the Web was in sharp decline in favor of messaging features on social networks. Social networking providers often offer IM abilities, for example Facebook Chat, while Twitter can be thought of as a Web 2.0 instant messaging system. Similar server-side chat features are part of most dating websites, such as OKCupid or PlentyofFish. The former most popular IM platforms were terminated in later years, such as AIM. The popularity of instant messaging was soon revived with new services in the form of mobile applications, notable examples of the time being BlackBerry Messenger (first released in 2005; today available as BlackBerry Messenger Enterprise) and WhatsApp (first released in 2009). Unlike previous IM applications, these newer ones usually ran only on mobile devices and coincided with the rising popularity of Internet-enabled smartphones; this led to IM surpassing SMS in message volume by 2013. By 2014, IM had more users than social networks. 
In January 2015, the service WhatsApp alone accommodated 30 billion messages daily in comparison to about 20 billion for SMS. In 2016, Google introduced a new intelligent messaging app that incorporates machine learning technology called Allo. Google Allo was shut down on March 12, 2019. Interoperability Standard complementary instant messaging applications offer functions like file transfer, contact list(s), the ability to hold several simultaneous conversations, etc. These may be all the functions that a small business needs, but larger organizations will require more sophisticated applications that can work together. The solution to finding applications capable of this is to use enterprise versions of instant messaging applications. These include titles like XMPP, Lotus Sametime, Microsoft Office Communicator, etc., which are often integrated with other enterprise applications such as workflow systems. These enterprise applications, or enterprise application integration (EAI), are built to certain constraints, namely storing data in a common format. There have been several attempts to create a unified standard for instant messaging: IETF's Session Initiation Protocol (SIP) and SIP for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Application Exchange (APEX), Instant Messaging and Presence Protocol (IMPP), the open XML-based Extensible Messaging and Presence Protocol (XMPP), and Open Mobile Alliance's Instant Messaging and Presence Service developed specifically for mobile devices. Most attempts at producing a unified standard for the major IM providers (AOL, Yahoo! and Microsoft) have failed, and each continues to use its own proprietary protocol. However, while discussions at IETF were stalled, Reuters signed the first inter-service provider connectivity agreement in September 2003. This agreement enabled AIM, ICQ and MSN Messenger users to talk with Reuters Messaging counterparts and vice versa. Following this, Microsoft, Yahoo! and AOL agreed to a deal in which Microsoft's Live Communications Server 2005 users would also have the possibility to talk to public instant messaging users. This deal established SIP/SIMPLE as a standard for protocol interoperability and established a connectivity fee for accessing public instant messaging groups or services. Separately, on October 13, 2005, Microsoft and Yahoo! announced that by the 3rd quarter of 2006 they would interoperate using SIP/SIMPLE, which was followed, in December 2005, by the AOL and Google strategic partnership deal in which Google Talk users would be able to communicate with AIM and ICQ users provided they have an AIM account. There are two ways to combine the many disparate protocols: Combine the many disparate protocols inside the IM client application. Combine the many disparate protocols inside the IM server application. This approach moves the task of communicating with the other services to the server. Clients need not know or care about other IM protocols. For example, LCS 2005 Public IM Connectivity. This approach is popular in XMPP servers; however, the so-called transport projects suffer the same reverse engineering difficulties as any other project involved with closed protocols or formats. Some approaches allow organizations to deploy their own, private instant messaging network by enabling them to restrict access to the server (often with the IM network entirely behind their firewall) and administer user permissions. 
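The first of the two approaches above, combining the disparate protocols inside the IM client, essentially means hiding each network behind a common interface and loading one protocol library per service. The sketch below is a generic, hypothetical illustration of that pattern; the class and method names are invented and do not correspond to any particular client such as Trillian or Pidgin.

```python
from abc import ABC, abstractmethod

# Hypothetical illustration of the "combine protocols in the client" approach:
# each network is wrapped in an adapter exposing the same minimal interface,
# and the client routes a message through whichever adapter owns the contact.


class ProtocolAdapter(ABC):
    name = "abstract"

    @abstractmethod
    def send(self, contact: str, text: str) -> None: ...


class XmppAdapter(ProtocolAdapter):
    name = "xmpp"

    def send(self, contact, text):
        print(f"[xmpp] <message to='{contact}'><body>{text}</body></message>")


class LegacyNetworkAdapter(ProtocolAdapter):
    name = "legacy"

    def send(self, contact, text):
        print(f"[legacy] proprietary frame -> {contact}: {text}")


class MultiProtocolClient:
    def __init__(self, adapters):
        self.adapters = {a.name: a for a in adapters}
        self.contacts = {}                    # contact -> protocol name

    def add_contact(self, contact, protocol):
        self.contacts[contact] = protocol

    def send(self, contact, text):
        self.adapters[self.contacts[contact]].send(contact, text)


if __name__ == "__main__":
    client = MultiProtocolClient([XmppAdapter(), LegacyNetworkAdapter()])
    client.add_contact("juliet@example.com", "xmpp")
    client.add_contact("romeo1982", "legacy")
    client.send("juliet@example.com", "wherefore art thou?")
    client.send("romeo1982", "ping")
```

The server-side alternative moves the same adapters behind a gateway, which is why only the server then needs to be updated when a proprietary protocol changes.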
Other corporate messaging systems allow registered users to also connect from outside the corporation LAN, by using an encrypted, firewall-friendly, HTTPS-based protocol. Usually, a dedicated corporate IM server has several advantages, such as pre-populated contact lists, integrated authentication, and better security and privacy.

Certain networks have made changes to prevent them from being used by such multi-network IM clients. For example, Trillian had to release several revisions and patches to allow its users to access the MSN, AOL, and Yahoo! networks, after changes were made to these networks. The major IM providers usually cite the need for formal agreements, as well as security concerns, as reasons for making these changes. The use of proprietary protocols has meant that many instant messaging networks have been incompatible and users have been unable to reach users on other networks. This may have given social networking with IM-like features and text messaging an opportunity to gain market share at the expense of IM.

Effects of IM on communication
Messaging applications have affected the way people communicate on their devices. In a survey conducted by MetrixLabs, 63% of Baby Boomers, 63% of Generation X, and 67% of Generation Y said that they used messaging applications in place of texting. A Facebook survey showed that 65% of people surveyed thought that messaging applications made group messaging easier.

Effects on workplace communication
Messaging applications have also changed how people communicate in the workplace. Enterprise messaging applications like Slack, TeleMessage, Teamnote and Yammer allow companies to enforce policies on how employees message at work and ensure secure storage of sensitive data. Messaging applications allow employees to separate work information from their personal emails and texts. Messaging applications may make workplace communication efficient, but they can also have consequences for productivity. A study at Slack showed that, on average, people spend 10 hours a day on Slack, which is about 67% more time than they spend using email.

IM language
Users sometimes make use of internet slang or text speak to abbreviate common words or expressions to quicken conversations or reduce keystrokes. The language has become widespread, with well-known expressions such as 'lol' translated over to face-to-face language. Emotions are often expressed in shorthand, such as the abbreviations LOL, BRB and TTYL; respectively laugh(ing) out loud, be right back, and talk to you later. Some, however, attempt to be more accurate with emotional expression over IM. Real-time reactions such as (chortle), (snort), (guffaw) or (eye-roll) are becoming more popular. There are also certain conventions being introduced into mainstream conversations, including '#', which indicates the use of sarcasm in a statement, and '*', which indicates a spelling mistake and/or grammatical error in the prior message, followed by a correction.

Business application
Instant messaging has proven to be similar to personal computers, email, and the World Wide Web, in that its adoption for use as a business communications medium was driven primarily by individual employees using consumer software at work, rather than by formal mandate or provisioning by corporate information technology departments. Tens of millions of the consumer IM accounts in use are being used for business purposes by employees of companies and other organizations.
In response to the demand for business-grade IM and the need to ensure security and legal compliance, a new type of instant messaging, called "Enterprise Instant Messaging" ("EIM") was created when Lotus Software launched IBM Lotus Sametime in 1998. Microsoft followed suit shortly thereafter with Microsoft Exchange Instant Messaging, later created a new platform called Microsoft Office Live Communications Server, and released Office Communications Server 2007 in October 2007. Oracle Corporation also jumped into the market with its Oracle Beehive unified collaboration software. Both IBM Lotus and Microsoft have introduced federation between their EIM systems and some of the public IM networks so that employees may use one interface to both their internal EIM system and their contacts on AOL, MSN, and Yahoo. As of 2010, leading EIM platforms include IBM Lotus Sametime, Microsoft Office Communications Server, Jabber XCP and Cisco Unified Presence. Industry-focused EIM platforms such as Reuters Messaging and Bloomberg Messaging also provide IM abilities to financial services companies. The adoption of IM across corporate networks outside of the control of IT organizations creates risks and liabilities for companies who do not effectively manage and support IM use. Companies implement specialized IM archiving and security products and services to mitigate these risks and provide safe, secure, productive instant messaging abilities to their employees. IM is increasingly becoming a feature of enterprise software rather than a stand-alone application. IM products can usually be categorised into two types: Enterprise Instant Messaging (EIM) and Consumer Instant Messaging (CIM). Enterprise solutions use an internal IM server, however this is not always feasible, particularly for smaller businesses with limited budgets. The second option, using a CIM provides the advantage of being inexpensive to implement and has little need for investing in new hardware or server software. For corporate use, encryption and conversation archiving are usually regarded as important features due to security concerns. There are also a bunch of open source encrypting messengers. Sometimes the use of different operating systems in organizations requires use of software that supports more than one platform. For example, many software companies use Windows in administration departments but have software developers who use Linux. Comparison to SMS SMS is the acronym for “short message service” and allows mobile phone users to send text messages without an Internet connection, while instant messaging provides similar services through an Internet connection. SMS was a much more dominant form of communication before, when smartphones became widely used globally. While SMS relied on traditional paid telephone services, instant messaging apps on mobiles were available for free or a minor data charge. In 2012 SMS volume peaked, and in 2013 chat apps surpassed SMS in global message volume. Easier group messaging was another advantage of smartphone messaging apps and also contributed to their adoption. Before the introduction of messaging apps, smartphone users could only participate in single-person interactions via mobile voice calls or SMS. With the introduction of messaging apps, the group chat functionality allows all the members to see an entire thread of everyone's responses. Members can also respond directly to each other, rather than having to go through the member who started the group message, to relay the information. 
However, SMS still remains popular in the United States because it is usually included free in monthly phone bundles. While SMS volumes in some countries like Denmark, Spain and Singapore dropped up to two-thirds from 2011 to 2013, in the United States SMS use only dropped by about one quarter. Security and archiving Crackers (malicious or black hat hackers) have consistently used IM networks as vectors for delivering phishing attempts, "poison URLs", and virus-laden file attachments from 2004 to the present, with over 1100 discrete attacks listed by the IM Security Center in 2004–2007. Hackers use two methods of delivering malicious code through IM: delivery of viruses, trojan horses, or spyware within an infected file, and the use of "socially engineered" text with a web address that entices the recipient to click on a URL connecting him or her to a website that then downloads malicious code. Viruses, computer worms, and trojans usually propagate by sending themselves rapidly through the infected user's contact list. An effective attack using a poisoned URL may reach tens of thousands of users in a short period when each user's contact list receives messages appearing to be from a trusted friend. The recipients click on the web address, and the entire cycle starts again. Infections may range from nuisance to criminal, and are becoming more sophisticated each year. IM connections sometimes occur in plain text, making them vulnerable to eavesdropping. Also, IM client software often requires the user to expose open UDP ports to the world, raising the threat posed by potential security vulnerabilities. In the early 2000s, a new class of IT security provider emerged to provide remedies for the risks and liabilities faced by corporations who chose to use IM for business communications. The IM security providers created new products to be installed in corporate networks for the purpose of archiving, content-scanning, and security-scanning IM traffic moving in and out of the corporation. Similar to the e-mail filtering vendors, the IM security providers focus on the risks and liabilities described above. With rapid adoption of IM in the workplace, demand for IM security products began to grow in the mid-2000s. By 2007, the preferred platform for the purchase of security software had become the "computer appliance", according to IDC, who estimated that by 2008, 80% of network security products would be delivered via an appliance. By 2014 however, the level of safety offered by instant messengers was still extremely poor. According to a scorecard made by the Electronic Frontier Foundation, only 7 out of 39 instant messengers received a perfect score, whereas the most popular instant messengers at the time only attained a score of 2 out of 7. A number of studies have shown that IM services are quite vulnerable for providing user privacy. Encryption Encryption is the primary method that messaging apps use to protect user's data privacy and security. SMS messages are not encrypted, making them insecure, as the content of each SMS message is visible to mobile carriers and governments and can be intercepted by a third party. SMS messages also leak metadata, or information about the message that is not the message content itself, such as phone numbers of the sender and recipient, which can identify the people involved in the conversation. SMS messages can also be spoofed and the sender of the message can be edited to impersonate another person. 
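In contrast to plain SMS, end-to-end encrypted messengers encrypt on the sender's device and decrypt only on the recipient's, so the service in the middle relays ciphertext it cannot read. The sketch below illustrates the general idea using the PyNaCl library's public-key Box construction; it is a minimal illustration, not the actual protocol (such as the Signal protocol) used by any of the apps named in this section.

```python
# Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
# Illustrative only: real messengers layer key agreement, ratcheting and
# authentication on top of primitives like these.
from nacl.public import PrivateKey, Box

# Each party generates a keypair on their own device; only public keys are shared.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts to Bob using her private key and Bob's public key.
alice_box = Box(alice_private, bob_private.public_key)
ciphertext = alice_box.encrypt(b"meet at noon")

# A server relaying `ciphertext` cannot read it; only Bob can decrypt it.
bob_box = Box(bob_private, alice_private.public_key)
plaintext = bob_box.decrypt(ciphertext)
print(plaintext)   # b'meet at noon'
```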
Messaging applications on the market that use end-to-end encryption include Signal, WhatsApp, Wire and iMessage. Applications that have been criticized for lacking or poor encryption methods include Telegram and Confide, as both are prone to error.

Compliance risks
In addition to the malicious code threat, the use of instant messaging at work also creates a risk of non-compliance with laws and regulations governing the use of electronic communications in businesses. In the United States alone there are over 10,000 laws and regulations related to electronic messaging and records retention. The better-known of these include the Sarbanes–Oxley Act, HIPAA, and SEC 17a-3. Clarification from the Financial Industry Regulatory Authority (FINRA) was issued to member firms in the financial services industry in December 2007, noting that "electronic communications", "email", and "electronic correspondence" may be used interchangeably and can include such forms of electronic messaging as instant messaging and text messaging. Changes to the Federal Rules of Civil Procedure, effective December 1, 2006, created a new category for electronic records which may be requested during discovery in legal proceedings. Most nations also regulate the use of electronic messaging and electronic records retention in a similar fashion to the United States. The most common regulations related to IM at work involve the need to produce archived business communications to satisfy government or judicial requests under law. Many instant messaging communications fall into the category of business communications that must be archived and retrievable.

User base
As of October 2019, the most used messaging apps worldwide are WhatsApp with 1.6 billion active users, Facebook Messenger with 1.3 billion users, and WeChat with 1.1 billion. There are only 25 countries in the world where WhatsApp is not the market leader in messaging apps and only 10 countries where the leading messenger app is not owned by Facebook.

More than 100 million users

Other platforms

Closed services and such with unclear activity

See also
Terms
Ambient awareness
Communications protocol
Mass collaboration
Message-oriented middleware
Operator messaging
Social media
Text messaging
SMS
Unified communications / Messaging
Lists
Comparison of instant messaging clients
Comparison of instant messaging protocols
Comparison of user features of messaging platforms
Other
Code Shikara (Computer worm)

References

External links

Internet culture
Internet Relay Chat
Social networking services
Online chat
Videotelephony
Text messaging
294312
https://en.wikipedia.org/wiki/Icewind%20Dale
Icewind Dale
Icewind Dale is a role-playing video game developed by Black Isle Studios and originally published by Interplay Entertainment for Windows in 2000 and by MacPlay for the Macintosh in 2002 (both the Classic Mac OS and OS X). The game takes place in the Dungeons & Dragons Forgotten Realms campaign setting and the region of Icewind Dale, and uses the 2nd edition ruleset. The story follows a different set of events than those of R. A. Salvatore's The Icewind Dale Trilogy novels: in the game, an adventuring party becomes enlisted as a caravan guard while in Icewind Dale, in the wake of strange events, and eventually discovers a plot that threatens the Ten Towns of Icewind Dale and beyond.

Icewind Dale received positive reviews, being praised for its musical score and gameplay. It was a commercial success, with sales above 400,000 units worldwide by early 2001. An expansion, Icewind Dale: Heart of Winter, was released in 2001, and a sequel, Icewind Dale II, followed in 2002. A remake by Overhaul Games, entitled Icewind Dale: Enhanced Edition, was published for several platforms in 2014.

Gameplay
Icewind Dale's gameplay operates on a similar basis to that of Baldur's Gate in that it incorporates a modified version of the Advanced Dungeons & Dragons 2nd edition ruleset in which the rules' intricacies are automatically computed; the game keeps track of statistics and controls dice rolling. It has a similar user interface with minor cosmetic changes, and focuses mainly on combat, often against large groups of enemies, with dialogue driving the main story. The player is able to order one or more characters to engage in movement, dialogue, combat, or other actions such as pickpocketing within each game location. Combat uses a real-time as opposed to a turn-based system, though with the option of pausing at any time so the player can give the party orders, which are carried out when the game is resumed. Like other D&D-based games developed and/or published by Black Isle, Icewind Dale employs a paper-doll style inventory system, the storyline is divided into chapters, and there is a journal system archiving quests and notable entries on specific story-related information from non-player characters.

Players begin the game by creating an adventuring party of up to six characters, either by creating new characters or importing those from a previous game. Each new character created requires the player to provide them with their name, gender, race, class, and alignment, and then determine their ability scores and weapon proficiencies. The class of a character affects what alignments are available to them, what weapons and combat styles they can use, and how proficient they can be in them. Characters designated as thieves require the player to allocate points to the various thieving skills, and spellcasters need a few 1st level spells selected for their spellbook and then one memorised for use at the start of the game. Once a party is created, characters earn experience points through completing quests and defeating enemies, and level up upon earning enough. Leveling up automatically increases a character's hit points, grants spellcasters access to more spell slots including higher levels of magic, sometimes allows additional weapon proficiencies, and allows thieves to improve their thieving abilities.
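The "automatically computed" rules mentioned above largely amount to the engine rolling virtual dice and applying AD&D 2nd edition arithmetic behind the scenes. The snippet below is a hypothetical illustration of one such calculation, the classic THAC0 attack roll (a d20 roll hits when it meets or beats THAC0 minus the target's Armor Class); it is not code from the Infinity Engine.

```python
import random

# Illustrative AD&D 2nd edition style attack resolution (THAC0), of the kind an
# engine like Icewind Dale's automates for the player. Not actual game code.


def roll(sides, count=1):
    """Roll `count` dice with `sides` faces and return the total."""
    return sum(random.randint(1, sides) for _ in range(count))


def attack(thac0, target_ac, damage_dice=(1, 8), damage_bonus=0):
    """Resolve one attack; in 2nd edition a d20 roll >= THAC0 - AC is a hit."""
    d20 = roll(20)
    needed = thac0 - target_ac
    if d20 >= needed:
        damage = roll(damage_dice[1], damage_dice[0]) + damage_bonus
        return f"hit (rolled {d20}, needed {needed}) for {damage} damage"
    return f"miss (rolled {d20}, needed {needed})"


if __name__ == "__main__":
    # A 1st-level fighter (THAC0 20) swings a long sword (1d8) at a foe with AC 5.
    random.seed(1)
    for _ in range(3):
        print(attack(thac0=20, target_ac=5, damage_dice=(1, 8), damage_bonus=1))
```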
Plot In the town of Easthaven, a party of adventurers are met in the tavern by the town's leader, Hrothgar (voiced by Jim Cummings), who invites them to join him on an expedition to investigate the town of Kuldahar, after reports of strange happenings there. On the road to Kuldahar, the expedition is ambushed by frost giants, who cause an avalanche that blocks the path back to Easthaven. With only the adventurers surviving, they continue to Kuldahar and meet with Arundel (Jim Cummings), the village's archdruid, who explains that a mysterious evil force has been kidnapping villagers, causing abnormal weather patterns, provoking monsters, and reducing the magical warmth provided by the giant tree that towers over the village. Asking for their help to discover the source of the evil, the adventurers begin by searching the Vale of Shadows, an area containing Kuldahar's crypts, due to rumours of undead creature sightings. They encounter a cursed barbarian spirit named Kresselack (Tony Jay) who tells them that the threat lies elsewhere. Reporting this back to the druid, Arundel instructs the group to retrieve an ancient scrying item called the Heartstone Gem, so that he may discover the source of the evil more quickly. After finding the gem was stolen from its original resting place within a temple, the party travel to the caverns of Dragon's Eye, finding a number of the missing villagers being held there by lizard men. They eventually find the gem being used by a powerful Marilith named Yxunomei (Tara Strong). After killing Yxunomei and retrieving the gem, the party return to find Kuldahar under attack by Orogs, and Arundel mortally wounded by a shapeshifter disguised as the archdruid, who taunts them before vanishing. The true Arundel advises the party to take the Heartstone to Larrel (Michael Bell) at the fortress of the Severed Hand, the only one capable of using it now, before dying from his wounds. Arriving at the fortress, the party discover that Larrel is insane, and complete a task to help him regain his sanity. Using the gem, Larrel discovers the source of the evil to reside in the former dwarven city of Dorn's Deep. Fighting their way through the city, the group eventually come across the source of the evil – a priest named Brother Poquelin (John Kassir). Poquelin reveals himself to be a demon who was exiled from his home realm by his superiors, and that both he and Yxunomei maintained a vendetta against each other that was getting out of control. Predicting she would follow him to the material plane, the demon sought a base of operation in the region to form a military force that could crush her. While doing so, he stumbled upon the ancient artifact Crenshinibon, which he claims had been "calling" to him. Poquelin immediately used its power to help him amass an army to conquer the lands of Icewind Dale, until Yxunomei's activities around Kuldahar led to the formation of Hrothgar's expedition. Seeking to stop it, the demon had his frost giant minions crush the expedition, but did not count on the adventurers' survival being a problem until they recovered the Heartstone Gem, forcing him to eliminate Arundel. Despite the party having found someone else to use it, Poquelin had managed to build up his forces, which he soon sent to Easthaven. After a brief battle with Poquelin, the party finds itself transported back to Easthaven, which is now in ruins. 
After freeing the surviving villagers, the local cleric of Tempus, Everard, informs the party that Poquelin is going after Jerrod's Stone, a mystical object housed under the town's temple, which acts as a seal on a portal to the Nine Hells of Baator. Originally opened during a major historic battle between the combined might of the barbarian tribes and an army of a powerful mage, it was sealed shut by the sacrifice of the shaman Jerrod who led the barbarians in the conflict. Gaining entry into the demon's crystal tower that enveloped the temple, the group discover that Poquelin's true intention was to reopen the portal contained within the Stone, allowing him to conquer the North with an army of devils at his command. Although he successfully achieves this, Everard, having shunned the tale of Jerrod's sacrifice until finally understanding what he did, throws himself into the portal and seals it off at the cost of his life. The party then fights Poquelin in his true form as the devil Belhifet, and manage to defeat him, banishing him to the Nine Hells and escaping the tower as it collapses. In time, Easthaven eventually recovers, and the town is reconstructed. In a twist ending, it is revealed that the game's narrator (David Ogden Stiers), was really Belhifet, who spent a mandatory century of imprisonment at the hands of the adventurers that is now close to end, and that he will soon walk the Prime Material once more to seek his revenge. (see Baldur's Gate: Siege of Dragonspear) Development Icewind Dale is based on the BioWare Infinity Engine, featuring pre-rendered backgrounds and sprite-based characters displayed with an isometric camera perspective. This engine was used to power Black Isle Studios' previous games Planescape: Torment, Baldur's Gate, and others. Release Icewind Dale was released on June 29, 2000 for Windows by Interplay Entertainment, and on March 26, 2002 for Mac OS and OS X by MacPlay. An expansion, Icewind Dale: Heart of Winter, was released in 2001. The game and its expansion were re-released in two budget packages in 2002, entitled Icewind Dale: The Collection and Icewind Dale: Complete. They were re-released again in 2002 alongside Baldur's Gate and Planescape: Torment in Black Isle Compilation. A collector's edition called Icewind Dale: The Ultimate Collection, which included the sequel Icewind Dale II and its expansion, was released in 2003. All four games were released again in Black Isle Compilation – Part Two in 2004, in Ultimate Dungeons & Dragons in 2006, and in Atari's Rollenspiele: Deluxe Edition in 2007. Icewind Dale was again re-released on October 6, 2010, complete with expansion packs on GOG.com. Reception Sales In the United States, Icewind Dale debuted at #4 on PC Data's weekly computer game sales rankings for June 25–July 1, 2000, following the title's release on the 30th. Domestic sales for the period totaled 39,285 copies, which drew revenues of $1.71 million. Mark Asher of CNET Gamecenter called this performance a "mild surprise" and noted that the game was "doing well". It was the country's 16th-best-selling computer title for the month of June. After retaining position 4 in its second week, it dropped to sixth place in its third week. James Fudge of Computer Games Magazine wrote that Icewind Dale was among the titles that "dominated the retail charts in the U.S. for the month of July". The game remained in PC Data's weekly top 10 until the week ending August 5. 
Later that month, Interplay's Brian Fargo noted that Icewind Dale was "selling beyond our forecasts and in number one position[s] in certain European territories". It was the United States' sixth-highest computer game seller of July and 16th-highest of August, moving 21,923 units and earning $1.05 million during the latter month alone. According to Chart-Track, Icewind Dale was the United Kingdom's best-selling computer game for its debut week, breaking Diablo II's three-week streak in the region. It dropped to third place the following week, before falling to seventh. Discussing Icewind Dale's chart performance, a writer for PC Zone mentioned being "a little surprised at seeing Diablo II capitulate so easily, especially to Icewind Dale, despite the success of Baldur's Gate". Icewind Dale was the United Kingdom's third-best-selling computer title in August, placing above Diablo II for the month. According to PC Gamer US, it also achieved "high sales" in Germany, where it debuted in 17th place on the computer game sales charts in July. After peaking at #5 the following month, it claimed places 16 and 29 in September and October before exiting Germany's top 30. By early 2001, Icewind Dale had sold more than 350,000 units worldwide, including 45,000 units in Germany. Sales rose above 400,000 units by April. In the United States alone, it sold 145,564 copies and earned $6.8 million by the end of 2000, according to PC Data. Its lifetime sales there climbed to 270,000 copies ($9.5 million) by 2006; as of that year, the Icewind Dale franchise as a whole had sold 580,000 units in the region. In August 2006, Edge ranked the original Icewind Dale as the United States' 74th-best-selling computer game, and best-selling Icewind Dale title, released since January 2000. Critical reviews Icewind Dale was well received by critics, scoring 86% from GameRankings and 87/100 from Metacritic. GameSpot's Greg Kasavin gave the game 8.6/10, opining that it is "well suited for fans of Black Isle Studios' previous games, fans of classic hack-and-slash AD&D computer games, and anyone looking for an action-packed role-playing game with a lot of depth". IGN scored it 8.8/10 and GameZone gave it 9.5/10. According to GameSpy's Allen Rausch, "Icewind Dale was a fun dungeon romp that can hold its head up high, even if it can't match its big brothers". The game's music score by Jeremy Soule received widespread critical acclaim. Chris Chan of the New Straits Times said the game was one of the best he had ever played, and went on to positively compare it with Diablo II. The strongest criticism was that the gameplay was too uniform and mostly combat-focused, with little interaction or investigation. Bob Low of the Daily Record noted technical issues such as poor pathfinding and occasional crashes. PC Zone criticized its similarities to previous Infinity Engine games. IGN ranked Icewind Dale No. 6 on their list of "The Top 11 Dungeons & Dragons Games of All Time" in 2014. Ian Williams of Paste rated the game #3 on his list of "The 10 Greatest Dungeons and Dragons Videogames" in 2015. Computer Gaming World, GameSpot, The Electric Playground and CNET Gamecenter nominated Icewind Dale as the top computer role-playing game of 2000, but it lost all four awards to Baldur's Gate II: Shadows of Amn. The former publication's editors wrote that Icewind Dale "hearkened back to the old days" and was "dangerously close to being the most purely fun RPG that we've played in a long time." 
Kevin Rice reviewed the PC version of the game for Next Generation, rating it four stars out of five, and stated that "a huge, engrossing game with the most action in the Forgotten Realms series, Icewind Dale earns its place on the hard drive of any self-respecting RPG fan". Remake A remake of Icewind Dale was developed by Beamdog's Overhaul Games and published by Atari for Windows, OS X, Linux, Android, and iOS in 2014. References External links Icewind Dale at MobyGames 2000 video games Android (operating system) games Black Isle Studios games Classic Mac OS games Cooperative video games Fantasy video games Forgotten Realms video games Infinity Engine games Multiplayer and single-player video games IOS games Linux games MacOS games Video games developed in the United States Video games featuring protagonists of selectable gender Video games scored by Jeremy Soule Video games with expansion packs Video games with isometric graphics Windows games
4693975
https://en.wikipedia.org/wiki/MidSTAR-1
MidSTAR-1
MidSTAR-1 is an artificial satellite produced by the United States Naval Academy Small Satellite Program. It was sponsored by the United States Department of Defense (DoD) Space Test Program (STP), and was launched on March 9, 2007 at 03:10 UTC, aboard an Atlas V expendable launch vehicle from Cape Canaveral Air Force Station. MidSTAR-1 flew along with FalconSat 3, STPSat 1, and CFESat as secondary payloads; the primary payload was Orbital Express. MidSTAR-1 Mission (USNA-5) MidSTAR is a general-purpose satellite bus capable of supporting a variety of space missions by easily accommodating a wide range of space experiments and instruments. The integration of the experiments with the satellite bus must be accomplished with minimal changes to the satellite bus design. MidSTAR is intended to be a relatively low-cost, quick-response platform accommodating small payloads approved by the DoD Space Experiments Review Board (SERB) and awaiting launch through STP. MidSTAR is designed for use on the EELV Secondary Payload Adapter (ESPA) Ring developed by Air Force Research Laboratory (AFRL) for placement on Delta IV or Atlas V expendable launch vehicles. MidSTAR is a Class D spacecraft, produced at minimum cost with a correspondingly higher technical risk in production and operation. It is intentionally simple in design and rugged in construction, using commercial off-the-shelf “plug-and-play” components to the greatest extent possible. Component development and circuit-board level design are accomplished only when necessary. MidSTAR-1 is the first implementation of the design. It was commissioned by STP to carry the Internet Communications Satellite (ICSat) Experiment for SSP and the Configurable Fault Tolerant Processor (CFTP) Experiment for Naval Postgraduate School (NPS). In addition, MidSTAR-1 carries the Nano Chem Sensor Unit (NCSU) for the National Aeronautics and Space Administration (NASA) Ames Research Center; Eclipse, built by Eclipse Energy Systems, Inc. for NASA Goddard Space Flight Center (GSFC); and the Micro Dosimeter Instrument (MiDN), sponsored by the National Space Biomedical Research Institute (NSBRI) and built by the USNA Department of Aerospace Engineering. The mission is intended to last two years. Mission architecture The MidSTAR-1 mission includes a single spacecraft under the command and control of a single satellite ground station (SGS) located at the United States Naval Academy, Annapolis, Maryland. The ground station forwards downlinked data files to the principal investigators via the Internet. The launch segment for MidSTAR-1 utilized an Atlas V launch vehicle through the Space Test Program, placing the satellite in a circular orbit at 496 km altitude, 46 degrees inclination. The satellite uses an uplink at 1.767 GHz with an intermediate frequency (IF) of 435 MHz, and a 2.20226 GHz downlink. By utilizing Gaussian minimum-shift keying (GMSK) modulation, communications with the satellite are achieved at a data rate of 68.4 kbit/s or higher. The satellite also uses open source software based on the Linux operating system. MidSTAR-1 has no attitude control or determination, no active thermal control, and its mass is 120 kg. One hundred percent success would be the successful launch and operation of the satellite with full support for the two primary experiments for two years. 
Fifty percent success was the successful launch and operation of the satellite with: Full support of one primary experiment for two years; Full support of both primary experiments for one year; or, partial support of both primary experiments for two years. Thirty-three percent success was successful launch of the satellite and full operation of the satellite bus with partial support of any combination of primary and secondary payloads for any length of time. Mission log 9 March 2007: MidSTAR-1 flew as part of the STP-1 mission on a United Launch Alliance Atlas V from Cape Canaveral Air Force Station. Liftoff occurred at 0310 UTC; spacecraft separation occurred at 0332 UTC. USNA SGS successfully acquired communications with the spacecraft during the first pass over Annapolis MD at 0459 UTC. The spacecraft was operating nominally in safe mode. 21 March 2007: CFTP turned on at 2217 UTC to add 6 W continuous to the electrical power system load and thus lessen charging stress on the batteries. 28 March 2007: MiDN turned on at approximately 2400 UTC. Spacecraft stopped responding to all ground commands subsequent to this pass. 4 April 2007: First use of firecode reset of spacecraft at approximately 2130 UTC. This command toggles the reset switch on the MIP-405 processor and reboots the operating system. This reset returned the CFTP and MiDN experiments to off and cleared all command buffers. At 2324 UTC the spacecraft responded to a transmitter on command. Telemetry confirmed that the reboot was successful. 5 April 2007: CFTP and MiDN turned back on. 6 April 2007: Selective download of MiDN files retrieved 71 files of 92 bytes each which were delivered to the Principal Investigator (PI). This was the first successful retrieval of science data from the spacecraft. With this milestone, MidSTAR-1 satisfied the criteria of 33% mission success. 26 May 2007: NCSU turned on at approximately 1900 Z. 29 May 2007: First data package delivered to NCSU PI. All four experiments are on and delivering data to the PIs. 18 June 2007: NASA press release announces success of NCSU. 5 September 2007: Spacecraft computer froze as a result of unknown influences, most likely radiation-induced upsets. This happened while the spacecraft was in full sun and with the power drains (30 W) on to prevent battery overcharging. Without the computer to cycle the drains off, the spacecraft remained in a continuous negative net power configuration which eventually drained the batteries. When the battery voltage dropped below 8 V, the electronic switches for the drains defaulted to off, returning the spacecraft to positive net power and allowing the batteries to recharge. 7 September 2007: Once the batteries recharged sufficiently, the computer restarted successfully. Restart occurred 48 hours after the initial event. No telemetry from the spacecraft or any experiment is available for that 48-hour period. Telemetry indicates that normal operation resumed, but all experiments were left off pending post-event analysis and the development of a plan to bring them back online. 12 September 2007: CFTP restarted. 21 September 2007: MiDN restarted. April 2009: Contact with MidSTAR-1 lost. Spacecraft ceased transmitting and failed to respond to ground command. Anomaly attributed to failure of battery packs. MidSTAR-1 declared non-operational. MidSTAR-1 fully supported all onboard experiments for two full years, fulfilling the 100% success criteria. 
Structure The MidSTAR-1 frame is an octagonal structure 32.5" along the long axis, including separation system, and 21.2"x21.2" measured side-to-side in cross-section. The deployment mechanism is mounted on the negative x face. The positive x face is reserved for externally mounted experiments. Of the 38" along the x-axis allowed in the ESPA envelope, 2-4" are reserved for the deployment mechanism (15-in motorized lightband manufactured by Planetary Systems, Inc.), and 4-6" are reserved for external experiments. The frame length is 30". All eight sides of the spacecraft are covered with solar cells in order to maximize the power available. Eight dipole antennas are mounted on the four faces of the spacecraft which "cut the corners" of the ESPA envelope, and are therefore positioned within the ESPA envelope rather than coincident with the envelope surface. The remaining sides are mounted with remove-before-flight eyeholes for lifting and transport during ground support. MidSTAR-1 has three interior shelves which provide area inside the satellite for mounting of components and payloads. Their locations are determined by the dimensions of the payloads and components. These can be varied in future implementations of the MidSTAR model if necessary, as long as the structure remains within the center of gravity requirements. The load-bearing structure of the octagon consists of the top and bottom decks, connected at the eight corners by stringers. The side panels of the spacecraft are 1/8" aluminum panels mounted to the stringers with #10 bolts. Command and Data Handling (C&DH) The mission of the Command and Data Handling System (C&DH) is to receive and execute commands; collect, store, and transmit housekeeping data; and support the onboard payloads. The flight computer is designed to control the satellite and manage telemetry and experiment data for a minimum of two years. The C&DH system consists of a custom-modified MIP405-3X single board computer which included (i) a 133 MHz PowerPC processor; (ii) 128 MB of ECC memory; (iii) 4 RS-232 asynchronous serial ports; (iv) 1 Ethernet port; (v) a PC/104 bus; (vi) a PC/104+ bus; and (vii) a 202-D384-X Disc on Chip providing 384 MB of secondary storage. The computer board is supported by an ESCC-104 Synchronous Serial Card with 2 synchronous serial ports, and an EMM-8M-XT Serial Expansion Card with 8 RS-232/422/485 asynchronous serial ports and 8 digital I/O channels. A modified IO485 data acquisition board provides 22 analog telemetry channels and 32 digital I/O channels. The decision to use the PowerPC-based MIP405 over an x86-based board was based solely on the low power consumption of the board combined with the feature set. The choice was limited to x86, PowerPC, and ARM processor architectures because of a program decision to use the Linux operating system. The MIP405 integrates Ethernet, serial ports, and the Disk-on-Chip interface on a single board while providing 128 MB of ECC memory and a powerful processor for under 2 watts. The M-Systems Disk-on-Chip was chosen because it was the de facto standard flash-memory hard disk replacement. Flash memory was chosen over a traditional hard disk to increase reliability and reduce power. The 384 MB version was chosen to provide the storage required for the operating system and still maintain adequate margin. 
The Diamond Systems Emerald-MM-8 was chosen for the asynchronous serial board based on its innate flexibility with any of the 8 ports capable of being configured as RS-232, RS-422, RS-485. RMV's IO485 data acquisition and control board was chosen for the distributed telemetry system because of built-in support for daisy chaining and handling a large number of boards. The integrated expandability is fundamental to addressing future telemetry issues in later versions of the MidSTAR line. The C&DH uses the Linux operating system with a 2.4 series kernel. To create an open software architecture the IP protocol stack was chosen to provide inter process, intra-satellite, and satellite-ground communications. This allowed programs created at different facilities on different hardware to be integrated with minimum difficulty. All internal and external communications use internet protocols. TCP is used for all internal satellite communications; UDP or MDP is used on the uplink and downlink. See also USNA MidSTAR Program eoPortal describes MidSTAR-1 References Satellite Internet access Satellites orbiting Earth Spacecraft launched in 2007 United States Naval Academy
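For a sense of scale, here is a back-of-the-envelope sketch (not from the source) of how much data the baseline 68.4 kbit/s downlink quoted above could move during a hypothetical 10-minute ground-station pass, written in Python:

rate_bps = 68_400          # baseline downlink data rate quoted above, bits per second
pass_seconds = 10 * 60     # assumed (hypothetical) length of a single pass over Annapolis
megabytes_per_pass = rate_bps * pass_seconds / 8 / 1e6
print(f"{megabytes_per_pass:.1f} MB per 10-minute pass")   # roughly 5.1 MB

At that rate, small science products such as the 92-byte MiDN files mentioned in the mission log could easily be retrieved within a single pass.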
31365982
https://en.wikipedia.org/wiki/ZBar
ZBar
ZBar is an open-source C barcode reading library with C++, Python, Perl, and Ruby bindings. It is also implemented on Linux and Microsoft Windows as a command-line application, and as an iPhone application. It was originally developed at SourceForge. Because the latest official release (version 0.10) dates to 27 October 2009, a fork was created in March 2017 that converted the code to use Qt 5 and libv4l and improved support for version 2 of the Video4Linux API.
Features
Image scanning
Real-time scanning of video streams
C++, Python, Perl, and Ruby bindings
Qt, GTK+, and PyGTK GUI bindings
Recognition of EAN-13, UPC-A, UPC-E, EAN-8, Code 128, Code 39, Interleaved 2 of 5 and QR code symbologies
References
External links
LinuxTV site where the latest version is available
GitHub ZBar clone of the latest version
Official site
Free software programmed in C
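As an illustration of the Python bindings mentioned above, here is a minimal sketch of decoding a still image, following the usage documented for the 0.10-era binding; the file name is hypothetical and the exact API may differ between versions and forks:

from PIL import Image   # Pillow, used only to load and grayscale the image
import zbar

scanner = zbar.ImageScanner()
scanner.parse_config('enable')                  # enable all supported symbologies

pil = Image.open('barcode.png').convert('L')    # hypothetical input file, 8-bit grayscale
width, height = pil.size
image = zbar.Image(width, height, 'Y800', pil.tobytes())

scanner.scan(image)
for symbol in image:
    print(symbol.type, symbol.data)             # e.g. the symbology and the decoded text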
42249593
https://en.wikipedia.org/wiki/IMAX%20432
IMAX 432
iMAX 432 (Intel Multifunction Applications Executive for the Intel 432 Micromainframe) was an operating system developed by Intel for digital electronic computers based on the 1980s Intel iAPX 432 32-bit microprocessor. The term micromainframe was an Intel marketing designation describing the iAPX 432 processor's capabilities as being comparable to a mainframe. The iAPX 432 processor and the iMAX 432 operating system were incompatible with the x86 architecture commonly found in personal computers. iMAX 432 was implemented in a subset of the original (1980) version of Ada, extended with runtime type checking and dynamic package creation. As of version 2 in 1982, iMAX was aimed at programmers rather than application users, and it did not provide a command line or other human interface. iMAX provided a runtime environment for the Ada programming language and other high-level languages, as well as an incomplete Ada compiler that was to be extended to cover the full Ada language in a later version. There were at least two versions of iMAX as of 1982, Version 1 and Version 2. Version 1 was undergoing internal Intel testing as of 1981 and was scheduled to be released in 1982. Version 2 was modular, and the programmer could choose which parts of the iMAX operating system to load; there were two standard configurations of iMAX Version 2, named "Full" and "Minimal", with the minimal configuration being similar to Version 1. As of 1982, a Version 3 of iMAX was planned for release, which was to add support for virtual memory. See also History of operating systems References Bibliography Capability systems Discontinued operating systems Intel software
66095730
https://en.wikipedia.org/wiki/Andrea%20Grimes%20Parker
Andrea Grimes Parker
Andrea Grimes Parker is an American computer scientist, researcher, and Associate Professor, known for her interdisciplinary study of human-computer interaction (HCI) and personal health informatics. Parker is currently an Associate Professor at Georgia Institute of Technology (Georgia Tech) School of Interactive Computing. She also currently serves as an Adjunct Associate Professor in the Rollins School of Public Health at Emory University. She was previously an Assistant Professor at Northeastern University, with joint appointments in the Khoury College of Computer Sciences and the Bouvé College of Health Sciences. Biography Early life and education She was born Andrea Elaina Grimes to African American parents Octavia R. Grimes and Vincent E. Grimes. Her father works at the Santa Clara County public defender's office and her mother is a nurse case manager with Kaiser Permanente in San Jose. In 2004, she was one of two United States representatives for the 2004 World Association for Cooperative Education Conference. Parker attended Northeastern University and received a B.S. degree in Computer Science in 2005. Parker was a member of Phi Kappa Phi National Honors Society and Upsilon Pi Epsilon while at Northeastern. In 2010, she married Lonnie Thomas Parker IV, a classmate at Georgia Tech. She changed her name that year and has published research under both names. In 2011, she received a PhD from Georgia Tech. Parker's doctoral advisor was Rebecca E. Grinter, and her thesis was titled "A Cultural, Community Based Approach to Health Technology Design". Research career Parker's research lies generally in the fields of human-computer interaction (HCI) and computer-supported cooperative work (CSCW). In 2010, Parker and colleagues presented OrderUP!, a game created to teach people how to make smart choices when ordering food, at the Ubicomp 2010 conference in Copenhagen, Denmark. The game was designed using the Transtheoretical Model (TTM). In 2013, Parker launched a social media platform to share workout tips for people in the Roxbury neighborhood who participate in a once-a-week gym program. She has done research on the role of digital fitness trackers and social networks, and their impact on motivation, future planning, and behavior change. Parker is specifically interested in helping vulnerable and marginalized populations overcome barriers, and in looking beyond the surface-level data-sharing interactions currently found in many fitness trackers. From 2014 until 2016, Parker served as the National Evaluator for the Aetna Foundation's portfolio of projects on mobile health interventions in community settings. From 2018 until 2019, Parker was a Northeastern University Institute of Health Equity and Social Justice Research Faculty Scholar. Teaching career Parker is the founder and director of the Wellness Technology Research Lab at Georgia Tech, where she is an Associate Professor in the School of Interactive Computing. She also serves as an Adjunct Associate Professor in the Rollins School of Public Health at Emory University, and was previously an Assistant Professor at Northeastern University, with joint appointments in the Khoury College of Computer Sciences and the Bouvé College of Health Sciences. 
Publications Books Articles References External links Official website Profile on the Association for Computing Machinery (ACM) Digital Library and profile on Microsoft Academic African-American computer scientists American women computer scientists American computer scientists Northeastern University alumni Northeastern University faculty Georgia Tech alumni Georgia Tech faculty Human–computer interaction researchers Year of birth missing (living people) Living people 21st-century African-American people
42297752
https://en.wikipedia.org/wiki/Percona%20Server%20for%20MySQL
Percona Server for MySQL
Percona Server for MySQL is an open-source distribution of the MySQL relational database management system (RDBMS) created by Percona. It is a free, fully compatible drop-in replacement for Oracle MySQL. The software includes a number of scalability, availability, security, and backup features otherwise only available in MySQL's commercial Enterprise edition, as well as XtraDB, an enhanced distribution of the InnoDB storage engine. The developers aim to retain close compatibility with the official MySQL releases, while focusing on performance and increased visibility into server operations. See also Comparison of relational database management systems References Client-server database management systems Cross-platform software Free database management systems RDBMS software for Linux MySQL Software forks Software using the GPL license
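Because Percona Server speaks the standard MySQL wire protocol, an ordinary MySQL client library can connect to it unchanged; this is what "drop-in replacement" means in practice. A minimal sketch using the PyMySQL driver (host, credentials, and database name are placeholders):

import pymysql   # any MySQL-compatible driver would work the same way

conn = pymysql.connect(host="127.0.0.1", user="app", password="secret", database="test")
try:
    with conn.cursor() as cur:
        cur.execute("SELECT VERSION()")   # reports a Percona Server version string
        print(cur.fetchone()[0])
finally:
    conn.close()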
16884326
https://en.wikipedia.org/wiki/SliTaz
SliTaz
SliTaz GNU/Linux is a lightweight, community-based Linux distribution suitable for use on older hardware and as a Live CD or Live USB. SliTaz stands for "Simple, Light, Incredible, Temporary Autonomous Zone" according to the boot screen. Features SliTaz uses the Openbox window manager. Additional packages are added using a program called "TazPanel". This is due to the specific package format that SliTaz uses (tazpkg). It can still use packages from more popular distributions, such as Debian, by first converting them to its own format. By default, SliTaz offers no persistence, but it can be added if the user wishes. The choice of filesystem and bootloader then matters: persistence is only available with the ext2 and ext3 filesystems and the syslinux or extlinux boot loader. Another useful tool is TazLito, with which users can create their own LiveCD based on selected packages or even on the current system state. System requirements SliTaz GNU/Linux is supported on all machines based on i486 or later x86 Intel-compatible processors. The Live CD has four variants of SliTaz, requiring from 192 MB of RAM for the Core system to 48 MB for a text mode and X Window System. SliTaz can even run in 24 MB of RAM and a little swap memory. SliTaz can be booted from a Live CD, Live USB, floppy disk, or a local area network (PXE), or can be installed, requiring approximately 80 MB of hard disk space. TazLito TazLito is the LiveCD creation utility in SliTaz GNU/Linux. Common Operations Check Root Check to ensure UID is zero (i.e., TazLito was run by root or a root sudoer). Check Root File System Looks for the existence of an etc directory in the root file system directory. N.B., this does not do any further checking to ensure anything is actually in the directory. However, if TazLito is used for all LiveCD creation operations (that is, one does not create/modify the directories used by TazLito) the directory's existence implies it is populated properly. Verify Root CD Looks for the existence of a boot directory in the root CD directory. N.B., this does not do any further checking to ensure anything is actually in the directory. However, if TazLito is used for all LiveCD creation operations (that is, one does not create/modify the directories used by TazLito) the directory's existence implies it is populated properly. Generate initramfs This step executes scripts for packages that alter the root file system, hard-links redundant files in the root filesystem to save space, and runs cpio to create the initramfs, compressing it with LZMA or gzip (or leaving it uncompressed). Release history As with any Linux distribution, the direction of SliTaz's development is mainly determined by its developers. For SliTaz 5, major changes include the replacement of systemd by BusyBox's init and udev, avoiding the associated security concerns, and wider use of Qt. Ports to x86-64 and ARM architectures are currently under development. Reception Dedoimedo reviewed SliTaz GNU/Linux 1.0 and later also reviewed version 2.0. DistroWatch Weekly also published a review of SliTaz GNU/Linux 1.0. See also Comparison of Linux Live Distros Lightweight Linux distribution List of Linux distributions that run from RAM References External links Operating system distributions bootable from read-only media Light-weight Linux distributions Live USB Linux distributions without systemd Linux distributions
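To illustrate the final "Generate initramfs" step described under TazLito, here is a rough Python sketch (not TazLito's actual code, which is a shell script) of archiving a prepared root filesystem with cpio and compressing it with gzip; the staging path is hypothetical, and LZMA compression would work analogously:

import subprocess

rootfs = "/tmp/slitaz-rootfs"   # hypothetical staging directory holding the live root filesystem

# Equivalent in spirit to: find . | cpio -o -H newc | gzip -9 > rootfs.gz
with open("rootfs.gz", "wb") as out:
    find = subprocess.Popen(["find", "."], cwd=rootfs, stdout=subprocess.PIPE)
    cpio = subprocess.Popen(["cpio", "-o", "-H", "newc"], cwd=rootfs,
                            stdin=find.stdout, stdout=subprocess.PIPE)
    gzip = subprocess.Popen(["gzip", "-9"], stdin=cpio.stdout, stdout=out)
    find.stdout.close()
    cpio.stdout.close()
    gzip.wait()
    cpio.wait()
    find.wait()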
59902426
https://en.wikipedia.org/wiki/One%20Bad%20Knight
One Bad Knight
One Bad Knight was a 1938 theatrical advertisement for Chevrolet, produced by the Jam Handy Organization, featuring the gnome, Nicky Nome. Plot To the tune of "The Love Bug Will Bite You," a "love bug" sprays a pair of frogs, a pair of birds and Nicky's "horse hopper," Hortense, with a love potion. A young boy, pulling a toy horse, sees a wanted poster for the Black Knight, then meets a princess. While he and the princess are picking flowers, the unnamed hero boy encounters Nicky, while the Black Knight kidnaps the princess. King Louis, a trencherman, leaves his dinner table to rescue the princess, only to be repulsed by the Black Knight's castle. Overnight, Louis is lamenting how to breach such a strong castle, mentioning many horsepower, a mobile fortress and great speed. Nicky then whispers to Louis, and through the remainder of the night, a Trojan horse is built. At dawn, Louis looks out of the right eye of the Trojan horse and says, "Give up, Blackie?" only to be answered with a salvo of arrows that surrounded the eye where Louis issued his challenge. Louis replied, "Let 'em have it!" A hatch opens in the front of the Trojan horse, and out drives the hero boy at the wheel of a 1938 Chevrolet. The wheels bounce over logs and rocks to tout their "Knee-Action" suspension, arrows go in and out of the vent windows to tout their "Draft-Free Ventilation," rocks bounce off the roof to tout their "Turret-Top" roof, and then the car bulldozed the castle walls with the sound effect of a steam locomotive blowing its whistle for a railroad crossing. The hero boy returns in the undamaged car to the last turret standing in the castle, rushing up a spiral staircase to free the princess and to battle the Black Knight, although with a wooden toy sword. The hero boy winds up on his back, and the Black Knight picks up a battleaxe and swings it at the boy, who rolls out of the way. When the knight hits the floorboard, it acts like a catapult, hurling the knight down the spiral staircase to his doom. The Princess and the hero boy kiss. Back at the king's castle, Louis knights the hero boy for bravery on the field of combat, and offers the hero boy one wish, pushing his daughter toward the boy. The boy looked at the Chevrolet and said, "One wish, Sire?" The boy looked again at the car and back at the king, and said, "Double or nothing, Your Majesty!" Louis gave the boy a waving, left-handed salute. Flanked by knights on horseback singing "The Love Bug Will Bite You," the young couple drive away in the Chevrolet. Nicky and Hortense appear from behind the grille and shake hands. The End. References External links Vimeo IMDB YouTube Chevrolet Advertisements 1938 animated films Jam Handy Organization films 1938 films American films Promotional films
4327261
https://en.wikipedia.org/wiki/Cisco%20NAC%20Appliance
Cisco NAC Appliance
Cisco NAC Appliance, formerly Cisco Clean Access (CCA), was a network admission control (NAC) system developed by Cisco Systems and designed to produce a secure and clean computer network environment. Originally developed by Perfigo and marketed under the name of Perfigo SmartEnforcer, this network admission control device analyzes systems attempting to access the network and prevents vulnerable computers from joining the network. The system usually installs an application known as the Clean Access Agent on computers that will be connected to the network. This application, in conjunction with both a Clean Access server and a Clean Access Manager, has become common in many universities and corporate environments today. It is capable of managing wired or wireless networks in an in-band or out-of-band configuration mode, and Virtual Private Networks (VPN) in an in-band only configuration mode. Cisco NAC Appliance went out of production and off sale in the early 2010s; mainstream support ended in 2015, and extended support ended in 2018. Clean Access Agent The Clean Access Agent (abbreviation: CCAA, "Cisco Clean Access Agent") resides on the client's machine, authenticates the user, and scans for the required patches and software. Currently the Clean Access Agent application is only available for some Windows and Mac OS X operating systems (Windows 98, Windows Me, Windows 2000, Windows XP, Windows XP Media Center Edition, Windows Vista, Windows 7, Windows 8 and Mac OS X); most network administrators allow clients with non-Windows operating systems (such as Mac OS 9, Linux, and FreeBSD) to access the network without any security checks (authentication is still required and is usually handled via a Web interface). Authentication After successfully authenticating via a web interface, the Clean Access Server will direct new Windows-based clients to download and install the Clean Access Agent application (at this time, non-Windows clients need only authenticate via the web interface and agree to any network terms of service). Once installed, the Agent software will require the user to re-authenticate. Once re-authenticated, the Agent software will typically check the client computer for known vulnerabilities to the Windows operating system being used, as well as for updated anti-virus software and definitions. The checks are maintained as a series of "rules" on the Clean Access Manager side. The Clean Access Manager (CAM) can be configured to check, install, or update anything on the user's system. Once the Agent application checks the system, the Agent will inform the user of the result – either with a success message or a failed message. Failed messages inform the user of which categories the system failed (Windows updates, antivirus, etc.), and instruct the user on how to proceed. Any system failing the checks will be denied general access to the network and will probably be placed in a quarantined role (how exactly a failed system is handled depends entirely on how the Clean Access Manager is configured, and may vary from network to network; for example, a failed system may simply be denied all network access afterward). Quarantined systems are then typically given a 60-minute window where the user can try to resolve the reason(s) for quarantine. In such a case, the user is only allowed connectivity to the Windows Update website and a number of antivirus providers (Symantec, McAfee, Trend Micro, etc.), or the user may be redirected to a Guest Server for remediation. 
All other traffic is typically blocked. Once the 60-minute window expires, all network traffic is blocked. The user has the option of re-authenticating with Clean Access again, and continuing the process as needed. Systems passing the checks are granted access to the network as defined by the assigned role on the Clean Access Manager. Clean Access configurations vary from site to site. The network services available will also vary based on Clean Access configuration and the assigned user role. Systems usually need to re-authenticate a minimum of once per week, regardless of their status; however, this option can be changed by the network administrator. Also, if a system is disconnected from the network for a set amount of time (usually ten minutes), the user will have to re-authenticate when they reconnect to the network. Windows Updates Clean Access normally checks a Windows system for required updates by checking the system's registry. A corrupted registry may keep a user from being able to access the network. Security Issues and Concerns User Agent Spoofing The Clean Access Server (CAS) determines the client's operating system by reading the browser's user agent string after authentication. If a Windows system is detected, then the server will ask the user to download the Clean Access Agent; on all other operating systems, login is complete. To combat attempts to spoof the OS in use on the client, newer versions of the Server and Agent (3.6.0 and up) also probe the host via TCP/IP stack fingerprinting and JavaScript to verify the machine's operating system: By default, the system uses the User-Agent string from the HTTP header to determine the client OS. Release 3.6.0 provides additional detection options to include using the platform information from JavaScript or OS fingerprinting from the TCP/IP handshake to determine the client OS. This feature is intended to prevent users from changing identification of their client operating systems through manipulating HTTP information. Note that this is a "passive" detection technique that only inspects the TCP handshake and is not impacted by the presence of a firewall. Microsoft Windows Scripting The Clean Access Agent makes extensive use of the Windows Script Engine, version 5.6. It was demonstrated that removal or disabling of the scripting engine in MS Windows will bypass and break posture interrogation by the Clean Access Agent, which will "fail open" and allow devices to connect to a network upon proper authentication. MAC Spoofing Prevention Device Segregation While MAC address spoofing may be accomplished in a wireless environment by means of using a sniffer to detect and clone the MAC address of a client who has already been authorized or placed in a "clean" user role, it is not easy to do so in a wired environment, unless the Clean Access Server has been misconfigured. In a correct architecture and configuration, the Clean Access Server would hand out IP subnets and addresses via DHCP on its untrusted interface using a 30-bit network address and 2 bits for hosts, therefore only one host could be placed in each DHCP scope/subnet at any given time. This segregates unauthorized users from each other and from the rest of the network, and makes wired-sniffing irrelevant and spoofing or cloning of authorized MAC addresses nearly impossible. Proper and similar implementation in a wireless environment would in fact contribute to a more secure instance of Clean Access. 
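As a small illustration of the /30 segregation described above (addresses are hypothetical), each scope carved from the untrusted-side pool has exactly two usable host addresses, one for the Clean Access Server gateway and one for a single client, as this Python sketch shows:

import ipaddress

pool = ipaddress.ip_network("192.168.100.0/24")    # hypothetical untrusted-side address pool
for scope in list(pool.subnets(new_prefix=30))[:3]:
    gateway, client = scope.hosts()                # a /30 leaves exactly two usable addresses
    print(scope, "gateway:", gateway, "client:", client)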
Certified-Device Timers In addition, MAC spoofing could further be combated with the use of timers for certified devices. Timers allow administrators to clear the list of certified MAC addresses on a regular basis and force a re-authorization of devices and users to the Clean Access Server. Timers allow an administrator to clear certified devices based on user roles, time and date, and age of certification; a staggered method is also available that allows one to avoid clearing all devices at once. Complaints Cisco NAC Appliance is notorious for creating disruptions in users' Internet connections, as it treats a continuous connection between a computer and a server or another computer as suspicious activity. This is problematic for individuals using Skype, webcam applications, or online games such as World of Warcraft. With online games, the disruptions created by Cisco NAC Appliance cause the player to be logged off the gaming server. Numerous individuals who have experienced this rather blunt manner of security have openly expressed frustration with the software in forums as well as in Facebook groups and posts. References External links Cisco Product Page Clean Access Administrators Mailing List – Archives hosted by Miami University Cisco Security Response – Cisco's Response to the latest NAC Agent Installation Bypass vulnerability CCA Workaround/Hack/Exploit Details Internet Protocol based network software Cisco software
214657
https://en.wikipedia.org/wiki/FFmpeg
FFmpeg
FFmpeg is a free and open-source software project consisting of a suite of libraries and programs for handling video, audio, and other multimedia files and streams. At its core is the command-line ffmpeg tool itself, designed for the processing of video and audio files. It is widely used for format transcoding, basic editing (trimming and concatenation), video scaling, video post-production effects, and standards compliance (SMPTE, ITU). FFmpeg also includes other tools: ffplay, a simple media player, and ffprobe, a command-line tool to display media information. Among the included libraries are libavcodec, an audio/video codec library used by many commercial and free software products; libavformat (Lavf), an audio/video container mux and demux library; and libavfilter, a library for enhancing and editing filters through a GStreamer-like filtergraph. FFmpeg is part of the workflow of many other software projects: its libraries are a core part of media players such as VLC, and it has been used in the core processing of YouTube and Bilibili. Encoders and decoders for many audio and video file formats are included, making it highly useful for the transcoding of common and uncommon media files. FFmpeg is published under the LGPL-2.1-or-later or GPL-2.0-or-later license, depending on which options are enabled. History The project was started by Fabrice Bellard (using the pseudonym "Gérard Lantau") in 2000, and was led by Michael Niedermayer from 2004 until 2015. Some FFmpeg developers were also part of the MPlayer project. The name of the project is inspired by the MPEG video standards group, together with "FF" for "fast forward". The logo uses a zigzag pattern that shows how MPEG video codecs handle entropy encoding. On March 13, 2011, a group of FFmpeg developers decided to fork the project under the name Libav. The event was related to an issue in project management, in which developers disagreed with the leadership of FFmpeg. On January 10, 2014, two Google employees announced that over 1000 bugs had been fixed in FFmpeg during the previous two years by means of fuzz testing. In January 2018, the ffserver command-line program – a long-time component of FFmpeg – was removed. The developers had previously deprecated the program, citing the high maintenance effort caused by its use of internal application programming interfaces. The project publishes a new release every three months on average. While release versions are available from the website for download, FFmpeg developers recommend that users compile the software from source using the latest build from their Git source code repository. Codec history Two video coding formats with corresponding codecs and one container format have been created within the FFmpeg project so far. The two video codecs are the lossless FFV1 and the lossless and lossy Snow codec. Development of Snow has stalled, and its bitstream format has not been finalized, leaving it experimental since 2011. The multimedia container format called NUT is no longer being actively developed, but is still maintained. In summer 2010, FFmpeg developers Fiona Glaser, Ronald Bultje, and David Conrad announced the ffvp8 decoder. Through testing, they determined that ffvp8 was faster than Google's own libvpx decoder. Starting with version 0.6, FFmpeg also supported WebM and VP8. In October 2013, a native VP9 decoder and the OpenHEVC decoder, an open source High Efficiency Video Coding (HEVC) decoder, were added to FFmpeg. 
In 2016 the native AAC encoder was considered stable, removing support for the two external AAC encoders from VisualOn and FAAC. FFmpeg 3.0 (nicknamed "Einstein") retained build support for the Fraunhofer FDK AAC encoder. Since version 3.4 "Cantor" FFmpeg supported the FITS image format. Since November 2018 in version 4.1 "al-Khwarizmi" AV1 can be muxed in MP4 and Matroska incl. WebM. Components Command line tools ffmpeg is a command-line tool that converts audio or video formats. It can also capture and encode in real-time from various hardware and software sources such as a TV capture card. ffplay is a simple media player utilizing SDL and the FFmpeg libraries. ffprobe is a command-line tool to display media information (text, CSV, XML, JSON), see also Mediainfo. Libraries libswresample is a library containing audio resampling routines. libavresample is a library containing audio resampling routines from the Libav project, similar to libswresample from ffmpeg. libavcodec is a library containing all of the native FFmpeg audio/video encoders and decoders. Most codecs were developed from scratch to ensure best performance and high code reusability. libavformat (Lavf) is a library containing demuxers and muxers for audio/video container formats. libavutil is a helper library containing routines common to different parts of FFmpeg. This library includes hash functions, ciphers, LZO decompressor and Base64 encoder/decoder. libpostproc is a library containing older H.263 based video postprocessing routines. libswscale is a library containing video image scaling and colorspace/pixelformat conversion routines. libavfilter is the substitute for vhook which allows the video/audio to be modified or examined between the decoder and the encoder. Filters have been ported from many projects including MPlayer and avisynth. libavdevice is a library containing audio/video io through internal and external devices. Supported hardware CPUs FFmpeg encompasses software implementations of video and audio compressing and decompressing algorithms. These can be compiled and run on diverse instruction sets. Many widespread instruction sets are supported by FFmpeg, including x86 (IA-32 and x86-64), PPC (PowerPC), ARM, DEC Alpha, SPARC, and MIPS. Special purpose hardware There are a variety of application-specific integrated circuits (ASICs) for audio/video compression and decompression. These ASICs can partially or completely offload the computation from the host CPU. Instead of a complete implementation of an algorithm, only the API is required to use such an ASIC. Use with the FFmpeg utility Internal hardware acceleration decoding is enabled through the -hwaccel option. It starts decoding normally, but if a decodable stream is detected in hardware, then the decoder designates all significant processing to that hardware, thus accelerating the decoding process. Whereas if no decodable streams are detected (as happens on an unsupported codec or profile), hardware acceleration will be skipped and it will still be decoded in software. -hwaccel_device option is applied when the hardware requires a particular device to function especially when there are several graphic cards available. Supported codecs and formats Image formats FFmpeg supports many common and some uncommon image formats. The PGMYUV image format is a homebrewn variant of the binary (P5) PGM Netpbm format. 
FFmpeg also supports 16-bit depths of the PGM and PPM formats, and the binary (P7) PAM format with or without alpha channel, depth 8 bit or 16 bit for pix_fmts monob, gray, gray16be, rgb24, rgb48be, ya8, rgba, rgb64be. Supported formats In addition to FFV1 and Snow formats, which were created and developed from within FFmpeg, the project also supports the following formats: Muxers Output formats (container formats and other ways of creating output streams) in FFmpeg are called "muxers". FFmpeg supports, among others, the following: AIFF ASF AVI and also input from AviSynth BFI CAF FLV GIF GXF, General eXchange Format, SMPTE 360M HLS, HTTP Live Streaming IFF ISO base media file format (including QuickTime, 3GP and MP4) Matroska (including WebM) Maxis XA MPEG-DASH MPEG program stream MPEG transport stream (including AVCHD) MXF, Material eXchange Format, SMPTE 377M MSN Webcam stream NUT Ogg OMA RL2 Segment, for creating segmented video streams Smooth Streaming TXD WTV Pixel formats FFmpeg supports many pixel formats. Some of these formats are only supported as input formats. The command ffmpeg -pix_fmts provides a list of supported pixel formats. FFmpeg does not support IMC1-IMC4, AI44, CYMK, RGBE, Log RGB and other formats. It also does not yet support ARGB 1:5:5:5, 2:10:10:10, or other BMP bitfield formats that are not commonly used. Supported protocols Open standards IETF RFCs: FTP Gopher HLS HTTP HTTPS RTP RTSP SCTP SDP SRTP TCP TLS UDP UDP-Lite IETF I-Ds: SFTP (via libssh) Microsoft OSP: CIFS/SMB (via libsmbclient) MMS over TCP (MS-MMSP) MMS over HTTP (MS-WMSP) CENELEC SAT>IP OASIS standards: AMQP 0-9-1 (via librabbitmq) SRT Alliance standard: SRT (via libsrt) De facto standards RTSP over TLS Icecast protocol Adobe RTMP, RTMPT, RTMPE, RTMPTE and RTMPS RealMedia RTSP/RDT ZeroMQ (via libzmq) RIST (librist) Supported filters FFmpeg supports, among others, the following filters. 
Audio Resampling (aresample) Pass/Stop filters Low-pass filter (lowpass) High-pass filter (highpass) All-pass filter (allpass) Butterworth Band-pass filter (bandpass) Butterworth Band-stop filter (bandreject) Arbitrary Finite Impulse Response Filter (afir) Arbitrary Infinite Impulse Response Filter (aiir) Equalizer Peak Equalizer (equalizer) Butterworth/Chebyshev Type I/Type II Multiband Equalizer (anequalizer) Low Shelving filter (bass) High Shelving filter (treble) Xbox 360 rqulizer FIR equalizer (firequalizer) Biquad filter (biquad) Remove/Add DC offset (dcshift) Expression evaluation Time domain expression evaluation (aeval) Frequency domain expression evaluation (afftfilt) Dynamics Limiter (alimiter) Compressor (acompressor) Dynamic range expander () Side-chain Compressor (sidechaincompress) Compander (compand) Noise gate (agate) Side-chain Noise gate(sidechaingate) Distortion Bitcrusher (acrusher) Emphasis (aemphasis) Amplify/Normalizer Volume (volume) Dynamic Audio Normalizer (dynaudnorm) EBU R 128 loudness normalizer (loudnorm) Modulation Sinusoidal Amplitude Modulation (tremolo) Sinusoidal Phase Modulation (vibrato) Phaser (aphaser) Chorus (chorus) Flanger (flanger) Pulsator (apulsator) Echo/Reverb Echo (aecho) Routing/Panning Stereo widening (stereowiden) Increase channel differences (extrastereo) M/S to L/R (stereotools) Channel mapping (channelmap) Channel splitting (channelsplit) Channel panning (pan) Channel merging (amerge) Channel joining (join) for Headphones Stereo to Binaural (earwax, ported from SoX) Bauer Stereo to Binaural (bs2b, via libbs2b) Crossfeed (crossfeed) Multi-channel to Binaural (sofalizer, requires libnetcdf) Delay Delay (adelay) Delay by distance (compensationdelay) Fade Fader (afade) Crossfader (acrossfade) Audio time-scale/pitch modification Time stretching (atempo) Time-stretching and Pitch-shifting (rubberband, via librubberband) Editing Trim (atrim) Silence-padding (apad) Silence remover (silenceremove) Show frame/channel information Show frame information (ashowinfo) Show channel information (astats) Show silence ranges (silencedetect) Show audio volumes (volumedetect) ReplayGain scanner (replaygain) Modify frame/channel information Set output format (aformat) Set number of sample (asetnsamples) Set sampling rate (asetrate) Mixer (amix) Synchronization (asyncts) HDCD data decoder (hdcd) Plugins LADSPA (ladspa) LV2 (lv2) Do nothing () Video Transformations Cropping (crop, cropdetect) Fading (fade) Scaling (scale) Padding (pad) Rotation (rotate) Transposition (transpose) Others: Lens correction (lenscorrection) OpenCV filtering (ocv) Perspective correction (perspective) Temporal editing Framerate (fps, framerate) Looping (loop) Trimming (trim) Deinterlacing (bwdif, idet, kerndeint, nnedi, yadif, w3fdif) Inverse Telecine Filtering Blurring (boxblur, gblur, avgblur, sab, smartblur) Convolution filters Convolution (convolution) Edge detection (edgedetect) Sobel Filter (sobel) Prewitt Filter (prewitt) Unsharp masking (unsharp) Denoising (atadenoise, bitplanenoise, dctdnoiz, owdenoise, removegrain) Logo removal (delogo, removelogo) Subtitles (ASS, subtitles) Alpha channel editing (alphaextract, alphamerge) Keying (chromakey, colorkey, lumakey) Frame detection Black frame detection (blackdetect, blackframe) Thumbnail selection (thumbnail) Frame Blending (blend, tblend, overlay) Video stabilization (vidstabdetect, vidstabtransform) Color and Level adjustments Balance and levels (colorbalance, colorlevels) Channel mixing (colorchannelmixer) Color space 
(colorspace) Parametric adjustments (curves, eq) Histograms and visualization CIE Scope (ciescope) Vectorscope (vectorscope) Waveform monitor (waveform) Color histogram (histogram) Drawing OCR Quality measures SSIM (ssim) PSNR (psnr) Lookup Tables lut, lutrgb, lutyuv, lut2, lut3d, haldclut Supported test patterns SMPTE color bars (smptebars and smptehdbars) EBU color bars (pal75bars and pal100bars) Supported LUT formats cineSpace LUT format Iridas Cube Adobe After Effects 3dl DaVinci Resolve dat Pandora m3d Supported media and interfaces FFmpeg supports the following devices via external libraries. Media Compact disc (via libcdio; input only) Physical interfaces IEEE1394 (a.k.a. FireWire; via libdc1394 and libraw1394; input only) IEC 61883 (via libiec61883; input only) DeckLink Brooktree video capture chip (via bktr driver; input only) Audio IO Advanced Linux Sound Architecture (ALSA) Open Sound System (OSS) PulseAudio JACK Audio Connection Kit (JACK; input only) OpenAL (input only) sndio Core Audio (for macOS) AVFoundation (input only) AudioToolbox (output only) Video IO Video4Linux2 Video for Windows (input only) Windows DirectShow Android Camera (input only) Screen capture and output Simple DirectMedia Layer 2 (output only) OpenGL (output only) Linux framebuffer (fbdev) Graphics Device Interface (GDI; input only) X Window System (X11; via XCB; input only) X video extension (XV; via Xlib; output only) Kernel Mode Setting (via libdrm; input only) Others ASCII art (via libcaca; output only) Applications Legal aspects FFmpeg contains more than 100 codecs, most of which use compression techniques of one kind or another. Many such compression techniques may be subject to legal claims relating to software patents. Such claims may be enforceable in countries like the United States which have implemented software patents, but are considered unenforceable or void in member countries of the European Union, for example. Patents for many older codecs, including AC3 and all MPEG-1 and MPEG-2 codecs, have expired. FFmpeg is licensed under the LGPL license, but if a particular build of FFmpeg is linked against any GPL libraries (notably x264), then the entire binary is licensed under the GPL. Projects using FFmpeg FFmpeg is used by software such as VLC media player, xine, Shotcut, Cinelerra-GG video editor, Plex, Kodi, Blender, HandBrake, YouTube, VirtualDub2, a VirtualDub fork, and MPC-HC; it handles video and audio playback in Google Chrome, and the Linux version of Firefox. Graphical user interface front-ends for FFmpeg have been developed, including XMedia Recode. FFmpeg is used by ffdshow, LAV Filters, the GStreamer FFmpeg plug-in, Perian, OpenMAX IL, and FFmpegInterop to expand the encoding and decoding capabilities of their respective multimedia platform. As part of NASA's Mars 2020 mission, FFmpeg is used by the Perseverance rover on Mars for image and video compression before being sent back to Earth. See also MEncoder, a similar project List of open-source codecs References External links C (programming language) libraries Command-line software Cross-platform free software Free codecs Free computer libraries Free music software Free software programmed in C Free video conversion software Multimedia frameworks Software that uses FFmpeg Assembly language software Software using the LGPL license
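To make the command-line tools described above concrete, here is a small Python sketch that inspects a file with ffprobe's JSON output and then transcodes it, letting the -hwaccel option described earlier pick a hardware decoder when one is available; the file names and codec choices are illustrative only:

import json
import subprocess

# Inspect streams with ffprobe (flags per the FFmpeg documentation).
probe = subprocess.run(
    ["ffprobe", "-v", "error", "-print_format", "json",
     "-show_format", "-show_streams", "input.mp4"],
    capture_output=True, text=True, check=True)
for stream in json.loads(probe.stdout)["streams"]:
    print(stream["codec_type"], stream.get("codec_name"))

# Transcode; "-hwaccel auto" asks FFmpeg to use hardware decoding if supported,
# silently falling back to software decoding otherwise.
subprocess.run(["ffmpeg", "-hwaccel", "auto", "-i", "input.mp4",
                "-c:v", "libx264", "-c:a", "aac", "output.mkv"], check=True)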
213474
https://en.wikipedia.org/wiki/Academic%20Competition%20Federation
Academic Competition Federation
The Academic Competition Federation (ACF) is an organization, founded as the Academic Competition Foundation in 1991, that runs a national championship for collegiate quiz bowl as well as other tournaments. History During the mid-1980s, several schools began to prepare for College Bowl's regional and national tournaments by holding independent invitationals, during which players became unsatisfied with College Bowl's questions. Several scandals soon emerged, in which the 1988 College Bowl Regionals were found to have recycled nearly all of their questions from the 1982 College Bowl Regionals, independent tournaments were threatened with lawsuits from College Bowl, and the 1983 and 1985 College Bowl National Championship Tournaments were cancelled. In response to these concerns, the University of Maryland and University of Tennessee stopped participating in College Bowl, followed a few years later by the Georgia Institute of Technology and a steadily increasing number of schools. In the fall of 1990, Carol Guthrie, then the coach of the University of Tennessee team, joined with a few University of Maryland team members to found the Academic Competition Foundation. In 1991, they held the first ACF Nationals, which was won by the host Tennessee team over Georgia Tech. While departing from College Bowl's structure, the tournament initially included a few elements carried over from College Bowl games. Those elements were later removed. No ACF Nationals tournament was held in 1992, but, beginning in 1993, Regionals and Nationals tournaments were held every year. By 1996, ACF Nationals was attracting 40 teams, but after the 1997 Nationals, Carol Guthrie announced that she and co-founder Jim Dendy were each resigning, and that ACF would go defunct. In 1996, a new company called National Academic Quiz Tournaments was formed. NAQT was more organized than ACF in several respects, yet also included College Bowl-like features in its questions. University of Virginia student Andrew Yaphe thus organized the Academic Competition Federation to continue running the Regional and National tournaments along with John Sheahan and David Hamilton. The 1999 Nationals saw the first presentation of the Dr. N. Gordon Carper Lifetime Achievement Award, which recognizes individuals "for meritorious services in sustaining and enriching collegiate academic competitions." Following the rise to popularity of NAQT, the decline of College Bowl, and longstanding complaints about the difficulty of ACF, a decision was made in 2001 to focus on the accessibility of Academic Competition Federation tournaments. Despite those efforts, however, only sixteen teams attended ACF Nationals in 2001, and the future of the tournament seemed tenuous. Thus, a third, easier tournament, ACF Fall, was conceived by Kelly McKenzie, star player of the University of Kentucky team, and held in November 2001. This three-tournament lineup continues to the present day, with an "ACF Winter" tournament occurring in 2009 and 2010. Format An ACF game consists of twenty ten-point tossups with thirty-point bonuses, a format now widely used by various collegiate and high school quiz bowl tournaments. The ACF finals format is unique in that it involves awarding a tournament title outright to a team which is two or more games ahead of the second-place team in the standings at the end of the tournament proper. If two teams are tied, a one-game winner-take-all final is played. 
An advantaged final of up to two games is played if the first-place team is exactly one game ahead of the second-place team. ACF Tournaments Overview ACF tournaments follow the packet submission model, where editors organize the writing project and teams submit questions which will be used in competition. Depending on the experience of the players on a given team, that team may need to submit questions that will either comprise the entirety of a 20-tossup, 20-bonus packet, or that will be combined by editors with the question submissions of one or more teams to produce a full packet. ACF Fall ACF Fall is intended to be an easy college tournament for new college players or players with limited high school quiz bowl experience. ACF Fall is played concurrently each year throughout the United States and internationally in Canada and the United Kingdom. With over 200 teams participating across all sites, ACF Fall is the most widely played college set in a given year. ACF Fall follows the packet-submission model. ACF Regionals ACF Regionals is the regular-difficulty (more difficult than ACF Fall) college tournament by which teams may qualify for ACF Nationals. In 2020, there were 11 concurrent ACF Regionals tournaments in the United States, 2 in Canada, and one in the United Kingdom. ACF Regionals follows the packet-submission model. ACF Nationals ACF Nationals is the final ACF tournament each season, and it has been run for more than 25 years. With the 48 strongest teams in the United States, Canada, and United Kingdom competing together at a single tournament location, the questions at ACF Nationals are more difficult than those at ACF Regionals. ACF Nationals does not follow the same packet-submission model as ACF Fall and ACF Regionals. Most of the questions are written by the tournament's editors, but teams may submit questions for a discounted entry fee. Past ACF Tournaments ACF Winter In 2009 and 2010, ACF organized the ACF Winter tournament. The target difficulty for ACF Winter was that of a regular college tournament, i.e. more difficult than ACF Fall and easier than ACF Regionals. In February 2020, ACF announced that it would be releasing ACF Winter again for the 2020-2021 season. Early Autumn Collegiate Novice EACN was an ACF-sponsored collegiate novice tournament written and played from 2010 to 2013. With stricter eligibility requirements than ACF Fall, EACN was intended to be an introduction to collegiate quiz bowl for players entirely new to quiz bowl. ACF Nationals results Carper Award recipients The Dr. N. Gordon Carper Lifetime Achievement Award was established in 1999 to honor individuals for meritorious services in sustaining and enriching collegiate academic competitions. The award is presented annually to a member of the quizbowl community who exhibits the kind of dedication to and long-term support of academic competitions exemplified by the career of Dr. Carper. Beginning in 2019, ACF empowered a committee of former Carper winners who are also ACF members to select a second winner. 1999: Dr. N. Gordon Carper, coach at Berry College 2000: Dr. Carol Guthrie, former coach at the University of Tennessee 2001: Dr. Robert Meredith, coach at the Georgia Institute of Technology 2002: Not presented 2003: Eric Hillemann, coach at Carleton College 2004: Don Windham, ACF co-founder, and Gaius Stern, of the University of California, Berkeley 2005: Charlie Steinhice, coach at the University of Tennessee at Chattanooga 2006: R. 
Robert Hentzel, president of National Academic Quiz Tournaments 2007: Andrew Yaphe, player at the University of Chicago 2008: Chris Sewell, developer of the SQBS statistics program 2009: Ezequiel Berdichevsky, ACF editor 2010: Subash Maddipoti, former player at the University of Chicago and the University of Illinois 2011: Seth Teitler, former player at the University of Chicago and the University of California, Berkeley 2012: Jeff Hoppes, former player at Princeton University and the University of California, Berkeley 2013: Matt Weiner, tournament organizer and question set editor 2014: Susan Ferrari, former player at the University of Chicago 2015: Jerry Vinokurov, former player at the University of California, Berkeley and Brown University 2016: Andrew Hart, former player at the University of Minnesota 2017: Jonathan Magin, former player at the University of Maryland 2018: Mike Bentley, former player at the University of Maryland and the University of Washington 2019: Rob Carson, former player at the University of Minnesota; and Kelly McKenzie, former player at the University of Kentucky and creator of ACF Fall 2020: Alex Damisch, former player at Lawrence University; and Mike Sorice, former player at the University of Illinois See also College Bowl National Academic Quiz Tournaments References External links Official website Student quiz competitions 1991 establishments in the United States Recurring events established in 1991
415406
https://en.wikipedia.org/wiki/English%20as%20a%20second%20or%20foreign%20language
English as a second or foreign language
English as a second or foreign language is the use of English by speakers with different native languages. Language education for people learning English may be known as English as a second language (ESL), English as a foreign language (EFL), English as an additional language (EAL), or English for speakers of other languages (ESOL). The teaching of English to such learners is referred to as teaching English as a foreign language (TEFL), teaching English as a second language (TESL) or teaching English to speakers of other languages (TESOL). Technically, TEFL refers to English language teaching in a country where English is not the official language, TESL refers to teaching English to non-native English speakers in a native English-speaking country and TESOL covers both. In practice, however, each of these terms tends to be used more generically across the full field. TEFL is more widely used in the UK and TESL or TESOL in the US. The term "ESL" has been seen by some to indicate that English would be of subordinate importance; for example, where English is used as a lingua franca in a multilingual country. The term can be a misnomer for some students who have learned several languages before learning English. The terms "English language learners" (ELL), and, more recently, "English learners" (EL), have been used instead, and the students' native languages and cultures are considered important. Methods of learning English are highly variable, depending on the student's level of English proficiency and the manner and setting in which they are taught, which can range from required classes in school to self-directed study at home, or a blended combination of both. In some programs, educational materials (including spoken lectures and written assignments) are provided in a mixture of English and the student's native language. In other programs, educational materials are always in English, but the vocabulary, grammar, and context clues may be modified to be more easily understood by students with varying levels of comprehension (Wright, 2010). Adapting comprehension, insight-oriented repetitions, and recasts are some of the methods used in training. However, without proper cultural immersion (social learning grounds), the associated language habits and reference points (internal mechanisms) of the host country are not completely transferred through these programs (Wright, 2010). As a further complication, the traditional description of English syntax is based on Latin grammar and hence suffers inconsistencies. The major influences on the language are the United States and the United Kingdom, which have each assimilated the language differently, so they differ in expressions and usage. These differences are found primarily in pronunciation and vocabulary. Variants of the English language also exist in both of these countries (e.g. African American Vernacular English). The English language has a great reach and influence, and English is taught all over the world. In countries where English is not usually a native language, there are two distinct models for teaching English: educational programs for students who want to move to English-speaking countries, and other programs for students who do not intend to move but who want to understand English content for the purposes of education, entertainment, employment or conducting international business. 
The differences between these two models of English language education have grown larger over time, and teachers focusing on each model have used different terminology, received different training, and formed separate professional associations. English is also taught as a second language for recent immigrants to English-speaking countries, which faces separate challenges because the students in one class may speak many different native languages. Terminology and types The many acronyms and abbreviations used in the field of English teaching and learning may be confusing, and the following technical definitions may have their currency contested on various grounds. The precise usage, including the different use of the terms ESL and ESOL in different countries, is described below. These terms are most commonly used in relation to teaching and learning English as a second language, but they may also be used in relation to demographic information. English language teaching (ELT) is a widely used teacher-centered term, as in the English language teaching divisions of large publishing houses, ELT training, etc. Teaching English as a second language (TESL), teaching English to speakers of other languages (TESOL), and teaching English as a foreign language (TEFL) are also used. Other terms used in this field include English as an international language (EIL), English as a lingua franca (ELF), English for special purposes and English for specific purposes (ESP), and English for academic purposes (EAP). Those who are learning English are often referred to as English language learners (ELL). Learners of English fall into two main groups: the first group learns English as a second language, i.e. a language used in their own country, and the second group learns English as an entirely foreign language, i.e. a language that is not spoken in any part of their country. English outside English-speaking countries EFL, English as a foreign language, indicates the teaching of English in a non–English-speaking region. The study can occur either in the student's home country, as part of the normal school curriculum or otherwise, or, for the more privileged minority, in an anglophone country that they visit as a sort of educational tourist, particularly immediately before or after graduating from university. TEFL is the teaching of English as a foreign language; note that this sort of instruction can take place in any country, English-speaking or not. Typically, EFL is learned either to pass exams as a necessary part of one's education or for career progression while one works for an organization or business with an international focus. EFL may be part of the state school curriculum in countries where English has no special status (what linguistic theorist Braj Kachru calls the "expanding circle countries"); it may also be supplemented by lessons paid for privately. Teachers of EFL generally assume that students are literate in their mother tongue. The Chinese EFL Journal and Iranian EFL Journal are examples of international journals dedicated to specifics of English language learning within countries where English is used as a foreign language. English within English-speaking countries The other broad grouping is the use of English within the English-speaking world. In what Braj Kachru calls "the inner circle", i.e., countries such as the United Kingdom and the United States, this use of English is generally by refugees, immigrants, and their children. 
It also includes the use of English in "outer circle" countries, often former British colonies and the Philippines, where English is an official language even if it is not spoken as a mother tongue by a majority of the population. In the US, Canada, Australia, and New Zealand this use of English is called ESL (English as a second language). This term has been criticized on the grounds that many learners already speak more than one language. A counter-argument says that the word "a" in the phrase "a second language" means there is no presumption that English is the second acquired language (see also Second language). TESL is the teaching of English as a second language. In the US it may also be referred to by other terms, including ELL (English Language Learner) and CLD (Culturally and Linguistically Diverse). In the UK and Ireland, the term ESL has been replaced by ESOL (English for speakers of other languages). In these countries TESOL (teaching English to speakers of other languages) is normally used to refer to teaching English only to this group. In the UK and Ireland, the term EAL (English as an additional language) is used, rather than ESOL, when talking about primary and secondary schools, in order to clarify that English is not the students' first language, but their second or third. The term ESOL is used to describe English language learners who are above statutory school age. Other acronyms were created to describe the person rather than the language to be learned. The term Limited English proficiency (LEP) was first used in 1975 by the Lau Remedies following a decision of the U.S. Supreme Court. ELL (English Language Learner), used by United States governments and school systems, was created by James Crawford of the Institute for Language and Education Policy in an effort to label learners positively, rather than ascribing a deficiency to them. Recently, some educators have shortened this to EL – English Learner. Typically, a student learns this sort of English to function in the new host country, e.g., within the school system (if a child), to find and hold down a job (if an adult), or to perform the necessities of daily life (cooking, taking a cab/public transportation, or eating in a restaurant, etc.). The teaching of it does not presuppose literacy in the mother tongue. It is usually paid for by the host government to help newcomers settle into their adopted country, sometimes as part of an explicit citizenship program. It is technically possible for ESL to be taught not in the host country, but in, for example, a refugee camp, as part of a pre-departure program sponsored by the government soon to receive new potential citizens. In practice, however, this is extremely rare. Particularly in Canada and Australia, the term ESD (English as a second dialect) is used alongside ESL, usually in reference to programs for Aboriginal peoples in Canada or Australia. The term refers to the use of standard English by speakers of a creole or non-standard variety. It is often grouped with ESL as ESL/ESD. Umbrella terms All these ways of denoting the teaching of English can be bundled together into an umbrella term. Unfortunately, not all English teachers around the world would agree on a single term. The term TESOL (teaching English to speakers of other languages) is used in American English to include both TEFL and TESL. This is also the case in Canada as well as in Australia and New Zealand. 
British English uses ELT (English language teaching), because TESOL has a different, more specific meaning; see above. Systems of simplified English Several models of "simplified English" have been suggested or developed for international communication, among them: Basic English, developed by Charles Kay Ogden (and later also I. A. Richards) in the 1930s; a recent revival has been initiated by Bill Templer Threshold Level English, developed by van Ek and Alexander Globish, developed by Jean-Paul Nerrière Basic Global English, developed by Joachim Grzega Nuclear English, proposed by Randolph Quirk and Gabriele Stein but never fully developed Difficulties for learners Language teaching practice often assumes that most of the difficulties that learners face in the study of English are a consequence of the degree to which their native language differs from English (a contrastive analysis approach). A native speaker of Chinese, for example, may face many more difficulties than a native speaker of German, because German is more closely related to English than Chinese. This may be true for anyone of any mother tongue (also called the first language, normally abbreviated L1) setting out to learn any other language (called a target language, second language or L2). See also second language acquisition (SLA) for mixed evidence from linguistic research. Language learners often produce errors of syntax, vocabulary, and pronunciation thought to result from the influence of their L1, such as mapping its grammatical patterns inappropriately onto the L2, pronouncing certain sounds incorrectly or with difficulty, and confusing items of vocabulary known as false friends. This is known as L1 transfer or "language interference". However, these transfer effects are typically stronger for beginners' language production, and SLA research has highlighted many errors which cannot be attributed to the L1, as they are attested in learners of many language backgrounds (for example, failure to apply 3rd person present singular -s to verbs, as in 'he make' not 'he makes'). Some students may have problems due to the inconsistency of certain rules; some words, for example, can be a noun or a verb depending on their placement. For instance, in "I am suffering terribly", suffering is the verb, but in "My suffering is terrible", it is a noun. However, both sentences express the same idea using the same words. Other students might have problems due to the prescriptive and proscriptive nature of rules formulated by amateur grammarians, rather than rules reflecting the functional and descriptive nature of languages as evidenced from distribution. For example, a cleric, Robert Lowth, introduced the rule to never end a sentence with a preposition, inspired by Latin grammar, through his book A Short Introduction to English Grammar. The inconsistencies brought from Latin-based standardization of English led to the classifying and sub-classifying of an otherwise simple language structure. Like many alphabetic writing systems, English has incorporated the principle that graphemic units should correspond to phonemic units; however, fidelity to the principle is compromised, compared to an exemplar language like Finnish. This is evident in the Oxford English Dictionary; for many years English experimented with many spellings of 'SIGN' to attain fidelity with the said principle, among them SINE, SEGN, and SYNE, and through diachronic mutations it settled on SIGN. 
Cultural differences in communication styles and preferences are also significant. For example, a study among Chinese ESL students revealed that the absence of tense marking on verbs in the morphology of their mother tongue made it difficult for them to express time-related sentences in English. Another study looked at Chinese ESL students and British teachers and found that the Chinese learners did not see classroom 'discussion and interaction' type of communication for learning as important but placed a heavy emphasis on teacher-directed lectures. Pronunciation English contains a number of sounds and sound distinctions not present in some other languages. These sounds can include vowels and consonants, as well as diphthongs and other morphemes. Speakers of languages without these sounds may have problems both with hearing and pronouncing them. For example: The interdentals /θ/ and /ð/ (both written as th) are relatively rare in other languages. Phonemic contrast of /iː/ with /ɪ/ (beat vs bit vowels), of /uː/ with /ʊ/ (fool vs full vowels), and of /ɛ/ with /æ/ (bet vs bat vowels) is rare outside northwestern Europe, so unusual mergers or exotic pronunciations of words such as bit may arise. Note that [bɪt] is a pronunciation often used in England and Wales for bet, and also in some dialects of American English. See Northern cities vowel shift, and Pin-pen merger. Native speakers of Japanese, Korean, and most Chinese dialects have difficulty distinguishing /l/ and /r/, also present for speakers of some Caribbean Spanish dialects (only at the end of syllables), which is known as lambdacism, one form of lallation. Native speakers of Brazilian Portuguese, Spanish or Galician, and Ukrainian may pronounce [h]-like sounds where an /r/, /s/, or /ɡ/, respectively, would be expected, as those sounds often or almost always follow this process in their native languages, which is known as debuccalization. Native speakers of Arabic, Tagalog, Japanese, Korean, and important dialects of all current Iberian Romance languages (including most of Spanish) have difficulty distinguishing /b/ and /v/, which is known as betacism. Native speakers of almost all of Brazilian Portuguese, of some African Portuguese registers, of Portuguese-derived creole languages, some dialects of Swiss German, and several specific processes in several Slavic languages, such as Bulgarian and Ukrainian, and many dialects of other languages, have instances of /l/ sometimes or always becoming a [w]-like sound at the end of a syllable in a given context, so that milk may be pronounced with a final [w] in place of the [l]. This is present in some English registers—known as l-vocalization—but may be shunned as substandard or bring confusion in others. Native speakers of many widely spoken languages (including Dutch and all the Romance ones) distinguish voiceless stop pairs from their voiced counterparts merely by their sound (and in Iberian Romance languages, the latter trio does not even need to be stopped, so its native speakers unconsciously pronounce them as [β], [ð], and [ɣ] – voiced fricatives or approximants in the very same mouth positions – instead much or most of the time, which native English speakers may erroneously interpret as other consonants of their language). In English, German, Danish, and some other languages, though, the main distinguishing feature of initial or stressed voiceless stops as against their voiced counterparts is that they are aspirated (unless immediately preceded or followed by /s/), while the voiced ones are not. 
As a result, non-English unaspirated voiceless stops may sound to native English ears like their voiced counterparts (i.e. parking may sound more like barking). Ukrainian, Turkish and Azeri speakers may have trouble distinguishing between /v/ and /w/ as both pronunciations are used interchangeably for the letter v in those languages. Languages may also differ in syllable structure; English allows for a cluster of up to three consonants before the vowel and five after it (e.g. strengths, straw, desks, glimpsed, sixths). Japanese and Brazilian Portuguese, for example, broadly alternate consonant and vowel sounds, so learners from Japan and Brazil often force vowels between the consonants (e.g. pronouncing desks and milk shake with extra epenthetic vowels). Similarly, in most Iberian dialects, a word can begin with [s], and [s] can be followed by a consonant, but a word can never begin with [s] immediately followed by a consonant, so learners whose mother tongue is in this language family often have a vowel in front of the word (e.g. school is pronounced with an initial vowel, roughly eschool, by native speakers of Spanish, Brazilian and European Portuguese, and Catalan). Grammar Tense, aspect, and mood – English has a relatively large number of tense–aspect–mood forms with some quite subtle differences, such as the difference between the simple past "I ate" and the present perfect "I have eaten". Progressive and perfect progressive forms add complexity. (See English verbs.) Functions of auxiliaries – Learners of English tend to find it difficult to manipulate the various ways in which English uses auxiliary verbs. These include negation (e.g. "He hasn't been drinking."), inversion with the subject to form a question (e.g. Has he been drinking?), short answers (e.g. Yes, he has.) and tag questions (has he?). A further complication is that the dummy auxiliary verb do/does/did is added to fulfil these functions in the simple present and simple past, but not to replace the verb to be (He drinks too much./Does he? but He is an addict/Is he?). Modal verbs – English has several modal auxiliary verbs, each with a number of uses. These verbs convey a special sense or mood such as obligation, necessity, ability, probability, permission, possibility, prohibition, or intention. These include "must", "can", "have to", "need to", "will", "shall", "ought to", "will have to", "may", and "might". For example, the opposite of "You must be here at 8" (obligation) is usually "You don't have to be here at 8" (lack of obligation, choice). "Must" in "You must not drink the water" (prohibition) has a different meaning from "must" in "You must have eaten the chocolate" (deduction). This complexity takes considerable work for most English language learners to master. All these modal verbs or "modals" take the base (first) form of the verb after them. Most of these modals do not have past or future inflection, i.e. they do not have past or future tense (exceptions being have to and need to). Idiomatic usage – English is reputed to have a relatively high degree of idiomatic usage. For example, the use of different main verb forms in such apparently parallel constructions as "try to learn", "help learn", and "avoid learning" poses difficulty for learners. Another example is the idiomatic distinction between "make" and "do": "make a mistake", not "do a mistake"; and "do a favor", not "make a favor". Articles – English has two forms of article: the (the definite article) and a and an (the indefinite article). 
In addition, at times English nouns can or indeed must be used without an article; this is called the zero article. Some of the differences between definite, indefinite, and zero articles are fairly easy to learn, but others are not, particularly since a learner's native language may lack articles, have only one form, or use them differently from English. Although the information conveyed by articles is rarely essential for communication, English uses them frequently (several times in the average sentence), so they require some effort from the learner. Vocabulary Phrasal verbs – Phrasal verbs (also known as multiple-word verbs) in English can cause difficulties for many learners because of their syntactic pattern and because they often have several meanings. There are also a number of phrasal verb differences between American and British English. Prepositions – As with many other languages, the correct use of prepositions in the English language is difficult to learn, and it can turn out to be quite a frustrating learning experience for ESL/EFL learners. For example, the prepositions on (rely on, fall on), of (think of, because of, in the vicinity of), and at (turn at, meet at, start at) are used in so many different ways and contexts that it is very difficult to remember the exact meaning for each one. Furthermore, the same words are often used as adverbs (come in, press on, listen in, step in), as part of a compound verb (make up, give up, get up, give in, turn in, put on), or in more than one way with different functions and meanings (look up, look on, give in) (He looked up her skirt/He looked up the spelling/Things are looking up/When you're in town, look me up!; He gave in his homework/First he refused but then he gave in; He got up at 6 o'clock/He got up the hill/He got up a nativity play). Also, some languages, such as Spanish, have single prepositions that can correspond to multiple English prepositions (e.g. en in Spanish can mean on, in, or at). When translating back to the ESL learners' respective L1, a particular preposition's translation may be correct in one instance, but when using the preposition in another sense, the meaning is sometimes quite different. "One of my friends" translates to (transliterated) wahed min isdiqa'i in Arabic. Min is the Arabic word for "from", so it means one "from" my friends. "I am on page 5" translates to ich bin auf Seite 5 in German just fine, but in Arabic it is Ana fee safha raqm 5 (I am "in" page 5). Word formation – Word formation in English requires much rote learning. For example, an adjective can be negated by using the prefixes un- (e.g. unable), in- (e.g. inappropriate), dis- (e.g. dishonest), non- (e.g. non-standard) or a- (e.g. amoral), as well as several rarer prefixes. Size of lexicon – The history of English has resulted in a very large vocabulary, including one stream from Old English and one from the Norman infusion of Latin-derived terms. (Schmitt & Marsden claim that English has one of the largest vocabularies of any known language.) One estimate of the lexicon puts English at around 250,000 unique words. This requires more work for a learner to master the language. Collocations – Collocation in English is the tendency for words to occur together with others. For example, nouns and verbs that go together ("ride a bike" or "drive a car"). Native speakers tend to use chunks of collocations, while ESL learners often make mistakes with collocations. 
Slang and colloquialisms – In most native English-speaking countries, many slang and colloquial terms are used in everyday speech. Many learners may find that classroom-based English is significantly different from how English is usually spoken in practice. This can often be difficult and confusing for learners with little experience of using English in Anglophone countries. Also, slang terms differ greatly between different regions and can change quickly in response to popular culture. Some phrases can become unintentionally rude if misused. Silent letters – Within English, almost every letter has the 'opportunity' to be silent in a word, except F, J, Q, R, V, and Y. The most common is e, usually at the end of the word and used to elongate the previous vowel(s). The common usage of silent letters can throw off how ESL learners interpret the language (especially those who are fluent in a Germanic language), since a common step to learning words in most languages is to pronounce them phonetically. Words such as queue, colonel, knight and Wednesday tend to throw off the learner, since they contain large numbers of silent letters. First-language literacy Learners who have had less than eight years of formal education in their first language are sometimes called adult ESL literacy learners. Usually, these learners have had their first-language education interrupted. Many of these learners require a different level of support, teaching approaches and strategies, and a different curriculum from mainstream adult ESL learners. For example, these learners may lack study skills and transferable language skills (Bigelow, M., & Schwarz, R. L. (2010). Adult English Language Learners with Limited Literacy. National Institute for Literacy, pp. 5, 13), and they may avoid reading or writing. Often these learners do not start classroom tasks immediately, do not ask for help, and often assume the novice role when working with peers. Generally, these learners may lack self-confidence. For some, prior schooling is equated with status and being cultured, civilized, and high class, and they may experience shame among peers in their new ESL classes (Bigelow & Schwarz, 2010, p. 13). Second-language literacy Learners who have not had extensive exposure to reading and writing in a second language, despite having acceptable spoken proficiency, may have difficulties with reading and writing in their L2. Joann Crandall (1993) has pointed out that most teacher training programs for TESOL instructors include little or, in most cases, no training in literacy instruction. This is a gap that many scholars feel needs to be addressed. Social and academic language acquisition Basic interpersonal communication skills (BICS) are language skills needed in social situations. These language skills usually develop within six months to two years. Cognitive academic language proficiency (CALP) refers to the language associated with formal content material and academic learning. These skills usually take from five to seven years to develop. 
Importance of reading in ESL instruction According to some English professionals, reading for pleasure is an important component in the teaching of both native and foreign languages: "Studies that sought to improve writing by providing reading experiences in place of grammar study or additional writing practice found that these experiences were as beneficial as, or more beneficial than, grammar study or extra writing practice." Differences between spoken and written English As with most languages, written language tends to use a more formal register than spoken language. Spelling and pronunciation: probably the biggest difficulty for non-native speakers, since the relation between English spelling and pronunciation does not follow the alphabetic principle consistently. Because of the many changes in pronunciation which have occurred since a written standard developed, the retention of many historical idiosyncrasies in spelling, and the large influx of foreign words (mainly from Norman French, Classical Latin and Greek) with different and overlapping spelling patterns, English spelling and pronunciation are difficult even for native speakers to master. This difficulty is shown in such activities as spelling bees. The generalizations that exist are quite complex and there are many exceptions, leading to a considerable amount of rote learning. The spelling and pronunciation system causes problems in both directions: a learner may know a word by sound but be unable to write it correctly (or indeed find it in a dictionary), or they may see a word written but not know how to pronounce it or mislearn the pronunciation. However, despite the variety of spelling patterns in English, there are dozens of rules that are 75% or more reliable. There is also debate about "meaning-focused" learning and "correction-focused" learning. Supporters of the former think that using speech as the way to explain meaning is more important. However, supporters of the latter do not agree and instead think that grammar and correct habits are more important. Technology Technology plays an integral part in our lives and has become a major instrument in the field of education. Educational technologies make the learning and teaching of the English language more convenient and enable new opportunities. Computers have made an entry into education in the past decades and have brought significant benefits to teachers and students alike. Computers help learners by making them more responsible for their own learning. Studies have shown that one of the best ways of improving one's learning ability is to use a computer where all the information one might need can be found. In today's developed world, a computer is one of a number of systems that help learners to improve their language. Computer Assisted Language Learning (CALL) is a system which aids learners to improve and practice language skills. It provides a stress-free environment for learners and makes them more responsible. Computers can provide help to ESL learners in many different ways, such as teaching students to learn a new language. The computer can be used to test students about the language they already learn. It can assist them in practicing certain tasks. The computer permits students to communicate easily with other students in different places. 
In recent years the increasing use of mobile technology, such as smartphones and tablet computers, has led to a growing use of applications created to facilitate language learning, such as The Phrasal Verbs Machine from Cambridge. In terms of online materials, there are many forms, such as blogs, wikis, and WebQuests. For instance, blogs can allow English learners to voice their opinions, sharpen their writing skills, and build their confidence. However, some who are introverted may not feel comfortable sharing their ideas on the blog. Class wikis can be used to promote collaborative learning through sharing and co-constructing knowledge. Online materials are still just materials and thus need to be subject to the same scrutiny of evaluation as any other language material or source. Augmented reality (AR) is another emerging technology that has an important place in language education. It allows for the merging of virtual objects into the real world, as if they co-exist in the same time and place. Research has shown eight benefits of AR in the educational setting: 1. Collaboration; 2. Connectivity; 3. Student-centredness; 4. Community; 5. Exploration; 6. Shared knowledge; 7. Multisensory experience; 8. Authenticity. Learners have mentioned that AR increased classroom engagement and student motivation. Two applications that have been tested in the ESL setting are QuiverVision and JigSpace. QuiverVision offers colouring pages that can be brought to life using Android or iOS devices. JigSpace can be a helpful resource for ESL students learning complex scientific, technical and historical concepts. The increasingly social nature of the internet has opened up new opportunities for language learners and educators. Videos, memes and chats are all sources of authentic language that are easily accessible via mobile devices or computers. An additional benefit for English language learners is that non-textual representations can be more beneficial for students with various learning preferences. The integration of games and gaming into language learning has recently received a surge of interest. There are games that have been specifically designed for English language learning, while there are others that can be adapted to this context. Games to Learn English includes multiple games that can be played to develop language skills. Trace Effects is a game developed by the U.S. Department of State which helps learners not only increase their language knowledge but also explore American culture. The most important features of gaming are its collaborative and interactive nature, which makes learning engaging for learners. Language learning can also be made more reliable with the use of a dictionary. Learners tend to carry or are required to have a dictionary, which allows them to learn independently and become more responsible for their own work. In these modern days, education has upgraded its methods of teaching and learning with dictionaries, with digital materials increasingly applied as tools. Electronic dictionaries are increasingly a more common choice for ESL students. Most of them contain native-language equivalents and explanations, as well as definitions and example sentences in English. They can speak the English word to the learner, and they are easy to carry around. However, they are expensive and easy to lose, so students are often instructed to put their names on them. 
Varieties of English The English language in England (and other parts of the United Kingdom) exhibits significant differences by region and class, noticeable in structure (vocabulary and grammar), accent (pronunciation) and in dialect. The numerous communities of English native speakers in countries all over the world also have some noticeable differences, like Irish English, Australian English, Canadian English, Newfoundland English, etc. For instance, the following are words that only carry meaning in their originating culture: Toad in the hole, Gulab jamun, Spotted Dick, etc. Attempts have been made by John Dryden and others to regulate English toward the inclinations of a class or the specific style of a community. Auspiciously, English as a lingua franca is not racialized and has no proscribing organization that controls any prestige dialect for the language – unlike the French Académie française, Spain's Real Academia Española, or Esperanto's Akademio. Teaching English, therefore, involves not only helping the student to use the form of English most suitable for their purposes, but also exposure to regional forms and cultural styles so that the student will be able to discern meaning even when the words, grammar, or pronunciation are different from the form of English they are being taught to speak. Some professionals in the field have recommended incorporating information about non-standard forms of English in ESL programs. For example, in advocating for classroom-based instruction in African-American English (also known as Ebonics), linguist Richard McDorman has argued, "Simply put, the ESL syllabus must break free of the longstanding intellectual imperiousness of the standard to embrace instruction that encompasses the many "Englishes" that learners will encounter and thereby achieve the culturally responsive pedagogy so often advocated by leaders in the field." Social challenges and benefits Class placement ESL students often suffer from the effects of tracking and ability grouping. Students are often placed into low ability groups based on scores on standardized tests in English and math. There is also low mobility among these students from low to high performing groups, which can prevent them from achieving the same academic progress as native speakers. Similar tests are also used to place ESL students in college-level courses. Students have voiced frustration that only non-native students have to prove their language skills, when being a native speaker in no way guarantees college-level academic literacy. Studies have shown that these tests can cause different passing rates among linguistic groups regardless of high school preparation. Dropout rates Dropout rates for ESL students in multiple countries are much higher than dropout rates for native speakers. The National Center for Education Statistics (NCES) in the United States reported that the percentage of dropouts in the non-native born Hispanic youth population between the ages of 16 and 24 years old is 43.4%. A study in Canada found that the high school dropout rate for all ESL students was 74%. High dropout rates are thought to be due to difficulties ESL students have in keeping up in mainstream classes, the increasing number of ESL students who enter middle or high school with interrupted prior formal education, and accountability systems. The accountability system in the US is due to the No Child Left Behind Act. 
Schools that risk losing funding, closing, or having their principals fired if test scores are not high enough begin to view students that do not perform well on standardized tests as liabilities. Because dropouts actually increase a school's performance, critics claim that administrators let poor performing students slip through the cracks. A study of Texas schools operating under No Child Left Behind found that 80% of ESL students did not graduate from high school in five years. Access to higher education ESL students face several barriers to higher education. Most colleges and universities require four years of English in high school. In addition, most colleges and universities only accept one year of ESL English. It is difficult for ESL students who arrive in the United States relatively late to finish this requirement because they must spend a longer time in ESL English classes in high school, or they might not arrive early enough to complete four years of English in high school. This results in many ESL students not having the correct credits to apply for college, or enrolling in summer school to finish the required courses. ESL students can also face additional financial barriers to higher education because of their language skills. Those who don't place high enough on college placement exams often have to enroll in ESL courses at their universities. These courses can cost up to $1,000 extra, and can be offered without credit towards graduation. This places additional financial stress on ESL students who often come from families of lower socioeconomic status. The latest statistics show that the median household income for school-age ESL students is $36,691 while that of non-ESL students is $60,280. College tuition has risen sharply in the last decade, while family income has fallen. In addition, while many ESL students receive a Pell Grant, the maximum grant for the year 2011–2012 covered only about a third of the cost of college. Interaction with native speakers ESL students often have difficulty interacting with native speakers in school. Some ESL students avoid interactions with native speakers because of their frustration or embarrassment at their poor English. Immigrant students often also lack knowledge of popular culture, which limits their conversations with native speakers to academic topics. In classroom group activities with native speakers, ESL students often do not participate, again because of embarrassment about their English, but also because of cultural differences: their native cultures may value silence and individual work at school in preference to social interaction and talking in class. These interactions have been found to extend to teacher-student interactions as well. In most mainstream classrooms, a teacher-led discussion is the most common form of lesson. In this setting, some ESL students will fail to participate, and often have difficulty understanding teachers because they talk too fast, do not use visual aids, or use native colloquialisms. ESL students also have trouble getting involved with extracurricular activities with native speakers for similar reasons. Students fail to join extra-curricular activities because of the language barrier, the cultural emphasis on academics over other activities, or failure to understand traditional pastimes in their new country. Social benefits Supporters of ESL programs claim they play an important role in the formation of peer networks and adjustment to school and society in their new homes. 
Having class among other students learning English as a second language relieves the pressure of making mistakes when speaking in class or to peers. ESL programs also allow students to be among others who appreciate their native language and culture, the expression of which is often not supported or encouraged in mainstream settings. ESL programs also allow students to meet and form friendships with other non-native speakers from different cultures, promoting racial tolerance and multiculturalism. Controversy over ethical administration of ESL programs ESL programs have been critiqued for focusing more on revenue-generation than on educating students (Friesen, N., & Keeney, P. (2013). Internationalizing the Canadian campus: ESL students and the erosion of higher education. University Affairs. Retrieved from https://www.universityaffairs.ca/opinion/in-my-opinion/internationalizing-the-canadian-campus/). This has led to controversy over how ESL programs can be managed in an ethical manner. Professional and Technical Communication Advocacy The field of technical and professional communication has the potential to disrupt barriers that hinder ESL learners from entering the field, although it can just as easily perpetuate these issues. One study by Matsuda & Matsuda sought to evaluate introductory-level textbooks on the subject of technical communication. Among their findings, they noted that these textbooks perpetuated the “myth of linguistic homogeneity—the tacit and widespread acceptance of the dominant image of composition students as native speakers of a privileged variety of English.” While the textbooks were successful in referencing global and international perspectives, the portrayal of the intended audience, the you of the text, ultimately alienated any individual not belonging to a predominantly white background and culture. In constructing this guise, prospective ESL learners are collectively lumped into an “other” group that isolates and undermines their capacity to enter the field. Furthermore, this alienation is exacerbated by the emergence of English as the pinnacle language for business and many professional realms. In Kwon & Klassen's research, they also identified and criticized a “single native-speaker recipe for linguistic success,” which contributed to anxieties about entering the professional field for ESL technical communicators. These concerns about an English-dominated professional field indicate an affective filter that provides a further barrier to social justice for these ESL individuals. These misconceptions and anxieties point towards an issue of exclusivity that technical and professional communicators must address. This social justice concern becomes an ethical concern as well, with all individuals deserving usable, accessible, and inclusive information. There is a major concern about ESL learners' lack of access to translation services and about the amount of time and attention their English proficiency is given throughout their educational experiences. If a student lacks an understanding of the English language and still needs to participate in their coursework, they will turn to translations in order to aid their efforts. The issue is that these translations rarely carry the same meaning as the original text. 
The students in this study said that a translated text is “pretty outdated, covers only the basics or is terribly translated,” and that “The technical vocabulary linked to programming can be complicated to assimilate, especially in the middle of explanatory sentences if you don’t know the equivalent word in your native language.” Students cannot become proficient in their given subjects if the language barrier is complicating the message. Researchers found that syntax, semantics, style, etc., scramble up the original messages. This disorientation of the text fogs up the message and makes it difficult for the student to decipher what they are supposed to be learning. This is where additional time and attention are needed to bridge the gap between native English speakers and ESL students. ESL students face difficulties in areas concerning the lexico-grammatical aspects of technical writing, overall textual organization and comprehension, differentiation between genres of technical communication, and the social hierarchies that concern the subject matter. This inhibits their ability to comprehend complex messages from English texts, and it would be more beneficial for them to tackle these subjects individually. The primary issue with this is the accessibility of more instruction. ESL students need an individual analysis of their needs, and this needs to revolve around the student's ability to communicate and interpret information in English. Due to the civil rights decision of Lau v. Nichols, school districts are required to provide this additional instruction based on the needs of students, but this requirement still needs to be acted on. Many ESL students have issues in higher-level courses that hinder their academic performance because the language used in these courses is at a more complex level than what many ESL students were taught. In many cases of ESL students learning computer programming, they struggle with the language used in instructional manuals. Writing media centers have also caused issues for ESL students, as universities are often unable to provide proofreading in their writing media center programs. This causes many ESL students to have difficulties writing papers for high-level courses that require a more complex lexicon than what many of them were taught. Fortunately, university tutors have had success with teaching ESL students the more technically complex language that they need for their courses, but it raises the question of whether ESL learners need to know a more complex version of the English language to succeed in their professional careers. Peer tutoring for ESL students Peer tutoring refers to an instructional method that pairs up low-achieving English readers with ESL students who know minimal English and who are also approximately the same age and grade level. The goal of this dynamic is to help both the tutor, in this case the English speaker, and the tutee, the ESL student. Monolingual tutors are given the class material in order to provide tutoring to their assigned ESL tutee. Once the tutor has had the chance to help the student, classmates switch roles in order to give both peers an opportunity to learn from each other. In a study which conducted similar research, results indicated that low-achieving readers who were chosen as tutors made a lot of progress by using this procedure. 
In addition, ESL students were also able to improve their grades because they improved their reading acquisition skills. Importance Since there is not enough funding to afford tutors, and teachers find it hard to educate all students who have different learning abilities, it is highly important to implement peer-tutoring programs in schools. Students placed in an ESL program learn together along with other non-English speakers; however, using peer tutoring in a classroom avoids the separation between regular English classes and ESL classes. These programs will promote community between students that will be helping each other grow academically. To further support this statement, a study researched the effectiveness of peer tutoring and explicit teaching in classrooms. It was found that students with learning disabilities and low-performing students who are exposed to explicit teaching and peer tutoring in the classroom have better academic performance than those students who do not receive this type of assistance. It was proven that peer tutoring is the most effective, no-cost form of teaching. Benefits It has been proven that peer-mediated tutoring is an effective tool to help ESL students succeed academically. Peer tutoring has been utilized across many different academic courses, and the outcomes for those students that have different learning abilities are outstanding. Classmates who were actively involved with other peers in tutoring had better academic standing than those students who were not part of the tutoring program. Based on their results, researchers found that all English language learners were able to maintain a high percentage of English academic words, taught during tutoring sessions, on weekly tests. It was also found in the literature that peer tutoring combined with regular classroom teaching is an effective methodology that benefits the students, teachers, and parents involved. Research on peer English immersion tutoring Similarly, a longitudinal study was conducted to examine the effects of a paired bilingual program and an English-only reading program with Spanish-speaking English learners in order to increase students’ English reading outcomes. Students whose primary language was Spanish and who were part of the ESL program were participants in this study. Three different approaches were the focus: immersing students in English from the very beginning and teaching them to read only in that language; teaching students in Spanish first, followed by English; and teaching students to read in Spanish and English simultaneously. This occurs through a strategic approach such as structured English immersion or sheltered instruction. Findings showed that the paired bilingual reading approach appeared to work as well as, or better than, the English-only reading approach in terms of reading growth and results. Researchers found differences in results, which also varied depending on the students' learning abilities and academic performance. ESL teachers' training Teachers in an ESL class are specifically trained in particular techniques and tools to help students learn English. Research says that the quality of their teaching methods is what matters the most when it comes to educating English learners. It was also mentioned how highly important it is for teachers to have the drive to help these students succeed and "feel personal responsibility." 
It is important to highlight the idea that the school system needs to focus on school-wide interventions in order to make an impact and be able to help all English learners. There is a high need for comprehensive professional development for teachers in the ESL program. Effects of peer tutoring on the achievement gap Although peer tutoring has been proven to be an effective way of learning that engages and promotes academic achievement in students, does it have an effect on the achievement gap? There is a large academic performance disparity between White, Black, and Latino students, and it continues to be an issue that has to be targeted. In an article, it was mentioned that no one has been able to identify the true factors that cause this discrepancy. However, it was suggested that developing effective peer tutoring programs in schools could be a factor that can potentially decrease the achievement gap in the United States. Exams for learners Learners of English are often eager to get accreditation and a number of exams are known internationally: IELTS (International English Language Testing System) is the world's most popular English test for higher education and immigration. It is managed by the British Council, Cambridge Assessment English and IDP Education. It is offered in Academic, General and Life Skills versions. IELTS Academic is the normal test of English proficiency for entry into universities in the UK, Australia, Canada, and other British English countries. IELTS General is required for immigration into Australia and New Zealand. Both versions of IELTS are accepted for all classes of UK visa and immigration applications. IELTS Life Skills was introduced in 2015 specifically to meet the requirements for some classes of UK visa application. CaMLA, a collaboration between the University of Michigan and Cambridge English Language Assessment, offers a suite of American English tests, including the MET (Michigan English Test), the MTELP Series (Michigan Test of English Language Proficiency), MELAB (Michigan English Language Assessment Battery), CaMLA EPT (English Placement Test), YLTE (Young Learners Test of English), ECCE and ECPE. TOEFL (Test of English as a Foreign Language), an Educational Testing Service product, developed and used primarily for academic institutions in the US, and now widely accepted in tertiary institutions in Canada, New Zealand, Australia, the UK, Japan, South Korea, and Ireland. The current test is an Internet-based test and is thus known as the TOEFL iBT. Used as a proxy for English for Academic Purposes. iTEP (International Test of English Proficiency), developed by former ELS Language Centers President Perry Akins' Boston Educational Services, and used by colleges and universities such as the California State University system. iTEP Business is used by companies, organizations, and governments, and iTEP SLATE (Secondary Level Assessment Test of English) is designed for middle and high school-age students. PTE Academic (Pearson Test of English Academic), a Pearson product, measures reading, writing, speaking and listening as well as grammar, oral fluency, pronunciation, spelling, vocabulary and written discourse. The test is computer-based and is designed to reflect international English for academic admission into any university requiring English proficiency. TOEIC (Test of English for International Communication), an Educational Testing Service product for Business English used by 10,000 organizations in 120 countries. 
Includes a listening and reading test as well as a speaking and writing test introduced in selected countries beginning in 2006. Trinity College London ESOL offers the Integrated Skills in English (ISE) series of 5 exams which assesses reading, writing, speaking and listening and is accepted by academic institutions in the UK. They also offer Graded Examinations in Spoken English (GESE), a series of 12 exams, which assesses speaking and listening, and ESOL Skills for Life and ESOL for Work exams in the UK only. Cambridge Assessment English offers a suite of globally available examinations including General English: Key English Test (KET), Preliminary English Test (PET), First Certificate in English (FCE), Certificate in Advanced English (CAE) and Certificate of Proficiency in English (CPE). London Tests of English from Pearson Language Tests, a series of six exams each mapped to a level from the Common European Framework (CEFR) – see below. Secondary Level English Proficiency test MTELP (Michigan Test of English Language Proficiency), is a language certificate measuring a student's English ability as a second or foreign language. Its primary purpose is to assess a learner's English language ability at an academic or advanced business level. Many countries also have their own exams. ESOL learners in England, Wales, and Northern Ireland usually take the national Skills for Life qualifications, which are offered by several exam boards. EFL learners in China may take the College English Test, the Test for English Majors (TEM), and/or the Public English Test System (PETS). People in Taiwan often take the General English Proficiency Test (GEPT). In Greece, English students may take the PALSO (PanHellenic Association of Language School Owners) exams. The Common European Framework Between 1998 and 2000, the Council of Europe's language policy division developed its Common European Framework of Reference for Languages. The aim of this framework was to have a common system for foreign language testing and certification, to cover all European languages and countries. The Common European Framework (CEF) divides language learners into three levels: A. Basic User B. Independent User C. Proficient User Each of these levels is divided into two sections, resulting in a total of six levels for testing (A1, A2, B1, etc.). This table compares ELT exams according to the CEF levels: Qualifications for teachers Qualifications vary from one region or jurisdiction to the next. There are also different qualifications for those who manage or direct TESOL programsBailey, K. M., & Llamas, C. N. (2012). Language program administrators’ knowledge and skills. In M. Christison & F. L. Stoller (Eds.), Handbook for language program administrators (2nd. ed., pp. 19-34). Burlingame, CA: Alta Book Center Publishers. Non-native speakers Most people who teach English are in fact not native speakers. They are state school teachers in countries around the world, and as such, they hold the relevant teaching qualification of their country, usually with a specialization in teaching English. For example, teachers in Hong Kong hold the Language Proficiency Assessment for Teachers. Those who work in private language schools may, from commercial pressures, have the same qualifications as native speakers (see below). Widespread problems exist of minimal qualifications and poor quality providers of training, and as the industry becomes more professional, it is trying to self-regulate to eliminate these. 
Australian qualifications The Australian Skills Quality Authority accredits vocational TESOL qualifications such as the 10695NAT Certificate IV in TESOL and the 10688NAT Diploma in TESOL. As ASQA is an Australian Government accreditation authority, these qualifications rank within the Australian Qualifications Framework. And most graduates work in vocational colleges in Australia. These TESOL qualifications are also accepted internationally and recognized in countries such as Japan, South Korea, and China. British qualifications Common, respected qualifications for teachers within the United Kingdom's sphere of influence include certificates and diplomas issued by Trinity College London ESOL and Cambridge English Language Assessment (henceforth Trinity and Cambridge). A certificate course is usually undertaken before starting to teach. This is sufficient for most EFL jobs and for some ESOL ones. CertTESOL (Certificate in Teaching English to Speakers of Other Languages), issued by Trinity, and CELTA (Certificate in English Language Teaching to Adults), issued by Cambridge, are the most widely taken and accepted qualifications for new teacher trainees. Courses are offered in the UK and in many countries around the world. It is usually taught full-time over a one-month period or part-time over a period of up to a year. Teachers with two or more years of teaching experience who want to stay in the profession and advance their career prospects (including school management and teacher training) can take a diploma course. Trinity offers the Trinity Licentiate Diploma in Teaching English to Speakers of Other Languages (DipTESOL) and Cambridge offers the Diploma in English Language Teaching to Adults (DELTA). These diplomas are considered to be equivalent and are both accredited at level 7 of the revised National Qualifications Framework. Some teachers who stay in the profession go on to do an MA in a relevant discipline such as applied linguistics or ELT. Many UK master's degrees require considerable experience in the field before a candidate is accepted onto the course. The above qualifications are well-respected within the UK EFL sector, including private language schools and higher education language provision. However, in England and Wales, in order to meet the government's criteria for being a qualified teacher of ESOL in the Learning and Skills Sector (i.e. post-compulsory or further education), teachers need to have the Certificate in Further Education Teaching Stage 3 at level 5 (of the revised NQF) and the Certificate for ESOL Subject Specialists at level 4. Recognised qualifications which confer one or both of these include a Postgraduate Certificate in Education (PGCE) in ESOL, the CELTA module 2, and City & Guilds 9488. Teachers of any subject within the British state sector are normally expected to hold a PGCE and may choose to specialise in ELT. Canadian qualifications Teachers teaching adult ESL in Canada in the federally funded Language Instruction to Newcomers (LINC) program must be TESL certified. Most employers in Ontario encourage certification by TESL Ontario. Often this requires completing an eight-month graduate certificate program at an accredited university or college. See the TESL Ontario or TESL Canada websites for more information. United States qualifications Some U.S. instructors at community colleges, private language schools and universities qualify to teach English to adult non-native speakers by completing a Master of Arts (MA) in TESOL. 
Other degrees may be a Master in Adult Education and Training or Applied Linguistics. This degree also qualifies them to teach in most EFL contexts. There are also a growing number of online programs offering TESOL degrees. In fact, "the growth of Online Language Teacher Education (OLTE) programs from the mid-1990s to 2009 was from 20 to more than 120". In many areas of the United States, a growing number of K-12 public school teachers are involved in teaching ELLs (English Language Learners, that is, children who come to school speaking a home language other than English). The qualifications for these classroom teachers vary from state to state but always include a state-issued teaching certificate for public instruction. This state licensing requires substantial practical experience as well as course work. In some states, an additional specialization in ESL/ELL is required. This may be called an "endorsement". Endorsement programs may be part of a graduate program or maybe completed independently to add the endorsement to the initial teaching certificate An MA in TESOL may or may not meet individual state requirements for K-12 public school teachers. It is important to determine if a graduate program is designed to prepare teachers for adult education or K-12 education. The MA in TESOL typically includes second language acquisition theory, linguistics, pedagogy, and an internship. A program will also likely have specific classes on skills such as reading, writing, pronunciation, and grammar. Admission requirements vary and may or may not require a background in education and/or language. Many graduate students also participate in teaching practica or clinicals, which provide the opportunity to gain experience in classrooms. In addition to traditional classroom teaching methods, speech pathologists, linguists, actors, and voice professionals are actively involved in teaching pronunciation of American English—called accent improvement, accent modification, and accent reduction—and serve as resources for other aspects of spoken English, such as word choice. It is important to note that the issuance of a teaching certificate or license for K-12 teachers is not automatic following completion of degree requirements. All teachers must complete a battery of exams (typically the Praxis test or a specific state test subject and method exams or similar, state-sponsored exams) as well as supervised instruction as student teachers. Often, ESL certification can be obtained through extra college coursework. ESL certifications are usually only valid when paired with an already existing teaching certificate. Certification requirements for ESL teachers vary greatly from state to state; out-of-state teaching certificates are recognized if the two states have a reciprocity agreement. The following document states the qualifications for an ESL certificate in the state of Pennsylvania. Chile qualifications Native speakers will often be able to find work as an English teacher in Chile without an ESL teaching certificate. However, many private institutes give preference to teachers with a TEFL, CELTA, or TESOL certificate. The Chilean Ministry of Education also sponsors the English Opens Doors program, which recruits native English speakers to come work as teaching assistants in Chilean public schools. English Opens Doors requires only a bachelor's degree in order to be considered for acceptance. 
United Arab Emirates qualifications Native speakers must possess teacher certification in their home country in order to teach English as a foreign language in most institutions and schools in United Arab Emirates (UAE). Otherwise, CELTA/TESOL/TEFL/ Certificate or the like is required along with prior teaching experience. Professional associations and unions TESOL International Association (TESOL) is a professional organization based in the United States. In addition, TESOL International Association has more than 100 statewide and regional affiliates in the United States and around the world, see below. The International Association of Teachers of English as a Foreign Language (IATEFL) is a professional organization based in the United Kingdom. Professional organizations for teachers of English exist at national levels. Many contain phrases in their title such as the Japan Association for Language Teaching (JALT), TESOL Greece in Greece, or the Society of Pakistan English Language Teachers (SPELT). Some of these organizations may be bigger in structure (supra-national, such as TESOL Arabia in the Gulf states), or smaller (limited to one city, state, or province, such as CATESOL in California). Some are affiliated with TESOL or IATEFL. The National Association for Teaching English and other Community Languages to Adults (NATECLA) which focuses on teaching ESOL in the United Kingdom. National Union of General Workers is a Japanese union which includes English teachers. University and College Union is a British trade union which includes lecturers of ELT. Acronyms and abbreviations Note that some of the terms below may be restricted to one or more countries, or may be used with different meanings in different countries, particularly the US and UK. See further discussion is Terminology, and types above. Types of English 1-to-1 - One to one lesson BE – Business English EAL – English as an additional language EAP – English for academic purposes EFL – English as a foreign language EIL – English as an international language (see main article at International English) ELF – English as a lingua franca'', a common language that is not the mother tongue of any of the participants in a discussion ELL – English language learner ELT – English language teaching ESL – English as a second language ESOL – English for speakers of other languages ESP – English for specific purposes, or English for special purposes (e.g. technical English, scientific English, English for medical professionals, English for waiters) EST – English for science and technology (e.g. technical English, scientific English) TEFL – Teaching English as a foreign language. This link is to a page about a subset of TEFL, namely travel-teaching. More generally, see the discussion in Terminology and types. TESL – Teaching English as a second language TESOL – Teaching English to speakers of other languages, or Teaching English as a second or other languages. Also the short name for TESOL International Association. TYLE – Teaching Young Learners English. Note that "Young Learners" can mean under 18, or much younger. Other abbreviations BULATS – Business Language Testing Services, a computer-based test of business English, produced by CambridgeEsol. The test also exists for French, German, and Spanish. CELT – Certificate in English Language Teaching, certified by the National Qualifications Authority of Ireland (ACELS). 
CELTA – Certificate in English Language Teaching to Adults CELTYL – Certificate in English Language Teaching to Young Learners Delta – Diploma in English Language Teaching to Adults ECPE – Examination for the Certificate of Proficiency in English IELTS – International English Language Testing System LTE – London Tests of English by Pearson Language Tests OLTE- Online Language Teacher Education TOEFL – Test of English as a Foreign Language TOEIC – Test of English for International Communication UCLES''' – University of Cambridge Local Examinations Syndicate, an exam board ELICOS - English Language Intensive Courses for Overseas Students, commonly used in Australia See also Spanish as a second or foreign language Language terminology Foreign language Glossary of language teaching terms and ideas Second language Basic English General language teaching and learning Applied linguistics Contrastive rhetoric Language education Second language acquisition English language teaching and learning Assistant Language Teacher Academic English Non-native pronunciations of English Structured English Immersion, a framework for teaching English language learners in public schools Teaching English as a foreign language (TEFL) Translanguaging Contemporary English Comparison of American and British English English language English studies International English Dictionaries and resources Advanced learner's dictionary Foreign language writing aid Statistics EF English Proficiency Index References and notes Further reading Grace Hui Chin Lin & Paul Shih Chieh Chien (2009). An Introduction to English Teaching, Germany. Harmer, J. (2007). How to Teach English (new edition). Essex, UK: Pearson Longman. Betty Schrampfer Azar & Stacy A. Hagen. Fundamentals of English Grammar, 4th edition, Allyn & Bacon. Understanding and Using English Grammar, 5th Edition by Azar and Hagen. Lightbown, P.M., & Spada, N. (2006); How Languages Are Learned (4th ed.); Oxford: Oxford University Press Brown, H. D., & Abeywickrama, P. (2010); Language Assessment (2nd ed.); Pearson Longman. Eric Henderson, The Active Reader: Strategies for Academic Reading and Writing, Third Edition. Advanced Reading Power 4 2nd edition by Mikulecky and Jeffries, Pearson Longman, 2014. Marina Rozenberg, Perspectives: Academic Reading Skills and Practice, OUP. Spack, Ruth. Guidelines: A Cross-Cultural Reading/Writing Text, New York: St. Martin's Press. Clear Speech from the Start, 2nd Edition by Judy B. Gilbert Skillful Listening & Speaking. Student's Book 3 by Mike Boyle & Ellen Kisslinger Leap High Intermediate Listening and Speaking by Dr. Ken Beatty. Pathways Listening, Speaking, and Critical Thinking by MacIntyre. Douglas, Scott R. Academic Inquiry: Writing for Post-Secondary Success. Don Mills, Ont: Oxford University Press, 2014. Joy M. Reid. The Process of Composition, Pearson Education. Leki, Ilona. Academic Writing: Exploring Processes and Strategies (2nd ed). New York: Cambridge University Press. 1998. Easy Writer – A Pocket Reference, 4th edition by Andrea A. Lunsford. Stoynoff, S. & Chapelle, C. A. (2005). ESOL tests and testing: A resource for teachers and administrators. Alexandria, VA: TESOL Publications. External links EAL Nexus – Free teaching resources Limited English Proficiency - Interagency site of the Federal Government of the United States Academic Phrasebank - University of Manchester Notes on grammar and academic writing—special series by University of Canterbury ESL.Wiki: English as a Second Language Wikibook Second or foreign language
27678217
https://en.wikipedia.org/wiki/Network%20UPS%20Tools
Network UPS Tools
Network UPS Tools (NUT) is a suite of software components designed to monitor power devices such as uninterruptible power supplies, power distribution units, solar controllers and server power supply units. Many brands and models are supported and exposed through a standardized interface and network protocol. NUT follows a three-tier model: dozens of NUT device-driver daemons communicate with power-related hardware over the selected media using vendor-specific protocols; the NUT server represents the drivers on the network (defaulting to the IANA-registered port 3493) using the standardized NUT protocol; and NUT clients (running on the same system as the server, or on remote systems) manage the power devices and query their power states and other metrics for applications ranging from historic graphing and graceful shutdowns to orchestrated power failover and VM migration. Clients maintained in the NUT codebase include upsc, upsrw and upscmd for command-line actions, upsmon for relatively simple monitoring and graceful shutdowns (taking into account the number of minimally required versus total available power source units for the current server), upssched for complex monitoring scenarios, CGI programs for a simple web interface, an X11 desktop client, as well as C and C++ libraries for third-party clients. Being a cross-platform project, NUT works on most Unix, BSD and Linux platforms and on various system architectures, from embedded systems to venerable Solaris, HP-UX and AIX servers. There were also native Windows builds based on the previous stable NUT release line, the last being 2.6.5.
History
Pavel Korensky's original work provided the inspiration for pursuing the APC Smart-UPS protocol in 1996. This is the same software from which Apcupsd was derived, according to the Debian maintainer of the latter. Russell Kroll, the original NUT author and coordinator, released the initial package, named smartupstools, in 1998. The design already provided for two daemons, upsd (which serves data) and upsmon (which protects systems), a set of drivers and examples, a number of CGI modules and client integrations, and a set of client CLI tools for interfacing the system with a specific UPS of a given model. Evgeny "Jim" Klimov, the project leader since 2020, focuses first on automated testing and quality assurance of the existing codebase to ensure minimal breakage from new contributions, on cleaning up older technical debt and inconsistencies highlighted by modern lint and coverage tools, and on issuing a long-overdue new official release. Over its two-decade history, the open-source project became the de facto standard solution for UPS monitoring: it is provided with OS distributions, embedded into many NAS solutions, some converged hypervisor set-ups and other appliances, and has enjoyed contributions and support from numerous end users as well as representatives of power hardware vendors, who provide protocol specifications, sample hardware and, in many cases, new NUT driver code and subsequent fixes based on NUT community feedback.
References
External links
Electrical device control software
Free software programmed in C
Servers (computing)
Linux software
Uninterruptible power supply
Unix software
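The NUT network protocol described above is a simple line-oriented text protocol, so a small client can be sketched with nothing more than a TCP socket. In the minimal Python sketch below, the host, port and the UPS name "myups" are assumptions for illustration (3493 is the default port), and battery.charge is just one commonly exposed variable; the exact variables available depend on the driver and hardware. This is only a sketch, not a replacement for the maintained clients such as upsc or the libupsclient library.

    import socket

    HOST, PORT = "localhost", 3493   # assumed upsd host; 3493 is the default NUT port
    UPS_NAME = "myups"               # hypothetical UPS name from ups.conf

    def nut_request(sock, line):
        """Send one NUT protocol command and return the raw response text."""
        sock.sendall((line + "\n").encode("ascii"))
        # A robust client would read until the matching "END LIST" line;
        # a single recv() is enough for this sketch.
        return sock.recv(4096).decode("ascii")

    with socket.create_connection((HOST, PORT), timeout=5) as s:
        # List the UPS units this server knows about.
        print(nut_request(s, "LIST UPS"))
        # Ask for a single variable, e.g. the battery charge percentage.
        print(nut_request(s, "GET VAR %s battery.charge" % UPS_NAME))
        nut_request(s, "LOGOUT")

A production client would typically use the upsc command-line tool or the C libupsclient library that ship with NUT rather than speaking the protocol directly.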
57257634
https://en.wikipedia.org/wiki/WireGuard
WireGuard
WireGuard is a communication protocol and free and open-source software that implements encrypted virtual private networks (VPNs). It was designed with the goals of ease of use, high-speed performance and a low attack surface, and it aims to be faster, simpler and leaner than IPsec and OpenVPN, two common tunneling protocols. The WireGuard protocol passes traffic over UDP. In March 2020, the Linux version of the software reached a stable production release and was incorporated into the Linux 5.6 kernel, and it was backported to earlier Linux kernels in some Linux distributions. The Linux kernel components are licensed under the GNU General Public License (GPL) version 2; other implementations are under GPLv2 or other free/open-source licenses.
Protocol
WireGuard uses the following:
Curve25519 for key exchange
ChaCha20 for symmetric encryption
Poly1305 for message authentication codes
SipHash for hashtable keys
BLAKE2s for cryptographic hashing
UDP-based only
In May 2019, researchers from INRIA published a machine-checked proof of the WireGuard protocol, produced using the CryptoVerif proof assistant.
Optional pre-shared symmetric key mode
WireGuard supports a pre-shared symmetric key mode, which provides an additional layer of symmetric encryption to mitigate future advances in quantum computing. The concern is that encrypted traffic could be recorded today and decrypted later, once quantum computers become capable of breaking Curve25519. Pre-shared keys are "usually troublesome from a key management perspective and might be more likely stolen", but in the shorter term, if the symmetric key is compromised, the Curve25519 keys still provide more than sufficient protection.
Networking
WireGuard uses only UDP and thus does not work in networks that block UDP traffic. Unlike alternatives such as OpenVPN, it offers no TCP fallback, a deliberate choice made to avoid the many disadvantages of TCP-over-TCP routing. WireGuard fully supports IPv6, both inside and outside of the tunnel. It supports only layer 3 for both IPv4 and IPv6 and can encapsulate v4-in-v6 and vice versa.
WireGuard supports multiple topologies:
Point-to-point
Star (server/client): a client endpoint does not have to be defined before the client starts sending data, and client endpoints can be statically predefined.
Mesh
Extensibility
WireGuard is designed to be extended by third-party programs and scripts; excluding such complex features from the minimal core codebase improves its stability and security. To ensure security, WireGuard restricts the options for implementing cryptographic controls, limits the choices for key exchange processes, and maps algorithms to a small subset of modern cryptographic primitives. If a flaw is found in any of the primitives, a new version can be released that resolves the issue. In addition, configuration settings that affect the security of the overall application cannot be modified by unprivileged users.
Reception
WireGuard aims to provide a simple and effective virtual private network implementation. A 2018 review by Ars Technica observed that popular VPN technologies such as OpenVPN and IPsec are often complex to set up, disconnect easily (in the absence of further configuration), take substantial time to negotiate reconnections, may use outdated ciphers, and have relatively massive code bases of over 400,000 and 600,000 lines of code, respectively, which hinders debugging. WireGuard's design seeks to reduce these issues, aiming to make the tunnel more secure and easier to manage by default.
By using versioning of cryptography packages, it focuses on ciphers believed to be among the most secure current encryption methods, and at the time of the Ars Technica review it had a codebase of around 4,000 lines of kernel code, about 1% of either OpenVPN or IPsec, making security audits easier. WireGuard was praised by Linux kernel creator Linus Torvalds, who called it a "work of art" in contrast to OpenVPN and IPsec. Ars Technica reported that in testing, stable tunnels were easily created with WireGuard, compared to alternatives, and commented that it would be "hard to go back" to long reconnection delays, compared to WireGuard's "no nonsense" instant reconnections. Oregon senator Ron Wyden has recommended to the National Institute of Standards and Technology (NIST) that it evaluate WireGuard as a replacement for existing technologies like IPsec and OpenVPN.
Availability
Implementations
Implementations of the WireGuard protocol include:
Donenfeld's initial implementation, written in C and Go.
Cloudflare's BoringTun, a user-space implementation written in Rust.
Matt Dunwoodie's implementation for OpenBSD, written in C.
Ryota Ozaki's wg(4) implementation for NetBSD, written in C.
The FreeBSD implementation, written in C, which shares most of the data path with the OpenBSD implementation.
A native Windows kernel implementation named "wireguard-nt", since August 2021.
OPNsense, via the standard package os-WireGuard.
pfSense, via the standard package pfSense-pkg-WireGuard (a Netgate-endorsed community package).
Linux support
User space programs supporting WireGuard include:
NetworkManager since version 1.16
systemd since version 237
Intel's ConnMan since version 1.38
IPVanish VPN since version 3.7.4.0
Mozilla VPN (with Mullvad)
NOIA Network
NordVPN via NordLynx
Veeam Powered Network v2, since May 2019
PiVPN since 17 October 2019
VPN Unlimited since November 2019
Private Internet Access VPN since 10 April 2020
hide.me CLI VPN client since 20 July 2020
Surfshark since October 2020
Mistborn VPN since March 2020
Oracle Linux with "Unbreakable Enterprise Kernel" Release 6 Update 1, since November 2020
oVPN since February 2020, rolled out in 2021
TorGuard since 2020
VyprVPN since May 2020
Windscribe in 2020
Trust.zone VPN since February 2021
ProtonVPN since October 2021
History
Early snapshots of the code base exist from June 30, 2016. Four early adopters of WireGuard were the VPN service providers Mullvad, AzireVPN, IVPN and cryptostorm. WireGuard has received donations from Mullvad, Private Internet Access, IVPN, the NLnet Foundation and, later, OVPN. Before the stable release, the developers of WireGuard advised treating the code and protocol as experimental, and cautioned that they had not yet achieved a stable release compatible with CVE tracking of any security vulnerabilities that might be discovered. On 9 December 2019, David Miller, primary maintainer of the Linux networking stack, accepted the WireGuard patches into the "net-next" maintainer tree, for inclusion in an upcoming kernel. On 28 January 2020, Linus Torvalds merged David Miller's net-next tree, and WireGuard entered the mainline Linux kernel tree. On 20 March 2020, Debian developers enabled the module build options for WireGuard in their kernel config for the Debian 11 version (testing). On 29 March 2020, WireGuard was incorporated into the Linux 5.6 release tree; at that time, the Windows version of the software remained in beta. On 30 March 2020, Android developers added native kernel support for WireGuard in their Generic Kernel Image.
On 22 April 2020, NetworkManager developer Beniamino Galvani merged GUI support for WireGuard. On 12 May 2020, Matt Dunwoodie proposed patches for native kernel support of WireGuard in OpenBSD. On 22 June 2020, after the work of Matt Dunwoodie and Jason A. Donenfeld, WireGuard support was imported into OpenBSD. On 23 November 2020, Jason A. Donenfeld released an update of the Windows package improving installation, stability, ARM support, and enterprise features. On 29 November 2020, WireGuard support was imported into the FreeBSD 13 kernel. On 19 January 2021, WireGuard support was added as a preview in pfSense Community Edition (CE) 2.5.0 development snapshots. In March 2021, kernel-mode WireGuard support was removed from FreeBSD 13.0, still in testing, after an urgent code cleanup of WireGuard in FreeBSD could not be completed quickly. FreeBSD-based pfSense Community Edition (CE) 2.5.0 and pfSense Plus 21.02 removed kernel-based WireGuard as well. In May 2021, WireGuard support was reintroduced into pfSense CE and pfSense Plus development snapshots as an experimental package written by a member of the pfSense community, Christian McDonald. The WireGuard package for pfSense incorporates the ongoing kernel-mode WireGuard development work by Jason A. Donenfeld that was originally sponsored by Netgate. In June 2021, the official package repositories for both pfSense CE 2.5.2 and pfSense Plus 21.05 included the WireGuard package.
See also
Comparison of virtual private network services
Secure Shell (SSH), a cryptographic network protocol used to secure services over an unsecured network.
Notes
References
External links
Free security software
Linux network-related software
Tunneling protocols
Virtual private networks
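As an illustration of the Curve25519 key exchange listed in the Protocol section above, the following Python sketch derives a WireGuard-style keypair: a 32-byte X25519 private key and its matching public key, each base64-encoded the way the wg genkey and wg pubkey tools print them. This is only a sketch of the key format, it assumes the third-party cryptography package is installed, and it is not the official WireGuard tooling.

    import base64
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    # Generate a fresh 32-byte Curve25519 (X25519) private key.
    private_key = X25519PrivateKey.generate()

    private_raw = private_key.private_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PrivateFormat.Raw,
        encryption_algorithm=serialization.NoEncryption(),
    )
    public_raw = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )

    # WireGuard configuration files carry these values base64-encoded.
    print("PrivateKey =", base64.b64encode(private_raw).decode())
    print("PublicKey  =", base64.b64encode(public_raw).decode())

Note that the official wg genkey tool additionally clamps the private key bytes as Curve25519 requires before printing them; a production deployment would use the wg utilities themselves rather than a sketch like this.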
2795443
https://en.wikipedia.org/wiki/Jtest
Jtest
Jtest is an automated Java software testing and static analysis product made by Parasoft. The product includes technology for data-flow analysis, unit test-case generation and execution, static analysis, regression testing, code coverage, and runtime error detection. Jtest is used by companies such as Cisco Systems, TransCore, AIG United Guaranty and Wipro Technologies. It is also known to be used by Lockheed Martin for the F-35 Joint Strike Fighter (JSF) program.
Awards
Jtest won the 19th Dr. Dobb's Jolt Product Excellence & Productivity Award in the security tool category (as part of Parasoft's Application Security Solution). Previously, Jtest was granted a Codie award from the Software and Information Industry Association (SIIA) for "Best Software Testing Solution" in 2005 and 2007. It also won a "Technology of the Year" award as "Best Application Test Tool" from InfoWorld two years in a row, in 2006 and 2007. Jtest first received the Jolt Award for Excellence in 2000.
See also
Automated testing
List of unit testing frameworks
List of tools for static code analysis
Regression testing
Software testing
System testing
Test case
Test-driven development
xUnit, a family of unit testing frameworks
References
External links
Jtest page
Computer security software
Extreme programming
Java platform
Java development tools
Security testing tools
Software review
Software testing tools
Static program analysis tools
Unit testing
Unit testing frameworks
21073844
https://en.wikipedia.org/wiki/Waljat%20Colleges%20of%20Applied%20Sciences
Waljat Colleges of Applied Sciences
Waljat College of Applied Sciences (WCAS) () is one of Oman's leading higher education institutes. It was established in October 2001 by H.E. Dr. Omar Bin Abdul Muniem Al Zawawi in academic partnership with Birla Institute of Technology (BIT, Mesra), India, one of India's premier universities. It is the second International Centre of BIT, Mesra. It provides education in the specialized branches of Engineering, Information Technology and Business Administration. Location WCAS is located in the premises of Knowledge Oasis at Rusayl, a suburb of Muscat, the capital city of Oman. It enjoys the locational advantage of being close to companies like Microsoft, Oracle Corporation, Motorola etc. Affiliating University WCAS used to offer degree programs of Birla Institute of Technology, Mesra (BIT, Mesra), India. The course curriculum and the teaching, learning process are exactly similar to that of BIT, Mesra. BIT was established by philanthropist industrialist Mr. B.M. Birla in 1955 at Ranchi, the industrial centre of India. BIT is a full member of the Association of Commonwealth Universities. It was conferred Deemed University status in 1986 due to the achievements of the Institute, both in terms of research and excellent standards of academic programmes. The Institute has been accredited by the National Assessment & Accreditation Council (NAAC) & the National Board of Accreditation (NBA) established by the UGC & AICTE respectively. BIT has consistently been ranked among top institutes in India in the field of Engineering by leading Indian publications like India Today, Outlook, Dataquest India, Mint etc. In 2018 due to a decision from the honorable Indian supreme court BIT Mesra withdrew its affiliation from Waljat. Infrastructure and Facilities All the facilities provided by WCAS are as per the norms laid down in the Criteria for Private Colleges recommended by the Ministry of Higher Education, Sultanate of Oman. The buildings of the college are spread over four blocks; Dean's Office, Department of Computer Science and Department of Electronics & Communication Engineering are located in Block I; Department of Management, Department of English and Library are located in Block II; The Admin & HR, Finance Department, Admission & Registration Department, IT Manager, Video Conference Facility, Medical Room are located in Block III, The Department of Biotechnology and Workshop are in Block IV.Besides this the college has Auditorium, Multipurpose hall, a canteen, language Lab., e-learning facility, computer lab and well equipped laboratory facilities. Ample car parking facility is available. Academic programs Waljat College of Applied Sciences offers following degree programs of BIT, Mesra: Master of Business Administration (MBA) Executive Master of Business Administration (EMBA) - Part Time Bachelor of Engineering (BE) Biotechnology Computer Science & Engineering Electronics & Communications Engineering Bachelor of Computer Applications (BCA) Bachelor of Business Administration (BBA) BBA and BCA programs are also offered in part-time mode for working people. Following Diploma, and Advanced Diploma programs are also available as exit routes to the above-mentioned degree programs : Diploma in Electronics and Communication / Computer Science / Computer Application / Business Administration. Advanced Diploma in Electronics and Communication / Computer Science. 
Based on the performance at the entrance evaluation tests for BE/BBA/BCA, students can get admitted to one year foundation program, where they will be taught subjects so that they come up to the entry level of the respective graduate degree. Admissions Admission process normally starts from the month of May every year by way of announcing in leading news papers of Oman. BE : On the basis of the entrance evaluation test conducted by WCAS in Muscat or CBSE-AIEEE examination score OR SAT -II Score. BBA/BCA : On the basis of the entrance evaluation test conducted by WCAS in Muscat. Training & Placement Training & Placement Cell functions in the college with the aim of providing career counseling and placement opportunities to graduating students through campus recruitment in different private and public companies. Career Fair is organised every year to give an opportunity to students to interact with companies. The students of the final year can participate in campus interviews both at the Mesra campus in India of the Birla Institute of Technology, and at their own campus of WCAS, Muscat. Students have been successful in getting employment. Many local companies have evinced interest in hiring WCAS students. See also Birla Institute of Technology, Mesra Birla Institute of Technology International Centre Birla Institute of Technology – Science and Technology Entrepreneurs' Park Notes External links Official Website Birla Institute of Technology, Mesra, India Colleges in Oman Buildings and structures in Muscat, Oman Birla Institute of Technology
2800398
https://en.wikipedia.org/wiki/Windows%20Media%20Components%20for%20QuickTime
Windows Media Components for QuickTime
Windows Media Components for QuickTime, also known as Flip4Mac WMV Player by Telestream, Inc., was one of the few commercial products that allowed playback of Microsoft's proprietary audio and video codecs inside QuickTime for macOS. It allowed playback of:
Windows Media Video 7, 8, 9, SD and HD
Windows Media Audio 7, 8, 9, Professional and Lossless
It also included a web browser plug-in to allow playback of embedded Windows Media files in web pages. With the components installed, any QuickTime-compatible application was able to play WMV content directly. This included the official QuickTime Player by Apple as well as countless third-party players. WMV Player also allowed Windows Media files to be associated with QuickTime Player. On January 12, 2006, Microsoft discontinued support for Windows Media Player for Mac OS X and began distributing a free version of WMV Player as Windows Media Components for QuickTime on its website. As of June 2015, a free version of this application was no longer offered. Flip4Mac was retired as of July 1, 2019: "If you are a current user of Flip4Mac, or your Flip4Mac stopped functioning when you upgraded your operating system, we invite you to take a look at Switch."
Timeline
July 8, 2006 – Flip4Mac did not officially run on Intel-based Macs.
July 15, 2006 – version 2.1 of Flip4Mac supported Windows Media Player 10 content, which was previously inaccessible to Macintosh users. This newer version also supported Intel-based Macs.
July 27, 2006 – version 2.1 became a non-beta release in Universal Binary format for Mac OS X.
September 20, 2016 – Flip4Mac does not work in macOS Sierra (10.12).
July 1, 2019 – Flip4Mac officially discontinued by Telestream (Telestream had stopped development as of 2016).
See also
Flip4Mac
Perian
VLC media player, an alternative open-source player
Xiph QuickTime Components
References
External links
Flip4Mac 3.2.0.16 download
Flip4Mac 3.2 and Flip4Mac 4.2.2 downloads
Official retirement announcement
QuickTime
macOS media players
Codecs
Component-based software engineering
4557246
https://en.wikipedia.org/wiki/History%20of%20computer%20hardware%20in%20Yugoslavia
History of computer hardware in Yugoslavia
The Socialist Federal Republic of Yugoslavia (SFRY) was a socialist country that existed in the second half of the 20th century. Being socialist meant that strict technology import rules and regulations shaped the development of computer history in the country, unlike in the Western world. However, since it was a non-aligned country, it had no ties to the Soviet Bloc either. One of the major ideas contributing to the development of any technology in SFRY was the apparent need to be independent of foreign suppliers for spare parts, fueling domestic computer development. Development Early computers In former Yugoslavia, at the end of 1962 there were 30 installed electronic computers, in 1966, there were 56, and in 1968 there were 95. Having received training in the European computer centres (Paris 1954 and 1955, Darmstadt 1959, Wien 1960, Cambridge 1961 and London 1964), engineers from the BK.Institute-Vinča and the Mihailo Pupin Institute- Belgrade, led by Prof. dr Tihomir Aleksić, started a project of designing the first "domestic" digital computer at the end of the 1950s. This was to become a line of CER (Serbian Cifarski Elektronski Računar, Cyrillic ЦЕР - Цифарски Електронски Рачунар - Digital Electronic Computer), starting with the model CER-10 in 1960, a primarily vacuum tube and electronic relays-based computer. By 1964, CER-20 computer was designed and completed as "electronic bookkeeping machine", as the manufacturer recognized increasing need in accounting market. This special-purpose trend continued with the release of CER-22 in 1967, which was intended for on-line "banking" applications. There were more CER models, such as CER-11, CER-12, and CER-200, but there is currently little information here available on them. In the late 1970s, "Ei-Niš Računarski Centar" from Niš, Serbia, started assembling Mainframe computers H6000 under Honeywell license, mainly for banking businesses. Computer initially had a great success that later led into local limited parts production. In addition, the company produced models such as H6 and H66 and was alive as late as early 2000s under name "Bull HN". Models H6 were installed in enterprises (e.g., telecom) for business applications and ran the GCOS operating system. Also, they were used in education. E.g., one of the built Honeywell H6 was installed in local electronics engineering and trade school "Nikola Tesla" in Niš and was used for training and educational purposes until late 80s and dawn of personal computers. Imports Eventually, the socialist government of SFRY allowed foreign computers to be imported under strict conditions. This led to the increasing dominance of foreign mainframes and a continuous reduction of relative market share for domestic products. Despite this, since the interest in computer technology grew overall, systems built by the Mihailo Pupin Institute (first CER, then TIM lines) and Iskra Delta (e.g. model 800, derivative of PDP-11/34) continued to evolve through the 1970s and even the 1980s. Early 1980s: Home computer era Many companies attempted to produce microcomputers similar to 1980s home computers, such as Ivo Lola Ribar Institute's Lola 8, M.Pupin Institute's TIM-001, EI's Pecom 32 and 64, PEL Varaždin's Galeb (computer) and Orao, Ivel Ultra and Ivel Z3, etc. Jožef Stefan Institute in Ljubljana made first 16-bit microcomputer PMP-11 under the leadership of Marijan Miletić, former technical director of Iskra-Delta in 1984. 
It had 8 MHz DEC T-11 CPU, maximum of 64 kB RAM, 10 MB hard disk, 8" diskette and two RS-232 ports for VT-100 video terminal and COM. Branko Jevtić modified RT-11 operating system so plenty of DEC-11 applications were available. Some 50 machines were made before IBM AT became widely available. Many factors caused them to fail or not even attempt to enter the home computer market: they were prohibitively expensive for individuals (especially when compared to popular foreign ZX Spectrum, Commodore 64, etc.); lack of entertainment and other software meant they were not appealing to majority of contemporary computer enthusiasts; they were not available in stores. The end result was that domestic computers were predominantly used in government institutions that were prohibited from purchasing imported equipment. Those computers that could have been connected to existing mainframes and used as terminals were more successful in business environments, while others were used as educational tools in schools. Given that all medium and large enterprises in the country were government-owned, this was still a significant part of the domestic market which explains both the unnatural, relative success of domestic business computers, as well as why IBM PC/AT and compatibles had a low influx in the local business market. However, while the government tried to proliferate domestic home computers by introducing the cost and memory size limitations for imports, many people imported them nevertheless either illegally or by dividing a single computer into pieces that separately fit within prescribed restrictions. Lack of proper legislation and such grey market activity only helped the demise of domestic home computer production. By the middle of the decade home computer market was, much like in the rest of the Europe, dominated by Commodore 64 and ZX Spectrum as a runner up. One domestic microcomputer model managed to stand out - Galaksija. Created by Voja Antonić, the entire do-it-yourself diagrams and instructions were published in the special issue of popular science magazine "Galaksija" called Računari u vašoj kući (Computers in your home) in January 1984. Although initially unavailable for purchase in assembled form, more than 1,000 enthusiasts built the microcomputer for games. Many were later produced for use in some schools. Home computers were widely popular in SFRY - so much so that software (otherwise recorded on Compact Cassette) was broadcast by radio stations (e.g. Ventilator 202, Radio Študent Ljubljana etc.). Due to lack of regulation, copyright infringement of software was common and unlicensed copies for sale were freely advertised in popular computer magazines of the time, such as Računari, Svet kompjutera, Moj Mikro and Revija za mikroračunala. This distribution led to essentially every home computer owner having access to hundreds, if not thousands of commercial software titles. This would later cause benefits and drawbacks for the economy. Several student developers became computer experts since cheap and unauthorized development tools were common. However, they found themselves still competing with these warez domestically after trying to find a market for their skills. Late 1980s: PC era The second half of the 1980s saw the rise of popularity of IBM AT compatible among business users, and a slow movement towards 16-bits like Amiga and Atari ST computers in the enthusiast market, while mainstream home computing was still largely dominated by the ubiquitous C-64. 
Domestic computer hardware manufacturers produced a number of different IBM AT compatibles, such as the TIM microcomputers and Lira, as well as the first domestic Unix workstation (in one of its configurations, Iskra Delta's Triglav was shipped with Microsoft's Xenix), but their success was again limited to government-controlled companies that were required to purchase only domestic or legally imported technology.
Timeline
1959
Branko Souček leads a team from 1955 to 1959 to create the "256 channel analyzer" digital computer at the Ruđer Bošković Institute.
1960
Mihajlo Pupin Institute releases the first digital computer in SFRY, the CER-10.
1964
Mihajlo Pupin Institute releases the CER-20, an "electronic bookkeeping machine" model.
1966
Mihajlo Pupin Institute releases the CER-200 series of minicomputers.
1967
Mihajlo Pupin Institute releases the CER-22, a "digital computer for on-line banking applications".
1971
Mihajlo Pupin Institute releases the HRS-100 hybrid computer systems for the AN USSR (Academy of Sciences of the USSR), Moscow.
Mihajlo Pupin Institute releases the CER-12 computer system for business data processing in ERCs.
Mihajlo Pupin Institute releases the CER-203.
1979
Iskradata releases the Iskradata 1680.
1980
Ivo Lola Ribar Institute releases the PA512 industrial programmable logic controller.
1983
Mihajlo Pupin Institute releases a "computer system for real-time generation of images" and the model TIM-001.
Iskra Delta releases the Iskra Delta Partner, a Z80A-based computer.
Complete build-it-yourself instructions for the Galaksija (en. Galaxy) computer are published in Računari u vašoj kući magazine.
1984
Iskra Delta releases the Iskra Delta 800 computer, derived from the Digital PDP-11/34.
Jožef Stefan Institute releases the PMP-11, a 16-bit microcomputer compatible with the DEC RT-11 OS.
PEL Varaždin releases the Galeb (en. seagull) computer, later to be replaced by the Orao.
1985
Mihajlo Pupin Institute releases the TIM-100 series of "microprocessor post-office computers".
Mihajlo Pupin Institute releases the TIM-001 application development microcomputer.
PEL Varaždin releases the Orao (en. eagle) computer for use in schools.
Galaksija Plus (an enhanced version of the Galaksija) is released.
Elektronska Industrija Niš releases the Pecom 32 and Pecom 64, also for use in some schools.
Ivo Lola Ribar Institute announces the official release of the Lola 8 at an exhibition in 1985.
1986
Ivo Lola Ribar Institute releases the LPA512 industrial programmable logic controller.
1988
Mihajlo Pupin Institute releases the TIM-600 32-bit microcomputer systems.
Mihajlo Pupin Institute releases the HD64180-based TIM-011 microcomputer, integrated with a green monochrome monitor, for use in many Serbian secondary schools.
See also
List of computer systems from SFRY
History of computer hardware in Soviet Bloc countries
Notes and references
SFRY Socialist Federal Republic of Yugoslavia Computer companies of Yugoslavia
9462866
https://en.wikipedia.org/wiki/IP%20camera
IP camera
An Internet Protocol camera, or IP camera, is a type of digital video camera that receives control data and sends image data via an IP network. They are commonly used for surveillance but unlike analog closed-circuit television (CCTV) cameras, they require no local recording device, only a local area network. Most IP cameras are webcams, but the term IP camera or netcam usually applies only to those that can be directly accessed over a network connection, usually used for surveillance. Some IP cameras require support of a central network video recorder (NVR) to handle the recording, video and alarm management. Others are able to operate in a decentralized manner with no NVR needed, as the camera is able to record directly to any local or remote storage media. The first IP Camera was invented by Axis Communications in 1996: the AXIS Neteye 200. History The first centralized IP camera, the AXIS Neteye 200, was released in 1996 by Axis Communications and was developed by the team of Martin Gren and Carl-Axel Alm. Though promoted based on its direct accessibility from anywhere with an internet connection, the camera couldn't stream real-time motion video. It was limited to a snapshot image each time the camera was accessed due to the lack of powerful integrated circuits at the time capable of handling image processing and networking. At the time of launch, it was considered incapable of operating as a motion camera due to what was at the time, "enormous" bandwidth requirements. Thus it was aimed primarily at the tourism industry. The Axis Neteye 200 was not intended to replace traditional analogue CCTV systems, given that its capability was limited to just one frame per second in Common Intermediate Format (CIF), or one every 17 seconds in 4CIF resolution, with a maximum resolution quality of 0.1MP (352x288). Axis used a custom proprietary web server named OSYS, yet by the summer of 1998, it had started porting the camera software to Linux. Axis also released documentation for its low-level application programming interface (API) called VAPIX, which builds on the open standards of HTTP and real time streaming protocol (RTSP). This open architecture was intended to encourage third-party software manufacturers to develop compatible management and recording software. The first decentralized IP camera was released in 1999 by Mobotix. The camera's Linux system contained video, alarm, and recording management functions. In 2005, the first IP camera with onboard video content analytics (VCA) was released by Intellio. This camera was able to detect a number of different events, such as if an object was stolen, a human crossed a line, a human entered a predefined zone, or if a car moved in the wrong direction. Standards Previous generations of analog CCTV cameras use established broadcast television formats (e.g. CIF, NTSC, PAL, and SECAM). Since 2000, there has been a shift in the consumer TV business towards high-definition (HD) resolutions (e.g. 1080P (Full-HD), 4K resolution (Ultra-HD) and 16:9 widescreen format). IP cameras may differ from one another in resolution, features, video encoding schemes, available network protocols, and the API for video management software. IP cameras are available at resolutions from 0.3 (VGA resolution) to 29 megapixels. To address IP video surveillance standardization issues, two industry groups formed in 2008: the Open Network Video Interface Forum (ONVIF) and the Physical Security Interoperability Alliance (PSIA). 
PSIA was founded by 20 member companies including Honeywell, GE Security, and Cisco. ONVIF was founded by Axis Communications, Bosch and Sony. Each group now has numerous additional members; as a result, cameras and recording hardware that operate under the same standard can work with each other.
Wi-Fi home camera
Many consumer-level IP cameras used for home security send a live video stream to a companion app on the user's phone. IP cameras in the home generally connect to the internet through Wi-Fi, broadband, or an Ethernet cable. IP cameras used to be more common in small businesses than in homes, but that is no longer the case. A 2016 survey of 2,000 Americans revealed that 20% of them owned home security cameras. This crossover of IP cameras to home use is partly due to the devices being self-installable: IP cameras do not require professional installation, saving time for home and business owners. On the other hand, large businesses and commercial spaces, like malls, require high-resolution video (e.g., 4K), many cameras, and professional applications to accommodate the installation and management of the cameras. One of the most popular features of Wi-Fi home security cameras is the ability to view footage via a mobile app or other application software. Many cameras offer features such as a wide-angle lens (around 140 degrees, or pan/tilt up to 350 degrees horizontally and 90 degrees vertically), low-light or night vision, and motion detection. When an event occurs, such as detected motion, users can receive alarms and notifications via an app. Video clips can be stored on a local device such as a microSD card or through a cloud service. The market size of home security systems reached $4.8 billion in 2018, with a compound annual growth rate of 22.4% between 2011 and 2018. People in countries that suffer from high crime rates, particularly robbery and theft, are keen to adopt home security cameras. Two countries, the US and China, have a particularly high implementation rate of residential security cameras. Key players in the home security market are Nest (owned by Google, U.S.), Ring (owned by Amazon, U.S.), Arlo (owned by Netgear, U.S.), and SimpliSafe (U.S.). Hikvision Digital Technology and Leshi Video Tech (China) are the largest IP camera manufacturers. As for the alarm security industry, key players are ADT Security Services (U.S.), Vivint (U.S.), and Frontpoint Security Solutions (U.S.).
IP camera types
Indoor cameras are widely used both residentially and commercially. Depending on their functionality, they are classified as fixed cameras or pan–tilt–zoom (PTZ) cameras. Fixed cameras are generally used to monitor a set area, whereas a PTZ camera can be used either to track motion or to manually adjust the monitoring area.
Outdoor wired cameras, also known as AC-powered cameras, are placed in outdoor environments. They are designed to survive weather conditions such as heat, cold, and rain, and are generally capable of capturing video in low-light conditions. They are often rated to IP65 or IP67 standards to withstand the outdoor environment.
Wire-free cameras for homes, in contrast to wired (AC-powered) models, are IP cameras that have their own independent power source, such as a solar panel or battery.
Cloud subscription plans typically come with several days of looping storage, and the videos will be overwritten beyond this duration. Some cameras include a micro SD card slot so users may store videos locally. There is no looping as long as the memory card has sufficient space to store the images. However, locally stored video footage can not be accessed remotely. Considerations Potential benefits Previous generation cameras transmitted analog video signals. IP cameras send images digitally using the transmission and security features of the TCP/IP protocol. Advantages to this approach include: Two-way audio via a single network cable allows users to listen to and speak to the subject of the video (e.g., a clerk assisting a customer through step-by-step instructions) Use of a Wi-Fi or wireless network Distributed artificial intelligence (DAI)—as the camera can contain video analytics that analyze images Secure data transmission through encryption and authentication methods such as WPA or WPA2, TKIP or AES Remote accessibility that lets users view live video from any device with sufficient access privileges Power over Ethernet (PoE) to supply power through the Ethernet cable and operate without a dedicated power supply Better image resolution, typically four times the resolution of an analog camera Artificial intelligence and Internet privacy The American Civil Liberties Union (ACLU) has expressed privacy concerns if AI is widely practiced. AI is capable of tracking movements and studying behaviors; moreover, AI can also recognize emotions, and further predict patterns of movement. Facial recognition system Facial recognition identifies a human face by analyzing facial features from a picture or video, an example of biometrics. If a camera allows users to set up a database that includes family members and close friends, the system may distinguish whether someone exists in the database. If the camera is capable of providing accurate facial recognition, it can tell if the person it detects is authorized (in the database). The detection of unauthorized persons may prompt the owner to call law enforcement. The footage can be used as a means of identifying and apprehending offenders. Potential concerns Concerns include: Privacy concerns Average higher purchase cost per camera Security can be compromised by insecure credentials, given that the camera can be accessed independently of a video recorder. Public internet connection video can be complicated to set up or using the peer-to-peer (P2P) network. Data storage capacity concerns Hacking If the video is transmitted over the public internet rather than a private network or intranet, the system potentially becomes open to a wider audience including hackers. Criminals can hack into a CCTV system to disable or manipulate them or observe security measures and personnel, thereby facilitating criminal acts and rendering the surveillance counterproductive. This can be counteracted by ensuring the network and device and other devices connected to the main router are secured. In 2012, users of 4chan hacked into thousands of streaming personal IP cameras by exploiting a vulnerability in some models of TRENDnet home security cameras. In 2014, it was reported that a site indexed 73,011 locations worldwide with security cameras that used default usernames and passwords, and were therefore, unprotected. 
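As a concrete illustration of how an IP camera's stream can be consumed over the network (via the RTSP protocol mentioned in the History section) and analysed in software, the following Python sketch uses the OpenCV library to pull frames from a camera and run simple face detection on them. The stream URL, credentials and path are placeholders and differ between vendors, and Haar-cascade face detection is only a basic building block, not the full facial recognition systems discussed above.

    import cv2  # OpenCV, available via the opencv-python package

    # Hypothetical RTSP URL; the exact path, port and credentials vary by camera vendor.
    STREAM_URL = "rtsp://user:password@192.168.1.64:554/stream1"

    # Haar cascade model for frontal face detection, bundled with OpenCV.
    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    capture = cv2.VideoCapture(STREAM_URL)
    try:
        for _ in range(100):  # examine a short run of frames for this sketch
            ok, frame = capture.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(faces) > 0:
                print("Detected %d face(s) in frame" % len(faces))
    finally:
        capture.release()

In a real deployment this kind of analysis usually runs either on the camera itself (the onboard video content analytics mentioned in the History section) or on a network video recorder, rather than in an ad hoc script.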
See also Closed-circuit television camera Dashcam Remote camera References External links Surveillance Internet Protocol Video Video surveillance 20th-century inventions Cloud storage Facial recognition software Privacy Hacking (computer security) Artificial intelligence Wireless Battery (electricity) Internet protocols
35157246
https://en.wikipedia.org/wiki/Bill%20Tomlinson
Bill Tomlinson
William M. "Bill" Tomlinson is a professor of informatics at the University of California, Irvine, and a researcher in the California Institute for Telecommunications and Information Technology. He studies the fields of environmental informatics, human-computer interaction, multi-agent systems and computer-supported learning. His book Greening through IT (MIT Press, 2010) examines the ways in which information technology can help people think and act on the broad scales of time, space, and complexity necessary for us to address the world's current environmental issues. In addition, he has authored dozens of papers across a range of journals and conferences in computing, the learning sciences, and the law. His work has been reviewed by The Wall Street Journal, The Washington Post, the Los Angeles Times, Wired.com, Scientific American Frontiers, CNN, and the BBC. In 2007, he received an NSF CAREER award, and in 2008 he was selected as a Sloan Research Fellow. He holds an AB in biology from Harvard College, an MFA in experimental animation from CalArts, and SM and PhD degrees from the MIT Media Lab. His animated film, Shaft of Light, screened at the 1997 Sundance Film Festival and dozens of other film festivals around the world. His 2009 paper with Andrew Torrance on patent systems has been cited in amicus briefs and in a writ filed with the United States Supreme Court. Currently his research is focused on the expanding field on disaster informatics, which deals with using information technology on limited resources in times of disaster or chaos to locate scarce resources. Books authored Greening through IT (MIT Press, 2010) References Year of birth missing (living people) Living people American computer scientists Harvard College alumni Massachusetts Institute of Technology alumni American educators California Institute of the Arts alumni
18968032
https://en.wikipedia.org/wiki/IntervalZero
IntervalZero
IntervalZero, Inc. develops hard real-time software. Its symmetric multiprocessing (SMP)-enabled RTX and RTX64 products transform the Microsoft Windows general-purpose operating system (GPOS) into a real-time operating system (RTOS). IntervalZero and its engineering group regularly release new software (see History below). Its most recent product, RTX64, focuses on 64-bit support and symmetric multiprocessing (SMP) to replace dedicated hardware-based systems such as digital signal processors (DSPs) or field-programmable gate arrays (FPGAs) with multicore PCs. For instance, an audio mixing surface manufacturer that had largely deployed DSP-based systems switched to personal computer (PC)-based systems, dedicating multi-core processors to real-time audio processing. Founded in July 2008 by a group of former Ardence executives, IntervalZero is headed by CEO Jeffrey D. Hibbard. The firm has offices in Waltham, MA; Nice, France; Munich, Germany; and Taiwan, ROC. This global presence is important because these solutions are deployed worldwide, primarily in industrial automation, military, aerospace, medical devices, digital media, and test and simulation software. The corporate name, IntervalZero, refers to the ideal of a zero-length interval between a system command and its execution.
History
IntervalZero's lineage traces back to 1980, when a group of Massachusetts Institute of Technology engineers started VenturCom and began to develop expertise in embedded technology. It was during this time that Venix was developed and marketed. Their first innovation was to focus on Windows NT 4.0 as a possible real-time solution for industry, releasing RTX in 1995. Since then, many controllers have been PC- and Windows-based. Their second innovation was a second product, Component Integrator, which made Windows NT 4.0 usable as an embedded OS. It was licensed by Microsoft a few years later and became the origin of Windows NT Embedded. In 2004, VenturCom was renamed Ardence. In December 2006, Citrix Systems announced an agreement to acquire Ardence's enterprise and embedded software businesses. It integrated the software streaming products into the Citrix portfolio in 2007 and early 2008. In 2008, a group of former Ardence executives founded IntervalZero and acquired the Ardence embedded software business from Citrix Systems Inc. Citrix retained a minority ownership stake in the firm.
Products
IntervalZero develops RTX and RTX64, hard real-time software that transforms Microsoft Windows into a real-time operating system (RTOS).
Executive Officers
Jeffrey D. Hibbard, Chief Executive Officer
Mark Van Vranken, Chief Financial Officer
Brian Calder, Vice President, North America Sales & Marketing
Daron Underwood, Vice President, CTO
Brian Carter, Vice President, Strategic Communications
Bryan Levey, Vice President, Engineering
References
Software companies based in Massachusetts Real-time operating systems Software companies established in 2008 2008 establishments in Massachusetts Companies formed by management buyout
1688142
https://en.wikipedia.org/wiki/Psion%20Series%207
Psion Series 7
The Psion Series 7 is a subnotebook computer from Psion that was released in 2000. In size it is unusual: larger than a palmtop or handheld machine, but smaller than a laptop computer. It was the first and last of the Psion series to have a full-color electronic visual display. It has a leather-bound clamshell design, with a touch-sensitive, Video Graphics Array (VGA) resolution liquid-crystal display (LCD) and QWERTY computer keyboard. Internally it has a 132.71 MHz StrongARM SA-1100 processor, 16 (upgradable to 32) megabytes (MB) of random-access memory (RAM) and 16 MB of internal read-only memory (ROM). The machine runs the EPOC operating system (OS), a predecessor of Symbian OS, and as such can be programmed in the Open Programming Language (OPL), using the provided development program, or in C++ or Java, using a separate personal computer (PC) hosted development system. It can be synchronized with a PC by means of an RS-232 serial connection, a method since superseded by later standards. The unit has an expansion port for a CompactFlash (CF) II device such as the Hitachi Microdrive. It also has a PC Card expansion port supporting flash storage, compact flash adapters, modems, wireless and GPS adapters. The Series 7 features IrDA (infrared) connectivity for data transfer between Psion computers and printers, and for using mobile phones as modems. The Series 7 is a variant of the Psion netBook, a machine aimed at the corporate market. Due to customer demand, the reduced-capacity Series 7 was released, distinguished by replacing 16 MB of the 32 MB of RAM with a 16 MB ROM chip. Accessing the OS in ROM required slowing the processor down, leading to the false perception that the netBook and Series 7 used a different processor or printed circuit board (PCB). It is thus possible to convert a Series 7 to netBook configuration by replacing this memory card. However, at least two different (interchangeable) PCBs were used during the product's lifecycle, the later PCB distinguished by higher power output to the PC Card.
Included software
Agenda – a personal information management program
Bombs – a minesweeper game
Calc – a calculator
Comms – a terminal emulator
Contacts – a contacts manager
Data – a flat-file database program
Email – an email, SMS, and fax client
Jotter – a multipage scratchpad
Program – an Open Programming Language (OPL) program editor
Record – a voice recording program, for use with the in-built microphone
Sheet – a spreadsheet and graphing package
Sketch – a drawing program (for use with the touch-screen interface)
Spell – a spellchecker, thesaurus and anagram program
Time – a world clock and alarm program
Web – a web browser
Word – a word processor
Linux on the Series 7
An open source project, OpenPsion (formerly PsiLinux), aims to port Linux to the Psion Series 7, netBook, and other Psion PDAs. Linux on the Series 7 rather struggles, given the Series 7's limited resources, but most PC Card (16-bit) adapters seem to be supported.
See also
Psion (company)
Psion Organiser
Psion Series 3
Psion Series 5
References
Psion devices Personal digital assistants Personal information managers Computer-related introductions in 2000
473234
https://en.wikipedia.org/wiki/Brian%20Behlendorf
Brian Behlendorf
Brian Behlendorf (born March 30, 1973) is an American technologist, executive, computer programmer and leading figure in the open-source software movement. He was a primary developer of the Apache Web server, the most popular web server software on the Internet, and a founding member of the Apache Group, which later became the Apache Software Foundation. Behlendorf served as president of the foundation for three years. He has served on the board of the Mozilla Foundation since 2003, Benetech since 2009 and the Electronic Frontier Foundation since 2013. Currently, Behlendorf serves as the General Manager of the Open Source Security Foundation. Career Behlendorf, raised in Southern California, became interested in the development of the Internet while he was a student at the University of California, Berkeley, in the early 1990s. One of his first projects was an electronic mailing list and online music resource, SFRaves, which a friend persuaded him to start in 1992. This would soon develop into the Hyperreal.org website, an online resource devoted to electronic music and related subcultures. In 1993, Behlendorf, Jonathan Nelson, Matthew Nelson and Cliff Skolnick co-founded Organic, Inc., the first business dedicated to building commercial web sites. While developing the first online, for-profit, media project—the HotWired web site for Wired magazine—in 1994, they realized that the most commonly used web server software at the time (developed at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign) could not handle the user registration system that the company required. So, Behlendorf patched the open-source code to support HotWired's requirements. It turned out that Behlendorf wasn't the only one busy patching the NCSA code at the time, so he and Skolnick put together an electronic mailing list to coordinate the work of the other programmers. By the end of February 1995, eight core contributors to the project started Apache as a fork of the NCSA codebase. Working loosely together, they eventually rewrote the entire original program as the Apache HTTP Server. In 1999, the project incorporated as the Apache Software Foundation. Behlendorf served as president of the Foundation for three years. Behlendorf was the CTO of the World Economic Forum. He is also a former director and CTO of CollabNet, a company he co-founded with O'Reilly & Associates (now O'Reilly Media) in 1999 to develop tools for enabling collaborative distributed software development. CollabNet used to be the primary corporate sponsor of the open source version control system Subversion, before it became a project of the Apache Software Foundation. He continues to be involved with electronic music community events such as Chillits, and speaks often at open-source conferences worldwide. In 2003, he was named to the MIT Technology Review TR100 as one of the top 100 innovators in the world under the age of 35. Behlendorf has served on the board of the Mozilla Foundation since 2003, Benetech since 2009 and the Electronic Frontier Foundation since 2013. He was a managing director at Mithril Capital, a global technology investment firm based in San Francisco, from 2014 until he joined the Linux Foundation. In 2016, he was appointed executive director of the open source Hyperledger project at the Linux Foundation to advance blockchain technology. Behlendorf became the General Manager of the Open Source Security Foundation in October 2021. 
The appointment was announced publicly at KubeCon, along with $10 million in investments to secure open-source supply chains.
References
External links
Organic Inc. website
1973 births Living people Burning Man Free software programmers Mozilla people Members of the Open Source Initiative board of directors Place of birth missing (living people) American chief technology officers UC Berkeley College of Engineering alumni Open source advocates People associated with cryptocurrency Electronic Frontier Foundation people
5094580
https://en.wikipedia.org/wiki/Kathleen%20Antonelli
Kathleen Antonelli
Kathleen "Kay" McNulty Mauchly Antonelli (12 February 1921 – 20 April 2006) was an Irish computer programmer and one of the six original programmers of the ENIAC, one of the first general-purpose electronic digital computers. The other five ENIAC programmers were Betty Holberton, Ruth Teitelbaum, Frances Spence, Marlyn Meltzer, and Jean Bartik. Early life and education She was born Kathleen Rita McNulty in Feymore, part of the small village of Creeslough in what was then a Gaeltacht area (Irish-speaking region) of County Donegal in Ulster, the northern province in Ireland, on February 12, 1921, during the Irish War of Independence. She was the third of six children of James McNulty and Anne Nelis. On the night of her birth, her father, James McNulty, an Irish Republican Army training officer, was arrested and imprisoned in Derry Gaol for two years as he was a suspected member of the IRA. On his release, the family emigrated to the United States in October 1924 and settled in the Chestnut Hill section of Philadelphia, Pennsylvania where James found work as a stonemason. At the time, Antonelli was unable to speak any English, only Irish; she would remember prayers in Irish for the rest of her life. She attended parochial grade school in Chestnut Hill and J. W. Hallahan Catholic Girls High School in Philadelphia. In high school, she had taken a year of algebra, a year of plane geometry, a second year of algebra, and a year of trigonometry and solid geometry. After graduating high school, she enrolled in Chestnut Hill College for Women. During her studies, she took every mathematics course offered, including spherical trigonometry, differential calculus, projective geometry, partial differential equations, and statistics. She graduated with a degree in mathematics in June 1942, one of only a few mathematics majors out of a class of 92 women. During her third year of college, Antonelli was looking for relevant jobs, knowing that she wanted to work in mathematics but did not want to be a school teacher. She learned that insurance companies' actuarial positions required a master's degree; therefore, feeling that business training would make her more employable, she took as many business courses as her college schedule would permit: accounting, money and banking, business law, economics, and statistics. Career Computer programmer A week or two after graduating, she saw a US Civil Service ad in The Philadelphia Inquirer looking for women with degrees in mathematics. During World War II, the US Army was hiring women to calculate bullet and missile trajectories at Ballistic Research Laboratory, which had been established at the Aberdeen Proving Ground in Aberdeen, Maryland, with staff from both the Aberdeen Proving Ground and the Moore School of Engineering at the University of Pennsylvania . She immediately called her two fellow math majors, Frances Bilas and Josephine Benson about the ad. Benson couldn't meet up with them, so Antonelli and Bilas met in Philadelphia one morning in June 1942 for an interview in a building on South Broad Street (likely the Union League of Philadelphia Building). One week later, they were both hired as human "computers" at a pay grade of SP-4, a subprofessional civil service grade. The starting pay was $1620 annually. Antonelli stated the pay was "very good at the time". They were notified to report to work at the Moore School of Engineering. 
Their job was to compute ballistics trajectories used for artillery firing tables, mostly using mechanical desk calculators and extremely large sheets of columned paper. The pay was low, but both Antonelli and Bilas were satisfied to have attained employment that used their educations and that served the war effort. Her official civil service title, as printed on her employment documentation, was "computer." She and Bilas began work with about 10 other "girls" (as the female computers were called) and 4 men—a group recently brought to the Moore School from Aberdeen Proving Grounds. Antonelli and Bilas conducted their work in a large, former classroom in the Moore School; the same room would later be the one where the ENIAC was built and operated until December 1946. Despite all their coursework, their mathematics training had not prepared Antonelli and Bilas for their work calculating trajectories for firing tables: they were both unfamiliar with numerical integration methods used to compute the trajectories, and the textbook lent to them to study from (Numerical Mathematical Analysis, 1st Edition by James B. Scarborough, Oxford University Press, 1930) provided little enlightenment. The two newcomers ultimately learned how to perform the steps of their calculations, accurate to ten decimal places, through practice and the advisement of a respected supervisor, Lila Todd. A total of about 75 female computers were employed at the Moore School in this period, many of them taking courses from Adele Goldstine, Mary Mauchly, and Mildred Kramer. Each gun required its own firing table, which had about 1,800 trajectories. Computing just one trajectory required approximately 30–40 hours of handwork with a calculator. After two or three months, Antonelli and Bilas were moved to work on the differential analyser in the basement of the Moore School, the largest and most sophisticated analogue mechanical calculator of the time, of which there were only three in the United States and five or six in the world (all of the others were in Great Britain). The analyser had been lent to the University of Pennsylvania for the duration of the war. Using the analyser (invented by Vannevar Bush of MIT a decade prior and made more precise with improvements by the Moore School staff), a single trajectory computation—about 40 hours of work on a mechanical desk calculator—could be performed in about 50 minutes. Antonelli was further promoted to supervising calculations on the analyser. The analyser room staff worked six days a week, with their only official holidays as Christmas and the Fourth of July. Working with ENIAC The Electronic Numerical Integrator And Computer was developed for the purpose of performing these same ballistics calculations between 1943 and 1946. In June 1945, Antonelli was selected to be one of its first programmers, along with several other women from the computer corps: Betty Snyder, Marlyn Wescoff, and Ruth Lichterman, and a fifth computer named Helen Greenman. When Greenman declined to go to Aberdeen for training and a 1st alternate refused as well, Betty Jean Jennings, the 2nd alternate, got the job, and between June and August 1945 they received training at Aberdeen Proving Grounds in the IBM punched card equipment that was to be used as the I/O for the ENIAC. Later, Antonelli's college schoolmate and fellow computer, Bilas, would join the team of ENIAC programmers at the Moore School, though she did not attend the initial training at Aberdeen. 
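The step-by-step numerical integration that the human computers carried out by hand, and that the differential analyser and later the ENIAC mechanised, can be illustrated with a deliberately simplified sketch. The Python example below integrates a point-mass trajectory with a crude quadratic drag term; the constants and the drag model are hypothetical teaching values, not the far more detailed equations actually used for the Aberdeen firing tables.

```python
# A simplified, illustrative version of the kind of trajectory calculation
# described above. The drag model and constants here are hypothetical
# teaching values, not the actual Aberdeen Proving Ground firing-table
# equations, which were far more detailed.
import math

def trajectory_range(v0, angle_deg, k=0.0001, g=9.81, dt=0.01):
    """Integrate a point-mass trajectory with simple quadratic air drag."""
    angle = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        # Acceleration = gravity plus drag opposing the velocity vector.
        ax = -k * speed * vx
        ay = -g - k * speed * vy
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return x  # approximate range in metres

# A human computer stepped through equations like these by hand, thousands
# of steps per trajectory; each firing table needed about 1,800 trajectories.
print(round(trajectory_range(v0=600.0, angle_deg=35.0)))
```

Each firing-table entry required thousands of such small steps, which is why a single trajectory took a human computer 30 to 40 hours and why automating the arithmetic mattered so much.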
The computer could complete the same ballistics calculations described above in about 10 seconds, but it would often take one or two days to set the computer up for a new set of problems, via plugs and switches. It was the computer programmer's responsibility to determine the sequence of steps required to complete the calculations for each problem and set up the ENIAC accordingly; early on, they consulted with ENIAC engineers such as Arthur Burks to determine how the ENIAC could be programmed. In 1996, Antonelli said that John Mauchly pronounced the name of the computer "EN-ee-ack", unlike the common pronunciation at the time of "EEN-ee-ack". The ENIAC was programmed using subroutines, nested loops, and indirect addressing for both data locations and jump destinations. During her work programming the ENIAC, Antonelli is credited with the invention of the subroutine. Her colleague, Jean Jennings, recalled when Antonelli proposed the idea to solve the problem where the logical circuits did not have enough capacity to compute some trajectories. The team collaborated on the implementation. Because the ENIAC was a classified project, the programmers were not at first allowed into the room to see the machine, but they were given access to blueprints from which to work out programs in an adjacent room. Programming the ENIAC involved discretising the differential equations involved in a trajectory problem to the precision allowed by the ENIAC and calculating the route to the appropriate bank of electronics in parallel progression, with each instruction having to reach the correct location in time to within 1/5,000th of a second. Having devised a program on paper, the programmers were allowed into the ENIAC room to physically program the machine. Antonelli would later find out that her team had been testing the feasibility of the H-bomb. Much of the programming time of the ENIAC consisted of setting up and running test programs that assured its operators of the whole system's integrity: every vacuum tube, every electrical connection needed to be verified before running a problem. As the team was preparing for the ENIAC's public unveiling, Antonelli and the other women who had worked on the ENIAC were told to act as hostesses and greet those around them. They were to stand near the machine and "look good." Thus, at the time, they did not receive the recognition they deserved. Antonelli was transferred to Aberdeen Proving Ground's Ballistics Research Laboratory along with the ENIAC when it was moved there in mid-1947. She was joined by Ruth Lichterman and Bilas, but the other three programmers preferred to stay in Philadelphia rather than relocate to the remote Aberdeen.
Personal life
ENIAC co-inventor John Mauchly, who had since departed his post as a professor at the Moore School to found his own computer company along with Presper Eckert, made frequent trips to Washington, D.C. during this period, and stopped in to check up on the ENIAC in Aberdeen. Mauchly had already hired Jean Bartik (née Betty Jean Jennings) and Betty Holberton (née Snyder), and had hoped to attract McNulty as well. Instead, Mauchly married McNulty in 1948 and she resigned her post at Aberdeen. The couple, along with his two children from his first marriage, lived initially in his row house on St. Mark's Street near the University of Pennsylvania. They later moved to a large farmhouse called Little Linden in Ambler, Pennsylvania. With Mauchly, McNulty had five children. Mauchly's first wife had died in a September 1946 drowning accident.
Later life
Kay McNulty worked on the software design for later computers, including the BINAC and UNIVAC I, whose hardware was designed by her husband. John Mauchly died in 1980 following several bouts of illness and recoveries. She then married photographer Severo Antonelli in 1985. After a long struggle with Parkinson's disease, her second husband died in 1996; Kay herself had suffered a heart attack while caring for her husband, but made a full recovery. Following Mauchly's death, Kay carried on the legacy of the ENIAC pioneers by authoring articles, giving talks (frequently along with Jean Bartik, with whom she remained lifelong friends), and making herself available for interviews with reporters and researchers. She was inducted into the Women in Technology International Hall of Fame in 1997 along with the other original ENIAC programmers, and she accepted the induction of John Mauchly into the National Inventors Hall of Fame in Akron, Ohio, in 2002. Kay McNulty died from cancer in Wyndmoor, Pennsylvania, on April 20, 2006, at the age of 85.
Legacy
During the heyday of ENIAC, proper recognition escaped Antonelli and her fellow programmers. The invisibility of the ENIAC programmers (both from being women and the secrecy of their work, especially during the war) kept them from the public eye. In 2010, a documentary called "Top Secret Rosies: The Female "Computers" of WWII" was released. The film centered on in-depth interviews with three of the six women programmers, focusing on the commendable patriotic contributions they made during World War II. The ENIAC team is also the inspiration behind the award-winning 2013 documentary The Computers. This documentary, created by Kathy Kleiman and the ENIAC Programmers Project, combines actual footage of the ENIAC team from the 1940s with interviews with the female team members as they reflect on their time working together on the ENIAC. It is the first documentary of a series of three, and parts two and three will be entitled The Coders and The Future-Maker, respectively. In July 2017, Dublin City University (DCU) honoured Antonelli by naming its computing building after Kathleen (Kay) McNulty. In 2019, the Irish Centre for High-End Computing (ICHEC) at the National University of Ireland, Galway, named its new Waterford-based primary supercomputer, which is to serve as Ireland's national supercomputer for academic researchers, Kay, following a public poll, wherein Antonelli beat out candidates including botanist Ellen Hutchins, scientist and inventor Nicholas Callan, geologist Richard Kirwan, chemist Eva Philbin, and hydrographer Sir Francis Beaufort.
See also
Timeline of women in science
References
External links
WITI Hall of Fame
Biography from The MacTutor History of Mathematics archive, University of St Andrews, Scotland
Oral history interview with Frances E. Holberton – Holberton was, with Antonelli, one of the six original ENIAC programmers. Charles Babbage Institute, University of Minnesota.
Death of Donegal's Computing Pioneer
1921 births 2006 deaths 20th-century Irish people People from County Donegal Chestnut Hill College alumni Irish emigrants to the United States Deaths from cancer in Pennsylvania People from Ambler, Pennsylvania Human computers Irish computer programmers American computer programmers 20th-century American women scientists American women computer scientists American computer scientists 21st-century American women
353077
https://en.wikipedia.org/wiki/EROS%20%28microkernel%29
EROS (microkernel)
Extremely Reliable Operating System (EROS) is an operating system developed starting in 1991 at the University of Pennsylvania, and then Johns Hopkins University, and The EROS Group, LLC. Features include automatic data and process persistence, some preliminary real-time support, and capability-based security. EROS is purely a research operating system, and was never deployed in real-world use. Development stopped in favor of two successor systems, CapROS and Coyotos.
Key concepts
The overriding goal of the EROS system (and its relatives) is to provide strong support at the operating system level for the efficient restructuring of critical applications into small communicating components. Each component can communicate with the others only through protected interfaces, and is isolated from the rest of the system. A protected interface, in this context, is one that is enforced by the lowest level part of the operating system, the kernel. That is the only part of the system that can move information from one process to another. It also has complete control of the machine and (if properly constructed) cannot be bypassed. In EROS, the kernel-provided mechanism by which one component names and invokes the services of another is a capability, using inter-process communication (IPC). By enforcing capability-protected interfaces, the kernel ensures that all communications to a process arrive via an intentionally exported interface. It also ensures that no invocation is possible unless the invoking component holds a valid capability to the invokee. Protection in capability systems is achieved by restricting the propagation of capabilities from one component to another, often through a security policy termed confinement. Capability systems naturally promote component-based software structure. This organizational approach is similar to the programming language concept of object-oriented programming, but occurs at larger granularity and does not include the concept of inheritance. When software is restructured in this way, several benefits emerge:
The individual components are most naturally structured as event loops. Examples of systems that are commonly structured this way include aircraft flight control systems (see also DO-178B Software Considerations in Airborne Systems and Equipment Certification), and telephone switching systems (see 5ESS switch). Event-driven programming is chosen for these systems mainly because of simplicity and robustness, which are essential attributes in life-critical and mission-critical systems.
Components become smaller and individually testable, which helps to more readily isolate and identify flaws and bugs.
The isolation of each component from the others limits the scope of any damage that may occur when something goes wrong or the software misbehaves.
Collectively, these benefits lead to measurably more robust and secure systems. The Plessey System 250 was a system originally designed for use in telephony switches, whose capability-based design was chosen specifically for reasons of robustness. In contrast to many earlier systems, capabilities are the only mechanism for naming and using resources in EROS, making it what is sometimes referred to as a pure capability system. In contrast, IBM i is an example of a commercially successful capability system, but it is not a pure capability system. Pure capability architectures are supported by well-tested and mature mathematical security models.
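The capability discipline described above can be sketched, very loosely, in ordinary code. The Python example below is a conceptual illustration only, not EROS's kernel interface; the class names are hypothetical, and the point is simply that a component can invoke only those services for which it explicitly holds a capability, and that capabilities spread only when deliberately granted.

```python
# A minimal conceptual model of the capability idea described above, written
# in Python for illustration. It is not EROS's actual kernel interface; names
# and structure here are hypothetical. The point is that a component can only
# invoke services for which it explicitly holds a capability (an unforgeable
# reference), and capabilities propagate only by being deliberately granted.
class Capability:
    """An unforgeable handle to one exported interface of a component."""
    def __init__(self, target, method_name):
        self._target = target
        self._method_name = method_name

    def invoke(self, *args):
        return getattr(self._target, self._method_name)(*args)

class LogService:
    def append(self, message):
        print("log:", message)
        return True

class ConfinedComponent:
    """Holds only the capabilities it was granted at construction time."""
    def __init__(self, caps):
        self._caps = caps

    def run(self):
        # This component can append to the log, but it has no way to name
        # or reach any other service in the system.
        self._caps["log"].invoke("confined component started")

log = LogService()
component = ConfinedComponent({"log": Capability(log, "append")})
component.run()
```

In a real capability kernel the unforgeability is enforced by the kernel itself rather than by language convention, which is what makes the confinement argument mathematically tractable.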
These models have been used to formally demonstrate that capability-based systems can be made secure if implemented correctly. The so-called "safety property" has been shown to be decidable for pure capability systems (see Lipton). Confinement, which is the fundamental building block of isolation, has been formally verified to be enforceable by pure capability systems, and is reduced to practical implementation by the EROS constructor and the KeyKOS factory. No comparable verification exists for any other primitive protection mechanism. There is a fundamental result in the literature showing that safety is mathematically undecidable in the general case (see HRU, but note that it is of course provable for an unbounded set of restricted cases). Of greater practical importance, safety has been shown to be false for all of the primitive protection mechanisms shipping in current commodity operating systems. Safety is a necessary precondition to successful enforcement of any security policy. In practical terms, this result means that it is not possible in principle to secure current commodity systems, but it is potentially possible to secure capability-based systems provided they are implemented with sufficient care. Neither EROS nor KeyKOS has ever been successfully penetrated, and their isolation mechanisms have never been successfully defeated by any inside attacker, but it is not known whether the two implementations were careful enough. One goal of the Coyotos project is to demonstrate that component isolation and security can be definitively achieved by applying software verification techniques. The L4.sec system, which is a successor to the L4 microkernel family, is a capability-based system, and has been significantly influenced by the results of the EROS project. The influence is mutual, since the EROS work on high-performance invocation was motivated strongly by Jochen Liedtke's successes with the L4 microkernel family.
History
The primary developer of EROS was Jonathan S. Shapiro. He is also the driving force behind Coyotos, which is an "evolutionary step" beyond the EROS operating system. The EROS project started in 1991 as a clean-room reconstruction of an earlier operating system, KeyKOS. KeyKOS was developed by Key Logic, Inc., and was a direct continuation of work on the earlier Great New Operating System In the Sky (GNOSIS) system created by Tymshare, Inc. The circumstances surrounding Key Logic's demise in 1991 made licensing KeyKOS impractical. Since KeyKOS did not run on popular commodity processors in any case, the decision was made to reconstruct it from the publicly available documentation. By late 1992, it had become clear that processor architecture had changed significantly since the introduction of the capability idea, and it was no longer obvious that component-structured systems were practical. Microkernel-based systems, which similarly favor large numbers of processes and IPC, were facing severe performance challenges, and it was uncertain if these could be successfully resolved. The x86 architecture was clearly emerging as the dominant architecture, but the expensive user/supervisor transition latency on the 386 and 486 presented serious challenges for process-based isolation. The EROS project was turning into a research effort, and moved to the University of Pennsylvania to become the focus of Shapiro's dissertation research.
By 1999, a high-performance implementation for the Pentium processor had been demonstrated that was directly performance-competitive with the L4 microkernel family, which is known for its exceptional speed in IPC. The EROS confinement mechanism had been formally verified, in the process creating a general formal model for secure capability systems. In 2000, Shapiro joined the faculty of Computer Science at Johns Hopkins University. At Hopkins, the goal was to show how to use the facilities provided by the EROS kernel to construct secure and defensible servers at application level. Funded by the Defense Advanced Research Projects Agency and the Air Force Research Laboratory, EROS was used as the basis for a trusted window system, a high-performance, defensible network stack, and the beginnings of a secure web browser. It was also used to explore the effectiveness of lightweight static checking. In 2003, some very challenging security issues were discovered that are intrinsic to any system architecture based on synchronous IPC primitives (notably including EROS and L4). Work on EROS halted in favor of Coyotos, which resolves these issues. EROS and its successors are the only widely available capability systems that run on commodity hardware.
Status
Work on EROS by the original group has halted, but there are two successor systems. The CapROS system builds directly on the EROS code base, while the Coyotos system is a successor system that addresses some of the architectural deficiencies of EROS, and is exploring the possibility of a fully verified operating system. Both CapROS and Coyotos are expected to be released in various commercial deployments.
See also
Nanokernel
References
Journals
External links
Coyotos home page at the Wayback Machine (archived September 2, 2016)
Jonathan Shapiro's homepage
CapROS
Microkernels Real-time operating systems Capability systems
3369962
https://en.wikipedia.org/wiki/Shackerstone
Shackerstone
Shackerstone is a village and civil parish in the Hinckley and Bosworth district of Leicestershire, England. It is situated on the Ashby-de-la-Zouch Canal and the River Sence. According to the 2001 census the parish, which also includes the villages of Barton in the Beans and Odstone, had a population of 811; this had risen to 921 at the 2011 census.
History
In the Elizabethan era the Hall family were prominent in the village. They occupied the hall next to the church, known as Shakerstone Mannor. They sold this property in 1843, 13 years after Henry Edward and Sarah Theodosia Hall (William Shakespeare Hall's parents) had moved the family to the Swan River Colony. During the Civil War, Shackerstone was near enough to Ashby de la Zouch to attract the attention of both parties. Parliamentary soldiers from Tamworth and Coventry stole horses, including a mare worth ten pounds from Mr. Hall. The local vicar, the Rev. John Hodges, was ejected from the living in 1646 and brought before the parliamentary sequestration committee for deserting his parish to join the royalist garrison at Ashby for four months. The commissioners charged him with frequenting the village alehouse on Sundays, and with being "a companion with fidlers and singers". In the early eighteenth century, John Nichols records a fine church, a water mill and an absentee parson, Dr Adamthwaite, a prolific and energetic letter-writer, who was vicar from 1779 to 1811. This was a poor parish. By 1789 the parson complained that he could not afford to live there, residing instead in Hampton in Arden, in Warwickshire some 24 miles away, where he had a curacy. He claimed that the parsonage had been "miserably beggared" by the previous incumbent who died insolvent in a gaol. The vicarage was "so entirely let down as that no sign remains of there ever having been one". By 1 April 1805, the population seems to have slightly increased, a local census counting 51 families in Shackerstone, 53 families in Odstone and six in Barton, providing a total population of around 375.
Transport
In 1804 the Ashby Canal was opened; it passes Shackerstone on the east. There are public moorings prior to bridge 52, and between bridges 52 and 53 are private moorings. The sharp turn by the station has been known to cause a certain amount of entertainment for the unwary boater. Shackerstone railway station is on the Battlefield Line Railway, a preserved steam and diesel heritage railway that runs trains to Bosworth Battlefield. The railway came to Shackerstone in 1873 and continued providing passenger services until 1931, after which only freight ran on the rails of the Ashby and Nuneaton Joint Railway. The line was finally closed by British Rail in 1970, at which point the railway society arrived and has restored the station and reopened the line to Shenton Station, the terminus for Bosworth Battlefield.
Landmarks
During World War II, the remains of the motte-and-bailey castle in the village had an air raid shelter dug into it. It is believed that this still has a rocking chair within it. Located close to Shackerstone was the stately home of Gopsall Hall, home of Charles Jennens, a librettist and friend of George Frideric Handel.
Culture and community
From 1994 to 2019 Shackerstone also hosted a large family festival, usually in the first week of September, that covered everything from vintage cars to aerobatic stunt planes.
The charity event is based around a number of organising parties: the village, the canal, a 4x4 group, an unaffiliated group and the railway. The festival was usually well attended by the public.
References
External links
Adamthwaite documents & Historical records
Civil parishes in Leicestershire Villages in Leicestershire Hinckley and Bosworth
555338
https://en.wikipedia.org/wiki/Roger%20Connor
Roger Connor
Roger Connor (July 1, 1857 – January 4, 1931) was a 19th-century Major League Baseball (MLB) player. He played for several teams, but his longest tenure was in New York, where he was responsible for the New York Gothams becoming known as the Giants. He was the player whom Babe Ruth succeeded as the all-time home run champion. Connor hit 138 home runs during his 18-year career, and his career home run record stood for 23 years after his retirement in 1897. Connor owned and managed minor league baseball teams after his playing days. He was elected to the Baseball Hall of Fame by its Veterans Committee in 1976. Largely forgotten after his retirement, Connor was buried in an unmarked grave until a group of citizens raised money for a grave marker in 2001. Early life Connor was born in Waterbury, Connecticut. He was the son of Irish immigrants Mortimer Connor and Catherine Sullivan Connor. His father had arrived in the United States only five years before Roger's birth. The family lived in the Irish section of Waterbury, known as the Abrigador district, which was separated from the rest of the city by a large granite hill. Connor was the third of eleven children born to the family, though two did not survive childhood. Connor left school around age 12 to work with his father at the local brass works. Connor entered professional baseball with the Waterbury Monitors of the Eastern League in 1876. Though he was left-handed, Connor was initially a third baseman; in early baseball, left-handed third basemen were more common than they are in modern baseball. In 1878 he would transfer to the minor league Holyoke Shamrocks, where he became known for hitting home runs across the field into the Connecticut River. This so-impressed Springfield baseball boss Bob Ferguson that he signed Connor onto the National League (NL) Troy Trojans when he bought them out in 1880. MLB playing career Early years (1880–1889) In Connor's first year with the Troy Trojans, he teamed with future Hall of Fame players Dan Brouthers, Buck Ewing, Tim Keefe and Mickey Welch, all of whom were just starting their careers. Also on that 1880 Trojans team, though much older, was player-manager Bob "Death to Flying Things" Ferguson. Though Connor, Ferguson and Welch were regularly in the lineup, the other future stars each played in only a handful of the team's 83 games that season. The team finished in fourth place with a 41–42 win-loss record. Connor committed 60 errors in 83 games and sustained a shoulder injury, prompting a position change to first baseman for 1881. He later played for the New York Gothams, and, due to his great stature, gave that team the enduring nickname "Giants". Connor hit baseball's first grand slam on September 10, 1881, at Riverfront Park in Rensselaer, New York. His grand slam came with two outs and his team down three runs in the bottom of the ninth inning, a situation known today as a walk-off home run. George Vecsey, in The New York Times wrote: "Roger Connor was a complete player — a deft first baseman and an agile base runner who hit 233 triples and stole 244 bases despite his size (6 feet 3 inches and 200 pounds)." He led the NL with a .371 average in 1885. On September 11, 1886, Connor hit a ball completely out of the Polo Grounds, a very difficult park in which to hit home runs. He hit the pitch from Boston's Old Hoss Radbourn over the right field fence and onto 112th Street. The New York Times reported of the feat, "He met it squarely and it soared up with the speed of a carrier pigeon. 
All eyes were turned on the tiny sphere as it soared over the head of Charlie Buffinton in right field." A group of fans with the New York Stock Exchange took up a collection for Connor and bought him a $500 gold watch in honor of the home run. Players' League (1890) Another New York baseball team, also known as the Giants, emerged with the founding of the Players' League (PL) in 1890. Several players from the NL team left for the new league's Giants team, including future Hall of Famers Connor, Keefe, Jim O'Rourke and Hank O'Day. In 123 games, Connor registered 169 hits, a .349 batting average, 14 home runs, 103 runs batted in (RBI) and 22 stolen bases. His home run total led the league and it represented the only major league single-season home run title that he won. Connor experimented with some changes to his batting style that year. He hit more balls to the opposite field and he sometimes batted right-handed, though he did not have much success from the right side. Though Connor had success in his season with the PL, the league struggled. Some of the teams ran into financial difficulties. National League teams rescheduled many of their games to conflict with PL games in the same cities, and a high number of PL games were cancelled late in the season due to rainouts. Connor was optimistic that the league would be successful in 1891, but it officially broke up that January. Later career (1891–1897) Returning to the NL Giants for a season in 1891, Connor hit .294. In the offseason before 1892, Connor signed with the Philadelphia Athletics. The team broke up shortly after Connor signed, and his contract was awarded to the Philadelphia Phillies for that year. He returned to the Giants in 1893, raising his average to .322 and hitting 11 home runs. During the 1894 season, the Giants looked toward the team's youth and Connor lost his starting position to Jack Doyle. He was released that year and picked up by the St. Louis Browns. The next year, his brother Joe Connor made his major league debut with the same team. Joe played two games with St. Louis before being sent back down to the minor leagues. That year's St. Louis team finished with a 39–92 record, games out of first place. Connor was released by the Browns in May 1897 after starting the season with a .227 batting average. His major league playing career was over. While a major league player, Connor was regularly among the league leaders in batting average and home runs. Connor's career mark of 138 was a benchmark not surpassed until 1921 by Babe Ruth. He finished his career with a .317 batting average. Connor finished in the top ten in batting average ten times, all between 1880 and 1891. Over an 18-year career, Connor finished in the top ten for doubles ten times, finished in the top three for triples seven times and remains fifth all-time in triples with 233. He also established his power credentials by finishing in the top ten in RBI ten times and top ten in homers twelve times. Personal In 1886, Connor and his wife Angeline had a daughter named Lulu. She died as an infant. Connor interpreted the baby's death as God's punishment for marrying Angeline, who was not Catholic. Angeline had secretly begun receiving Catholic education and was planning to surprise Connor by getting baptized on the day that Lulu would have turned a year old. The couple later adopted a girl named Cecelia from a Catholic orphanage in New York City. Roger and Angeline Connor lived in Waterbury, Connecticut, for many years, even while Roger played in New York. 
Every winter, a banquet was held in Waterbury in Connor's honor. Near the end of the 19th century, Angeline gave Roger a weather vane which had been constructed from two of his baseball bats. The weather vane served as a well-known landmark in Waterbury even after the couple moved away. Later life Minor league baseball Connor signed with the Fall River Indians of the New England League in June 1897. Connor attracted some attention by wearing eyeglasses on the field. He hit cleanup, played first base and was popular among fans. In 1898, Connor moved back to his hometown of Waterbury and purchased the local minor league team. He served as president, manager and played first base on the side. Connor's wife, Angeline, kept the team's books and his daughter helped by collecting tickets. Joe Connor was the team's catcher; he later returned to the major leagues for several seasons. After the 1899 season, Connor expressed satisfaction with his Waterbury team, saying that the team played well and did not lose money despite not getting strong attendance numbers at their games. In 1901, Connor became interested in purchasing the minor league franchise in Hartford, Connecticut. The team had been dropped from the Eastern League and had suffered financial losses related to traveling as far away as Canada for games. Connor proposed that he might purchase the team and attempt to have it admitted to the Connecticut State League, decreasing its travel requirements. However, upon selling the Waterbury club at the end of that season, he bought the Springfield Ponies franchise in the same league. Retirement from baseball In September 1903, Connor announced his retirement from baseball and placed his team up for sale. He had made a similar statement the year before and apparently on a frequent basis before that. In June 1902, the local newspaper said, "Roger bobs up every summer and makes his farewell to the baseball public." His 1903 retirement was earnest though; he attended a 1904 Springfield-Norwich game as a retired spectator. Connor worked as a school inspector in Waterbury until 1920. He lived to see his career home run record bested by Babe Ruth, although if it was celebrated, it might have been on the wrong day. At one time, Connor's record was thought to be 131, per the Sporting News book Daguerreotypes. As late as the 1980s, in the MacMillan Baseball Encyclopedia, it was thought to be 136. However, John Tattersall's 1975 Home Run Handbook, a publication of the Society for American Baseball Research (SABR), credited Connor with 138. Both MLB.com and the independent Baseball-Reference.com now consider Connor's total to be 138. Death Connor died on January 4, 1931, following a lengthy stomach illness. He was 73. A news article after his death said that his "likeable personality and his colorful action made him an idol." He was buried in an unmarked grave at St. Joseph's Cemetery in Waterbury. Connor was inducted to the Baseball Hall of Fame in 1976. Hall of Fame umpire Bill Klem had long campaigned on behalf of Connor's inclusion in the Hall of Fame. Decades after his death, Waterbury citizens and baseball fans raised enough money to purchase a headstone at his grave, which was dedicated in a 2001 ceremony. 
See also List of Major League Baseball career hits leaders List of Major League Baseball career doubles leaders List of Major League Baseball career triples leaders List of Major League Baseball career runs scored leaders List of Major League Baseball career runs batted in leaders List of Major League Baseball career stolen bases leaders List of Major League Baseball batting champions List of Major League Baseball annual doubles leaders List of Major League Baseball annual triples leaders List of Major League Baseball annual runs batted in leaders List of Major League Baseball players to hit for the cycle List of Major League Baseball single-game hits leaders List of Major League Baseball triples records List of Major League Baseball player-managers List of St. Louis Cardinals team records Notes References Kerr, Roy. Roger Connor: Home Run King of 19th Century Baseball. McFarland, 2011. . External links , or Retrosheet 1857 births 1931 deaths 19th-century baseball players American people of Irish descent Baseball players from Connecticut Burials in Connecticut Catholics from Connecticut Connecticut League Managers Fall River Indians players Hartford (minor league baseball) players Holyoke (minor league baseball) players Major League Baseball first basemen Major League Baseball player-managers Minor league baseball managers National Baseball Hall of Fame inductees National League batting champions National League RBI champions New Bedford (minor league baseball) players New Haven Blues players New Haven (minor league baseball) players New York Giants (NL) players New York Giants (PL) players Philadelphia Phillies players Sportspeople from Waterbury, Connecticut Springfield Ponies players St. Louis Browns (NL) players Troy Trojans players Waterbury Indians players Waterbury Pirates players Waterbury Rough Riders players
28526089
https://en.wikipedia.org/wiki/Bovard%20Field
Bovard Field
Bovard Field was a stadium in Los Angeles, California, on the campus of the University of Southern California. The Trojans football team played here until they moved to Los Angeles Memorial Coliseum in 1923 and it was the home of USC baseball until Dedeaux Field opened in 1974, about to the northwest. The football stadium and running track held 12,000 people at its peak, and ran southwest to northeast, near and parallel to today's Watt Way. The elevation of the field is approximately above sea level. The baseball field was aligned (home to center field) similar to Dedeaux Field, but a few degrees clockwise, nearly true north, but just slightly west. Home plate was located in today's E.F. Hutton Park and left field was bounded by Watt Way. Beyond first base, a large eucalyptus tree came into play; while its trunk was in foul territory, some of its branches crossed into fair territory and guarded the foul line in shallow right field. Mickey Mantle In March 1951, a 19-year-old Mickey Mantle of the New York Yankees, about to embark on his rookie season in the majors, went 4-for-5 with a pair of home runs, one from each side of the plate against the Trojans in an exhibition game. The home run as a leftie was a massive shot that went well beyond the right field fence into the football practice field, during spring drills. He also had a triple for a total of seven runs batted in for the game, which the Yanks won 15–1. References External links Stadium information American football venues in Los Angeles Athletics (track and field) venues in Los Angeles Baseball venues in Los Angeles Defunct athletics (track and field) venues in the United States Defunct college football venues Defunct college baseball venues in the United States USC Trojans baseball venues USC Trojans football venues 1973 disestablishments in California Sports venues demolished in 1973
53670297
https://en.wikipedia.org/wiki/Chris%20Harrison%20%28computer%20scientist%29
Chris Harrison (computer scientist)
Chris Harrison is a British-born American computer scientist and entrepreneur, working in the fields of human–computer interaction, machine learning and sensor-driven interactive systems. He is a professor at Carnegie Mellon University and director of the Future Interfaces Group within the Human–Computer Interaction Institute. He has previously conducted research at AT&T Labs, Microsoft Research, IBM Research and Disney Research. He is also the CTO and co-founder of Qeexo, a machine learning and interaction technology startup. Harrison has authored more than 80 peer-reviewed papers and his work appears in more than 40 books. For his contributions to human–computer interaction, Harrison was named a top 35 innovator under 35 by MIT Technology Review (2012), a top 30 scientist under 30 by Forbes (2012), one of six innovators to watch by Smithsonian (2013), and a top Young Scientist by the World Economic Forum (2014). Over the course of his career, Harrison has been awarded fellowships by the Packard Foundation, Sloan Foundation, Google, Qualcomm and Microsoft Research. He currently holds the A. Nico Habermann Chair in Computer Science. More recently, NYU, Harrison's undergraduate alma mater, named him its 2014 Distinguished Young Alumnus, and his lab won a Fast Company Innovation by Design Award for its work on EM-Sense.
Biography
Harrison was born in 1984 in London, United Kingdom, but emigrated with his family to New York City in the United States at a young age. Harrison actively participated in the ACM programming competitions and engaged in a variety of crafts. He also developed an interest in slinging and was contacted about the hobby by the BBC for an ancient weapons documentary. Consequently, Harrison created and launched slinging.org on March 20, 2003, as an online forum for sling enthusiasts; it is currently the largest website on the subject, with over 200,000 forum posts. Harrison obtained his citizenship in the United States on May 13, 2002. Harrison obtained both a B.A. (2002-2005) and M.S. (2006) in Computer Science from the Courant Institute of Mathematical Sciences at New York University. His Master's thesis was advised by Dr. Dennis Shasha, with whom he worked on a relational file system built around the concept of temporal context. New York University honored Harrison as its 2014 Distinguished Young Alumnus. During his master's studies, Harrison worked at IBM Research - Almaden on an early personal assistant application called Enki under Mark Dean, then the director of the lab. After completing his master's degree, Harrison worked at AT&T Labs, developing one of the first asynchronous social video platforms, dubbed CollaboraTV, with features now common in modern systems. Encouraged by colleagues, Harrison joined the Ph.D. program in Human–Computer Interaction at Carnegie Mellon University in 2007, completing his dissertation on "The Human Body as an Interactive Computing Platform" in 2013 under the supervision of Dr. Scott Hudson. From 2009 to 2012, Harrison was the Editor-in-Chief of ACM's Crossroads magazine, which he relaunched as XRDS, the flagship magazine for the over 30,000 student members of the ACM. Harrison has spun out several technologies from CMU and cofounded the machine learning startup Qeexo in 2012, which provides specialized machine-learning engines for mobile and embedded platforms, with a focus on interactive technologies.
In 2019, the company won a CES Innovation Award for their EarSense solution, which was used in the bezel-less Oppo Find X, replacing the need for a physical proximity sensor with a virtual machine-learning-powered solution. In total, the company software is used on more than 100 million devices as of 2017. In 2013, Harrison became faculty at Carnegie Mellon University, founding the Future Interfaces Group within the Human–Computer Interaction Institute. Research Harrison broadly investigates novel sensing and interface technologies, especially those "that empower people to interact with small devices in big ways". He is best known for his research into ubiquitous and wearable computing, where computation escapes the confines of today's small, rectangular screens, and spills interactivity out onto everyday surfaces, such as walls, countertops and furniture. This research thread dates back to 2008, starting with Scratch Input appropriating walls and tables as ad hoc input surfaces. Insights from this work, especially the vibroacoustic propagation of touch inputs, led to Skinput being developed while Harrison was interning at Microsoft Research. Skinput was the first on-body system to demonstrate touch input and coordinated projected graphics without the need to instrument the hands. This research was followed shortly after by OmniTouch, also at Microsoft Research. More recently, Harrison has been conducting research at the Future Interfaces Group in the Human Computer Interaction department of CMU. In 2016, Harrison presented 3 topics at the ACM Symposium on User Interface Software and Technology (UIST) through Future Interfaces Group. These topics were: using a high speed mode of a smartwatch's accelerometer to acquire and interpret acoustic samples at 4000 samples per second, using electrical impedance sensing to create a real-time hand gesture sensor, and using infrared sensors on a smartwatch to recognize and utilize hand gestures made on the skin directly around the smartwatch. In 2017, Robert Xiao, an HCII PhD student, along with Harrison and Scott Hudson, his advisors, created Desktopography, an interactive multi-touch interface that is projected onto a desktop surface. Inspired by the Xerox PARC DigitalDesk, one of the first digitally augmented desks of its time, Desktopography explores the possibilities of virtual-physical interactions and deals with how to best create a user-friendly interface which has to navigate around various, constantly moved objects, as commonly found on one's desktop surface. Other activities Harrison co-developed and co-wrote Crash Course Computer Science, a PBS Digital Studios-funded educational series hosted on YouTube, with his partner, Amy Ogan. This project was initiated following a discussion between Harrison and John Green at the World Economic Forum in 2016, where both were guest speakers. Along with Robert Xiao and Scott Hudson, colleagues at CMU, he developed Lumitrack, a motion tracking technology which is currently used in video game controllers and in the film industry. Harrison was also one of the Program Committee Chairs for the 2017 ACM Symposium on User Interface Software and Technology (UIST). Chris is also an amateur digital artist and sculptor. His artworks have appeared in over 40 books and more than a dozen international galleries. Notable among these appearances were showings at the Triennale di Milano in Milan, Italy (2014), and the Biennale Internationale Design in Saint-Étienne, France (2010). 
References American computer scientists Human–computer interaction researchers Living people Human-Computer Interaction Institute faculty Carnegie Mellon University faculty Carnegie Mellon University alumni New York University alumni 1984 births Disney Research people
994639
https://en.wikipedia.org/wiki/GameStop
GameStop
GameStop Corp. is an American video game, consumer electronics, and gaming merchandise retailer. The company is headquartered in Grapevine, Texas (a suburb of Dallas), and is the largest video game retailer worldwide. As of January 30, 2021, the company operated 4,816 stores including 3,192 in the United States, 253 in Canada, 417 in Australia and New Zealand and 954 in Europe under the GameStop, EB Games, EB Games Australia, Micromania-Zing, ThinkGeek and Zing Pop Culture brands. The company was founded in Dallas in 1984 as Babbage's, and took on its current name in 1999. The company's performance declined during the mid-late 2010s due to the shift of video game sales to online shopping and downloads and failed investments by GameStop in smartphone retail. In 2021 however, the company's stock price skyrocketed due to a short squeeze orchestrated by users of the Internet forum r/wallstreetbets. The company received significant media attention during January and February 2021 due to the volatility of its stock price and the GameStop short squeeze. The company is now ranked 521st on the Fortune 500. In addition to retail stores, GameStop owns and publishes Game Informer, a video game magazine and in Australia runs Zing Marketplace an e-commerce retro gaming and pop culture marketplace that facilitates consumer-to-consumer sales. History Babbage's (1984–1994) GameStop traces its roots to Babbage's, a Dallas, Texas-based software retailer founded in 1984 by former Harvard Business School classmates James McCurry and Gary M. Kusin. The company was named after Charles Babbage and opened its first store in Dallas's NorthPark Center with the help of Ross Perot, an early investor in the company. The company quickly began to focus on video game sales for the then-dominant Atari 2600. Babbage's began selling Nintendo games in 1987. Babbage's became a public company via an initial public offering in 1988. By 1991, video games accounted for two-thirds of Babbage's sales. NeoStar Retail Group (1994–1996) Babbage's merged with Software Etc., an Edina, Minnesota-based retailer that specialized in personal computing software, to create NeoStar Retail Group in 1994. The merger was structured as a stock swap, where shareholders of Babbage's and Software Etc. received shares of NeoStar, a newly formed holding company. Babbage's and Software Etc. continued to operate as independent subsidiaries of NeoStar and retained their respective senior management teams. Babbage's founder and chairman James McCurry became chairman of NeoStar, while Babbage's president Gary Kusin and Software Etc. President Daniel DeMatteo retained their respective titles. Software Etc. chairman Leonard Riggio became chairman of NeoStar's executive committee. Gary Kusin resigned as president of Babbage's in February 1995 to start a cosmetics company. Daniel DeMatteo, formerly president of Software Etc., assumed Kusin's duties and was promoted to president and chief operating officer of NeoStar. NeoStar chairman James McCurry was also appointed to the newly created position of NeoStar CEO. The company relocated from its headquarters in Dallas to Grapevine later that year. NeoStar merged its Babbage's and Software Etc. units into a single organization in May 1996 amid declining sales. Company president Daniel DeMatteo also resigned, and NeoStar chairman and CEO James McCurry assumed the title of president. 
In September of that year, after NeoStar was unable to secure the credit necessary to purchase inventory necessary for the holiday season, the company filed for Chapter 11 bankruptcy and appointed Thomas G. Plaskett chairman while James McCurry remained company chief executive and president. The leadership changes were not enough and in November 1996 the assets of NeoStar were purchased for $58.5 million by Leonard Riggio, a founder of Software Etc. and chairman and principal stockholder of Barnes & Noble. Electronics Boutique had also bid to purchase NeoStar, but the judge presiding over NeoStar's bankruptcy accepted Riggio's bid because it kept open 108 stores more than Electronics Boutique's bid would have. Approximately 200 retail stores were not included in the transaction and were subsequently closed. Babbage's Etc. (1996–1999) Following his purchase of NeoStar's assets, Leonard Riggio dissolved the holding company and created a new holding company named Babbage's Etc. He appointed Richard "Dick" Fontaine, previously Software Etc.'s chief executive during its expansion in the late 1980s and early 1990s, as Babbage Etc.'s chief executive. Daniel DeMatteo, previously the president of both Software Etc. and NeoStar, became company president and COO. Three years later, in 1999, Babbage's Etc. launched its GameStop brand with 30 stores in strip malls. The company also launched gamestop.com, a website that allowed consumers to purchase video games online. GameStop.com was promoted in Babbage's and Software Etc. stores. Barnes & Noble Booksellers (1999–2004) In October 1999, Barnes & Noble Booksellers purchased Babbage's Etc. for $215 million. Because Babbage's Etc. was principally owned by Leonard Riggio, who was also Barnes & Noble's chairman and principal shareholder, a special committee of independent directors of Barnes & Noble Booksellers evaluated and signed off on the deal. A few months later, in May 2000, Barnes & Noble acquired Funco, the owner of Eden Prairie, Minnesota-based video game retailer FuncoLand, for $160 million. Babbage's Etc., which had been previously operating as a direct subsidiary of Barnes & Noble, became a wholly owned subsidiary of Funco. With its acquisition of Funco, Barnes & Noble also acquired Game Informer, a video game magazine that was first published in 1991. Funco was renamed GameStop, Inc. in December 2000 in anticipation of holding an initial public offering for the company. In February 2002, the company once again became a public company via an initial public offering. Barnes & Noble retained control over the newly public company with 67% of outstanding shares and 95% of voting shares. Barnes & Noble retained control over GameStop until October 2004, when it distributed its 59% stake in GameStop to stakeholders of Barnes & Noble, making it an independent company. GameStop's successful years (2004–2016) GameStop acquired EB Games (formerly Electronics Boutique) in 2005 for $1.44 billion. The acquisition expanded GameStop's operations into Europe, Canada, Australia, and New Zealand. Two years later, in 2007, GameStop acquired Rhino Video Games from Blockbuster LLC for an undisclosed amount. Rhino Video Games operated 70 video game stores throughout the Southeastern United States. GameStop founded MovieStop in 2004 as a standalone store that focused on new and used movies. More than 42 locations were opened, which typically adjoined or were adjacent to GameStop locations. GameStop spun off MovieStop to private owners in 2012. 
In November 2014, Draw Another Circle LLC, a company controlled by merchandising executive Joel Weinshanker that also owns Hastings Entertainment, purchased MovieStop. The chain shut down in 2016. In April 2008, GameStop acquired Free Record Shop's 49 Norwegian stores. Daniel DeMatteo replaced Richard Fontaine as GameStop CEO in August 2008. DeMatteo had served as company COO since 1996. Fontaine, who had been GameStop chairman and CEO since 1996, remained the company's chairman. J. Paul Raines, formerly executive vice president of Home Depot, became company COO in September. In October 2008, GameStop acquired Micromania, a French video-game retailer, for $700 million. GameStop, which had previously owned no stores in France, now had 332 French video-game stores. In November 2009, it acquired a majority stake in Jolt Online Gaming, an Irish browser game studio. Jolt closed in 2012. J. Paul Raines became GameStop CEO in June 2010, replacing Daniel DeMatteo, who was named executive chairman of the company. Under Raines's leadership, GameStop's digital revenue grew from $190 million in 2011 to more than $600 million in 2012. In 2010, GameStop acquired Kongregate, a San Francisco-based website for browser-based games; it was sold in 2017 for $55 million. In 2011, GameStop acquired Spawn Labs and Impulse in separate transactions. Spawn Labs was a developer of technology that allowed users to play video games run remotely on machines in data centers rather than on their personal computer or console. Impulse was a digital distribution and multiplayer video game platform acquired from Stardock and renamed GameStop PC Downloads. Under GameStop's ownership, the service was redesigned and sold games that use other platforms such as Steam, while also selling games that use its own proprietary DRM solution, Impulse:Reactor. GameStop shut down both PC Downloads and Spawn Labs in 2014. In 2012, GameStop acquired BuyMyTronics, a Denver-based online marketplace for consumer electronics. In October 2012, at Grapevine Mills near Dallas, GameStop introduced GameStop Kids, a pop-up retail concept. The brand, which had 80 locations in shopping malls during the Christmas and holiday season, focused on children's products and carried only games rated "Everyone" by the ESRB, along with merchandise of popular franchises aimed at that demographic. In October 2012, GameStop acquired a 49.9% ownership interest in Simply Mac, a Salt Lake City-based Apple authorized reseller and repairer founded in 2006. GameStop acquired the remaining 50.1% of ownership in November 2013. GameStop targeted potential new Simply Mac locations in smaller markets that did not have an existing Apple Store within a reasonable driving distance. In January 2017, GameStop closed many Simply Mac locations; the chain had as many as 70 locations at the time of the announcement. In 2019, GameStop divested Simply Mac; at that time it had 43 stores. In November 2013, GameStop acquired Spring Mobile, a Salt Lake City-based retailer of AT&T-branded wireless services. It acquired 163 RadioShack locations in February 2015. In July 2015, it acquired Geeknet. All GameStop stores in Puerto Rico were closed at the end of March 2016, with the company citing increased government taxes. On August 3, 2016, GameStop acquired 507 AT&T stores as part of a plan to diversify into new businesses and become less dependent on the video game market. 
Decline (2016–present) Changes in market conditions The market for physical game media has been in a state of decline due to downloadable games on services such as Xbox Live, PlayStation Network, Nintendo eShop, and Steam. This has resulted in a decline in sales at GameStop. In 2017, GameStop reported a 16.4% drop in sales for the 2016 holiday season, but expressed optimism in its non-physical gaming businesses. In February 2017, it was revealed that GameStop enforced a program known as Circle of Life on all of its retail employees. The policy was intended to ensure that a certain percentage of each employee's sales came from pre-orders, rewards cards, used games, or customer trade-ins. After the policy came to light, many current and former GameStop employees shared stories of how it had led them to lie to customers. Many more claimed that the policy had led to poor working conditions and emotional distress. Later that month, GameStop reformed the program to focus on the store as a whole instead of on individual employees, though it still maintained a heavy emphasis on individual performance to keep store metrics strong. Financial losses Shares of GameStop stock fell 16% in 2016. On February 28, 2017, shares dropped an additional 8% following Microsoft's announcement of its Xbox Game Pass service. Following these reports, GameStop announced it would close over 150 stores in 2017 and expand its non-gaming business. On the same day, however, GameStop said it planned to open 65 new Technology Brand stores and 35 Collectibles stores due to a 44% and 28% increase in sales, respectively. GameStop's total revenue fell 7.6% to $3.06 billion in the quarter ended February 2, 2018. Business Insider described GameStop's investment in Spring Mobile as a failure, estimating that the company spent $1.5 billion acquiring Spring Mobile and store locations but gained only $700 million from the sale of Spring Mobile to Prime Communications in 2018, leaving it $800 million in debt. In late June 2018, GameStop confirmed talks of a possible sale, with the private equity firm Sycamore Partners seen as the most likely buyer and a deal targeted for February 2019. However, on January 29, 2019, GameStop reported it had stopped looking for a buyer for the company, due to a "lack of available financing on terms that would be commercially acceptable to a prospective acquirer", and was looking for other actions to help re-establish its financial footing. Shares dropped 27% to a 14-year low immediately following this announcement. The financial results for 2018 showed the biggest loss in GameStop's history. For the 52-week period ending on February 2, 2019, GameStop reported a record-breaking net loss of $673 million, a reversal from the net profit of $34.7 million in the previous year. Net sales for fiscal year 2018 were down 3% year-on-year to $8.29 billion. The company also eliminated its dividend. In December 2021, GameStop posted a larger-than-expected loss for the fiscal third quarter, leaving investors waiting to hear how the ailing company planned to restructure its operations and entice gamers back; shares plummeted in extended trading. Management changes After being on medical leave since November 2017 due to a recurrence of a brain tumor, J. Paul Raines resigned from GameStop on January 31, 2018, and died on March 4, 2018. DeMatteo, GameStop's executive chairman, stepped in as interim chief executive officer. 
On February 6, 2018, the company announced Michael K. Mauler as CEO and a member of the board of directors. On May 11, 2018, Mauler resigned for "personal reasons" and chairman Dan DeMatteo was named interim CEO. Mauler did not take any severance package or separation benefits. On May 31, 2018, GameStop named Shane Kim as interim CEO. Kim was replaced by George Sherman in March 2019. On March 12, 2020, it was announced that a group of shareholders including Hestia Capital Partners LP and Permit Capital Enterprise Fund LP had sent a "threat" letter to the board of the Grapevine, Texas-based company, urging it to appoint a stockholder representative as a director. Turnaround efforts In July 2019, GameStop partnered with an outside design firm, R/GA, to put forth plans to revamp stores to focus on competitive gaming and retrogaming, and to introduce new ways for customers to try games before buying them. Each store concept was expected to be distinct from the others. A leaked email revealed on July 31, 2019, indicated that 50 employees, including district and regional managers, would be laid off as a result of reorganization efforts. In August 2019, GameStop laid off over 120 people, including about half of the staff of Game Informer, as part of its "GameStop Reboot initiative". In August 2019, Michael Burry's investment firm Scion Asset Management sent a letter to GameStop executives urging the company to engage in a $238 million stock buyback. The letter also revealed that Scion owned approximately 2,750,000 shares, or about 3.05% of GameStop. GameStop's stock price, which had been in steady decline since late January 2019, spiked roughly 20% after Burry revealed in an interview with Barron's that he was buying the stock. In the interview, Burry explained that both Sony and Microsoft would enter the next console generation with a physical disc drive, which would therefore likely extend the longevity of GameStop. He also noted that the company's balance sheet was in good condition. In December 2019, GameStop announced that it had spent $178.6 million to buy 34.6 million shares, or 34% of the shares outstanding, at an average price of $5.14 per share. In May 2020, Burry lowered his stake in GameStop. After reporting in September 2019 that it had missed analysts' expectations for the second quarter of fiscal year 2019, which ended in August 2019, GameStop announced that it was planning to close about 180–200 underperforming stores, out of the 5,700 it had worldwide, in the short term, along with developing metrics to evaluate other potential closures over the next two years. In March 2020, four members of GameStop's board of directors (Dan DeMatteo, Gerald Szczepanski, Larry Zilavy, and Steve Koonin) stepped down and were replaced by Reggie Fils-Aimé, Bill Simon and J.K. Symancyk as part of the company's effort to turn around the business. COVID-19 pandemic Government efforts to slow the spread of COVID-19 required GameStop to close the physical operation of all of its 3,500 stores from roughly March to May 2020, though not without some controversy in the early stages. Throughout this time, it continued with online and curbside sales. Sherman and the board of directors took a 50% pay cut while other executives took a 30% cut to offset losses. While digital sales grew by 519%, retail sales dropped by more than 30% from the same period in the prior year, and the chain reported a loss, in contrast to a profit for the same quarter in 2019. 
However, with the Xbox Series X and PlayStation 5 still planned for release in the latter part of 2020, Sherman expected to be able to recover from these losses. In mid-March 2020, GameStop faced criticism for its response to the COVID-19 pandemic in North America, with employees and social media users accusing the company of placing its business ahead of the safety of its staff and customers, in order to capitalize on an influx of video game purchases and related products for entertainment during the pandemic and related lockdowns. GameStop stated that it would suspend in-store events (including midnight launches) and the use of demo stations, perform additional cleaning, and structure lines and limit store capacity to enforce physical distancing. To prevent enlarged crowds for two high-profile video game releases on March 20 — Animal Crossing: New Horizons and Doom Eternal, GameStop announced that it would begin selling Doom Eternal in its stores a day ahead of its official release date. Polygon reported on March 17 that several stores in the San Francisco area had remained open, seemingly in violation of a stay-at-home order issued by Bay Area counties that restricts non-essential business. Several employees told Polygon and Vice that they did not receive additional cleaning supplies that were to be provided by corporate, requiring them to purchase them on their own and request reimbursement. A memo obtained by Kotaku on March 19 indicated that GameStop saw itself as an essential business because some of its technology products are relevant to enhancing remote work, required in many cases during the pandemic. GameStop reiterated the safety measures that it had put in place, and also announced that it would reduce store hours and suspend all trade-ins until at least March 29, 2020, and offer curbside pickup. An employee of a GameStop store in Athens, Georgia (which was shut down on March 20 by order of the police to comply with a similar order in Athens-Clark County) disputed the argument, saying that the high-end, gaming-oriented peripherals (such as keyboards and mice) sold at GameStop were not necessarily essential for remote work, and that cheaper alternatives were readily available at stores allowed to remain open, such as Walmart. California had announced a state-wide stay-at-home order on March 19; while GameStop had originally stated to its stores it was an essential retail business, by March 20 GameStop instead decided to close down its California branches, while keeping most other nationwide stores open. Following similar stay-at-home orders in New York and Illinois over the following days, GameStop announced that it would close all locations effective March 22, with selected locations continuing to offer contact-free curbside pickup (where an employee, wearing either gloves or a bag over their hands, would slip the customer's order through the front door, remaining behind the glass) and home delivery. In early April 2020, a location in Dorchester, Boston received a nuisance citation by local police, who deemed the curbside pickup a violation of the Massachusetts stay-at-home order. GameStop subsequently ceased offering curbside pickup in the state. GameStop's Canadian subsidiary EB Games faced similar criticism on March 20 as well, as morning lineups for the new Animal Crossing and Doom games at a Toronto location induced large public gatherings discouraged by officials. 
The city's public health chief Eileen de Villa stated that the gathering did not "line up with what we expect from those in our community who are interested in protecting and strengthening our community". Mayor John Tory accused the company of "plac[ing] commerce above the public interest", while Premier of Ontario Doug Ford stated that "everyone in this province has a responsibility to make sure we protect each other and I am very, very disappointed in the store owner that would do this". EB Games later announced that it would close all Canadian stores on March 21. On October 8, 2020, GameStop announced an agreement with Microsoft to migrate back-end systems to Microsoft 365 platforms, including Dynamics 365, with the agreement also covering in-store use of Microsoft Surface products by employees. It was later reported that this agreement would also include revenue sharing on all digital game purchases for the Xbox Series X and S for each product sold by the retailer, although the exact percentage of this share was not disclosed. January 2021 short squeeze In January 2021, a short squeeze resulted in a 1,500% increase in GameStop's share price over the course of two weeks, reaching an all-time intraday high of US$483.00 on the New York Stock Exchange. This effect was mainly attributed to a coordinated effort by the Reddit community r/wallstreetbets, a subreddit dedicated to stocks with high market risk. A surge in the stock price in extended-hours trading occurred after Elon Musk made a post on Twitter that included "Gamestonk!" (in reference to r/wallstreetbets) and a link to the community. Matt Levine has compared the situation to the 2012 "short squeeze" that the SEC charged Philip Falcone with. In February 2021, GameStop announced that its finance chief Jim Bell, appointed in June 2019, would leave the company on March 26, 2021. Though no official reason was given for Bell's departure, the company said that it was not due to a disagreement with the company or its operations. In April 2021, George Sherman announced that he would step down as CEO of GameStop by July 31, 2021. Also in April 2021, Ryan Cohen, founder of Chewy and a large GameStop shareholder, was named chairman, effective in June 2021. On June 9, 2021, GameStop appointed former Amazon executives Matt Furlong and Mike Recupero as CEO and CFO respectively. Furlong took over the position of CEO from Sherman on June 21, 2021. NFT platform On May 26, 2021, GameStop announced that it was working on a non-fungible token (NFT) platform, creating a token based on the Ethereum blockchain. Business Insider reported that "GameStop is building an NFT platform as part of an ambitious plan to transform itself into the Amazon of gaming." Operations As of January 30, 2021, the company operated 4,816 stores including 3,192 in the United States, 253 in Canada, 417 in Australia and 954 in Europe. Game Informer Game Informer is a magazine owned by GameStop, Inc. and primarily sold through subscriptions, which can be purchased at GameStop locations. A subscription to the magazine is included for members of GameStop's PowerUp Rewards Pro loyalty program. Trade-ins GameStop provides its customers with either cash or trade credit in exchange for their unwanted video games, accessories, and tech. Used video game trade-ins have twice the gross margin of new video game sales. Some video game developers and publishers have criticized GameStop for its practices, as they receive no share of the revenue from the sale of used games. 
GameStop responded to these criticisms in 2009 by stating that 70% of store credit generated by game trade ins was used to purchase new rather than used games, generating close to $2 billion in annual revenue. GameStop TV GameStop TV is the in-store television network run internally by GameStop, with non-endemic sales in partnership with Playwire Media. GameStop TV features programming targeted to consumers shopping in GameStop stores. Each month brings content segments about upcoming video game releases, exclusive developer interviews, and product demonstrations. Pre-order bonuses Game publishers obtain more pre-orders by including exclusive in-game or physical bonuses, available only if the player pre-ordered the game. Bonuses typically include extras such as exclusive characters, weapons, and maps. For example, GameStop included an additional avatar costume for Call of Duty: Black Ops when it was released in November 2010, and a pictorial Art-Folio for Metroid: Other M. Soundtracks, artbooks, plushies, figurines, posters, and T-shirts have also been special bonuses. GameTrust Games In January 2016, GameStop announced a partnership with Insomniac Games with its 2016 title Song of the Deep. GameStop executive Mark Stanley said the concept was to help the chain have more direct communication with players, and would expect to expand out to other similar distribution deals with other developers if this one succeeds. In April 2016, GameStop created the GameTrust Games publishing division to serve as a publisher for mid-sized developers. In April 2016, GameTrust Games announced it was working with Ready At Dawn, Tequila Works, and Frozenbyte to prepare more titles. See also Play N Trade GameCrazy References External links 1984 establishments in Texas 2002 initial public offerings American companies established in 1984 Companies based in Grapevine, Texas Companies listed on the New York Stock Exchange Companies that filed for Chapter 11 bankruptcy in 1996 Retail companies established in 1984 Video game retailers of the United States
2953900
https://en.wikipedia.org/wiki/Alternative%20terms%20for%20free%20software
Alternative terms for free software
Alternative terms for free software, such as open source, FOSS, and FLOSS, have been a controversial issue among free and open-source software users from the late 1990s onwards. The terms denote almost identical licence criteria and development practices. Terms Free software In the software culture of the 1950s to the 1990s, the "free software" concept lumped together the nowadays differentiated software classes of public-domain software, freeware, shareware and FOSS; such software was created in academia and by hobbyists and hackers. When the term "free software" was adopted by Richard Stallman in 1983, it was still used ambiguously to describe several kinds of software. In February 1986, Richard Stallman formally defined "free software" with the publication of The Free Software Definition in the FSF's now-discontinued GNU's Bulletin: software which can be used, modified, and redistributed with little or no restriction, in accordance with his four essential software freedoms. Richard Stallman's Free Software Definition, adopted by the Free Software Foundation (FSF), defines free software as a matter of liberty, not price, and is inspired by the earlier public-domain software ecosystem. The canonical source for the document is in the philosophy section of the GNU Project website, where it is published in many languages. Open-source software In 1998, the term "open-source software" (abbreviated "OSS") was coined as an alternative for "free software". There were several reasons for the proposal of a new term. On the one hand, a group within the free software ecosystem perceived the Free Software Foundation's manner of promoting the "free software" concept as "moralising and confrontational", an attitude that had also become associated with the term. In addition, the "available at no cost" ambiguity of the word "free" was seen as discouraging business adoption, as was the historically ambiguous usage of the term "free software". In a 1998 strategy session in California, "open-source software" was selected by Todd Anderson, Larry Augustin, Jon Hall, Sam Ockman, Christine Peterson, and Eric S. Raymond. Richard Stallman had not been invited. The session was arranged in reaction to Netscape's January 1998 announcement of a source code release for Navigator (as Mozilla). Those at the meeting described "open source" as a "replacement label" for free software, and the Open Source Initiative was founded soon after by Eric Raymond and Bruce Perens to promote the term as part of "a marketing program for free software". The Open Source Definition is used by the Open Source Initiative to determine whether a software license qualifies for the organization's insignia for open-source software. The definition was based on the Debian Free Software Guidelines, written and adapted primarily by Bruce Perens. Perens did not base his writing on the four freedoms of free software from the Free Software Foundation, which only became available on the web later. According to the OSI, Stallman initially flirted with the idea of adopting the open source term. At the end of the 1990s, the term "open source" gained much traction in public media and acceptance in the software industry in the context of the dotcom bubble and Web 2.0, which was driven by open-source software. For instance, Duke University scholar Christopher M. Kelty described the free software movement prior to 1998 as fragmented, writing that "the term Open Source, by contrast, sought to encompass them all in one movement". 
The term "open source" spread further as part of the open source movement, which inspired many successor movements, including the open content, open-source hardware, and open knowledge movements. Around 2000, the success of "open source" led several journalists to report that the earlier "free software" term, the movement behind it, and its leader Stallman were becoming "forgotten". In response, Stallman and his FSF objected to the term "open source software" and have since campaigned for the term "free software". Due to the rejection of the term "open source software" by Stallman and the FSF, the ecosystem is divided in its terminology. For example, a 2002 European Union survey revealed that 32.6% of FOSS developers associate themselves with OSS, 48% with free software, and only 19.4% are undecided or in between. As both terms "free software" and "open-source software" have their proponents and critics in the FOSS ecosystems, unifying terms have been proposed; these include "software libre" (or libre software), "FLOSS" (free/libre and open-source software), and "FOSS" (or F/OSS, free and open-source software). FOSS and F/OSS The first known use of the phrase free open-source software (abbreviated FOSS, or more rarely F/OSS) on Usenet was in a posting on March 18, 1998, just a month after the term open source itself was coined. In February 2002, F/OSS appeared on a Usenet newsgroup dedicated to Amiga computer games. In early 2002, MITRE used the term FOSS in what would later be their 2003 report Use of Free and Open Source Software (FOSS) in the U.S. Department of Defense. The European Union's institutions, which had previously used FLOSS, later also adopted the FOSS term, as have scholars in their publications. Software libre Although probably used earlier (as early as the 1990s), "software libre" gained broader public recognition when the European Commission adopted it in 2000. The word "libre", borrowed from the Spanish and French languages, means having liberty. This avoids the freedom-cost ambiguity of the English word "free". FLOSS FLOSS was used in 2001 as a project acronym by Rishab Aiyer Ghosh for free/libre and open-source software. Later that year, the European Commission (EC) used the phrase when it funded a study on the topic. Unlike "libre software", which aimed to solve the ambiguity problem, "FLOSS" aimed to avoid taking sides in the debate over whether it was better to say "free software" or to say "open-source software". Proponents of the term point out that parts of the FLOSS acronym can be translated into other languages, for example the "F" representing free (English) or frei (German), and the "L" representing libre (Spanish or French), livre (Portuguese), or libero (Italian), etc. However, this term is not often used in official non-English documents, since the words in these languages for "free as in freedom" do not have the ambiguity problem of English's "free". By the end of 2004, the FLOSS acronym had been used in official English documents issued by South Africa, Spain, and Brazil. Other scholars and institutions use it as well. Richard Stallman endorses the term FLOSS to refer to "open-source" and "free software" without necessarily choosing between the two camps; however, he asks people to consider supporting the "free/libre software" camp. Stallman has suggested that the term "unfettered software" would be an appropriate, non-ambiguous replacement, but that he would not push for it because there was too much momentum and too much effort behind the term "free software". 
The term "FLOSS" has come under some criticism for being counterproductive and sounding silly. For instance, Eric Raymond, co-founder of the Open Source Initiative, stated in 2009: "Near as I can figure ... people think they'd be making an ideological commitment ... if they pick 'open source' or 'free software'. Well, speaking as the guy who promulgated 'open source' to abolish the colossal marketing blunders that were associated with the term 'free software', I think 'free software' is less bad than 'FLOSS'. Somebody, please, shoot this pitiful acronym through the head and put it out of our misery." Raymond quotes programmer Rick Moen as stating: "I continue to find it difficult to take seriously anyone who adopts an excruciatingly bad, haplessly obscure acronym associated with dental hygiene aids" and "neither term can be understood without first understanding both free software and open source, as prerequisite study." Ownership and attachments None of these terms, or the term "free software" itself, has been trademarked. OSI attempted to register "open source" as a service mark in the United States of America, but that attempt failed to meet the relevant trademark standards of specificity. OSI claims a trademark on "OSI Certified", and applied for trademark registration, but did not complete the paperwork; the United States Patent and Trademark Office labels it as "abandoned". While the term "free software" is associated with the FSF's definition, and the term "open-source software" is associated with the OSI's definition, the other terms have not been claimed by any group in particular. While the FSF's and OSI's definitions are worded quite differently, the set of software that they cover is almost identical. All of the terms are used interchangeably; the choice of which to use is mostly political (wanting to support a certain group) or practical (thinking that one term is the clearest). The primary difference between free software and open source is one of philosophy. According to the Free Software Foundation, "Nearly all open source software is free software. The two terms describe almost the same category of software, but they stand for views based on fundamentally different values." Licences The choice of term has little or no impact on which licences are valid or used by the different camps, though recommendations might vary. At least until the release of the GPLv3, the usage of the GPLv2 united the open source and free software camps. The vast majority of software referred to by all these terms is distributed under a small set of licences, all of which are unambiguously accepted by the various de facto and de jure guardians of each of these terms. The majority of the software is distributed under one of a few permissive software licenses (the BSD licenses, the MIT License, and the Apache License) or one of a few copyleft licenses (the GNU General Public License v2, GPLv3, the GNU Lesser General Public License, or the Mozilla Public License). The Free Software Foundation (List of FSF approved software licences) and the Open Source Initiative (List of OSI approved software licences) each publish lists of licences that they accept as complying with their definitions of free software and open-source software respectively. The Open Source Initiative considers almost all free software licenses to also be open source, and vice versa. 
These include the latest versions of the FSF's three main licenses: the GPLv3, the Lesser General Public License (LGPL), and the GNU Affero General Public License (AGPL). Apart from these two organisations, many more FOSS organizations publish recommendations and comments on licenses and licensing matters. The Debian project is seen by some as providing useful advice on whether particular licences comply with its Debian Free Software Guidelines. Debian does not publish a list of "approved" licences, but its judgments can be tracked by checking which licences are used by software it has allowed into its distribution. In addition, the Fedora Project provides a list of approved licences (for Fedora) based on the approvals of the Free Software Foundation (FSF) and the Open Source Initiative (OSI), and on consultation with Red Hat Legal. Also, the copyfree movement, the various BSDs, the Apache Software Foundation, and the Mozilla Foundation all have their own points of view on licenses. Public-domain software There is also a class of software that is covered by the names discussed in this article but which doesn't have a licence: software for which the source code is in the public domain. The use of such source code, and therefore the executable version, is not restricted by copyright and thus does not need a free software licence to make it free software. However, not all countries have the same form of public-domain regime or the same possibilities for dedicating works, and the rights of authors in them, to the public domain. Further, for distributors to be sure that software is released into the public domain, they usually need to see something written to confirm this. Thus, even without a licence, a written note about the lack of copyright and other exclusive rights often still exists (a waiver or anti-copyright notice), which can be seen as a license substitute. There are also mixed forms between a waiver and a license, for instance the public-domain-like licenses CC0 and the Unlicense, which include an all-permissive license as a fallback in case the waiver is ineffective. Non-English terms in anglophone regions The free software community in some parts of India sometimes uses the term "Swatantra software", since the term "Swatantra" means free in Sanskrit, which is the ancestor of all Indo-European languages of India, including Hindi, despite English being the lingua franca. Other terms, such as "kattatra menporul" (கட்டற்ற மென்பொருள்) for free software, where kattatra means free and menporul means software, are also used in Tamil Nadu and by Tamils in other parts of the world. In the Philippines, "malayang software" is sometimes used. The word "libre" exists in the Filipino language; it came from the Spanish language but has acquired the same cost/freedom ambiguity as the English word "free". In Meranau, "free" is kanduri, diccubayadan, or libre. See also Free software community Free software movement GNU/Linux naming controversy History of free software Open source vs. closed source Permissive free software licences References External links Hancock, Terry. "The Jargon of Freedom: 60 Words and Phrases with Context" Free Software Magazine. 2010-20-24 Berry, D M (2004). The Contestation of Code: A Preliminary Investigation into the Discourse of the Free Software and Open Software Movement, Critical Discourse Studies, Volume 1(1). 
Differences between open-source and free software as interpreted by Slackware FreeOpenSourceSoftware.org Wiki (same as FreeLibreOpenSourceSoftware.org) FSF's suggested translations of free software to languages other than English John Stanforth, an Open Source proponent, on the differences between the Open Source Initiative and the Free Software Foundation. Free software culture and documents Naming controversies
408031
https://en.wikipedia.org/wiki/Strayer%20University
Strayer University
Strayer University is a private, for-profit university with its headquarters in Washington, D.C. It was founded in 1892 as Strayer's Business College and later became Strayer College, before being granted university status in 1998. Strayer University operates under the holding company Strategic Education, Inc., which was established in 1996 and rebranded after the merger with Capella University. The university enrolls more than 50,000 students through both its online learning programs and 64 campuses located throughout 15 U.S. states and Washington, D.C. The university specializes in degree programs for working adults and offers undergraduate and graduate degrees in accounting, business administration, criminal justice, education, health services administration, information technology and public administration. History Early history Siebert Irving Strayer founded Strayer's Business College in Baltimore, Maryland in 1892. Strayer established the college to teach business skills, including shorthand, typing and accounting, to former farm workers. Thomas W. Donoho joined the school in 1902. In its first decade of operations, enrollment at the school gradually increased, attracting students from other states, and in 1904 Strayer opened a branch of the school in Washington, D.C. Enrollment further expanded as demand for trained accountants grew after the passage of the Revenue Act of 1913 and as World War I increased the need for government clerks with office skills. During the 1930s, the college was authorized to grant collegiate degrees in accountancy by the Washington, D.C., board of education. The school founded Strayer Junior College in 1959, when it was given the right to confer two-year degrees. In 1969, the college received the accreditation needed to grant four-year Bachelor of Arts degrees and was renamed Strayer College. 1980s and 1990s From the 1980s to the late 1990s, Strayer College grew rapidly; enrollment increased from approximately 1,800 in 1981 and 2,000 in 1983 to around 9,000 by 1997. The college expanded the range of degree programs and courses it offered to include subjects such as data processing management and health care management. In 1987, the college was given authorization to grant Master of Science degrees. During the 1990s, the college began to focus on offering information technology courses. According to The Washington Times, high demand for computer training, driven by the increased use of computers in offices and the movement toward "knowledge-based" employment, led to higher enrollment at Strayer. In addition, Strayer began providing training programs in computer information systems for companies including AT&T Corporation and for government agencies such as the Internal Revenue Service. In 1996, the college launched Strayer Online to offer classes via the Internet. 2000s to present In 1998, Strayer College was granted university status by the District of Columbia Education Licensure Commission and became Strayer University. During the early and mid-2000s, Strayer established its first campus locations outside of Maryland, Virginia and the District of Columbia, in North Carolina, South Carolina, Georgia, Tennessee, Pennsylvania, and Florida. According to the university's website, Strayer University now operates additional campuses in Delaware, New Jersey, West Virginia, Alabama, Mississippi, Arkansas, and Texas. Sondra Stallard was named the thirteenth president of Strayer University in May 2007. Stallard had been a dean since 1996. 
Stallard previously served as dean of the school of continuing and professional studies at the University of Virginia. Strayer enrollment grew in the decade 2001–2010, from 14,009 in the fall of 2001 to 60,711 in the fall of 2010. Enrollment dropped to 42,975 by 2015. In 2010, the U.S. Department of Education reported that the repayment rate of federal student loans at Strayer University was 25 percent; Strayer claimed its loan repayment rate to be 55 percent. In 2011, the Washington Post claimed that Strayer had a 15 percent graduation rate, listing it among the lowest college graduation rates in the Washington, D.C., area. Strayer claimed the graduation rate for its full cohort of bachelor's students was 33 percent. In December 2011, the university acquired the Jack Welch Management Institute from Chancellor University for about $7 million. The institute offers a fully online Executive MBA program, as well as certificate programs. In 2012, Michael Plater was named the fourteenth president of Strayer University. Previously, he served as provost and chief academic officer. On August 9, 2012, the syndicated comic strip Doonesbury described Strayer's unusually high executive compensation as part of a series of satirical strips on for-profit education. In addition to reporting Silberman's 2009 compensation (which it described as fifty times more than that of Harvard's president), the strip said that in the same year that Strayer spent $1,300 per student on instruction, it spent $2,500 per student on marketing and returned $4,500 per student in profit. In 2013, USA Today listed Strayer University of Washington, D.C. as a "red flag" institution for posting a student loan default rate that surpassed its graduation rate. In October 2013, the university initiated a major change in its physical operations by announcing the closure of its 20 Midwest campus locations. Strayer reported total enrollments dropped 17 percent, while new enrollments dropped 23 percent. It was announced that all students enrolled in programs in the Midwest at the time would be able to continue their education through Strayer's online-only program offerings. In 2015, Brian Jones, who had previously been Strayer University's general counsel, was named the university's fifteenth president. Prior to joining Strayer University, Jones was a lawyer and higher education entrepreneur. He served as General Counsel of the U.S. Department of Education from 2001 until 2005. In January 2016, Strayer Education announced that it had acquired the New York Code + Design Academy (NYCDA), making it a wholly owned subsidiary of Strayer Education offering web and mobile development courses. Strayer resumed expansion in 2018, opening a campus in Montgomery, Alabama. In response to the COVID-19 pandemic, Strayer temporarily closed all its campuses. At least 18 Strayer campuses closed permanently in 2020. Technology Strayer has a chatbot, "Irving," that offers administrative support to students, ranging "from recommending courses to making personalized graduation projections." Partnerships Comedian and game show host Steve Harvey was a spokesperson for Strayer; he appeared in several advertisements and spoke at Strayer's commencement ceremony in May 2015. Strayer partnered with Daily Mail in February 2015 to produce a new section of the Daily Mail site named Strayer Business News. As part of the deal, Daily Mail would co-produce education and business content for its new business section. 
In May 2015, Strayer announced the launch of Strayer@Work, a new performance improvement solution for businesses. As part of the launch, Strayer also announced a partnership with Fiat Chrysler Automobiles (FCA) to offer free college education to all participating FCA dealership employees; FCA dealers pay a monthly fee to send employees to Strayer. Strayer has educational partnerships with approximately 300 Fortune 1000 companies. In March 2017, Strayer announced a collaboration with the financial news network Cheddar to produce a digital entrepreneurship specialization as a part of Strayer's MBA program. In 2018, Queen Latifah became a spokesperson for Strayer. Locations More than half of the students enrolled at Strayer University take all of their courses online, and entire bachelor's and master's degree programs can be completed via the Internet. Strayer had a total enrollment of 52,253 students. Strayer University is headquartered in Washington, D.C., with campus locations mainly in the eastern and southern U.S. The university has 64 campuses located in 15 U.S. states and Washington, D.C. Academics Admissions The admissions requirement for undergraduate degree programs at Strayer University is a high school diploma or its equivalent. For graduate degrees (not including the Executive MBA), students must have proof of completion of a baccalaureate degree from an accredited college or university, a cumulative GPA of at least 2.50, and official transcripts from all other colleges or universities attended. Admissions requirements for the Jack Welch Executive MBA program include a minimum 3.0 undergraduate GPA, a baccalaureate degree from an accredited institution in the United States, and five years of professional experience. An associate degree earned from a partner school can be transferred in its entirety toward a bachelor's degree. Academic programs and accreditation The university's principal aim is to provide higher education to working adult students. Strayer University's academic programs include undergraduate and graduate degree programs. The courses offered by the university are business-focused, including courses in business administration and information technology. Degrees can be earned in subjects such as accounting, business administration, criminal justice, education, health services administration, human resource management, information technology and public administration. Strayer University is accredited by the Middle States Commission on Higher Education, one of the six regional accrediting bodies recognized by the Department of Education. The Jack Welch Management Institute, acquired by Strayer University in 2011, was established by Jack Welch after his retirement from General Electric. The institute offers executive MBA degrees and executive certificates covering business-related topics. In September 2016, it was announced that the Jack Welch Management Institute was ranked on the Princeton Review's list of Top 25 Online MBA Programs of 2017. In May 2017, Strayer announced that its Registered Nurse (RN) to Bachelor of Science in Nursing (BSN) program had earned accreditation from the Commission on Collegiate Nursing Education (CCNE). Faculty and students Strayer University's total enrollment is greater than 52,000 students. The student body is predominantly women of color. Seventy-four percent of the student body is female and 76 percent are people of color. The average age is 34. 
Since the early 2000s, Strayer University has had a high proportion of minority students or people of color. The college has had more women students than men since the late 1990s. According to the university, two thirds of Strayer's students are women and over half are African American or Hispanic. The National Center for Education Statistics reports that Strayer's student body is 56 percent black, 21 percent white, and 13 percent Hispanic. The majority work full-time. Many students receive financial assistance from federal government financial aid programs or education assistance programs operated by the U.S. Department of Defense and U.S. Department of Veterans Affairs; U.S. federal government sources accounted for 84.9 percent of Strayer's 2010 revenue. In addition, about one-quarter of students have tuition assistance from their employers. Faculty In 2012, a United States Senate committee reported that, as of 2010, 83 percent of Strayer's 2,471 faculty members were employed part-time, and not required to do research. Strayer's online segment consists of 90 full-time instructors and 847 part-time instructors. Student outcomes According to research from the Brookings Institution, Strayer University students, as a whole, hold the fifth largest amount of US student loan debt, approximately $8 billion. The 5-year default rate of Strayer students is 31 percent, and the average repayment of debt after five years is -7 percent. According to the College Scorecard, Strayer University's 8-year graduation rate varies from 3 percent (Arkansas) to 26 percent (Virginia), depending on the campus. Alumni Noteworthy alumni of Strayer University include the following: Gen. Robert Magnus, retired assistant commandant of the Marine Corps Charles Mann, businessman and former NFL football player M. Virginia Rosenbaum, American surveyor and newspaper editor Carolyn Wright, American lawyer, jurist and the Chief justice of the Fifth Court of Appeals of Texas Don Watkins, author, columnist, and fellow at the Ayn Rand Institute Strategic Education Inc. Strategic Education Inc. is a publicly traded corporation, established as a holding company for the college and other assets in 1996. The company was created to take what was then Strayer College public and raise capital for expansion. In August 2018, Strayer Education Inc. merged with Capella Education Company to form Strategic Education, Inc. Lawsuits and investigations In July 2013, Strayer University contacted HSI Sterling to report suspicious activity surrounding academic transcripts and coursework. In 2014, a former Strayer University admissions official was convicted of large-scale immigration fraud. From about November 2012 to October 2013, a Strayer University admissions official with two co-conspirators who worked for private company Integrated Academics were involved in a conspiracy to fraudulently create at least 58 official Strayer University transcripts so that foreign students would appear eligible to retain their student visas in the United States. The conspirators were ordered to forfeit nearly $300,000 of proceeds from the fraud to the United States government. References External links Companies listed on the Nasdaq Distance education institutions based in the United States Educational institutions established in 1892 Strayer University 1892 establishments in Maryland
23504223
https://en.wikipedia.org/wiki/Michael%20J.%20Fischer
Michael J. Fischer
Michael John Fischer (born 1942) is a computer scientist who works in the fields of distributed computing, parallel computing, cryptography, algorithms and data structures, and computational complexity.

Career

Fischer was born in 1942 in Ann Arbor, Michigan, USA. He received his BSc degree in mathematics from the University of Michigan in 1963. Fischer did his MA and PhD studies in applied mathematics at Harvard University; he received his MA degree in 1965 and his PhD in 1968. Fischer's PhD supervisor at Harvard was Sheila Greibach.

After receiving his PhD, Fischer was an assistant professor of computer science at Carnegie-Mellon University in 1968–1969, an assistant professor of mathematics at the Massachusetts Institute of Technology (MIT) in 1969–1973, and an associate professor of electrical engineering at MIT in 1973–1975. At MIT he supervised doctoral students who became prominent computer scientists, including David S. Johnson, Frances Yao, and Michael Hammer. In 1975, Fischer was appointed professor of computer science at the University of Washington. Since 1981, he has been a professor of computer science at Yale University, where his students included Rebecca N. Wright. Fischer served as the editor-in-chief of the Journal of the ACM in 1982–1986. He was inducted as a Fellow of the Association for Computing Machinery (ACM) in 1996.

Work

Distributed computing

Fischer is best known for his contributions to the field of distributed computing. His 1985 work with Nancy A. Lynch and Michael S. Paterson on consensus problems received the PODC Influential-Paper Award in 2001. Their work showed that in an asynchronous distributed system, deterministic consensus cannot be achieved if even a single processor may fail by crashing. Jennifer Welch writes that “This result has had a monumental impact in distributed computing, both theory and practice. Systems designers were motivated to clarify their claims concerning under what circumstances the systems work.” Fischer was the program chairman of the first Symposium on Principles of Distributed Computing (PODC) in 1982; nowadays, PODC is the leading conference in the field. In 2003, the distributed computing community honoured Fischer's 60th birthday by organising a lecture series during the 22nd PODC, with Leslie Lamport, Nancy Lynch, Albert R. Meyer, and Rebecca Wright as speakers.

Parallel computing

In 1980, Fischer and Richard E. Ladner presented a parallel algorithm for computing prefix sums efficiently. They show how to construct a circuit that computes the prefix sums; in the circuit, each node performs an addition of two numbers. With their construction, one can choose a trade-off between the circuit depth and the number of nodes. However, the same circuit designs had already been studied much earlier in Soviet mathematics. (A short illustrative sketch of the parallel-prefix idea appears below, after the Cryptography subsection.)

Algorithms and computational complexity

Fischer has done multifaceted work in theoretical computer science in general. Fischer's early work, including his PhD thesis, focused on parsing and formal grammars. One of Fischer's most-cited works deals with string matching. Already during his years at Michigan, Fischer studied disjoint-set data structures together with Bernard Galler.

Cryptography

Fischer is one of the pioneers of electronic voting. In 1985, Fischer and his student Josh Cohen Benaloh presented one of the first electronic voting schemes. Other contributions related to cryptography include the study of key exchange problems and a protocol for oblivious transfer. In 1984, Fischer, Silvio Micali, and Charles Rackoff presented an improved version of Michael O. Rabin's protocol for oblivious transfer.
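The following minimal Python sketch illustrates the parallel-prefix idea discussed in the Parallel computing subsection above. It is a simple recursive scheme evaluated sequentially here, not Fischer and Ladner's circuit constructions themselves; the point is that every level of the recursion could be evaluated in parallel, giving logarithmic depth. The function name and the sample data are illustrative only.

def prefix_sums(values, op=lambda a, b: a + b):
    # Inclusive prefix scan by recursive halving.  Evaluated sequentially
    # this does O(n) extra work, but each level could run in parallel,
    # so the "depth" of the computation is O(log n).
    n = len(values)
    if n <= 1:
        return list(values)

    # Combine adjacent pairs: one parallel step.
    pairs = [op(values[i], values[i + 1]) for i in range(0, n - 1, 2)]

    # Recursively scan the half-length sequence of pair sums.
    pair_prefix = prefix_sums(pairs, op)

    # Expand back: odd positions are exactly the recursive results,
    # even positions need one extra combine.  Again one parallel step.
    result = [values[0]]
    for i in range(1, n):
        if i % 2 == 1:
            result.append(pair_prefix[i // 2])
        else:
            result.append(op(pair_prefix[i // 2 - 1], values[i]))
    return result

data = [3, 1, 4, 1, 5, 9, 2, 6]
print(prefix_sums(data))   # [3, 4, 8, 9, 14, 23, 25, 31]
assert prefix_sums(data) == [sum(data[:k + 1]) for k in range(len(data))]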
Publications

Notes

External links 
Fischer, Michael J. at MathSciNet

1942 births
Living people
Researchers in distributed computing
Theoretical computer scientists
University of Michigan College of Literature, Science, and the Arts alumni
Harvard University alumni
Yale University faculty
Fellows of the Association for Computing Machinery
People from Ann Arbor, Michigan
Dijkstra Prize laureates
580170
https://en.wikipedia.org/wiki/Inode
Inode
The inode (index node) is a data structure in a Unix-style file system that describes a file-system object such as a file or a directory. Each inode stores the attributes and disk block locations of the object's data. File-system object attributes may include metadata (times of last change, access, and modification), as well as owner and permission data. A directory is a list of inodes with their assigned names. The list includes an entry for itself, its parent, and each of its children.

Etymology

There has been uncertainty on the Linux kernel mailing list about the reason for the "i" in "inode". In 2002, the question was brought to Unix pioneer Dennis Ritchie, who replied that he did not know for certain either, but suggested that "index" was the most likely origin. A 1978 paper by Ritchie and Ken Thompson bolsters the notion of "index" being the etymological origin of inodes: it describes a file's i-number as an index into a system table. Additionally, Maurice J. Bach wrote that an inode "is a contraction of the term index node and is commonly used in literature on the UNIX system".

Details

A file system relies on data structures about the files, as opposed to the contents of those files. The former are called metadata—data that describes data. Each file is associated with an inode, which is identified by an integer, often referred to as an i-number or inode number.

Inodes store information about files and directories (folders), such as file ownership, access mode (read, write, execute permissions), and file type. On many older file system implementations, the maximum number of inodes is fixed at file system creation, limiting the maximum number of files the file system can hold. A typical allocation heuristic for inodes in a file system is one inode for every 2K bytes contained in the file system.

The inode number indexes a table of inodes in a known location on the device. From the inode number, the kernel's file system driver can access the inode contents, including the location of the file, thereby allowing access to the file. A file's inode number can be found using the ls -i command, which prints the i-node number in the first column of its report.

Some Unix-style file systems such as ZFS, OpenZFS, ReiserFS, btrfs, and APFS omit a fixed-size inode table, but must store equivalent data in order to provide equivalent capabilities. The data may be called stat data, in reference to the stat system call that provides the data to programs. Common alternatives to the fixed-size table include B-trees and the derived B+ trees.

File names and directory implications

Inodes do not contain the names of their hard links, only other file metadata. Unix directories are lists of association structures, each of which contains one filename and one inode number. The file system driver must search a directory looking for a particular filename and then convert the filename to the corresponding inode number.

The operating system kernel's in-memory representation of this data is called struct inode in Linux. Systems derived from BSD use the term vnode (the "v" refers to the kernel's virtual file system layer).

POSIX inode description

The POSIX standard mandates file-system behavior that is strongly influenced by traditional UNIX file systems. An inode is denoted by the phrase "file serial number", defined as a per-file-system unique identifier for a file. That file serial number, together with the device ID of the device containing the file, uniquely identifies the file within the whole system.
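The following minimal Python sketch (an illustration only, using arbitrary example file names) shows how these identifiers can be read through the standard stat interface on a Unix-style file system, and how two hard-linked names report the same inode:

import os

# Create a small file and a second hard link to it (example names only).
with open("example.txt", "w") as f:
    f.write("hello\n")
os.link("example.txt", "alias.txt")

info = os.stat("example.txt")
alias = os.stat("alias.txt")

# st_ino is the file serial number (inode number) and st_dev identifies
# the containing device; together they uniquely identify the file.
print("inode number:", info.st_ino)
print("device ID:   ", info.st_dev)
print("link count:  ", info.st_nlink)   # 2, because two names refer to this inode

# Both directory entries resolve to the same inode on the same device.
assert (info.st_ino, info.st_dev) == (alias.st_ino, alias.st_dev)

# Remove the demonstration files.
os.remove("alias.txt")
os.remove("example.txt")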
Within a POSIX system, a file has the following attributes, which may be retrieved by the stat system call:

The device ID (this identifies the device containing the file; that is, the scope of uniqueness of the serial number).
The file serial number.
The file mode, which determines the file type and how the file's owner, its group, and others can access the file.
A link count telling how many hard links point to the inode.
The user ID of the file's owner.
The group ID of the file.
The device ID of the file, if it is a device file.
The size of the file in bytes.
Timestamps telling when the inode itself was last changed (ctime, the inode change time), when the file content was last modified (mtime, the modification time), and when the file was last accessed (atime, the access time).
The preferred I/O block size.
The number of blocks allocated to this file.

Implications

File systems designed with inodes have the following administrative characteristics:

Files can have multiple names. If multiple names hard link to the same inode then the names are equivalent; i.e., the first to be created has no special status. This is unlike symbolic links, which depend on the original name, not the inode (number).

An inode may have no links. An unlinked file is removed from disk, and its resources are freed for reallocation, but deletion must wait until all processes that have opened it finish accessing it. This includes executable files, which are implicitly held open by the processes executing them.

It is typically not possible to map from an open file to the filename that was used to open it. The operating system immediately converts the filename to an inode number and then discards the filename. This means that the getcwd() and getwd() library functions search the parent directory to find a file with an inode matching the working directory, then search that directory's parent, and so on until reaching the root directory. SVR4 and Linux systems maintain extra information to make this possible.

Historically, it was possible to hard link directories. This made the directory structure an arbitrary directed graph, rather than a directed acyclic graph. It was even possible for a directory to be its own parent. Modern systems generally prohibit this confusing state, except that the parent of root is still defined as root. The most notable exception to this prohibition is found in Mac OS X (versions 10.5 and higher), which allows hard links of directories to be created by the superuser.

A file's inode number stays the same when the file is moved to another directory on the same device, or when the disk is defragmented (which may change its physical location). This allows the file to be moved and renamed even while it is being read from and written to, without causing interruption. It also implies that completely conforming inode behavior is impossible to implement with many non-Unix file systems, such as FAT and its descendants, which don't have a way of storing this invariance when both a file's directory entry and its data are moved around.

Installation of new libraries is simple with inode file systems. A running process can access a library file while another process replaces that file; the replacement creates a new inode, and an all-new mapping will exist for the new file, so that subsequent attempts to access the library get the new version. This facility eliminates the need to reboot to replace currently mapped libraries.

It is possible for a device to run out of inodes. When this happens, new files cannot be created on the device, even though there may be free space available.
This is most common for use cases like mail servers, which contain many small files. File systems (such as JFS or XFS) escape this limitation with extents or dynamic inode allocation, which can "grow" the file system or increase the number of inodes.

Inlining

It can make sense to store very small files in the inode itself to save both space (no data block needed) and lookup time (no further disk access needed). This file system feature is called inlining. The strict separation of inode and file data thus can no longer be assumed when using modern file systems.

If the data of a file fits in the space allocated for pointers to the data, this space can conveniently be used. For example, ext2 and its successors store the data of symlinks (typically file names) in this way if the data is no more than 60 bytes ("fast symbolic links").

Ext4 has a file system option called inline_data that allows ext4 to perform inlining if enabled during file system creation. Because an inode's size is limited, this only works for very small files.

In non-Unix systems

NTFS has a master file table (MFT) storing files in a B-tree. Each entry has a "fileID", analogous to the inode number, that uniquely refers to this entry. The three timestamps, a device ID, attributes, reference count, and file sizes are found in the entry, but unlike in POSIX, the permissions are expressed through a different API. The on-disk layout is more complex. The earlier FAT file systems did not have such a table and were incapable of making hard links.

NTFS also has a concept of inlining small files into the MFT entry.

The derived ReFS has a homologous MFT. ReFS has a 128-bit file ID; this extension was also backported to NTFS, which originally had a 64-bit file ID.

The same stat-like API can be used on Cluster Shared Volumes and SMB 3.0, so these systems presumably have a similar concept of a file ID.

See also 
inode pointer structure
inotify

References

External links 
Anatomy of the Linux File System
Inode definition
Explanation of Inodes, Symlinks, and Hardlinks

Unix file system technology
21772272
https://en.wikipedia.org/wiki/RPM%20Package%20Manager
RPM Package Manager
RPM Package Manager (RPM) (originally Red Hat Package Manager, now a recursive acronym) is a free and open-source package management system. The name RPM refers to the file format and to the package manager program itself. RPM was intended primarily for Linux distributions; the file format is the baseline package format of the Linux Standard Base.

Although it was created for use in Red Hat Linux, RPM is now used in many Linux distributions such as Fedora Linux, AlmaLinux, CentOS, openSUSE, OpenMandriva and Oracle Linux. It has also been ported to some other operating systems, such as Novell NetWare (as of version 6.5 SP3), IBM's AIX (as of version 4), IBM i, and ArcaOS.

An RPM package can contain an arbitrary set of files. Most RPM files are "binary RPMs" (or BRPMs) containing the compiled version of some software. There are also "source RPMs" (or SRPMs) containing the source code used to build a binary package. These have an appropriate tag in the file header that distinguishes them from normal (B)RPMs, causing them to be extracted to /usr/src on installation. SRPMs customarily carry the file extension ".src.rpm" (.spm on file systems limited to 3 extension characters, e.g. old DOS FAT).

History

RPM was originally written in 1997 by Erik Troan and Marc Ewing, drawing on experience with several earlier packaging tools. The immediate predecessor was written by Rik Faith and Doug Hoffman in May 1995 for Red Hat Software; its design and implementation were influenced greatly by an earlier package management system written by Faith and Kevin Martin in the fall of 1993 for the Bogus Linux Distribution. The predecessor preserved the "pristine sources plus patches" paradigm of that earlier system, while adding features and eliminating arbitrary limitations present in the earlier implementation, and it provided greatly enhanced database support for tracking and verifying installed packages.

Features

For a system administrator performing software installation and maintenance, the use of package management rather than manual building has advantages such as simplicity, consistency, and the ability for these processes to be automated and non-interactive.

rpm uses Berkeley DB as its backend database, although since version 4.15 in 2019, it supports building rpm packages without Berkeley DB (--disable-bdb).

Features of RPM include:

RPM packages can be cryptographically verified with GPG and MD5
Original source archives (e.g. .tar.gz, .tar.bz2) are included in SRPMs, making verification easier
Delta update: PatchRPMs and DeltaRPMs, the RPM equivalent of a patch file, can incrementally update RPM-installed software
Automatic build-time dependency evaluation

Local operations

Packages may come from within a particular distribution (for example Red Hat Enterprise Linux) or be built for it by other parties (for example RPM Fusion for Fedora Linux). Circular dependencies among mutually dependent RPMs (so-called "dependency hell") can be problematic; in such cases a single installation command needs to specify all the relevant packages.

Repositories

RPMs are often collected centrally in one or more repositories on the internet. A site often has its own RPM repositories which may either act as local mirrors of such internet repositories or be locally maintained collections of useful RPMs.

Front ends

Several front-ends to RPM ease the process of obtaining and installing RPMs from repositories and help in resolving their dependencies.
These include:

yum, used in Fedora Linux, CentOS 5 and above, Red Hat Enterprise Linux 5 and above, Scientific Linux, Yellow Dog Linux and Oracle Linux
DNF, introduced in Fedora Linux 18 (default since 22), Red Hat Enterprise Linux 8, AlmaLinux 8, and CentOS Linux 8
up2date, used in Red Hat Enterprise Linux, CentOS 3 and 4, and Oracle Linux
Zypper, used in Mer (and thus Sailfish OS), MeeGo, openSUSE and SUSE Linux Enterprise
urpmi, used in Mandriva Linux, ROSA Linux and Mageia
apt-rpm, a port of Debian's Advanced Packaging Tool (APT), used in Ark Linux, PCLinuxOS and ALT Linux
Smart Package Manager, used in Unity Linux and available for many distributions including Fedora Linux
A command-line utility available in (for example) Red Hat Enterprise Linux

Local RPM installation database

Working behind the scenes of the package manager is the RPM database, stored in /var/lib/rpm. It uses Berkeley DB as its back-end. It consists of a single database (Packages) containing all of the meta information of the installed RPMs. Multiple databases are created for indexing purposes, replicating data to speed up queries. The database is used to keep track of all files that are changed and created when a user (using RPM) installs a package, thus enabling the user (via RPM) to reverse the changes and remove the package later. If the database gets corrupted (which is possible if the RPM client is killed), the index databases can be recreated with the rpm --rebuilddb command.

Description

Whilst the RPM format is the same across different Linux distributions, the detailed conventions and guidelines may vary across them.

Package filename and label

An RPM is delivered in a single file, normally with a filename in the format <name>-<version>-<release>.src.rpm for source packages, or <name>-<version>-<release>.<architecture>.rpm for binaries. For example, in the package filename libgnomeuimm-2.0-2.0.0-3.i386.rpm, the name is libgnomeuimm-2.0, the version is 2.0.0, the release is 3, and the architecture is i386. The associated source package would be named libgnomeuimm-2.0-2.0.0-3.src.rpm. RPMs with the noarch.rpm extension do not depend on a particular CPU architecture. For example, these RPMs may contain graphics and text for other programs to use. They may also contain shell scripts or programs written in other interpreted programming languages such as Python. (A short illustrative parsing example appears near the end of this article.)

The RPM contents also include a package label, which contains the following pieces of information:

software name
software version (the version taken from the original upstream source of the software)
package release (the number of times the package has been rebuilt using the same version of the software). This field is also often used for indicating the specific distribution the package is intended for by appending strings like "mdv" (formerly "mdk") (Mandriva Linux), "mga" (Mageia), "fc4" (Fedora Core 4), "rhl9" (Red Hat Linux 9), "suse100" (SUSE Linux 10.0), etc.
architecture for which the package was built (i386, i686, x86_64, ppc, etc.)

The package label fields do not need to match the filename.

Library packaging

Libraries are distributed in two separate packages for each version. One contains the precompiled code for use at run-time, while the second one contains the related development files such as headers, etc. Those packages have "-devel" appended to their name field. The system administrator should ensure that the versions of the binary and development packages match.

Binary format

The format is binary and consists of four sections:

The lead, which identifies the file as an RPM file and contains some obsolete headers.
The signature, which can be used to ensure integrity and/or authenticity.
The header, which contains metadata including package name, version, architecture, file list, etc.
A file archive (the payload), which usually is in cpio format, compressed with gzip. The rpm2cpio tool enables retrieval of the cpio file without needing to install the RPM package.

The Linux Standard Base requires the use of gzip, but Fedora 30 packages are xz-compressed and Fedora 31 packages might be zstd-compressed. Recent versions of RPM can also use bzip2, lzip, or lzma compression. The RPM 5.0 format supports using xar for archiving.

SPEC file

The "Recipe" for creating an RPM package is a spec file. Spec files end in the ".spec" suffix and contain the package name, version, RPM revision number, steps to build, install, and clean a package, and a changelog. Multiple packages can be built from a single RPM spec file, if desired. RPM packages are created from RPM spec files using the rpmbuild tool.

Spec files are usually distributed within SRPM files, which contain the spec file packaged along with the source code.

SRPM

A typical RPM is pre-compiled software ready for direct installation. The corresponding source code can also be distributed. This is done in an SRPM, which also includes the "SPEC" file describing the software and how it is built. The SRPM also allows the user to compile, and perhaps modify, the code itself.

A software package could contain only platform-independent scripts. In such a case, the developer could provide only an SRPM, which is still an installable RPM.

NOSRC

This is a special version of an SRPM. It contains the "SPEC" file and, optionally, patches, but does not include the sources (usually because of licensing).

Forks

There are two versions of RPM in development: one led by the Fedora Project and Red Hat, and the other by a separate group led by a previous maintainer of RPM, a former employee of Red Hat.

RPM.org

The rpm.org community's first major code revision was in July 2007; version 4.8 was released in January 2010, version 4.9 in March 2011, 4.10 in May 2012, 4.11 in January 2013, 4.12 in September 2014 and 4.13 in July 2015. This version is used by distributions such as Fedora Linux, Red Hat Enterprise Linux and derivatives, openSUSE, SUSE Linux Enterprise, Unity Linux, Mageia, OpenEmbedded, Tizen and OpenMandriva Lx (formerly Mandriva).

RPM v5

Jeff Johnson, the RPM maintainer since 1999, continued development efforts together with participants from several other distributions. RPM version 5 was released in May 2007. This version is used by distributions such as Wind River Linux (until Wind River Linux 10), Rosa Linux, and OpenMandriva Lx (the former Mandriva Linux, which switched to rpm5 in 2011), and also by the OpenPKG project, which provides packages for other common UNIX platforms. OpenMandriva Lx is going to switch back to rpm.org for its 4.0 release. OpenEmbedded, the last major user of RPM5, switched back to rpm.org due to issues in RPM5.
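As described in the Package filename and label section above, a binary package file name encodes the package name, version, release and architecture. The following short Python sketch (an illustration for this article, not part of the RPM tool set) splits such a file name into those label fields; because the name itself may contain hyphens, the version and release are taken from the right-hand end of the string:

def parse_rpm_filename(filename):
    # Split an RPM file name of the form <name>-<version>-<release>.<arch>.rpm
    # into its label fields.  For .src.rpm names the "arch" field comes back
    # as "src".
    if not filename.endswith(".rpm"):
        raise ValueError("not an RPM file name: %r" % filename)
    base = filename[:-len(".rpm")]         # strip the ".rpm" suffix
    base, _, arch = base.rpartition(".")   # architecture (or "src")
    name_version, _, release = base.rpartition("-")
    name, _, version = name_version.rpartition("-")
    return {"name": name, "version": version, "release": release, "arch": arch}

print(parse_rpm_filename("libgnomeuimm-2.0-2.0.0-3.i386.rpm"))
# {'name': 'libgnomeuimm-2.0', 'version': '2.0.0', 'release': '3', 'arch': 'i386'}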
See also 
Autopackage — a "complementary" package management system
Delta ISO — an ISO image which contains RPM Package Manager files
dpkg — package management system used by Debian and its derivatives
List of RPM-based Linux distributions
pkg-config — queries libraries to compile software from its source code

References

External links 
RPM.org project home page
RPM and DPKG command reference
The story of RPM by Matt Frye in Red Hat Magazine
How to create an RPM package
Video tutorials for Building and Patching the RPMs
RPM Notes - Building RPMs the easy way
Packaging software with RPM, Part 1: Building and distributing packages
Learn Linux, 101: RPM and YUM package management

Archive formats
Free package management systems
Linux package management-related software
Red Hat software
11340611
https://en.wikipedia.org/wiki/List%20of%20Commodore%2064%20games%20%28A%E2%80%93M%29
List of Commodore 64 games (A–M)
This is a list of 1301 game titles released for the Commodore 64 personal computer system, sorted alphabetically. 0–9 $100,000 Pyramid 007: Licence to Kill 10 Knockout! 10-Pin Bowling 10th Frame 10000 Meters 180 19 Part One: Boot Camp 1942 1943: One Year After 1943: The Battle of Midway 1985: The Day After 1994: Ten Years After 1st Division Manager 2001 221B Baker Street 3-D Breakout 3-D Labyrinth 3-D Skramble 3D Construction Kit 3D Glooper 3D Tanx 4 Soccer Simulators 4th & Inches 4x4 Off-Road Racing 50 Mission Crush 5th Gear 720° 720° Part 2 A Aaargh! Aardvark ABC Monday Night Football Abrakadabra Accolade's Comics ACE - Air Combat Emulator Ace 2 ACE 2088 Ace Harrier Ace of Aces Acrojet Action Biker Action Fighter Action Force Adam Norton's Ultimate Soccer The Addams Family Addicta Ball ADIDAS Championship Football Adrenalin Adult Poker Advance to Boardwalk Advanced Dungeons & Dragons: Heroes of the Lance Advanced Basketball Simulator Advanced Pinball Simulator Adventure Construction Set Adventure Master Adventure Quest Adventureland Adventures in Narnia: Dawn Treader Adventures in Narnia: Narnia Aegean Voyage After Burner After The War Afterlife Afterlife II Afterlife v1.0 Afterlife v2.0 Aftermath Agent Orange Agent USA Agent X Agent X II: The Mad Prof's Back Ah Diddums Aigina's Prophecy Air Support Airborne Ranger Airwolf Airwolf II Alcazar: The Forgotten Fortress ALF: The First Adventure Alf in the Color Caves Alice in Videoland Alice in Wonderland Alien (graphics adventure) Alien Alien 3 Alien Storm Alien Syndrome Aliens: The Computer Game Alter Ego Alleykat Altered Beast Alter Ego: Female Version Alter Ego: Male Version Alternate Reality: The City Alternate Reality: The Dungeon Alternative World Games Amadeus Revenge Amaurote The Amazing Spider-Man Amazon Warrior Amazon America's Cup American Tag-Team Wrestling Amnesia Anarchy Andy Capp: The Game Annals of Rome Annihilator Annihilator II Another World (1990) (Double Density) Another World (1991) (CP Verlag) Another World (1992) (X-Ample) Ant Attack Antimonopoly Antiriad Apache Apache Gold Apache Strike Apocalypse Now Apollo 18: Mission to the Moon Apple Cider Spider Arabian Arabian Nights Arac Arachnophobia Arcade Classics Arcade Flight Simulator Arcade Fruit Machine: Cash 'n' Grab Arcade Game Construction Kit Arcade Pilot Arcade Trivia Quiz Arcade Volleyball Arcadia Arcana Archipelago Archon: The Light and the Dark Archon II: Adept Archon III: ExciterD ArciereD ArcoD ARCOSD Arctic ShipwreckD Arctic Wastes!D Arc of Yesod Arcticfox ArdantD Ardok the BarbarianD Ardy the Aardvark Area 13D Area EstimationD AreasD AreeD ArenaD Arena 3000D Arena Football ArenumD ArexD ArgoD ArgonD Argon - L'Orrore di ProvidenceD The Argon FactorD ArgosD The Argos ExpeditionD Arhena! The AmazonD Arithme-SketchD The Arithmetic GameD ArithmeticianD ArizonaD Arizona - The Boy in the BubbleD The Ark of ExodusD Ark PandoraD Arkanoid Arkanoid: Revenge of Doh Armageddon The Armageddon Files The Armageddon Man Armalyte Armalyte - Competition Edition Armalyte II Armourdillo Army Moves Army Moves II Arnie Arnie II Arnie Armchair's Howzat Artillery Duel Artura Asterix and the Magic Cauldron The Astonishing Adventures of Mr. Weems and the She Vampires Astro-Grover Asylum The Attack of the Phantom Karate Devils Ataxx ATC - Air Traffic Controller ATF Athena Atlantis Atlantis Lode Runner Atomic Robo-Kid Atomino Atomix Attack Chopper! 
Attack of the Mutant Camels Attack of the PETSCII Robots ATV Simulator Auf Wiedersehen Monty Auf Wiedersehen Pet Aufstand der Sioux Auggie Doggie and Doggie Daddy Aussie Games Auto Mania Autoduel Avenger Avenger (Way of the Tiger II) Avengers Avoid the Noid Aztec Aztec Challenge Aztec Tomb Aztec Tomb Revisited B B-1 Nuclear Bomber B-24 B.A.T. B.C. Bill B.C.'s Quest for Tires B.C. II: Grog's Revenge Baal Back to the Future Back to the Future Part II Back to the Future Part III Backgammon Bad Blood Bad Dudes Vs. Dragon Ninja Bad Street Brawler Badballs Badlands Bagitman Ballblazer Ballistix Balloonacy Ballyhoo Baltic 1985: Corridor to Berlin Bandits Bangers and Mash Bangkok Knights Barbarian Barbarian II: The Dungeon of Drax Barbarian: The Ultimate Warrior Barbie The Bard's Tale The Bard's Tale II: The Destiny Knight The Bard's Tale III: Thief of Fate Barry McGuigan World Championship Boxing Basil the Great Mouse Detective Basket Master Batalyx Batman Batman: The Caped Crusader Battle Chess (1989) by Interplay Battle Chess (1992) Battle Through Time Battle Valley Battles of Napoleon Battleships BattleTech: The Crescent Hawk's Inception Battlezone Batty Bay Street Bazooka Bill Beach Buggy Simulator Beach-Head Beach Head II: The Dictator Strikes Back Beach Head III Beach Volley Beaky and the Egg Snatchers Beamrider Bear Bovver Beat-it Beatle Quest Bee 52 Beer Belly Burt's Brew Biz Below the Root Benji: Space Rescue Betrayal Better Dead Than Alien Beverly Hills Cop Beyond Castle Wolfenstein Beyond Dark Castle Beyond the Black Hole Beyond the Forbidden Forest Beyond the Ice Palace Big Bird's Special Delivery 1984 Big Deal, The Big Game Fishing Biggles Big Mac Big Trouble in Little China Bill & Ted's Excellent Adventure Bilbo the Hobbit Bionic Commando by Go! Bionic Commando (USA Version) by Capcom Bionic Granny The Birds and the Bees II: Antics Black Crystal Black Gold (1983) by Photronics Black Gold (1989) by reLINE Software Black Gold (1992) by Starbyte Software Black Hawk Black Lamp Black Magic Black Tiger Blackwyche The Blade of Blackpool Blade Runner Blades of Steel Blagger Blagger Construction Set Blagger goes to Hollywood Blasteroids Blinky's Scary School Blitzkrieg Blockout Blood Brothers Blood Money Bloodwych Blue Encounter, The Blue Max Blue Max 2001 Blue Moon Blue Thunder Blues Brothers, The BMX Kidz BMX Simulator BMX Racers by Mastertronic BMX Trials Bobby Bearing The Boggit Bomb Jack Bomb Jack 2 Bombo Bombuzal Bonanza Bros. Bonecruncher Bonka Booty Bop'n Rumble Bop'n Wrestle Border Zone Bored of the Rings Borrowed Time Bosconian Boss Boulder Dash Boulder Dash II: Rockford's Revenge Boulder Dash III Boulder Dash Construction Kit Bounces Bounder Bounty Bob Bounty Bob Strikes Back! 
Bozo's Night Out Bram Stoker's Dracula BraveStarr Break Dance Breakout Breakout Construction Kit Breakstreet Breakthrough in the Ardennes Breakthru Brian Bloodaxe Brian Jacks Superstar Challenge Brian Jack's Uchi Mata BrickFast Bride of Frankenstein Bristles Broadsides Bruce Lee Bubble Bobble Bubble Dizzy Bubble Ghost Buck Rogers: Planet of Zoom Buck Rogers: Countdown to Doomsday Budokan - The Martial Spirit Bugaboo (The Flea) Bug Bomber Buggy Boy Bugs Bunny Private Eye Bugsy Bumpin' Buggies Bump Set Spike Bundesliga 98/99 Bundesliga Live Bundesliga Manager Bundesliga Manager v2.0 Bundesliga Manager v3.0 Bundesliga Manager Polish Version BurgerTime Burger Time '97 Burnin' Rubber Bushido Buster Bros (also known as Pang) Butasan Butcher Hill By Fair Means or Foul C Cabal California Games Camelot Warriors Campaign Manager Captain America in: The Doom Tube of Dr. Megalomann Captain Blood Captain Dynamo Captain Fizz Captain Power The Captive Capture The Flag Captured Card Sharks Carnage Carrier Command Castle Master Castle Nightmare Castle of Terror Castle Wolfenstein The Castles of Dr. Creep Castlevania Catalypse Catastrophes Cauldron Cauldron II: The Pumpkin Strikes Back Cave Fighter Cave of the Word Wizard Cavelon Cavelon II Caveman Games Caveman Ughlympics Centipede Challenge of the Gobots Chambers of Shaolin Championship 3D Snooker Champions of Krynn Championship Baseball Championship Lode Runner Championship Lode Runner: Extended Edition Championship Lode Runner: Training Missions Championship Sprint Championship Wrestling Chase H.Q. Chernobyl Cheese Graphics Editor Chess 7.0 Chess 7.5: How About a Nice Game of Chess! Chess Analyse Chess Champion Chess Grand Master Chess Quarto The Chessmaster 2000 Chicago Chiller Chimera China Miner Chinese Juggler Chip's Challenge ChipWits Cholo Choplifter Chubbie Chester Chubby Gristle Chuck Norris Superkicks Chuck Rock Chuck Yeager's Advanced Flight Trainer Chuck Yeager's Flight Trainer Chuckie Egg Chuckie Egg 2 Circus Charlie Circus Games Cisco Heat Citadel CJ in the USA CJ's Elephant Antics Classic Concentration Classic Snooker Clever & Smart Cliffhanger Cliffhanger: A Perilous Climb Cloud Kingdoms Clowns Clue Master Detective Clue! Cluedo Clystron Cobra Coco Notes Cohen's Towers Colonial Conquest Colony by Mastertronic Colony (1996) by John Woods Colony v3 Colors Colossal Adventure Colossal Cave Adventure Colossus Chess 2.0 Colossus Chess 4.0 The Colour of Magic Combat Course Combat Crazy by Powerslave Developments and music by Jeroen Tel Combat School Comic Bakery Commando Commando 86 Commando II Commando Libya Compunet Computer Baseball The Computer Edition of Risk: The World Conquest Game Computer Football Strategy Computer Quarterback Conan: Hall of Volta Confuzion Congo Bongo Continental Circus Contra ConundrumD Cool Croc Twins Cool World Cops 'n' Robbers CorporationD Corruption CorsairD Cosmic Causeway: Trailblazer II Cosmic CrusaderD Cosmic PirateD Cosmic Relief: Prof. Renegade to the RescueD Count and Add Count, The Count Duckula in No Sax Please - We're EgyptianD Countdown To MeltdownD Cover Girl Strip Poker Crack Down Crack-Up Crazy Balloon Crazy CarD Crazy CarsD Crazy Cars 2D Crazy Cars III Crazy Comets Crazy Kong Crazy Sue Create with GarfieldD Creative ContraptionsD Creatures Creatures II: Torture Trouble Crime and Punishment Crisis Mountain The Crimson Crown Crossbow Crossfire CrossroadsD Crossroads IID Crossword CreatorD Crossword Magic 4.0D Crossword PuzzleD Crusade in EuropeD Crush, Crumble and Chomp! 
Crystal Castles Crystal Kingdom Dizzy Crystals of Zong CubulusD Curse of RaD The Curse of Sherwood Curse of the Azure Bonds Cuthbert Enters the Tombs of Doom Cuthbert Goes Walkabout Cuthbert in SpaceD Cuthbert in the JungleD Cutthroats Cyberball Cyberdyne WarriorD Cybernoid Cybernoid II: The Revenge Cybertron Mission Cyborg The Cycles: International Grand Prix Racing D Dalek Attack Daley Thompson's Decathlon Daley Thompson's Olympic ChallengeD Daley Thompson's Super-Test Dallas Quest The Dam Busters Dan Dare: Pilot of the Future Dan Dare II: Mekon's Revenge Dan Dare III: The Escape DandyD Daredevil Dennis Dark Castle Dark Fusion Dark LordD Dark Side The Dark TowerD Darkman David's Midnight Magic David's Midnight Magic IID Days of Thunder DDTD Deactivators Dead EndD Dead or AliveD Deadline Death in the CaribbeanD Death Bringer Death Knights of Krynn Death StarD Death Wish 3 Deathlord Decisive Battles of the American Civil War Volume 1: Bull Run to Chancellorsville Defender Defender 64D Defender of the Crown Deflektor Déjà Vu by Ariolasoft-Axis KomputerkunstD Déjà Vu by Mindscape Deliverance: Stormlord II The Delphic Oracle Delta Delta Man Demon Attack Demon Stalkers Demon's Winter Depthcharge Desert Fox Designasaurus Destroyer The Detective Deus Ex Machina Dick Tracy Die Hard Die Hard 2: Die Harder Dig Dug Dinky Doo Dino Eggs Diplomacy Dive Bomber Dizzy Down the Rapids Dizzy Panic! Dizzy Prince of the Yolkfolk Dizzy: The Ultimate Cartoon Adventure DNA Warrior Doctor Doom's Revenge Doctor Who and the Mines of Terror Doctor Who and the Entropilytes Doctor Who 2 Dogfight 2187 Dogfight! Domination Dominion Donald's Alphabet Chase Donald Duck's Playground Donkey Kong Doomdark's Revenge Dot Gobbler Double Dare Double Dragon Double Dragon II: The Revenge Double Dragon 3: The Rosetta Stone Double Dribble Double Take Dough Boy Draconus Dracula Drag Race Eliminator Dragon Breed Dragon Ninja Dragon Skulle Dragon Spirit Dragon Wars Dragon's Lair Dragons Den Dragon's Lair II: Escape from Singe's Castle DragonHawk Dragonriders of Pern Dragons of Flame DragonStrike Dragonworld Drelbs Dream House Driller Drol Dropzone Druid Druid II: Enlightenment Duck Shoot Ducks Ahoy! DuckTales: The Quest for Gold Duckula 2: Tremendous Terence The Duel (1986) by Paradize Software The Duel: Test Drive II Dungeon Dungeon Maker by Ubisoft Dungeon of Doom Dunzhin Duotris Dynamic Duo Dynamite Dan Dynamite Düx Dynamix (1988) by Digital Design Dynamix (1989) by Mastertronic Dynasty Wars E E-Motion Eagle Empire Earth Orbit Stations Echelon Eddie Kidd Jump Challenge The Eidolon Elektra Glide Elevator Action Elidon Elite Elm Street Elvira: Mistress of the Dark Elvira: The Arcade Game Elvira II: The Jaws of Cerberus Emerald Isle Emerald Mine Emlyn Hughes International Soccer The Empire of Karn Empire Empire! Empire: Wargame of the Century Enchanter Encounter Enduro Racer Energy Warrior Enigma Force Enigma Force Construction Set Entity Entombed Eon Equations Equinox Erebus Erik the Viking Escape from the Planet of the Robot Monsters ESWAT: City Under Siege The Eternal Dagger Eureka! European Champions European Football Champ Evening Star Everest Ascent Everyone's a Wally The Evil Dead Evolution (1982 video game) Excalibur Exile Exolon Exploding Fist II: The Legend Continues Extended Championship Lode Runner Exterminator Eye of Horus F F-14 Tomcat F-15 Strike Eagle F-16 Combat Pilot F-18 Hornet F-19 Stealth Fighter F1 GP Circuits F.1 Manager F1 Tornado Face Off! 
The Faery Tale Adventure Fahrenheit 451 Fairlight Falcon Patrol Falcon Patrol II Fallen Angel by Emerald Software Ltd Fantasy World Dizzy The Farmer's Daughter Fast Break Fast Eddie Fast Food Fast Tracks: The Computer Slot Car Construction Kit Fay - That Math Woman! Fellowship of the Rings Felony! Fernandez Must Die Feud Fiendish Freddy's Big Top O'Fun Fight Night Fighter Bomber Fighter Command v1.1 Fighter Pilot Fighting Warrior Final Assault Final Blow Final Fight Finders Keepers Fire Ant Fire & Forget II: The Death Convoy Fire King Fire Power Fire Zone Firefly Firelord Firequest Firetrack First Samurai Fish! Fish! v1.07 Fisher-Price: Alpha Build Fist II: The Legend Continues Fist+ Flasch Bier Flasch Bier Konstruktion Kit Flasch Bier 2 Flash Gordon Flight Path 737 Flight Simulator II Flimbo's Quest Flintstones: Yabba-Dabba-Dooo! Flintstones, The Flip & Flop Floyd of the Jungle Flunky Flyerfox Flying Ace Flying Shark Football Manager Football Manager 2 Football Manager 2 Expansion Kit Football Manager 3 Football Manager World Cup Edition Footballer of the Year Footballer of the Year 2 Forbidden Forest Forbidden Forest 2: Beyond the Forbidden Forest Forgotten Worlds Formula 1 Simulator Fort Apocalypse The Fourth Protocol Foxx Fights Back Fraction Fever Frak! Frank Bruno's Boxing Frankenstein by CRL Frankenstein (1992) by Zeppelin Games Frankenstein Jnr. Frankie Goes to Hollywood Frantic Freddie Freak Factory Fred Freddy Hardest Frenzy Friday the 13th Frog Run Frogger Frogger II: ThreeeDeep! Frogs and Flies Front Line (video game) Frostbyte Fruit Machine Simulator Fruit Machine Simulator 2 Fun School 2 Fun School 3 Fun School 4 Fun School Specials Fungus Future Knight G G-Force G-LOC: Air Battle G.I. Joe: A Real American Hero G.U.T.Z. Galactic Conquest Galactic Empire Galaga Galaxian Galaxy Force Gamma Strike Game Over Game Over II The Games: Summer Edition The Games: Winter Edition The Games: Winter Edition Practice Gangbusters Gaplus Garfield: Big Fat Hairy Deal Garfield: Winter's Tail Garrison Gary Lineker's Hot Shot! 
Gary Lineker's Super Skills Gary Lineker's Superstar Soccer Gateway to Apshai Gateway to the Savage Frontier Gato Gauntlet Gauntlet: The Deeper Dungeons Gauntlet II Gauntlet III: The Final Quest Gazza II Gazza's Superstar Soccer GBA Championship Basketball: Two-on-Two Gee Bee Air Rally Gem'X Gemini Wing Gemstone Healer Gemstone Warrior Geos (pixel edit program) Germany 1985 Gertrude's Secrets Ghetto Blaster Ghost Chaser Ghost Town Ghost Trap Ghostbusters Ghostbusters II Ghosts 'n Goblins Ghouls Ghouls 'n Ghosts Gilligan's Gold Give My Regards To Broad Street Gladiator Glider Rider Global Chess Global Commander Glutton Gnome Ranger Go Go The Ghost Godzilla Gold Medal Games Golden Axe Goldrunner The Goonies Gorf Gradius Graeme Souness International Soccer Graeme Souness Soccer Manager Graham Gooch's All Star Cricket Graham Gooch's Test Cricket Grandmaster Chess Grand Monster Slam Grand National Grand Prix Grand Prix Circuit Grand Prix Master Grand Prix Simulator Grand Prix Simulator 2 Grand Slam Baseball Grange Hill Granny's Garden Grave Yardage Gravitron The Great American Cross-Country Road Race The Great Escape The Great Giana Sisters Great Gurianos Green Beret (also known as Rush'n Attack) Gremlins Gremlins: The Adventure Gremlins 2: The New Batch Gribbly's Day Out Gridder Gridiron Gridrunner The Growing Pains of Adrian Mole Gruds in Space Gryzor Guadalcanal Guerrilla War Guild of Thieves, The Gumshoe Gunfighters Gunship Gunslinger Gust Buster Gyroscope Gyroscope Construction Set Gyroscope II Gyruss H H.A.T.E. H.E.R.O. Habitat Hacker Hacker II: The Doomsday Papers Hades Nebula Hägar the Horrible The Halley Project Halls of Montezuma: A Battle History of the U.S. Marine Corps Halls of the Things Hammerfist Hangman's Hazard Hard Drivin' Hard Hat Mack Hardball! Hardball! II Hareraiser Harrier Attack Harrier Combat Simulator Harvey Headbanger Hat Trick Hawkeye Headache Head Over Heels Heart of Africa Heartland Hell Cat Ace Hellgate Helter Skelter Henrietta's Book of Spells Henry's House Herbert on the Slope Herbert's Dummy Run Hercules Hercules Slayer of the Damned Hero Quest The Heroes of Karn Herobotix Hes Games High Frequency (video game) High Noon High Seas Highland Games Highlander Highway Encounter Hillsfar Hitchhiker's Guide to the Galaxy, The Hobbit, The Hole In One Hollywood Hijinx Hollywood or Bust Hollywood Squares The Honeymooners Hook Hoppin' Mad Horace Goes Skiing Hostages by Infogrames Hot Pop by Rocky SoftHot RodHot WheelsHotShotHoundedHouse of UsherHoverHover BovverHow to be a Complete BastardHoward the DuckHoward the Duck IIHudson HawkHugoThe HulkHuman Killing MachineHunchbackHunchback: The AdventureHunchback at the OlympicsHunchback II: Quasimodo's RevengeHungry HoraceThe Hunt for Red October (1987)The Hunt for Red October (1990)Hunter PatrolHunter's MoonHustlerHydraHydraxHypaballHyper SportsHyperspace WarriorHysteriaII. 
Q.I, BallI, Ball 2IcarusIce PalaceIdőrégészIkari IIIIkari WarriorsImhotepImpactImperatorImpossamoleImpossible MissionImpossible Mission IIIn Search of the Most Amazing ThingIncredible Shrinking SphereIndiana Jones and the Last Crusade: The Action GameIndiana Jones and the Temple of DoomIndiana Jones in the Lost KingdomIndoor SportsIndy HeatInfernal RunnerInfidelInfiltratorInfiltrator Part II: The Next DayIngrid's BackInjured EngineInside OutingInspector GadgetInspector Gadget and the Circus of FearInstant MusicThe InstituteInternational Basketball (Commodore 64)International Karate (World Karate Champ)International Karate +International Karate + GoldInternational Soccer (1983 computer game) by CommodoreInternational Soccer (1988 computer game) by CRLInto the Eagle's NestIOIridis AlphaIt's a KnockoutItaly '90 SoccerIvan 'Ironman' Stewart's Super Off RoadJJack AttackJack Nicklaus' Greatest 18 Holes of Major Championship GolfJack the NipperJack the Nipper II: In Coconut CapersJack the RipperJackalJail BreakJail WarJailbreak From Starhold 1James Bond 007James Pond 2Jason of the ArgonautsJawbreakerJawsJeep CommandJetJet Set WillyJet Set Willy 2Jet-Boot JackJet-BoysThe JetsonsJewels of DarknessJigsawJinxterJocky Wilson's Darts ChallengeJoe BladeJoe Blade 2John Elway's QuarterbackJohn Madden FootballJordan vs. Bird: One on OneJourneyJourney to Centre of the EarthJr. Pac-ManJudge Dredd (1986)Judge Dredd (1990)Jumpin' JackJumpmanJumpman JuniorJungle HuntJuno FirstJupiter LanderKKampfgruppeKaneKane 2KangarudyKarate ChampKaratekaKarnovKatakisKawasaki Magical MusicquillKawasaki Rhythm RockerKawasaki SynthesizerKennedy ApproachKenny Dalglish Soccer ManagerKenny Dalglish Soccer MatchKentillaKey QuestThe Keys to MaramonKick Off (1983) by Bubble BusKick Off (1989) by Anco SoftwareKick Off 2Kid GridKikstartKikstart 2Killed Until DeadKillerwattKilling MachineKinetikKing of ChicagoKing's BountyKings of the BeachKlaxKnight GamesKnight Games 2Knight OrcKnight RiderKnight TymeKnightmareKnights of LegendKokotoni WilfKonami's Ping PongKongKong Strikes Back!Koronis RiftKosmic KangaKrakoutKromazoneKung-Fu MasterKwik SnaxLL.A. SWATLabyrinth: The Computer GameLamborghini American ChallengeLancelotLas Vegas Video PokerLaser ChessLaser SquadLast BattleLast DuelThe Last NinjaLast Ninja 2Last Ninja 3Last Ninja RemixThe Last V8Law of the WestLazarianLazy JonesLeaderboardLeather Goddesses of PhobosLed StormLegacy of the AncientsThe Legend of BlacksilverThe Legend of KageThe Legend of SinbadLegend of the Amazon WomenLegend of the Knucker-HoleLegions of DeathLe MansLemmingsLethal WeaponLeviathanThe Light CorridorLight ForceLine of FireLions of the UniverseLittle Computer PeopleLive and Let DieLiverpoolThe Living DaylightsLivingstone, I Presume?LocoLode RunnerLode Runner's RescueLondon BlitzLooney BalloonLoopzLord of the Rings: Game OneLords of ChaosLords of ConquestLords of KarmaThe Lords of MidnightLords of TimeLotus Esprit Turbo ChallengeThe Lost Crown of Queen AnneLucky LukeLunar LeeperLunar OutpostLunar RescueThe Lurking HorrorMM.A.S.K. III - Venom Strikes BackM.C. 
KidsM.U.L.E.Macadam BumperMad DoctorMad NurseThe Magic CandleMagic CarpetMagic Johnson's BasketballMagicland DizzyMail Order MonstersMain FrameMalak (horror urban myth)Mama LlamaManchester UnitedMancopterMandroidManiac MansionManic MinerMankyMarauderMarble MadnessMario Bros.Mars SagaThe Mask of the SunMaster ChessMaster of MagicMaster of the LampsMasters of the Universe: The Arcade GameMasters of the Universe: The MovieMasters of the Universe: The Super AdventureMatch DayMatch Day IIMatch PointMath BlasterMath BustersMayhem in MonsterlandMaziacsMcDonaldlandMean StreetsMeanstreakMega ApocalypseMenaceMercenary: Escape from TargMercenary: The Second CityMercsMetal GearMetro-CrossMiami ViceMichael Jackson's MoonwalkerMickey's Space AdventureMicroballMicrocosmMicroLeague BaseballMicroLeague WrestlingMicroprose SoccerMidnight ResistanceMiecze Valdgira II: Władca GórMig 29 Soviet FighterMight and Magic Book One: The Secret of the Inner SanctumMight and Magic II: Gates to Another WorldMighty Bomb JackMikieMilk RaceMindfighterMind MirrorMind Prober Jr.MindtrapMiner 2049erMini-PuttMinnesota Fats Pool Challenge (aka "Hustler")Mission A.D.Mission AsteroidMission OmegaModem WarsMoebius: The Orb of Celestial HarmonyMolecule ManMoneybags (1983)MonopolyMonster MunchMontezuma's RevengeMonty MoleMonty on the RunMoon CrestaMoon PatrolMoon ShuttleMoondustMoonmistMothershipMorpheusMotor ManiaMotosMountain KingMountain Palace AdventureThe Movie Monster GameMr AngryMr. Do!Mr. Do's CastleMr. HeliMr. MephistoMr. Robot and His Robot FactoryMr. WimpyMs. Pac-ManMulti-Player Soccer ManagerThe MuncherMunchman 64Murder on the MississippiMurder on the ZinderneufMushroom AlleyMusic ComposerMusic Construction SetMusic MachineMutant HerdMutant MontyMythMyth: History in the Making'' D is a link to the Wikidata page for this game. Games that may not be notable enough for Wikipedia can often be found there. See also List of Commodore 64 games References A-M Commodore 64: A-M ca:Llista de videojocs de Commodore 64 de:Liste bekannter C64-Spiele it:Videogiochi per Commodore 64 tr:Commodore 64 Oyunları
40909378
https://en.wikipedia.org/wiki/Kaldi%20%28software%29
Kaldi (software)
Kaldi is an open-source speech recognition toolkit written in C++ for speech recognition and signal processing, freely available under the Apache License v2.0.

Kaldi aims to provide software that is flexible and extensible, and is intended for use by automatic speech recognition (ASR) researchers for building a recognition system. It supports linear transforms, MMI, boosted MMI and MCE discriminative training, feature-space discriminative training, and deep neural networks. Kaldi is capable of generating features such as MFCC, filterbank (fbank), and fMLLR features. Hence, in recent deep neural network research, a popular use of Kaldi is to pre-process raw waveforms into acoustic features for end-to-end neural models.

Kaldi has been incorporated as part of the CHiME Speech Separation and Recognition Challenge over several successive events.

The software was initially developed as part of a 2009 workshop at Johns Hopkins University. Kaldi is named after the legendary Ethiopian goat herder Kaldi, who was said to have discovered the coffee plant.

See also 
fMLLR
List of speech recognition software

References

External links 
Kaldi – The official GitHub project
How to start with Kaldi and Speech Recognition - A guide regarding the different parts of the system
Kaldi paper - The Kaldi Speech Recognition Toolkit
VOSK – open source and commercial models from Alpha Cephei on Kaldi foundations

Free software projects
Computational linguistics
Speech recognition software
Software using the Apache license
3491362
https://en.wikipedia.org/wiki/Simtel
Simtel
Simtel (sometimes called Simtelnet, originally SIMTEL20) was an important long-running archive of freeware and shareware for various operating systems. The Simtel archive had significant ties to the history of several operating systems: it was in turn a major repository for CP/M, MS-DOS, Microsoft Windows and FreeBSD. The archive was hosted initially on the MIT-MC PDP-10 running the Incompatible Timesharing System, then on TOPS-20, and then on FreeBSD servers, with archive distributor Walnut Creek CDROM helping fund FreeBSD development. It began as an early mailing list, then was hosted on the ARPANET, and finally on the fully open Internet. The service was shut down on March 15, 2013.

History

Simtel originated as SIMTEL20, a software archive started in 1979 by Keith Petersen, who was then living in Royal Oak, Michigan. The original archive consisted of CP/M software for early 8080-based microcomputers. The software was hosted on a PDP-10 at MIT that also ran a CP/M mailing list to which Petersen subscribed. When access to that MIT computer was removed in 1983, fellow CP/M enthusiast Frank Wancho, then an employee at the White Sands Missile Range, arranged for the archive to be hosted on a DECSYSTEM-20 computer with ARPANET access, accessible via FTP at simtel20.arpa, later known as wsmr-simtel20.army.mil. At this time, Simtel began archiving MS-DOS software in addition to its archive of CP/M software.

Over time, the SIMTEL20 archive added software for other operating systems, user groups and various programming languages, including the Ada Software Repository, the CP/M User's Group, PC/Blue, SIG/M(icros), and the Unix/C collections.

In 1991, Walnut Creek CDROM was founded by Robert A. Bruce; it helped distribute the Simtel archive on CD-ROM discs for those not wishing, or unable, to access the archive online.

In 1993, the SIMTEL20 archive at White Sands Missile Range was shut down due to budget constraints. From 1993 on, the Walnut Creek CDROM FTP server and (later on) Web site became the focal point for online Simtel access. For much of its life the Web site and primary mirrors were located at www.cdrom.com, www.simtel.net, and oak.oakland.edu at Oakland University.

In July 1998, the Simtel FTP server set a record for overall traffic by transferring a total of 417 GB of data in one day. In May 1999, the server surpassed its own record by transferring a total of 873 GB in one day. In the same year it served 10,000 clients at a time, showing that the C10k problem was tractable on contemporary systems.

Due to a mirror licensing dispute with Coast to Coast Telecommunications (CCT, now Allegiance Telecom, part of XO Communications) in 1995, archive maintainer Keith Petersen left his employment with CCT and moved on to Walnut Creek CDROM.

In October 1999, Digital River purchased Simtel from Walnut Creek CDROM for US$1.0 million and 143,885 shares of common stock. Petersen, together with his son Kurt Petersen of the small business Petersen Business Management, contracted with Digital River to manage the Simtel collections and continued to do so until January 15, 2001.

The service was shut down on March 15, 2013. Some mirrors may still be available.

References

External links 

File hosting
Internet properties established in 1993
5981234
https://en.wikipedia.org/wiki/Mustang%20Software
Mustang Software
Mustang Software, Inc. was a California-based corporation that developed telecommunications software products. Mustang was incorporated in 1988, became a public corporation (NASDAQ ticker symbol MSTG) in 1995, and was finally merged into Quintus Corporation in 2000.

Mustang's first software products were sold using the shareware model. As the company grew, the products were soon migrated to shrinkware. During the rise of the Internet and electronic software distribution, Mustang stopped distributing physical products and instead sold licenses to its software.

Major Products

Wildcat! BBS

For most of its lifetime, Mustang's flagship product was Wildcat! BBS. Wildcat! was a bulletin board system that computer users could dial into using a modem and communicate with other users online. Initially, only one user could dial into the system at a time, but technological advances later allowed more than one user to be online simultaneously and to interact with one another.

The first versions of Wildcat! ran on the DOS platform. In the mid-1990s, Mustang developed a new version called WINServer that ran on 32-bit Windows platforms. Wildcat! was sold to Santronics Software, Inc. in 1998, as Mustang wanted to concentrate on its new software products.

Qmodem Pro

Mustang bought Qmodem from The Forbin Project in 1992 and renamed it Qmodem Pro. Qmodem Pro was a DOS-based communications program, intended for use by computer users to dial into BBS systems. Mustang developed versions of Qmodem Pro for 16-bit and 32-bit versions of Windows. Support for RIP was added in 1993. Qmodem Pro continued to be sold by Mustang through 2000, and the rights to it were purchased by Quintus. Its status is now abandonware.

Internet Message Center

Mustang developed the Internet Message Center in 1997 in response to the decline of the bulletin board system market caused by the rise of the Internet. Internet Message Center, or IMC as it was known, was designed to handle incoming corporate email. The email was filtered, sorted, tracked, and distributed to agents (people who would respond to the email). Agent responses would be routed back through IMC so a complete history of email conversations with a customer could be recorded. IMC also provided reporting features to analyze email performance. The rights to IMC were purchased by Quintus in 2000. Its status is now abandonware.

History

September 1986: Jim Harrer starts Mustang Software in the bedroom of his Bakersfield, California home.
March 1987: The first version of the company's Wildcat! software ships. It is designed to let computers connect to electronic bulletin boards via modem.
December 23, 1988: Mustang Software is incorporated in California.
1991: The third version of Mustang's Wildcat! software is released, generating success for the fledgling business.
April 1995: Mustang Software completes its first offering of common stock. Almost immediately following its decision to go public, the company's fortunes begin to erode as bulletin board software is rendered obsolete by internet browsers.
1995 and 1996: Mustang's first attempts to develop web browser software are overshadowed by Netscape Navigator and Microsoft Internet Explorer. Cutbacks shrink company staff from a high of around 60 people to only 30. Mustang records heavy losses as profits plummet.
September 1997: Mustang releases Internet Message Center software to critical acclaim. Supporting software is also released that year. The software allows companies to efficiently route, track and answer e-mail from customers.
September 1998: Mustang issues an additional $1.5 million in company stock to bolster dwindling cash resources. Investors also provide a $5 million line of equity credit. The move prevents Mustang from losing its place on the Nasdaq Small Cap market.
November 19, 1998: Mustang sells its Wildcat! software to Florida-based Santronics Software, Inc.
April 1999: Mustang posts its first quarterly profit in 12 quarters, recording a $10,299 improvement in its bottom line.
Second through fourth quarters 1999: Mustang again posts moderate losses as it builds a national sales force, regenerating its employee rolls to 62 people. Profits skyrocket as Internet Message Center finds a host of major clients in the business world.
October 1999: Mustang Software changes its name to Mustang.com.
February 28, 2000: Mustang.com announces a planned merger with Quintus Corp. Quintus will acquire Mustang for $290 million in stock. Quintus's IMC is eventually purchased by Avaya.

References

External links 
Mustang Software, Inc.
History of Mustang (57 MB PDF)
http://www.computerhope.com/comp/mustang.htm

Software companies established in 1988
Defunct software companies of the United States
Software companies disestablished in 2000
Defunct companies based in California
1988 establishments in California
2000 disestablishments in California
7077988
https://en.wikipedia.org/wiki/Les%20Hatton
Les Hatton
Les Hatton (born 5 February 1948) is a British-born computer scientist and mathematician most notable for his work on failures and vulnerabilities in software-controlled systems.

He was educated at King's College, Cambridge, from 1967 to 1970, and at the University of Manchester, where he received a Master of Science degree in electrostatic waves in relativistic plasma and a Doctor of Philosophy in 1973 for his work on computational fluid dynamics in tornadoes.

Although originally a geophysicist, and the recipient of the 1987 Conrad Schlumberger Award for his work in computational geophysics, he switched careers in the early 1990s to study software and systems failure. He has published four books and over 100 refereed journal publications, and his theoretical and experimental work on software systems failure can be found in IEEE Transactions on Software Engineering, IEEE Computer, IEEE Software, Nature, and IEEE Computational Science and Engineering. His book Safer C pioneered the use of safer language subsets in commercial embedded control systems. He was also cited amongst the leading scholars of systems and software engineering by the Journal of Systems and Software for the period 1997–2001.

Primarily a computer scientist nowadays, he retains wide interests and has published recently on artificial complexity in mobile phone charging, the aerodynamics of javelins, and novel bibliographic search algorithms for unstructured text to extract patterns from defect databases.

After spending most of his career in industry, working for Oakwood Computing Associates, he is currently a professor of Forensic Software Engineering at Kingston University, London.

References 

1948 births
Living people
Alumni of King's College, Cambridge
Alumni of the University of Manchester
Academics of Kingston University
British computer scientists
20th-century British mathematicians
21st-century British mathematicians
Place of birth missing (living people)