4465402
https://en.wikipedia.org/wiki/Zensar%20Technologies
Zensar Technologies
Zensar Technologies Limited is an Indian publicly traded software and services company. The company's stock trades on the Bombay Stock Exchange and on the National Stock Exchange of India. The company is a subsidiary of RPG Group, and its chairman is Harsh Goenka. History Zensar traces its origin to 1922, when a British original-equipment manufacturing firm established a regional manufacturing unit in Pune, India. The firm evolved to become the Indian manufacturing arm of British computer maker ICL, and was renamed ICIM (International Computers Indian Manufacture). In 1963, ICIM listed on the Bombay Stock Exchange. In 1991, ICIM established a subsidiary company named International Computers Limited (ICIL), with a focus on software. In 1999, the original hardware division of ICIM was shuttered, leaving ICIM/ICIL as a purely software company. In February 2000, the company was renamed Zensar. Ganesh Natarajan became CEO in 2001, beginning a 15-year tenure. From 2000 to 2005, the company focused on application management for its clients. RPG Group, the Indian industrial conglomerate headquartered in Mumbai, Maharashtra, is the majority shareholder in the company. The company continues to have its headquarters in Pune in Western India. By 2012, Zensar had approximately 11,000 employees servicing 400 clients in over 20 global locations, and at the end of the 2019 financial year it earned revenues of over US$550 million. Sandeep Kishore was named Zensar's CEO in 2016, taking over from Ganesh Natarajan, with RPG Enterprises remaining the company's backer. In 2015, APAX Portfolio Company acquired a stake in Zensar from Electra Partners Mauritius. Zensar Technologies has offices in over 20 countries. It is listed on both the National Stock Exchange of India (NSE) and the Bombay Stock Exchange (BSE) and is a component of several indices, including the NSE's Nifty 500 and the S&P BSE 500. In December 2020, Ajay S. Bhutoria was named Zensar's CEO. Zensar's business is spread across North America, the United Kingdom, parts of Europe, and South Africa. Zensar's teams work out of 33 global locations, including offices of acquired entities. Its primary business involves providing digital and technology solutions to global customers. Acquisitions Zensar has made a number of acquisitions since 2005. Locations Zensar has offices in 33 global locations. Recognition Zensar was the innovation award winner in the "Creating an Impact-IT Skills" category at India Perspectives 2018. Gartner's Magic Quadrant named the company a Niche Player in 2019 for managed mobility services. The company is often quoted by analysts in industry media for its expertise on enterprise IT spending, research consulting, and advisory services. The company won the 2021 BIG Innovation Award for Technology. The company also won at the 11th Annual Aegis Graham Bell Awards in the AI-Powered Innovation for Enterprise category. See also List of public listed software companies of India References External links Companies based in Pune Software companies established in 1991 Information technology consulting firms of India RPG Group Software companies of India Indian companies established in 1991 1991 establishments in Maharashtra Companies listed on the National Stock Exchange of India Companies listed on the Bombay Stock Exchange
47787704
https://en.wikipedia.org/wiki/Sam%20Karunaratne
Sam Karunaratne
Samarajeewa "Sam" Karunaratne, FIET, FIEE, FIESL (born in 1937) is an emeritus professor of engineering and a leading Sri Lankan academic who is the founding chancellor and president of the Sri Lanka Institute of Information Technology and the former vice-chancellor of the University of Moratuwa. He has held a number of other appointments in the field of higher education in Sri Lanka, including senior professor of electrical engineering and dean of the Faculty of Engineering and Architecture, president of the Institution of Engineers, Sri Lanka. Karunaratne is a pioneer in the development of the use of computers in the field of engineering and played an important role in the development of information technology education and industry in Sri Lanka. Early life and education Karunaratne was born as Samarajeewa Karunaratne to a family of land proprietors, Mr. and Mrs. T. Karunaratne in Makoora, Kegalle. He had his early education at Bandaranaike Maha Vidyalaya, Hettimulla and received his secondary education at St. Mary's College, Kegalle. He then went on to do his higher studies at the University of Ceylon, where he gained a first class honours degree in electrical engineering. He then achieved his MSc degree in engineering from the University of Glasgow and a diploma in electrical engineering from the University of London. He is a chartered engineer of both Sri Lanka and the United Kingdom, a fellow of the British Institution of Engineering and Technology (IET), fellow of the Institution of Engineers, Sri Lanka (FIESL), a fellow of the Institution of Electrical Engineers, London (FIEE), and a fellow of the National Academy of Sciences Sri Lanka. Career Karunaratne took part in many major construction projects in Sri Lanka, pioneering the use of computer aided designing. An electrical engineer by profession, he took up to university teaching and was a lecturer in electrical engineering at the University of Ceylon, Peradeniya before moving to the University of Moratuwa as a professor of electrical engineering in July 1969, where he held the chair from then until his retirement 9 October 2002. He has been the teacher of over 500 electrical engineers who hold high positions in Sri Lanka and internationally. He has been in universities in Sri Lanka and abroad since he joined the university as an undergraduate in 1956, except for a two-year period when he was with the State Engineering Corporation. During his time at the State Engineering Corporation, from 1967 to 1968, he was in charge of the country's first digital computer installation and he computerised the design of civil engineering structures, including the Kalutara Cetiya (Kalutara Degoba), a thick shell design that is the world's only hollow Buddhist shrine. He was responsible for the computerisation of the GCE Ordinary-Level and Advanced-Level examination processing in 1968, with over 350,000 candidates. As the head of the Department of Electrical Engineering, he spearheaded the establishment of the Department of Computer Engineering. He then became the dean of the Faculty of Engineering, and later the vice-chancellor of the University of Moratuwa. He is considered to be the chief contributor towards the development of the Department of Electrical Engineering to its present status, and has been the teacher of over 500 electrical engineers who hold high positions in Sri Lanka and abroad. These are some of the many notable reasons why he has been awarded the degree of Doctor of Science from the University of Moratuwa. 
He has served on the governing boards of many institutions, including the National Engineering Research and Development Centre (NERD); the Natural Resources, Energy and Science Authority of Sri Lanka (NARESA); the Post-Graduate Institute of Management; the Institution of Engineers, Sri Lanka (IESL); the University of Moratuwa, Sri Lanka; the Arthur C. Clarke Centre for Modern Technologies (ACCMT); and the Sri Lanka Broadcasting Corporation (SLBC). Karunaratne was the President of the Institution of Engineers Sri Lanka. He was also the Director of the Arthur C. Clarke Centre for Modern Technologies, and was a member of the Board of Governors of the United Nations Centre for Space Science and Technology Education Asia-Pacific, established in Dehradun, India. His research is mainly in electrical power systems and digital control systems, and he has published several papers on these subjects. He is a chartered engineer and a Fellow of the Institution of Electrical Engineers, London (FIEE), a Fellow of the Institution of Engineers Sri Lanka (FIESL), and a Fellow of the National Academy of Sciences. He is the founding President of the Sri Lanka Institute of Information Technology, a leading research and higher education institute in the field of information technology, and currently also holds the position of Chancellor and executive head of this institute. He is also a member of the board of directors at the Institute of Technological Studies, Colombo. Personal life In July 1967, Karunaratne married Kusuma Ediriweera Jayasooriya, who became a renowned professor and Dean of the Faculty of Graduate Studies, University of Colombo. She is a pioneer in the field of Sinhalese Studies and the first female Dean in Sri Lanka. They have two sons, Savant Kaushalya and Passant Vatsalya, both electrical engineers specialising in image processing, graphics, and video processing. The elder, Savant Karunaratne, has a PhD in Electrical and Computer Engineering from the University of Sydney, Australia. The younger, Passant Karunaratne, has a PhD in Electrical Engineering and Computer Science from Northwestern University in Evanston, Illinois, and is a Principal Research Engineer in the United States. Awards Karunaratne is the recipient of several scholarships and fellowships, including the Commonwealth Scholarship, the Fulbright Scholarship, the Commonwealth Fellowship, the International Atomic Energy Agency Fellowship, the Commonwealth Travelling Fellowship, and the UNESCO Fellowship. In 2006, Karunaratne was awarded an honorary doctorate from the University of Moratuwa. See also Sri Lanka Institute of Information Technology References Professor Karunaratne's webpage at the Department of Electrical Engineering, University of Moratuwa External links Official website of University of Moratuwa Sri Lankan academic administrators Sri Lankan electrical engineers Sri Lankan computer scientists Alumni of the University of Ceylon (Peradeniya) Alumni of the University of London Alumni of the University of Glasgow University of California, Berkeley alumni People associated with the Sri Lanka Institute of Information Technology Fellows of the Institution of Engineering and Technology Living people 1937 births Sinhalese academics
49855806
https://en.wikipedia.org/wiki/List%20of%20computer%20science%20journals
List of computer science journals
Below is a list of computer science journals. Alphabetic list of titles A ACM Computing Reviews ACM Computing Surveys ACM Transactions on Algorithms ACM Transactions on Computational Logic ACM Transactions on Database Systems ACM Transactions on Graphics ACM Transactions on Information Systems ACM Transactions on Multimedia Computing, Communications, and Applications ACM Transactions on Programming Languages and Systems ACM Transactions on Software Engineering and Methodology Acta Informatica Adaptive Behavior ALGOL Bulletin Algorithmica Algorithms Applied Artificial Intelligence Archives of Computational Methods in Engineering Artificial Intelligence Astronomy and Computing Autonomous Agents and Multi-Agent Systems B Journal of the Brazilian Computer Society C Cluster Computing Code Words Cognitive Systems Research Combinatorica Combinatorics, Probability and Computing Communications of the ACM Computational and Mathematical Organization Theory Computational Intelligence Computational Mechanics Computer Aided Surgery The Computer Journal Computer Law & Security Review Computer Networks Computer Science Computers & Graphics Computing Cybernetics and Human Knowing D Data Mining and Knowledge Discovery Discrete Mathematics & Theoretical Computer Science Distributed Computing E Electronic Letters on Computer Vision and Image Analysis Electronic Notes in Theoretical Computer Science Electronic Proceedings in Theoretical Computer Science Empirical Software Engineering Journal EURASIP Journal on Advances in Signal Processing Evolutionary Computation F First Monday Formal Aspects of Computing Foundations and Trends in Communications and Information Theory Foundations and Trends in Computer Graphics and Vision Foundations and Trends in Theoretical Computer Science Fundamenta Informaticae Fuzzy Sets and Systems H Higher-Order and Symbolic Computation I ICGA Journal IEEE/ACM Transactions on Networking IEEE Annals of the History of Computing IEEE Intelligent Systems IEEE Internet Computing IEEE Micro IEEE MultiMedia IEEE Software IEEE Transactions on Computers IEEE Transactions on Control Systems Technology IEEE Transactions on Evolutionary Computation IEEE Transactions on Fuzzy Systems IEEE Transactions on Information Forensics and Security IEEE Transactions on Information Theory IEEE Transactions on Learning Technologies IEEE Transactions on Mobile Computing IEEE Transactions on Multimedia IEEE Transactions on Neural Networks and Learning Systems IEEE Transactions on Pattern Analysis and Machine Intelligence IEEE Transactions on Software Engineering IEEE Transactions on Visualization and Computer Graphics The Imaging Science Journal Information and Computation Information and Software Technology Information Processing Letters Information Services & Use Information Systems Information Systems Journal Innovations in Systems and Software Engineering International Institute for Advanced Studies in Systems Research and Cybernetics International Journal of Advanced Computer Technology International Journal of Applied Mathematics and Computer Science International Journal of Computational Geometry and Applications International Journal of Computational Intelligence and Applications International Journal of Computational Methods International Journal of Computer Assisted Radiology and Surgery International Journal of Computer Processing of Languages International Journal of Computer Vision International Journal of Cooperative Information Systems International Journal of Creative Computing International 
Journal of Data Warehousing and Mining International Journal of e-Collaboration International Journal of Foundations of Computer Science International Journal of High Performance Computing Applications International Journal of Image and Graphics International Journal of Information Acquisition International Journal of Information Technology & Decision Making International Journal of Innovation and Technology Management International Journal of Intelligent Information Technologies International Journal of Mathematics and Computer Science International Journal of Mobile and Blended Learning International Journal of Modelling and Simulation International Journal of Pattern Recognition and Artificial Intelligence International Journal of Shape Modeling International Journal of Software and Informatics International Journal of Software Engineering and Knowledge Engineering International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems International Journal of Wavelets, Multiresolution and Information Processing International Journal of Web Services Research International Journal of Wireless Information Networks International Journal on Artificial Intelligence Tools International Journal on Semantic Web and Information Systems J Journal of Advances in Information Fusion Journal of Artificial Intelligence Research Journal of Automata, Languages and Combinatorics Journal of Automated Reasoning Journal of Bioinformatics and Computational Biology Journal of Cases on Information Technology Journal of Chemical Information and Modeling Journal of Cheminformatics Journal of Circuits, Systems, and Computers Journal of Communications and Networks Journal of Computational Geometry Journal of Computer and System Sciences Journal of Computer-Mediated Communication Journal of Computing Sciences in Colleges Journal of Cryptology Journal of Database Management Journal of Experimental and Theoretical Artificial Intelligence Journal of Formalized Reasoning Journal of Functional Programming Journal of Global Information Management Journal of Graph Algorithms and Applications Journal of Graphics Tools Journal of Grid Computing Journal of Information Technology & Politics Journal of Interconnection Networks Journal of Logic and Computation Journal of Logical and Algebraic Methods in Programming Journal of Machine Learning Research Journal of Multimedia The Journal of Object Technology Journal of Organizational and End User Computing Journal of Software: Evolution and Process Journal of Statistical Software Journal of Strategic Information Systems The Journal of Supercomputing Journal of Symbolic Computation Journal of Systems and Software Journal of the ACM Journal of Web Semantics K Kybernetes L Logical Methods in Computer Science M Machine Learning Machine Vision and Applications Mathematics and Computer Education Minds and Machines Mobile Computing and Communications Review Molecular Informatics N Natural Computing Neural Networks Neurocomputing P Parallel Processing Letters Pattern Recognition Letters PeerJ Computer Science Performance Evaluation Personal and Ubiquitous Computing Presence: Teleoperators & Virtual Environments Probability in the Engineering and Informational Sciences Proceedings of the IEEE Program: Electronic Library and Information Systems R ReCALL S Science Software Quarterly Scientific Computing & Instrumentation SIAM Journal on Computing SIAM Journal on Scientific Computing Simulation & Gaming Software and Systems Modeling Software Testing, Verification & Reliability T 
Theoretical Computer Science Theoretical Issues in Ergonomics Science Transactions on Aspect Oriented Software Development TUGboat See also Databases arXiv DBLP (Digital Bibliography & Library Project in computer science) The Collection of Computer Science Bibliographies Lists List of important publications in computer science List of computer science conferences List of computer science conference acronyms List of open problems in computer science List of mathematics journals Categories Biomedical informatics journals Computational statistics journals Cryptography journals Human–computer interaction journals Systems journals External links Top journals in computer science, Times Higher Education, 14 May 2009 Journal Rankings – Computer Science Top 650 Journals of Computer Science Lists of academic journals
51485853
https://en.wikipedia.org/wiki/Token%20Binding
Token Binding
Token Binding is a proposed standard for a Transport Layer Security (TLS) extension that aims to increase TLS security by using cryptographic certificates on both ends of the TLS connection. Current practice often depends on bearer tokens, which may be lost or stolen. Bearer tokens are also vulnerable to man-in-the-middle attacks or replay attacks. In contrast, bound tokens are established by a user agent that generates a private-public key pair per target server, providing the public key to the server, and thereafter proving possession of the corresponding private key on every TLS connection to the server. Token Binding is an evolution of the Transport Layer Security Channel ID (previously known as Transport Layer Security – Origin Bound Certificates (TLS-OBC)) extension. Industry participation is widespread with standards contributors including Microsoft, Google, PayPal, Ping Identity, and Yubico. Browser support remains limited, however. Only versions of Microsoft Edge using the EdgeHTML engine have support for token binding. IETF standards The following group of IETF RFCs and Internet Drafts comprise a set of interrelated specifications for implementing different aspects of the Token Binding standard. The Token Binding Protocol Version 1.0. Allows client/server applications to create long-lived, uniquely identifiable TLS bindings spanning multiple TLS sessions and connections. Applications are then enabled to cryptographically bind security tokens to the TLS layer, preventing token export and replay attacks. To protect privacy, the Token Binding identifiers are only conveyed over TLS and can be reset by the user at any time. Transport Layer Security (TLS) Extension for Token Binding Protocol Negotiation. Extension for the negotiation of Token Binding protocol version and key parameters. Token Binding over HTTP. A collection of mechanisms that allow HTTP servers to cryptographically bind security tokens (such as cookies and OAuth tokens) to TLS connections. Token Binding for Transport Layer Security (TLS) Version 1.3 Connections. This companion document defines a backwards compatible way to negotiate Token Binding on TLS 1.3 connections. HTTPS Token Binding with TLS Terminating Reverse Proxies. Defines HTTP header fields that enable a TLS terminating reverse proxy to convey information to a backend server about the validated Token Binding Message received from a client, which enables that backend server to bind, or verify the binding of, cookies and other security tokens to the client's Token Binding key. This facilitates the reverse proxy and backend server functioning together as though they are a single logical server side deployment of HTTPS Token Binding. Related IETF draft standard: OAuth 2.0 Token Binding. Enables OAuth 2.0 implementations to apply Token Binding to Access Tokens, Authorization Codes, Refresh Tokens, JWT Authorization Grants, and JWT Client Authentication. This cryptographically binds these tokens to a client's Token Binding key pair, possession of which is proven on the TLS connections over which the tokens are intended to be used. This use of Token Binding protects these tokens from man-in-the-middle and token export and replay attacks. Related standards The use of TLS Token Binding allows for more robust web authentication. Several web authentication standards developed by standards bodies outside of IETF are adopting the draft standards. Draft OpenID Connect Token Bound Authentication 1.0. 
OpenID Connect (OIDC) is a simple identity layer on top of the OAuth 2.0 protocol. OIDC enables Clients to verify the identity of the End-User based on the authentication performed by an Authorization Server, as well as to obtain basic profile information about the End-User in an interoperable, REST-like manner. The OIDC Token Bound Authentication specification enables OIDC implementations to apply Token Binding to the OIDC ID Token. This cryptographically binds the ID Token to the TLS connection over which the authentication occurred. This use of Token Binding protects the authentication flow from man-in-the-middle and token export and replay attacks. W3C Proposed Recommendation for Web Authentication: An API for accessing Public Key Credentials. Web Authentication (WebAuthn), an interface for public-key authentication of users to web-based applications and services, supports Token Binding. References External links Token Binding at BrowserAuth.net Token Binding Presentation at Identiverse 2018 OAuth 2.0 Token Binding Blog Security Transport Layer Security Internet Standards
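The Token Binding article above describes the core client-side behavior: the user agent keeps one long-lived key pair per target server and proves possession of the private key on each TLS connection, so a stolen bearer token alone is no longer sufficient. The sketch below illustrates that idea in Python using the third-party cryptography package; the class, method names, and the placeholder ekm value standing in for TLS exported keying material are assumptions for illustration only, not the actual TokenBindingMessage structures defined in the IETF specifications.

```python
# Minimal sketch of per-origin key handling in the spirit of Token Binding.
# The real protocol signs TLS Exported Keying Material (EKM) inside a defined
# TokenBindingMessage; here "ekm" is just a placeholder byte string.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec


class TokenBindingClient:
    """Hypothetical helper that keeps one long-lived ECDSA key pair per origin."""

    def __init__(self) -> None:
        self._keys: dict[str, ec.EllipticCurvePrivateKey] = {}

    def _key_for(self, origin: str) -> ec.EllipticCurvePrivateKey:
        # Generate the per-server key pair described in the article, reusing it
        # across every later connection to the same origin.
        if origin not in self._keys:
            self._keys[origin] = ec.generate_private_key(ec.SECP256R1())
        return self._keys[origin]

    def prove_possession(self, origin: str, ekm: bytes) -> tuple[bytes, bytes]:
        """Return (public key bytes, signature over connection-specific material)."""
        key = self._key_for(origin)
        signature = key.sign(ekm, ec.ECDSA(hashes.SHA256()))
        public = key.public_key().public_bytes(
            encoding=serialization.Encoding.X962,
            format=serialization.PublicFormat.UncompressedPoint,
        )
        return public, signature


# The same binding key is reused across connections to one origin, so a token
# bound to it cannot simply be replayed by a client that lacks the private key.
client = TokenBindingClient()
pub1, sig1 = client.prove_possession("https://example.com", b"ekm-from-connection-1")
pub2, sig2 = client.prove_possession("https://example.com", b"ekm-from-connection-2")
assert pub1 == pub2  # one long-lived key, distinct per-connection proofs
```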
256913
https://en.wikipedia.org/wiki/Qualcomm
Qualcomm
Qualcomm () is an American multinational corporation headquartered in San Diego, California, and incorporated in Delaware. It creates semiconductors, software, and services related to wireless technology. It owns patents critical to the 5G, 4G, CDMA2000, TD-SCDMA and WCDMA mobile communications standards. Qualcomm was established in 1985 by Irwin M. Jacobs and six other co-founders. Its early research into CDMA wireless cell phone technology was funded by selling a two-way mobile digital satellite communications system known as Omnitracs. After a heated debate in the wireless industry, the 2G standard was adopted with Qualcomm's CDMA patents incorporated. Afterwards there was a series of legal disputes about pricing for licensing patents required by the standard. Over the years, Qualcomm has expanded into selling semiconductor products in a predominantly fabless manufacturing model. It also developed semiconductor components or software for vehicles, watches, laptops, wi-fi, smartphones, and other devices. History Early history Qualcomm was created in July 1985 by seven former Linkabit employees led by Irwin Jacobs. The company was named Qualcomm for "QUALity COMMunications". It started as a contract research and development center largely for government and defense projects. Qualcomm merged with Omninet in 1988 and raised $3.5 million in funding in order to produce the Omnitracs satellite communications system for trucking companies. Qualcomm grew from eight employees in 1986 to 620 employees in 1991, due to demand for Omnitracs. By 1989, Qualcomm had $32 million in revenue, 50 percent of which was from an Omnitracs contract with Schneider National. Omnitracs profits helped fund Qualcomm's research and development into code-division multiple access (CDMA) technologies for cell phone networks. 1990–2015 Qualcomm was operating at a loss in the 1990s due to its investment in CDMA research. To obtain funding, the company filed an initial public offering in September 1991 raising $68 million. An additional $486 million was raised in 1995 through the sale of 11.5 million more shares. The second funding round was done to raise money for the mass manufacturing of CDMA-based phones, base-stations, and equipment, after most US-based cellular networks announced they would adopt the CDMA standard. The company had $383 million in annual revenue in 1995 and $814 million by 1996. In 1998, Qualcomm was restructured, leading to a 700-employee layoff. Its base station and cell-phone manufacturing businesses were spun-off in order to focus on its higher-margin patents and chipset businesses. Since the base station division was losing $400M a year (having never sold another base station after making its 10th sale), profits skyrocketed in the following year, and Qualcomm was the fastest growing stock on the market with a 2,621 percent growth over one year. By 2000, Qualcomm had grown to 6,300 employees, $3.2 billion in revenues, and $670 million in profit. 39 percent of its sales were from CDMA technology, followed by licensing (22%), wireless (22%), and other products (17%). Around this time, Qualcomm established offices in Europe, Asia Pacific, and Latin America. By 2001, 65 percent of Qualcomm's revenues originated from outside the United States with 35 percent coming from South Korea. In 2005, Paul E. Jacobs, son of Qualcomm founder Irwin Jacobs, was appointed as Qualcomm's new CEO. 
Whereas Irwin Jacobs focused on CDMA patents, Paul Jacobs refocused much of Qualcomm's new research and development on projects related to the internet of things. In the same year, Qualcomm acquired Flarion Technologies, a developer of wireless broadband Orthogonal Frequency Division Multiple Access (OFDMA) technology. Qualcomm announced Steven Mollenkopf would succeed Paul Jacobs as CEO in December 2013. Mollenkopf said he would expand Qualcomm's focus to wireless technology for cars, wearable devices, and other new markets. 2015–present: NXP, Broadcom and NUVIA Qualcomm announced its intent to acquire NXP Semiconductors for $47 billion in October 2016. The deal was approved by U.S. antitrust regulators in April 2017, with some standard-essential patents excluded from the transaction in order to win that approval. As the NXP acquisition was ongoing, Broadcom made a $103 billion offer to acquire Qualcomm, and Qualcomm rejected the offer. Broadcom attempted a hostile takeover, and raised its offer, eventually to $121 billion. The potential Broadcom acquisition was investigated by the U.S. Committee on Foreign Investment and blocked by an executive order from U.S. President Donald Trump, citing national security concerns. Qualcomm's NXP acquisition then became a part of the 2018 China–United States trade war. U.S. President Donald Trump blocked China-based ZTE Corporation from buying American-made components, such as those from Qualcomm. The ZTE restriction was lifted after the two countries reached an agreement, but then Trump raised tariffs against Chinese goods. Qualcomm extended a tender offer to NXP at least 29 times pending Chinese approval, before abandoning the deal in July 2018. On January 6, 2021, Qualcomm appointed its president and chip division head Cristiano Amon as its new chief executive. On January 13, 2021, Qualcomm announced an agreement to acquire NUVIA for approximately $1.4 billion. NUVIA was a server CPU startup founded in early 2019 by ex-Apple and ex-Google architects. On March 16, 2021, Qualcomm announced the completion of its acquisition of NUVIA; the first products would be laptop CPUs, with sampling expected in the second half of 2022. Wireless CDMA 2G Early history In mid-1985, Qualcomm was hired by Hughes Aircraft to provide research and testing for a satellite network proposal to the Federal Communications Commission (FCC). The following year, Qualcomm filed its first CDMA patent (No. 4,901,307). This patent established Qualcomm's overall approach to CDMA and later became one of the most frequently cited technical documents in history. The project with the FCC was scrapped in 1988, when the FCC told all twelve vendors that submitted proposals to form a joint venture to create a single proposal. Qualcomm further developed the CDMA techniques for commercial use and submitted them to the Cellular Telephone Industries Association (CTIA) in 1989 as an alternative to the time-division multiple access (TDMA) standard for second-generation cell-phone networks. A few months later, CTIA officially rejected Qualcomm's CDMA standard in favor of the more established TDMA standard developed by Ericsson. At the time, CDMA wasn't considered viable in high-volume commercial applications due to the near-far field effect, whereby phones closer to a cell tower with a stronger signal drown out callers that are further away and have a weaker signal. Qualcomm filed three additional patents in 1989.
They were for: a power management system that adjusts the signal strength of each call to adjust for the near-far field effect; a "soft handoff" methodology for transferring callers from one cell-tower to the next; and a variable rate encoder, which reduces bandwidth usage when a caller isn't speaking. Holy wars of wireless After the FCC said carriers were allowed to implement standards not approved by the CTIA, Qualcomm began pitching its CDMA technology directly to carriers. This started what is often referred to as "the Holy Wars of Wireless", an often heated debate about whether TDMA or CDMA was better suited for 2G networks. Qualcomm-supported CDMA standards eventually unseated TDMA as the more popular 2G standard in North America, due to its network capacity. Qualcomm conducted CDMA test demonstrations in 1989 in San Diego and in 1990 in New York City. In 1990, Nynex Mobile Communications and Ameritech Mobile Communications were the first carriers to implement CDMA networks instead of TDMA. Motorola, a prior TDMA advocate, conducted CDMA test implementations in Hong Kong and Los Angeles. This was followed by a $2 million trial network in San Diego for Airtouch Communications. In November 1991, 14 carriers and manufacturers conducted large-scale CDMA field tests. Results from the test implementations convinced CTIA to re-open discussions regarding CDMA and the 2G standard. CTIA changed its position and supported CDMA in 1993, adopting Qualcomm's CDMA as the IS-95A standard, also known as cdmaOne. This prompted widespread criticism in forums, trade press, and conventions from businesses that had already invested heavily in the TDMA standard and from TDMA's developer, Ericsson. The first commercial-scale CDMA cellular network was created in Hong Kong in 1995. On July 21, 1995, Primeco, which represented a consortium of Cox Communications, Comcast, Sprint and others, announced it was going to implement CDMA-based services on networks in 15 states. By this time, 11 out of 14 of the world's largest networks supported CDMA. By 1997 CDMA had 57 percent of the US market, whereas 14 percent of the market was on TDMA. International In 1991, Qualcomm and the Electronics and Telecommunications Research Institute (ETRI) agreed to jointly develop CDMA technologies for the Korean telecommunications infrastructure. A CDMA standard was adopted as the national wireless standard in Korea in May 1993 with commercial CDMA networks being launched in 1996. CDMA networks were also launched in Argentina, Brazil, Mexico, India, and Venezuela. Qualcomm entered the Russian and Latin American markets in 2005. By 2007, Qualcomm's technology was in cell phone networks in more than 105 countries. Qualcomm also formed licensing agreements with Nokia in Europe, Nortel in Canada, and with Matsushita and Mitsubishi in Japan. Qualcomm entered the Chinese market through a partnership with China Unicom in 2000, which launched the first CDMA-based network in China in 2003. China became a major market for Qualcomm's semiconductor products, representing more than fifty percent of its revenues, but also the source of many legal disputes regarding Qualcomm's intellectual property. By 2007, $500 million of Qualcomm's annual revenues were coming from Korean manufacturers. Manufacturing Initially, Qualcomm's manufacturing operations were limited to a small ASIC design and manufacturing team to support the Omnitracs system. 
Qualcomm was forced to expand into manufacturing in the 1990s in order to produce the hardware that carriers needed to implement CDMA networks using Qualcomm's intellectual property. Qualcomm's first large manufacturing project was in May 1993, in a deal to provide 36,000 CDMA phones to US West. For a time, Qualcomm experienced delays and other manufacturing problems, because it was inexperienced with mass manufacturing. In 1994, Qualcomm partnered with Northern Telecom and formed a joint partnership with Sony, in order to leverage their manufacturing expertise. Nokia, Samsung and Motorola introduced their own CDMA phones in 1997. Qualcomm's manufacturing business was losing money due to large capital equipment costs and declining prices caused by competition. Also, in March 1997, after Qualcomm introduced its Q phone, Motorola initiated a lawsuit (settled out of court in 2000) for allegedly copying the design of its StarTAC phone. In December 1999, Qualcomm sold its manufacturing interests to Kyocera Corporation, a Japanese CDMA manufacturer and Qualcomm licensee. Qualcomm's infrastructure division was sold to competitor Ericsson in 1999 as part of an out-of-court settlement of a CDMA patent dispute that started in 1996. The sale of the infrastructure division marked the beginning of an increase in Qualcomm's stock price and stronger financial performance, but many of the 1,200 employees involved were discontented working for a competitor and losing their stock options. This led to a protracted legal dispute regarding employee stock options, resulting in $74 million in settlements by 2005. 3G 3G standards were expected to force prior TDMA carriers onto CDMA, in order to meet 3G bandwidth goals. The two largest GSM manufacturers, Nokia and Ericsson, advocated for a greater role for GSM, in order to negotiate lower royalty prices from Qualcomm. In 1998, the European Telecommunications Standards Institute (ETSI) voted in support of the WCDMA standard, which relied less on Qualcomm's CDMA patents. Qualcomm responded by refusing to license its intellectual property for the standard. The Telecommunications Industry Association (TIA) and the Third Generation Partnership Project 2 advocated for a competing CDMA-2000 standard developed primarily by Qualcomm. American and European politicians advocated for the CDMA-2000 and WCDMA standards respectively. The ITU said it would exclude Qualcomm's CDMA technology from the 3G standards entirely if a patent dispute over the technology with Ericsson was not resolved. The two reached an agreement out-of-court in 1999, one month before a deadline set by the ITU. Both companies agreed to cross-license their technology to each other and to work together on 3G standards. A compromise was eventually reached whereby the ITU would initially endorse three standards: CDMA2000 1X, WCDMA and TD-SCDMA. Qualcomm agreed to license its CDMA patents for variants such as WCDMA. There were 240 million CDMA 3G subscribers by 2004 and 143 carriers in 67 countries by 2005. Qualcomm claimed to own 38 percent of WCDMA's essential patents, whereas European GSM interests sponsored a research paper alleging Qualcomm only owned 19 percent. Qualcomm consolidated its interests in telecommunications carriers, such as Cricket Communications and Pegaso, into a holding company, Leap Wireless, in 1998. Leap was spun off later that year and sold to AT&T in 2014. 4G Qualcomm initially advocated for the CDMA-based Ultra Mobile Broadband (UMB) standard for fourth generation wireless networks.
UMB wasn't backwards compatible with prior CDMA networks and didn't operate as well in narrow bandwidths as the LTE (long-term evolution) standard. No cellular networks adopted UMB. Qualcomm halted development of UMB in 2005 and decided to support the LTE standard, even though it didn't rely as heavily on Qualcomm patents. Then, Qualcomm purchased LTE-related patents through acquisitions. By 2012, Qualcomm held 81 seminal patents used in 4G LTE standards, or 12.46 percent. Qualcomm also became more focused on using its intellectual property to manufacture semiconductors in a fabless manufacturing model. A VLSI Technology Organization division was founded in 2004, followed by a DFX group in 2006, which did more of the manufacturing design in-house. Qualcomm announced it was developing the Scorpion central processing unit (CPU) for mobile devices in November 2005. This was followed by the first shipments of the Snapdragon system-on-chip product, which includes a CPU, GPS, graphics processing unit, camera support and other software and semiconductors, in November 2007. The Gobi family of modems for portable devices was released in 2008. Gobi modems were embedded in many laptop brands and Snapdragon systems-on-chip were embedded into most Android devices. Qualcomm won a government auction in India in 2010 for $1 billion in spectrum and licenses from which to offer broadband services. It formed four joint ventures with Indian holding companies for this purpose. A 49 percent stake in the holding companies was acquired by Bharti in May 2012 and the remainder was acquired in October 2012 by AT&T. 5G According to Fortune Magazine, Qualcomm has been developing technologies for future 5G standards in three areas: radios that would use bandwidth from any network they have access to, creating larger ranges of spectrum by combining smaller pieces, and a set of services for internet of things applications. Qualcomm's first 5G modem chip was announced in October 2016 and a prototype was demonstrated in October 2017. Qualcomm's first 5G antennas were announced in July 2018. As of 2018, Qualcomm had partnerships with 19 mobile device manufacturers and 18 carriers to commercialize 5G technology. By late 2019, several phones were being sold with Qualcomm's 5G technology incorporated. Software and other technology Early software Qualcomm acquired an email application called Eudora in 1991. By 1996, Eudora was installed on 63 percent of PCs. Microsoft Outlook eclipsed Eudora, since it was provided for free by default on Windows-based machines. By 2003 Qualcomm's Eudora was the most popular alternative to Microsoft Outlook, but still had only a five percent share of the market. Software development for Eudora was retired in 2006. In 2001, Qualcomm introduced Brew, a smartphone app development service with APIs to access contacts, billing, app stores, or multimedia on the phone. South Korean carrier KTFreeTel was the first to adopt the Brew system in November 2001, followed by Verizon in March 2002 for its "Get it Now" program. There were 2.5 million Brew users by the end of 2002 and 73 million in 2003. Other technology In 2004, Qualcomm created a MediaFLO subsidiary to bring its FLO (forward link only) specification to market. Qualcomm built an $800 million MediaFLO network of cell towers to supplement carrier networks with a network designed specifically for multimedia.
In comparison to cellular towers that provide two-way communications with each cell phone individually, MediaFLO towers would send multimedia content to mobile phones in a one-way broadcast. Qualcomm also sold FLO-based semiconductors and licenses. Qualcomm created the FLO Forum standards group with 15 industry participants in July 2005. Verizon was the first carrier to partner with MediaFLO, in December 2005, for Verizon Wireless' V Cast TV, which was followed by the AT&T Mobile TV service a couple of months later. The MediaFLO service was launched on Super Bowl Sunday in 2007. Despite the interest the service generated among carriers, it was unpopular among consumers. The service required users to pay for a subscription and have phones that were equipped with special semiconductors. The service was discontinued in 2011 and its spectrum was sold to AT&T for $1.93 billion. Qualcomm rebooted the effort in 2013 with LTE Broadcast, which uses pre-existing cell towers to broadcast select content locally on a dedicated spectrum, such as during major sporting events. Based on technology acquired from Iridigm in 2004 for $170 million, Qualcomm began commercializing Mirasol displays in 2007; the line was expanded to eight products in 2008. Mirasol uses natural light shining on a screen to provide lighting for the display, rather than a backlight, in order to reduce power consumption. The amount of space between the surface of the display and a mirror within a 10 micron-wide "interferometric modulator" determines the color of the reflected light. Mirasol was eventually closed down after a 2013 attempt to revive it in Toq watches. In June 2011, Qualcomm introduced AllJoyn, a wireless standard for communicating between devices like cell phones, televisions, air-conditioners, and refrigerators. The AllJoyn technology was donated to the Linux Foundation in December 2013. Qualcomm and the Linux Foundation then formed the AllSeen Alliance to administer the standard, and Qualcomm developed products that used the AllJoyn standard. In December 2011, Qualcomm formed a healthcare subsidiary called Qualcomm Life. Simultaneously, the subsidiary released a cloud-based service for managing clinical data called 2net and the Qualcomm Life Fund, which invests in wireless healthcare technology companies. The subsidiary doubled its employee count by acquiring HealthyCircles Inc., a healthcare IT company, the following May. Qualcomm Life was later sold to a private equity firm, Francisco Partners, in 2019. Developments since 2016 In 2016, Qualcomm developed its first beta processor chip for servers and PCs called "Server Development Platform" and sent samples for testing. In January 2017, a second generation data center and PC server chip called Centriq 2400 was released. PC Magazine said the release was "historic" for Qualcomm, because it was a new market segment for the company. Qualcomm also created a Qualcomm Datacenter Technologies subsidiary to focus on the PC and server market. In 2017, Qualcomm introduced embedded technology for 3D cameras intended for augmented reality apps. As of 2017, Qualcomm was also developing and demonstrating laptop processors and other parts. In 2000, Qualcomm formed a joint venture with Ford called Wingcast, which created telematics equipment for cars, but was unsuccessful and closed down two years later. Qualcomm acquired the wireless electric car charging company HaloIPT in November 2011 and later sold the company to WiTricity in February 2019.
Qualcomm also started introducing Snapdragon systems-on-chip, Gobi modems, and other software and semiconductor products for self-driving cars and modern in-car computers. In 2020, Qualcomm hired Baidu veteran Nan Zhou to head its push into AI. Patents and patent disputes In 2021, the World Intellectual Property Organization's (WIPO) annual World Intellectual Property Indicators report ranked Qualcomm 5th in the world by number of patent applications published under the PCT System, with 2,173 applications published during 2020. This was down from its previous ranking of 4th in 2019, with 2,127 applications. In 2017, Qualcomm owned more than 130,000 current or pending patents, an increase from the early 2000s, when it had more than 1,000 patents. As the sole early investor in CDMA research and development, Qualcomm holds a patent portfolio that contains much of the intellectual property essential to CDMA technologies. Since many of Qualcomm's patents are part of an industry standard, the company has agreed to license those patents under "fair, reasonable, and non-discriminatory" terms. Qualcomm's royalties come out to about 5% or $30 per mobile device. According to Fortune Magazine, this is about 5–10 times more than what is typically charged by other patent-holders. Qualcomm says its patents are more expensive because they are more important and its pricing is within the range of common licensing practices. However, competitors, clients, and regulators often allege Qualcomm charges unreasonable rates or engages in unfair competition over patents required by industry standards. Broadcom In 2005, Broadcom and Qualcomm were unable to reach an agreement on cross-licensing their intellectual property, and Broadcom sued Qualcomm alleging it was breaching ten Broadcom patents. Broadcom asked the International Trade Commission to prohibit importing the affected technology. A separate lawsuit alleged Qualcomm was threatening to withhold UMTS patent licenses from manufacturers that bought their semiconductors from competitors, in violation of the standards agreement. Qualcomm alleged Broadcom was using litigation as a negotiation tactic and that it would respond with its own lawsuits. Qualcomm sued Broadcom, alleging it was using seven Qualcomm patents without permission. By late 2006, more than 20 lawsuits had been filed between the two parties and both sides claimed to be winning. In September 2006, a New Jersey court judge ruled that Qualcomm's patent monopoly was an inherent aspect of creating industry standards and that Qualcomm's pricing practices were lawful. In May 2007, a jury ordered Qualcomm to pay Broadcom $19.6 million for infringing on three Broadcom patents. In June 2007, the ITC ruled that Qualcomm had infringed on at least one Broadcom patent and banned corresponding imports. Qualcomm and Broadcom reached a settlement in April 2009, resulting in a cross-licensing agreement, a dismissal of all litigation, and Qualcomm paying $891 million over four years. During the litigation, Qualcomm claimed it had never participated in the JVT standards-setting process. However, an engineer's testimony led to the discovery of 21 JVT-related emails that Qualcomm lawyers had withheld from the court, along with 200,000 pages of JVT-related documents. Qualcomm's lawyers said the evidence was overlooked by accident, whereas the judge said it was gross misconduct. Qualcomm was fined $8.5 million for legal misconduct.
On appeal, the court held that Qualcomm could only enforce the related patents against non-JVT members, based on the agreements signed to participate in JVT. Nokia and Project Stockholm Six large telecommunications companies led by Nokia filed a complaint against Qualcomm with the European Commission's antitrust division in October 2005. They alleged Qualcomm was abusing its market position to charge unreasonable rates for its patents. Qualcomm alleged the six companies were colluding under the code name Project Stockholm in a legal strategy to negotiate lower rates. These events led to a protracted legal dispute. Qualcomm filed a series of patent-infringement lawsuits against Nokia in Europe, Asia, and the US, as well as with the ITC. The parties initiated more than one dozen lawsuits against one another. Several companies filed antitrust complaints against Qualcomm with the Korean Fair Trade Commission, which initiated an investigation into Qualcomm's practices in December 2006. The dispute between Qualcomm and Nokia escalated when their licensing agreement ended in April 2007. In February 2008, the two parties agreed to halt any new litigation until an initial ruling was made on the first lawsuit in Delaware. Nokia won three consecutive court rulings from the German Federal Patent Court, the High Court in the United Kingdom, and the International Trade Commission respectively; each found that Nokia was not infringing on Qualcomm's patents. In July 2008, Nokia and Qualcomm reached an out-of-court settlement that ended the dispute and created a 15-year cross-licensing agreement. Recent disputes ParkerVision filed a lawsuit against Qualcomm in July 2011 alleging that it infringed on seven ParkerVision patents related to converting electromagnetic radio signals to lower frequencies. A $173 million jury verdict against Qualcomm was overturned by a judge. In November 2013, the China National Development and Reform Commission initiated an anti-trust investigation into Qualcomm's licensing division. The Securities and Exchange Commission also started an investigation into whether Qualcomm breached anti-bribery laws through its activities in China. The Chinese regulator raided Qualcomm's Chinese offices in August 2013. The dispute was settled in 2015 for $975 million. In late 2016, the Korea Fair Trade Commission alleged Qualcomm had abused a "dominant market position" to charge cell phone manufacturers excessive patent royalties and to limit sales to companies selling competing semiconductor products. The regulator fined Qualcomm $854 million, which the company said it would appeal. In April 2017, Qualcomm paid an $814.9 million settlement to BlackBerry as a refund of prepaid licensing fees. In October 2017, Taiwan's Fair Trade Commission fined Qualcomm another $773 million. In late 2018, Qualcomm settled with Taiwan, paying $93 million in fines and promising to spend $700 million in the local Taiwanese economy. Apple In January 2017, the Federal Trade Commission (FTC) initiated an investigation into allegations that Qualcomm charged excessive royalties for patents that are "essential to industry standards". That same year, Apple initiated a $1 billion lawsuit against Qualcomm in the U.S., alleging Qualcomm overcharged for semiconductors and failed to pay $1 billion in rebates. Apple also filed lawsuits in China and the United Kingdom.
Apple alleged Qualcomm was engaging in unfair competition by selling industry-standard patents at a discount rate in exchange for an exclusivity agreement for its semiconductor products. An FTC report reached similar conclusions. Qualcomm filed counter-claims alleging Apple made false and misleading statements to induce regulators to sue Qualcomm. Qualcomm also sued Apple's suppliers for allegedly not paying Qualcomm's patent royalties, after Apple stopped reimbursing them for patent fees. Qualcomm petitioned the International Trade Commission to prohibit imports of iPhones, on the premise that they used Qualcomm patents for which royalties were no longer being paid after Apple's suppliers stopped paying. In August 2017, the International Trade Commission responded to Qualcomm's complaints by starting an investigation of Apple's use of Qualcomm patents without royalties. Qualcomm also filed suit against Apple in China for alleged patent infringement in October 2017. The following month, Apple counter-sued, alleging Qualcomm was using patented Apple technology in its Android components. In December 2018, Chinese and German courts held that Apple infringed on Qualcomm patents and banned sales of certain iPhones. Some of the asserted patents were held to be invalid, while others were found to be infringed by Apple. In April 2019, Apple and Qualcomm reached an agreement to cease all litigation and sign a six-year licensing agreement. The settlement included a one-time payment from Apple of about $4.5 billion to $4.7 billion. Terms of the six-year licensing agreement were not disclosed, but the licensing fees were expected to increase revenues by $2 per share. In January 2018, the European Commission fined Qualcomm $1.2 billion for an arrangement to use Qualcomm chips exclusively in Apple's mobile products. Qualcomm is appealing the decision. Federal Trade Commission Stemming from the investigation that led to the Apple lawsuit, the FTC filed suit against Qualcomm in 2017, alleging it engaged in anticompetitive behavior based on its monopoly on wireless baseband technology. The complaints filed by the FTC included that Qualcomm charged "disproportionately high" patent royalty rates to phone manufacturers and refused to sell them baseband chips if they did not license the patents, a policy referred to as "no license, no chips"; that Qualcomm refused to license the patents to other chip manufacturers so as to maintain its monopoly; and that Qualcomm purposely offered Apple a lower license cost in exchange for using its chips exclusively, locking other competitors as well as wireless service providers out of Apple's lucrative market. The trial started in January 2019 and was heard by Judge Lucy Koh of the federal Northern District of California court, which had also overseen the Apple case. Judge Koh ruled in May 2019 against Qualcomm, asserting that Qualcomm's practices violated antitrust law. As part of the ruling, Qualcomm was forced to stop its "no license, no chips" bundling with phone manufacturers, and was required to license its patents to other chip manufacturers. As Qualcomm had expressed its intent to appeal, a panel of judges on the Ninth Circuit Court of Appeals stayed the orders pending the appeal. Qualcomm appealed to the Ninth Circuit, which reversed the decision in August 2020. The Ninth Circuit determined that Judge Koh's decision strayed beyond the scope of antitrust law and that whether Qualcomm's patent licensing may be considered reasonable and non-discriminatory does not fall within the scope of antitrust law, but rather is a matter of contract and patent law.
The court concluded that the FTC failed to meet its burden of proof and that Qualcomm's business practices were better characterized as "hypercompetitive" rather than "anticompetitive". Operations and market share Qualcomm develops software, semiconductor designs, patented intellectual property, development tools, and services, but does not manufacture physical products like phones or infrastructure equipment. The company's revenues are derived from licensing fees for use of its intellectual property, from sales of semiconductor products based on its designs, and from other wireless hardware, software, or services. Qualcomm divides its business into three categories: QCT (Qualcomm CDMA Technologies), CDMA wireless products, 80% of revenue; QTL (Qualcomm Technology Licensing), licensing, 19% of revenue; and QSI (Qualcomm Strategic Initiatives), investing in other tech companies, less than 1% of revenue. Qualcomm is a predominantly fabless provider of semiconductor products for wireless communications and data transfer in portable devices. According to the analyst firm Strategy Analytics, Qualcomm has a 39 percent market share for smartphone application processors and a 50 percent market share of baseband processors. Its share of the market for application processors on tablets is 18 percent. According to analyst firm ABI Research, Qualcomm has a 65 percent market share in LTE baseband. Qualcomm also provides licenses to use its patents, many of which are critical to the CDMA2000, TD-SCDMA and WCDMA wireless standards. The company is estimated to earn $20 for every smartphone sold. Qualcomm is the largest public company in San Diego. It has a philanthropic arm called The Qualcomm Foundation. A January 2013 lawsuit resulted in Qualcomm voluntarily adopting a policy of disclosing its political contributions. According to The New York Times, Qualcomm's new disclosure policy was praised by transparency advocates. See also List of Qualcomm Snapdragon systems-on-chip Adreno Qualcomm Hexagon References External links 1985 establishments in California Companies listed on the Nasdaq Display technology companies Electronics companies established in 1985 Fabless semiconductor companies Graphics hardware companies HSA Foundation Information technology companies of the United States Manufacturing companies based in San Diego Multinational companies headquartered in the United States Networking hardware companies Semiconductor companies of the United States Technology companies based in San Diego Technology companies established in 1985 Telecommunications companies established in 1985 American companies established in 1985 Telecommunications companies of the United States Telecommunications equipment vendors 1991 initial public offerings
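As a rough worked example of the royalty figures quoted in the Qualcomm article above (a rate of about 5% of the device price, roughly $30 per device, and an estimated $20 per smartphone), the arithmetic can be sketched as follows; the handset prices used below are hypothetical inputs chosen only to reproduce that range, not figures taken from the article.

```python
# Illustrative per-device royalty arithmetic based on the ~5% rate cited above.
# The handset prices are hypothetical; only the rate and the $20-$30 per-device
# range come from the article text.
ROYALTY_RATE = 0.05  # approximately 5% of the device's selling price


def estimated_royalty(device_price_usd: float) -> float:
    """Return the estimated per-device royalty at a ~5% rate."""
    return ROYALTY_RATE * device_price_usd


for price in (400, 600):  # hypothetical handset prices in US dollars
    print(f"${price} handset -> about ${estimated_royalty(price):.0f} royalty")
# A $600 handset at 5% yields about $30, matching the per-device figure above;
# a $400 handset yields about $20, in line with the per-smartphone estimate.
```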
218204
https://en.wikipedia.org/wiki/IBM%20PS/2
IBM PS/2
The Personal System/2 or PS/2 is IBM's second generation of personal computers. Released in 1987, it officially replaced the IBM PC, XT, AT, and PC Convertible in IBM's lineup. Many of the PS/2's innovations, such as the 16550 UART (serial port), 1440 KB 3.5-inch floppy disk format, 72-pin SIMMs, the PS/2 port, and the VGA video standard, went on to become standards in the broader PC market. The PS/2 line was created by IBM partly in an attempt to recapture control of the PC market by introducing the advanced yet proprietary Micro Channel architecture (MCA) on higher-end models. These models were in the strange position of being incompatible with the IBM-compatible hardware standards previously established by IBM and adopted in the PC industry. However, IBM's initial PS/2 computers were popular with target market corporate buyers, and by September 1988 IBM reported that it had sold 3 million PS/2 machines. This was only 18 months after the new range had been introduced. Most major PC manufacturers balked at IBM's licensing terms for MCA-compatible hardware, particularly the per-machine royalties. In 1992, Macworld stated that "IBM lost control of its own market and became a minor player with its own technology." The OS/2 operating system was announced at the same time as the PS/2 line and was intended to be the primary operating system for models with Intel 80286 or later processors. However, at the time of the first shipments, only IBM PC DOS 3.3 was available. OS/2 1.0 (text-mode only) and Microsoft's Windows 2.0 became available several months later. IBM also released AIX PS/2, a UNIX operating system for PS/2 models with Intel 386 or later processors. Predecessors 1981 IBM PC 1983 IBM PC XT 1984 IBM Portable Personal Computer 1984 IBM PCjr 1984 IBM PC AT 1986 IBM PC Convertible 1986 IBM PC XT 286 Technology IBM's PS/2 was designed to remain software compatible with their PC/AT/XT line of computers upon which the large PC clone market was built, but the hardware was quite different. PS/2 had two BIOSes: one named ABIOS (Advanced BIOS) which provided a new protected mode interface and was used by OS/2, and CBIOS (Compatible BIOS) which was included to be software compatible with the PC/AT/XT. CBIOS was so compatible that it even included Cassette BASIC. While IBM did not publish the BIOS source code, it did promise to publish BIOS entry points. Micro Channel Architecture With certain models to the IBM PS/2 line, Micro Channel Architecture (MCA) was also introduced. MCA was conceptually similar to the channel architecture of the IBM System/360 mainframes. MCA was technically superior to ISA and allowed for higher speed communications within the system. The majority of MCA's features would be seen in later buses with the exception of: streaming-data procedure, channel-check reporting, error logging and internal bus-level video pass-through for devices like the IBM 8514. Transfer speeds were on par with the much later PCI standard. MCA allowed one-to-one, card to card, and multi-card to processor simultaneous transaction management which is a feature of the PCI-X bus format. Bus mastering capability, bus arbitration, and a primitive form of plug-and-play management of hardware were all benefits of MCA. Gilbert Held in his 2000 book Server Management observes: "MCA used an early (and user-hostile) version of what we know now as 'Plug-N′-Play', requiring a special setup disk for each machine and each card." MCA never gained wide acceptance outside of the PS/2. 
When setting up a card with its configuration disk, all choices for interrupts and other settings were made automatically: the PC read the old configuration from the floppy disk, made the necessary changes, then recorded the new configuration back to the floppy disk. This meant that the user had to keep that same floppy disk matched to that particular PC. For a small organization with a few PCs, this was annoying, but less expensive and time consuming than bringing in a PC technician to do installation. But for large organizations with hundreds or even thousands of PCs, permanently matching each PC with its own floppy disk was a logistical nightmare. Without the original (and correctly updated) floppy disk, no changes could be made to the PC's cards. In addition to the technical setup, royalties were legally required for each MCA-compatible machine sold. There was nothing unique in IBM insisting on payment of royalties on the use of its patents applied to Micro Channel-based machines. Up until that time, some companies had failed to pay IBM for the use of its patents on the earlier generation of Personal Computer.
Keyboard/mouse
Layout
The PS/2 IBM Model M keyboard used the same 101-key layout as the previous IBM PC/AT Extended keyboard, itself derived from the original IBM PC keyboard. European variants had 102 keys with the addition of an extra key to the right of the left Shift key.
Interface
PS/2 systems introduced a new specification for the keyboard and mouse interfaces, which are still in use today (though increasingly supplanted by USB devices) and are thus called "PS/2" interfaces. The PS/2 keyboard interface, inspired by Apple's ADB interface, was electronically identical to the long-established AT interface, but the cable connector was changed from the 5-pin DIN connector to the smaller 6-pin mini-DIN interface. The same connector and a similar synchronous serial interface was used for the PS/2 mouse port. The initial desktop Model 50 and Model 70 also featured a new cableless internal design, based on use of interposer circuit boards to link the internal drives to the planar (motherboard). These machines could also be largely disassembled and reassembled for service without tools. Additionally, the PS/2 introduced a new software data area known as the Extended BIOS Data Area (EBDA). Its primary use was to add a new buffer area for the dedicated mouse port. This also required making a change to the "traditional" BIOS Data Area (BDA), which was then required to point to the base address of the EBDA. Another new PS/2 innovation was the introduction of bidirectional parallel ports, which in addition to their traditional use for connecting a printer could now function as a high-speed data transfer interface. This allowed the use of new hardware such as parallel port scanners and CD-ROM drives, and also enhanced the capabilities of printers by allowing them to communicate with the host PC and send back signals instead of simply being a passive output device.
Graphics
Most of the initial range of PS/2 models were equipped with a new frame buffer known as the Video Graphics Array, or VGA for short. This effectively replaced the previous EGA standard. VGA increased graphics memory to 256 KB and provided for resolutions of 640×480 with 16 colors, and 320×200 with 256 colors. VGA also provided a palette of 262,144 colors (as opposed to the EGA palette of 64 colors). The IBM 8514 and later XGA computer display standards were also introduced on the PS/2 line. 
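As a rough check on the figures just given, the following short Python sketch works out how much of the 256 KB of VGA memory each of the two standard modes needs. The mode list and the 256 KB figure come from the text above; the bits-per-pixel values simply follow from the stated color counts (16 colors = 4 bits, 256 colors = 8 bits).

# Rough arithmetic for the VGA modes described above.
VGA_MEMORY_BYTES = 256 * 1024  # 256 KB of video memory

modes = {
    "640x480 in 16 colors":  (640, 480, 4),   # 4 bits per pixel
    "320x200 in 256 colors": (320, 200, 8),   # 8 bits per pixel
}

for name, (width, height, bits_per_pixel) in modes.items():
    needed = width * height * bits_per_pixel // 8
    verdict = "fits within" if needed <= VGA_MEMORY_BYTES else "exceeds"
    print(f"{name}: {needed:,} bytes, which {verdict} the 256 KB of video memory")

Both modes fit comfortably, which is consistent with a single 256 KB frame buffer serving the whole initial VGA mode set.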
Key monitors and their maximum resolutions: 8504: 12″, 640×480, 60 Hz non-interlaced, 1991, monochrome 8507: 19″, 1024×768, 43.5 Hz interlaced, 1988, monochrome 8511: 14″, 640×480, 60 Hz non-interlaced, 1987 8512: 14″, 640×480, 60 Hz non-interlaced, 1987 8513: 12″, 640×480, 60 Hz non-interlaced, 1987 8514: 16″, 1024×768, 43.5 Hz interlaced, 1987 8515: 14″, 1024×768, 43.5 Hz interlaced, 1991 8516: 14″, 1024×768, 43.5 Hz interlaced, 1991 8518: 14″, 640×480, 75 Hz non-interlaced, 1992 9515: 14″, 1024×768, 43.5 Hz interlaced, 1992 9517: 16″, 1280×1024, 53 Hz interlaced, 1991 9518: 14″, 640×480, non-interlaced, 1992 38F4737: 10", 640×480, non-interlaced, 1989, amber monochrome plasma screen; this display was exclusive to models P70 and P75 In truth, all "XGA" 1024×768 monitors are multimode, as XGA works as an add-on card to a built-in VGA and transparently passes-thru the VGA signal when not operating in a high resolution mode. All of the listed 85xx displays can therefore sync 640×480 at 60 Hz (or 720×400 at 70 Hz) in addition to any higher mode they may also be capable of. This however is not true of the 95xx models (and some unlisted 85xx's), which are specialist workstation displays designed for use with the XGA-2 or Image Adapter/A cards, and whose fixed frequencies all exceed that of basic VGA – the lowest of their commonly available modes instead being 640×480 at 75 Hz, if not something much higher still. It is also worth noting that these were still merely dual- or "multiple-frequency" monitors, not variable-frequency (also known as multisync); in particular, despite running happily at 640×480/720×400 and 1024×768, an (e.g.) 8514 cannot sync the otherwise common intermediate 800×600 "SVGA" resolution, even at the relatively low 50 to 56 Hz refresh rates initially used. Although the design of these adapters did not become an industry standard as VGA did, their 1024×768 pixel resolution was subsequently widely adopted as a standard by other manufacturers, and "XGA" became a synonym for this screen resolution. The lone exception were the bottom-rung 8086-based Model 25 and 30, which had a cut-down version of VGA referred to as MCGA; the 286 models came with VGA. This supported CGA graphics modes, VGA 320x200x256 and 640x480x2 mode, but not EGA or color 640x480. VGA video connector All of the new PS/2 graphics systems (whether MCGA, VGA, 8514, or later XGA) used a 15-pin D-sub connector for video out. This used analog RGB signals, rather than four or six digital color signals as on previous CGA and EGA monitors. The digital signals limited the color gamut to a fixed 16 or 64 color palette with no room for expansion. In contrast, any color depth (bits per primary) can be encoded into the analog RGB signals so the color gamut can be increased arbitrarily by using wider (more bits per sample) DACs and a more sensitive monitor. The connector was also compatible with analog grayscale displays. Unlike earlier systems (like MDA and Hercules) this was transparent to software, so all programs supporting the new standards could run unmodified whichever type of display was attached. (On the other hand, whether the display was color or monochrome was undetectable to software, so selection between application displays optimized for color or monochrome, in applications that supported both, required user intervention.) These grayscale displays were relatively inexpensive during the first few years the PS/2 was available, and they were very commonly purchased with lower-end models. 
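The trade-off described above between digital (TTL) and analog monitor interfaces can be illustrated with a few lines of arithmetic: with digital signalling the gamut is fixed by the number of color signal lines, while with analog RGB it is limited only by the width of the DACs on the adapter. The CGA/EGA line counts and the 6-bit VGA DAC depth are the standard values behind the palette sizes quoted earlier; the 8-bit case is included only to show how the gamut scales with wider DACs.

# Color gamut as a function of interface type, per the discussion above.

def digital_gamut(signal_lines: int) -> int:
    # Each TTL line is simply on or off, so the palette is 2**lines.
    return 2 ** signal_lines

def analog_gamut(bits_per_primary: int) -> int:
    # Three primaries (R, G, B), each resolved to 2**bits levels by the DAC.
    return (2 ** bits_per_primary) ** 3

print("CGA, 4 digital lines:", digital_gamut(4))        # 16 colors
print("EGA, 6 digital lines:", digital_gamut(6))        # 64 colors
print("VGA, 6-bit DACs:", analog_gamut(6))              # 262,144 colors
print("Later cards, 8-bit DACs:", analog_gamut(8))      # 16,777,216 colors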
The VGA connector became the de facto standard for connecting monitors and projectors on both PC and non-PC hardware over the course of the early 1990s, replacing a variety of earlier connectors. Storage Apple had first popularized the 3.5" floppy on the Macintosh line and IBM brought them to the PC in 1986 with the PC Convertible. In addition, they could be had as an optional feature on the XT and AT. The PS/2 line used entirely 3.5" drives which assisted in their quick adoption by the industry, although the lack of 5.25" drive bays in the computers created problems later on in the 1990s as they could not accommodate internal CD-ROM drives. In addition, the lack of built-in 5.25" floppy drives meant that PS/2 users could not immediately run the large body of existing IBM compatible software. However IBM made available optional external 5.25" drives, with internal adapters for the early PS/2 models, to enable data transfer. In the initial lineup, IBM used 720 KB double density (DD) capacity drives on the 8086-based models and 1440 KB high density (HD) on the 80286-based and higher models. By the end of the PS/2 line they had moved to a somewhat standardized capacity of 2880 KB. The PS/2 floppy drives lacked a capacity detector. 1440 KB floppies had a hole so that drives could identify them from 720 KB floppies, preventing users from formatting the smaller capacity disks to the higher capacity (doing so would work, but with a higher tendency of data loss). Clone manufacturers implemented the hole detection, but IBM did not. As a result of this a 720 KB floppy could be formatted to 1440 KB in a PS/2, but the resulting floppy would only be readable by a PS/2 machine. PS/2s primarily used Mitsubishi floppy drives and did not use a separate Molex power connector; the data cable also contained the power supply lines. As the hardware aged the drives often malfunctioned due to bad quality capacitors. The PS/2 used several different types of internal hard drives. Early models used MFM or ESDI drives. Some desktop models used combo power/data cables similar to the floppy drives. Later models used DBA ESDI or Parallel SCSI. Typically, desktop PS/2 models only permitted use of one hard drive inside the computer case. Additional storage could be attached externally using the optional SCSI interface. Memory Later PS/2 models introduced the 72-pin SIMM which became the de facto standard for RAM modules by the mid-1990s in mid-to-late 486 and nearly all Pentium desktop systems. 72-pin SIMMs were 32/36 bits wide and replaced the old 30-pin SIMM (8/9-bit) standard. The older SIMMs were much less convenient because they had to be installed in sets of two or four to match the width of the CPU's 16-bit (Intel 80286 and 80386SX) or 32-bit (80386 and 80486) data bus, and would have been extremely inconvenient to use in Pentium systems (which featured a 64-bit memory bus). 72-pin SIMMs were also made with greater capacities (starting at 1mb and ultimately reaching 128mb, vs 256kb to 16mb and more commonly no more than 4mb for 30-pin) and in a more finely graduated range (powers of 2, instead of powers of 4). Many PS/2 models also used proprietary IBM SIMMs and could not be fitted with commonly available types. However industry standard SIMMs could be modified to work in PS/2 machines if the SIMM-presence and SIMM-type detection bridges, or associated contacts, were correctly rewired. Models At launch, the PS/2 family comprised the Model 30, 50, 60 and 80; the Model 25 was launched a few months later. 
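Before turning to the individual models, a brief sketch of the memory-bank arithmetic behind the SIMM discussion above: a bank has to be as wide as the CPU's data bus, so narrower modules must be installed in matched groups. The module and bus widths below are the ones given in the text (parity bits ignored); the script itself is only an illustration.

# Modules needed per memory bank, per the bus widths discussed above.
SIMM_DATA_BITS = {"30-pin SIMM": 8, "72-pin SIMM": 32}   # ignoring parity
CPU_BUS_BITS = {"80286 / 386SX": 16, "386DX / 486": 32, "Pentium": 64}

for cpu, bus_bits in CPU_BUS_BITS.items():
    for simm, simm_bits in SIMM_DATA_BITS.items():
        per_bank = max(1, bus_bits // simm_bits)   # at least one module
        print(f"{cpu}: {per_bank} x {simm} per bank")

The output matches the text: 30-pin SIMMs go in pairs on 16-bit machines and in fours on 32-bit machines, would need eight per bank on a Pentium, while 72-pin SIMMs need only one or two.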
The PS/2 Models 25 and 30 (IBM 8525 and 8530, respectively) were the lowest-end models in the lineup and meant to replace the IBM PC and XT. Model 25s came with either an 8086 CPU running at 8 MHz, 512 KB of RAM, and 720 KB floppy disks, or 80286 CPU. The 8086s had ISA expansion slots and a built-in MCGA monitor, which could be either color or monochrome, while the 80286 models came with VGA monitor and ISA expansion slots. A cut-down Model M with no numeric keypad was standard, with the normal keyboard being an extra-cost option. There was a very rare later model called the PS/2 Model 25-SX which sported either a 16 MHz or 20 MHz 386 CPU, up to 12 MB of memory, IDE hard drive, VGA Monitor and 16 bit ISA slots making it the highest available model 25 available denoted by model number 8525-L41. The Model 30 had either an 8086 or 286 CPU and sported the full 101-key keyboard and standalone monitor along with three 8-bit ISA expansion slots. 8086 models had 720 KB floppies while 286 models had 1440 KB ones. Both the Model 25 and 30 could have an optional 20 MB ST-506 hard disk (which in the Model 25 took the place of the second floppy drive if so equipped and used a proprietary 3.5" form factor). 286-based Model 30s are otherwise a full AT-class machine and support up to 4 MB of RAM. Later ISA PS/2 models comprised the Model 30-286 (a Model 30 with an Intel 286 CPU), Model 35 (IBM 8535) and Model 40 (IBM 8540) with Intel 386SX or IBM 386SLC processors. The higher-numbered models (above 50) were equipped with the Micro Channel bus and mostly ESDI or SCSI hard drives (models 60-041 and 80-041 had MFM hard drives). PS/2 Models 50 (IBM 8550) and 60 (IBM 8560) used the Intel 286 processor, the PS/2 Models 70 (IBM 8570) and 80 used the 386DX, while the mid-range PS/2 Model 55SX (IBM 8555-081) and used the 16/32-bit 386SX processor. The Model 50 was revised to the Model 50Z still with 10MHz 80286 processor, but with memory run at zero wait state, and a switch to ESDI hard drives. Later Model 70 and 80 variants (B-xx) also used 25 MHz Intel 486 processors, in a complex called the Power Platform. The PS/2 Models 90 (IBM 8590/9590) and 95 (IBM 8595/9595/9595A) used Processor Complex daughterboards holding the CPU, memory controller, MCA interface, and other system components. The available Processor Complex options ranged from the 20 MHz Intel 486 to the 90 MHz Pentium and were fully interchangeable. The IBM PC Server 500, which has a motherboard identical to the 9595A, also uses Processor Complexes. Other later Micro Channel PS/2 models included the Model 65SX with a 16 MHz 386SX; various Model 53 (IBM 9553), 56 (IBM 8556) and 57 (IBM 8557) variants with 386SX, 386SLC or 486SLC2 processors; the Models 76 and 77 (IBM 9576/9577) with 486SX or 486DX2 processors respectively; and the 486-based Model 85 (IBM 9585). The IBM PS/2E (IBM 9533) was the first Energy Star compliant personal computer. It had a 50 MHz IBM 486SLC processor, an ISA bus, four PC card slots, and an IDE hard drive interface. The environmentally friendly PC borrowed many components from the ThinkPad line and was composed of recycled plastics, designed to be easily recycled at the end of its life, and used very little power. The IBM PS/2 Server 195 and 295 (IBM 8600) were 486-based dual-bus MCA network servers supporting asymmetric multiprocessing, designed by Parallan Computer Inc. The IBM PC Server 720 (IBM 8642) was the largest MCA-based server made by IBM, although it was not, strictly speaking, a PS/2 model. 
It could be fitted with up to six Intel Pentium processors interconnected by the Corollary C-bus and up to eighteen SCSI hard disks. This model was equipped with seven combination MCA/PCI slots. PS/2 portables, laptops and notebooks IBM also produced several portable and laptop PS/2s, including the Model L40 (ISA-bus 386SX), N33 (IBM's first notebook-format computer from year 1991, Model 8533, 386SX), N51 (386SX/SLC), P70 (386DX) and P75 (486DX2). The IBM ThinkPad 700C, aside from being labeled "700C PS/2" on the case, featured MCA and a 486SLC CPU. 6152 Academic System The 6152 Academic System was a workstation computer developed by IBM's Academic Information Systems (ACIS) division for the university market introduced in February 1988. The 6152 was based on the PS/2 Model 60, adding a RISC Adapter Card on the Micro Channel bus. This card was a co-processor that enabled the 6152 to run ROMP software compiled for IBM's Academic Operating System (AOS), a version of BSD UNIX for the ROMP that was only available to select colleges and universities. The RISC Adapter Card contained the ROMP-C microprocessor (an enhanced version of the ROMP that first appeared in the IBM RT PC workstations), a memory management unit (the ROMP had virtual memory), a floating-point coprocessor, and up to 8MB of memory for use by the ROMP. The 6152 was the first computer to use the ROMP-C, which would later be introduced in new RT PC models. Marketing During the 1980s, IBM's advertising of the original PC and its other product lines had frequently used the likeness of Charlie Chaplin. For the PS/2, however, IBM augmented this character with the following jingle: Another campaign featured actors from the television show M*A*S*H playing the staff of a contemporary (i.e. late-1980s) business in roles reminiscent of their characters' roles from the series. Harry Morgan, Larry Linville, William Christopher, Wayne Rogers, Gary Burghoff, Jamie Farr, and Loretta Swit were in from the beginning, whereas Alan Alda joined the campaign later. The profound lack of success of these advertising campaigns led, in part, to IBM's termination of its relationships with its global advertising agencies; these accounts were reported by Wired magazine to have been worth over $500 million a year, and the largest such account review in the history of business. Overall, the PS/2 line was largely unsuccessful with the consumer market, even though the PC-based Models 30 and 25 were an attempt to address that. With what was widely seen as a technically competent but cynical attempt to gain undisputed control of the market, IBM unleashed an industry backlash, which went on to standardize VESA, EISA and PCI. In large part, IBM failed to establish a link in the consumer's mind between the PS/2 MicroChannel architecture and the immature OS/2 1.x operating system; the more capable OS/2 version 2.0 did not release until 1992. The firm suffered massive financial losses for the remainder of the 1980s, forfeiting its previously unquestioned position as the industry leader, and eventually lost its status as the largest manufacturer of personal computers, first to Compaq and then to Dell. From a high of 10,000 employees in Boca Raton before the PS/2 came out, only seven years later, IBM had $600 million in unsold inventory and was laying off staff by the thousands. 
After the failure of the PS/2 line to establish a new standard, IBM was forced to revert to building ISA PCs—following the industry it had once led—with the low-end PS/1 line and later with the more compatible Aptiva and PS/ValuePoint lines. Still, the PS/2 platform experienced some success in the corporate sector where the reliability, ease of maintenance and strong corporate support from IBM offset the rather daunting cost of the machines. Also, many people still lived with the motto "Nobody ever got fired for buying an IBM". In the mid-range desktop market, the models 55SX and later 56SX were the leading sellers for almost their entire lifetimes. Later PS/2 models saw a production life span that took them into the late 1990s, within a few years of IBM selling off the division. See also Successors IBM PS/ValuePoint Ambra Computer Corporation IBM Aptiva Concurrent IBM PS/1 Notes References Further reading Burton, Greg. IBM PC and PS/2 pocket reference. NDD (the old dealer channel), 1991. Byers, T.J. IBM PS/2: A Reference Guide. Intertext Publications, 1989. . Dalton, Richard and Mueller, Scott. IBM PS/2 Handbook . Que Publications, 1989. . Held, Gilbert. IBM PS/2: User's Reference Manual. John Wiley & Sons Inc., 1989. . Hoskins, Jim. IBM PS/2. John Wiley & Sons Inc., fifth revised edition, 1992. . Leghart, Paul M. The IBM PS/2 in-depth report. Pachogue, NY: Computer Technology Research Corporation, 1988. Newcom, Kerry. A Closer Look at IBM PS/2 Microchannel Architecture. New York: McGraw-Hill, 1988. Norton, Peter. Inside the IBM PC and PS/2. Brady Publishing, fourth edition 1991. . Outside the IBM PC and PS/2: Access to New Technology. Brady Publishing, 1992. . Shanley, Tom. IBM PS/2 from the Inside Out. Addison-Wesley, 1991. . External links IBM Type 8530 IBM PS/2 Personal Systems Reference Guide 1992 - 1995 Computercraft - The PS/2 Resource Center Model 9595 Resource, covers all PS/2 models and adapters PS/2 keyboard pinout Computer Chronicles episode on the PS/2 IBM PS/2 L40 SX (8543) PS/2 OS/2 16-bit computers 32-bit computers Computer-related introductions in 1987
1564592
https://en.wikipedia.org/wiki/Features%20of%20Firefox
Features of Firefox
Mozilla Firefox has features that distinguish it from other web browsers, such as Chrome and Internet Explorer.
Major differences
To avoid interface bloat, ship a relatively small core that can be customized to meet individual users' needs, and allow for corporate or institutional extensions to meet their varying policies, Firefox relies on a robust extension system to allow users to modify the browser according to their requirements instead of providing all features in the standard distribution. While Opera and Google Chrome do the same, extensions for these are fewer in number as of late 2013. Internet Explorer also has an extension system, but it is less widely supported than those of the others. Developers supporting multiple browsers almost always support Firefox, and in many instances exclusively. As Opera has a policy of deliberately including more features in the core as they prove useful, the market for extensions is relatively unstable, but there is also less need for them. The sheer number of extensions is not a good guide to the capabilities of a browser. Protocol support and the difficulty of adding new link type protocols also vary widely not only across these browsers but across versions of these browsers. Opera has historically been the most robust and consistent about supporting cutting-edge protocols such as eDonkey file-sharing links or bitcoin transactions. These can be difficult to support in Firefox without relying on unknown small developers, which defeats the privacy purpose of these protocols. Instructions for supporting new link protocols vary widely across operating systems and Firefox versions, and are generally not implementable by end users who lack systems-administration experience and the ability to follow exact, detailed instructions for typing in configuration strings.
Web technologies support
Firefox supports most basic Web standards including HTML, XML, XHTML, CSS (with extensions), JavaScript, DOM, MathML, SVG, XSLT and XPath. Firefox's standards support and growing popularity have been credited as one reason Internet Explorer 7 was to be released with improved standards support. Since Web standards are often in contradiction with Internet Explorer's behavior, Firefox, like other browsers, has a quirks mode. This mode attempts to mimic Internet Explorer's quirks modes, which equates to using obsolete rendering standards dating back to Internet Explorer 5, or alternately newer peculiarities introduced in IE 6 or 7. However, it is not completely compatible. Because of the differing rendering, PC World notes that a minority of pages do not work in Firefox; however, Internet Explorer 7's quirks mode does not handle them either. CNET notes that Firefox does not support ActiveX controls by default, which can also cause webpages to be missing features or to not work at all in Firefox. Mozilla made the decision to not support ActiveX due to potential security vulnerabilities, its proprietary nature and its lack of cross-platform compatibility. There are methods of using ActiveX in Firefox, such as via third-party plugins, but they do not work in all versions of Firefox or on all platforms. Beginning on December 8, 2006, Firefox nightly builds began passing the Acid2 CSS standards compliance test, so all future releases of Firefox 3 would pass the test. Firefox also implements a proprietary protocol from Google called "safebrowsing", which is not an open standard.
Cross-platform support
Each Mozilla Firefox release supports the operating system versions in common use at the time of that release. 
In 2004, version 1 supported older operating systems such as Windows 95 and Mac OS X 10.1; by 2008, version 3 required at least OS X 10.4, and even Windows 98 support had ended. Various releases available on the primary distribution site can support the following operating systems, although not always with the latest Firefox version.
Various versions of Microsoft Windows, including 98, 98SE, ME, NT 4.0, 2000, XP, Server 2003, Vista, 7, 8 and 10.
OS X
Linux-based operating systems using X.Org Server or XFree86
Builds for Solaris (x86 and SPARC), contributed by the Sun Beijing Desktop Team, are available on the Mozilla web site.
Mozilla Firefox 1.x installation on Windows 95 requires a few additional steps. Since Firefox is open source and Mozilla actively develops a platform-independent abstraction for its graphical front end, it can also be compiled and run on a variety of other architectures and operating systems. Thus, Firefox is also available for many other systems. This includes OS/2, AIX, and FreeBSD. Builds for Windows XP Professional x64 Edition are also available. Mozilla Firefox is also the browser of choice for a good number of smaller operating systems, such as SkyOS and ZETA. Firefox uses the same profile format on the different platforms, so a profile may be used on multiple platforms, if all of the platforms can access the same profile; this includes, for example, profiles stored on an NTFS (via FUSE) or FAT32 partition accessible from both Windows and Linux, or on a USB flash drive. This is useful for users who dual-boot their machines. However, it may cause a few problems, especially with extensions.
Aero Peek capability
Mozilla has included Aero Peek capability for each tab on Windows 7. The feature was previously not enabled by default (though it could be enabled by the user), but it is now included as a full feature of Firefox. It displays a thumbnail image of each tab, providing functionality similar to that already included in IE8.
Security
Firefox is free-libre software, and thus in particular its source code is visible to everyone. This allows anyone to review the code for security vulnerabilities. It also allowed the U.S. Department of Homeland Security to give funding for the automated tool Coverity to be run against Firefox code. Additionally, Mozilla has a security bug bounty system: anyone who reports a valid critical security bug receives a $3000 (US) cash reward for each report and a Mozilla T-shirt. With effect from December 15, 2010, Mozilla added Web Applications to its Security Bug Bounty Program.
Tabbed browsing
Firefox supports tabbed browsing, which allows users to open several pages in one window. This feature was carried over from the Mozilla Application Suite, which in turn had borrowed the feature from the popular MultiZilla extension for Mozilla. Firefox also permits the "homepage" to be a list of URLs delimited with vertical bars (|), which are automatically opened in separate tabs, rather than a single page. Firefox 2 supports more tabbed browsing features, including a "tab overflow" solution that keeps the user's tabs easily accessible when they would otherwise become illegible, a "session store" which lets the user keep the opened tabs across restarts, and an "undo close tab" feature. Tabbed browsing lets users open multiple tabs or pages in one window, which is convenient for users who prefer to browse from a single window. 
The tabs are easily made accessible and users can close tabs that are not in use for better usability. Pop-up blocking Firefox also includes integrated customizable pop-up blocking. Firefox was given this feature early in beta development, and it was a major comparative selling point of the browser until Internet Explorer gained the capability in the Windows XP SP2 release of August 25, 2004. Firefox's pop-up blocking can be turned off entirely to allow pop-ups from all sites. Firefox's pop-up blocking can be inconvenient at times — it prevents JavaScript-based links from opening a new window while a page is loading unless the site is added to a "safe list" found in the options menu. In many cases, it is possible to view the pop-up's URL by clicking the dialog that appears when one is blocked. This makes it easier to decide if the pop-up should be displayed. Private browsing Private browsing was introduced in Firefox 3.5, which released on June 30, 2009. This feature lets users browse the Internet without leaving any traces in the browsing history. Download manager An integrated customizable download manager is also included. Downloads can be opened automatically depending on the file type, or saved directly to a disk. By default, Firefox downloads all files to a user's desktop on Mac and Windows or to the user's home directory on Linux, but it can be configured to prompt for a specific download location. Version 3.0 added support for cross-session resuming (stopping a download and resuming it after closing the browser). From within the download manager, a user can view the source URL from which a download originated as well as the location to which a file was downloaded. Live bookmarks From 2004, live bookmarks allowed users to dynamically monitor changes to their favorite news sources, using RSS or Atom feeds. Instead of treating RSS-feeds as HTML pages as most news aggregators do, Firefox treated them as bookmarks and automatically updated them in real-time with a link to the appropriate source. In December 2018, version 64.0 of Firefox removed live bookmarks and web feeds, with Mozilla suggesting its replacement by add-ons or other software with news aggregator functionality like Mozilla Thunderbird. Other features Find as you type Firefox also has an incremental find feature known as "Find as you type", invoked by pressing Ctrl+F. With this feature enabled, a user can simply begin typing a word while viewing a web page, and Firefox automatically searches for it and highlights the first instance found. As the user types more of the word, Firefox refines its search. Also, if the user's exact query does not appear anywhere on the page, the "Find" box turns red. Ctrl+G can be pressed to go to the next found match. Alternatively the slash (/) key can be used instead to invoke the "quick search". The "quick search", in contrast to the normal search, lacks search controls and is wholly controlled by the keyboard. In this mode highlighted links can be followed by pressing the enter key. The "quick search" has an alternate mode which is invoked by pressing the apostrophe (') key, in this mode only links are matched. Mycroft Web Search A built-in Mycroft Web search function features extensible search-engine listing; by default, Firefox includes plugins for Google and Yahoo!, and also includes plugins for looking up a word on dictionary.com and browsing through Amazon.com listings. Other popular Mycroft search engines include Wikipedia, eBay, and IMDb. 
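The incremental "Find as you type" behaviour described earlier in this section can be mimicked in a few lines: every keystroke extends the query, the page text is searched again, and a miss corresponds to the find box turning red. This is only an illustrative Python sketch, not how the browser itself implements the feature.

# Toy model of find-as-you-type: search again after every keystroke.
def incremental_find(page_text: str, keystrokes: str) -> None:
    query = ""
    haystack = page_text.lower()
    for ch in keystrokes:
        query += ch
        position = haystack.find(query.lower())
        if position == -1:
            print(f"'{query}': no match (find box turns red)")
        else:
            print(f"'{query}': first match highlighted at offset {position}")

incremental_find("Firefox supports tabbed browsing and pop-up blocking.", "tabbed")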
Smart Bookmarks Smart Bookmarks (aka Smart keywords) can be used to quickly search for information on specific Web sites. A smart keyword is defined by the user and can be associated with any bookmark, and can then be used in the address bar as a shortcut to quickly get to the site or, if the smart keyword is linked to a searchbox, to search the site. For example, "imdb" is a pre-defined smart keyword; to search for information about the movie 'Firefox' on IMDb, jump to the location bar with the + shortcut, type "imdb Firefox" and press the Enter key or just simply type in "imdb" if one wants to get to the frontpage instead. Chrome The Chrome packages within Firefox control and implement the Firefox user interface. Version 2.0 and above Enhanced search capabilities Search term suggestions will now appear as users type in the integrated search box when using the Google, Yahoo! or Answers.com search engines. A new search engine manager makes it easier to add, remove and re-order search engines, and users will be alerted when Firefox encounters a website that offers new search engines that the user may wish to install. Microsummaries Support for Microsummaries was added in version 2.0. Microsummaries are short summaries of web pages that are used to convey more information than page titles. Microsummaries are regularly updated to reflect content changes in web pages so that viewers of the web page will want to revisit the web page after updates. Microsummaries can either be provided by the page, or be generated by the processing of an XSLT stylesheet against the page. In the latter case, the XSLT stylesheet and the page that the microsummary applies to are provided by a microsummary generator. Support for Microsummaries was removed as of Firefox 6. Live Titles When a website offers a microsummary (a regularly updated summary of the most important information on a Web page), users can create a bookmark with a "Live Title". Compact enough to fit in the space available to a bookmark label, they provide more useful information about pages than static page titles, and are regularly updated with the latest information. There are several websites that can be bookmarked with Live Titles, and even more add-ons to generate Live Titles for other popular websites. Support for Live Titles was removed as of Firefox 6. Session Restore The Session Restore feature restores windows, tabs, text typed in forms, and in-progress downloads from the last user session. It will be activated automatically when installing an application update or extension, and users will be asked if they want to resume their previous session after a system crash. Inline spell checker A built-in spell checker enables users to quickly check the spelling of text entered into Web forms without having to use a separate application. Usability in version 2 Firefox 2 was designed for the average user, hiding advanced configuration and making features that do not require user interaction to function. Jim Repoza of eWEEK states: Firefox also won UK Usability Professionals' Association's 2005 award for "Best software application". Version 3.0 and above Star button Quickly add bookmarks from the location bar with a single click; a second click lets the user file and tag them. Version 5.0 and above Style Inspector Firefox 10 added the CSS Style Inspector to the Page Inspector, which allow users to check out a site's structure and edit the CSS without leaving the browser. 
Firefox 10 added support for CSS 3D Transforms and for anti-aliasing in the WebGL standard for hardware-accelerated 3D graphics. These updates mean that complex site and Web app animations will render more smoothly in Firefox, and that developers can animate 2D objects into 3D without plug-ins.
3D Page Inspector
Firefox 11, released January 2012, introduced a tiltable three-dimensional visualization of the Document Object Model (DOM), where more nested elements protrude further from the page surface. This feature was removed with version 47.
Firefox 57 and above
Electrolysis and WebExtensions
On August 21, 2015, Firefox developers announced that, due to planned changes to Firefox's internal operations, including the planned implementation of a new multi-process architecture codenamed "Multiprocess Firefox" or "Electrolysis" (stylized "e10s") and introduced to some users in version 48, Firefox would adopt a new extension architecture known as WebExtensions, available to the desktop version and Firefox for Android (considered stable in version 48). WebExtensions uses HTML and JavaScript APIs and is designed to be similar to the Google Chrome and Microsoft Edge extension systems and to run within a multi-process environment, but it does not enable the same level of access to the browser. Support for XPCOM and XUL add-ons is no longer offered beginning with Firefox 57.
HTTPS-only mode
Firefox 83 introduced HTTPS-only mode, a security-enhancing mode that, once enabled, forces all connections to websites to use HTTPS.
Tags
Smart Location Bar
Firefox 3 includes a "Smart Location Bar". While most other browsers, such as Internet Explorer, will search through history for matching web sites as the user types a URL into the location bar, the Smart Location Bar will also search through bookmarks for a page with a matching URL. Additionally, Firefox's Smart Location Bar will also search through page titles, allowing the user to type in a relevant keyword, instead of a URL, to find the desired page. Firefox uses frecency and other heuristics to predict which history and bookmark matches the user is most likely to select.
Library
View, organize and search through bookmarks, tags and browsing history using the new Library window. Create or restore full backups of this data whenever needed with a few clicks.
Smart Bookmark Folders
Users can quickly access their most visited bookmarks from the toolbar, or recently bookmarked and tagged pages from the bookmark menu. Smart Bookmark Folders can be created by saving a search query in the Library.
Full page zoom
From the View menu and via keyboard shortcuts, the new zooming feature lets users zoom in and out of entire pages, scaling the layout, text and images, or optionally only the text size. Zoom settings will be remembered for each site.
Text selection improvements
In addition to being able to double-click and drag to select text by words, or triple-click and drag to select text by paragraph, Ctrl (Cmd on Mac) can be held down to retain the previous selection and extend it instead of replacing it when making another selection.
Web-based protocol handlers
Web applications, such as a user's favorite webmail provider, can now be used instead of desktop applications for handling mailto: links from other sites. Similar support is available for other protocols (Web applications will have to first enable this by registering as handlers with Firefox).
Add-ons and extensions
There are six types of add-ons in Firefox: extensions, themes, language packs, plugins, social features and apps. 
Firefox add-ons may be obtained from the Mozilla Add-ons web site or from other sources.
Extensions
Firefox users can add features and change functionality in Firefox by installing extensions. Extension functionality is varied; examples include extensions enabling mouse gestures, extensions that block advertisements, and extensions that enhance tabbed browsing. Features that the Firefox developers believe will be used by only a small number of its users are not included in Firefox, but are instead left to be implemented as extensions. Many Mozilla Suite features, such as IRC chat (ChatZilla) and the calendar, have been recreated as Firefox extensions. Extensions are also sometimes a testing ground for features that are eventually integrated into the main codebase. For example, MultiZilla was an extension that provided tabbed browsing when Mozilla lacked that feature. While extensions provide a high level of customizability, PC World notes the difficulty a casual user would have in finding and installing extensions as compared to their features being available by default. Most extensions are not created or supported by Mozilla. Malicious extensions have been created. Mozilla provides a repository of extensions that have been reviewed by volunteers and are believed to not contain malware. Since extensions are mostly created by third parties, they do not necessarily go through the same level of testing as official Mozilla products, and they may have bugs or vulnerabilities. Like applications on Android and iOS, Firefox extensions have a permission model: before installing an extension, the user must agree, for example, that the extension can have access to all webpages, or that it has permission to manage downloads, or that it has no special permissions, in which case the extension must be manually activated and can then interact with the current page. Since 2019, Firefox and Chromium-based browsers (Google Chrome, Edge, Opera, Vivaldi) have shared the same extension format, the WebExtensions API, which means that a web extension developed for Google Chrome can in most cases be used in Firefox, and vice versa.
Themes
Firefox also supports a variety of themes for changing its appearance. Prior to the release of Firefox 57, themes were simply packages of CSS and image files. From Firefox 57 onwards, themes consist solely of color modifications through the use of CSS. Many themes can be downloaded from the Mozilla Update web site.
Language packs
Language packs are dictionaries for spell checking of input fields.
Plugins
Firefox supports plugins based on the Netscape Plugin Application Programming Interface (NPAPI), i.e. Netscape-style plugins. As a side note, Opera and Internet Explorer 3.0 to 5.0 also support NPAPI. On June 30, 2004, the Mozilla Foundation, in partnership with Adobe, Apple, Macromedia, Opera, and Sun Microsystems, announced a series of changes to web browser plugins. The then-new API allowed web developers to offer richer web browsing experiences, helping to maintain innovation and standards. The then-new plugin technologies were implemented in future versions of the Mozilla applications. Mozilla Firefox 1.5 and later versions include the Java Embedding plugin, which allows Mac OS X users to run Java applets with the then-latest 1.4 and 5.0 versions of Java (the default Java software shipped by Apple was not compatible with any browser except its own Safari).
Apps
After the release of Firefox OS, which is based on a stack of web technologies, Mozilla added a feature to install mobile apps on the PC using Firefox as a base. 
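As a toy illustration of the add-on permission model mentioned in the Extensions subsection above, the sketch below has an extension declare the permissions it wants, and installation proceeds only if the user accepts them all; an extension with no special permissions installs silently and is activated manually. The permission strings resemble real WebExtensions manifest permissions, but the installation logic here is purely illustrative.

# Simplified model of the extension permission prompt described above.
from dataclasses import dataclass, field

@dataclass
class Extension:
    name: str
    permissions: list = field(default_factory=list)  # e.g. ["<all_urls>"]

def install(ext: Extension, user_accepts) -> bool:
    if not ext.permissions:
        # No special permissions: the extension is activated manually
        # and then interacts only with the current page.
        print(f"{ext.name}: installed with no special permissions")
        return True
    if user_accepts(ext.permissions):
        print(f"{ext.name}: installed with {', '.join(ext.permissions)}")
        return True
    print(f"{ext.name}: installation cancelled by the user")
    return False

# Example: the user is shown the full permission list once and accepts it.
install(Extension("Ad blocker", ["<all_urls>"]), lambda perms: True)
install(Extension("Download helper", ["downloads"]), lambda perms: True)
install(Extension("Page tweak", []), lambda perms: True)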
Customizability
Beyond the use of add-ons, Firefox has additional customization features:
The position of the toolbars and interface is customizable.
User stylesheets can change the style of webpages and of Firefox's user interface.
Font colours are customizable.
A number of internal configuration options are not accessible in a conventional manner through Firefox's preference dialogs, although they are exposed through its about:config interface.
References
External links
Firefox Features at Mozilla.com
Microsummaries - MozillaWiki
Mozilla Firefox
Firefox
66380
https://en.wikipedia.org/wiki/TOPS-20
TOPS-20
The TOPS-20 operating system by Digital Equipment Corporation (DEC) was a proprietary OS used on some of DEC's 36-bit mainframe computers. The Hardware Reference Manual was described as for the "DECsystem-10/DECSYSTEM-20 Processor" (meaning the DEC PDP-10 and the DECSYSTEM-20). TOPS-20 began in 1969 as the TENEX operating system of Bolt, Beranek and Newman (BBN) and shipped as a product by DEC starting in 1976. TOPS-20 is almost entirely unrelated to the similarly named TOPS-10, but it was shipped with the PA1050 TOPS-10 Monitor Calls emulation facility which allowed most, but not all, TOPS-10 executables to run unchanged. As a matter of policy, DEC did not update PA1050 to support later TOPS-10 additions except where required by DEC software. TOPS-20 competed with TOPS-10, ITS and WAITS—all of which were notable time-sharing systems for the PDP-10 during this timeframe.
TENEX
TOPS-20 was based upon the TENEX operating system, which had been created by Bolt Beranek and Newman for Digital's PDP-10 computer. After Digital started development of the KI-10 version of the PDP-10, an issue arose: by this point TENEX was the most popular customer-written PDP-10 operating system, but it would not run on the new, faster KI-10s. To correct this problem, the DEC PDP-10 sales manager purchased the rights to TENEX from BBN and set up a project to port it to the new machine. In the end, very little of the original TENEX code remained, and Digital ultimately named the resulting operating system TOPS-20.
PA1050
Some of what came with TOPS-20 was merely an emulation of the TOPS-10 Operating System's calls. These were known as UUOs, standing for Unimplemented User Operation, and were needed both for compilers, which were not 20-specific, to run, and for user programs written in those languages. The package that was mapped into a user's address space was named PA1050: PA as in PAT as in compatibility; 10 as in DEC or PDP 10; 50 as in a PDP 10 Model 50, 10/50, 1050. Sometimes PA1050 was referred to as PAT, a name that was a good fit given that PA1050 "was simply unprivileged user-mode code" that "performed the requested action, using JSYS calls where necessary."
TOPS-20 capabilities
The major ways to get at TOPS-20 capabilities, and what made TOPS-20 important, were:
Commands entered via the command processor, EXEC.EXE
JSYS (Jump to System) calls from MACro-language (.MAC) programs
The "EXEC" accomplished its work primarily using internal code, including calls via JSYS requesting services from "GALAXY" components (e.g. spoolers).
Command processor
Rather advanced for its day were some TOPS-20-specific features:
Command completion
Dynamic help in the form of noise words - typing DIR and then pressing the ESCape key resulted in DIRectory (of files); typing I and pressing the ESCape key resulted in Information (about)
One could then type ? to find out what operands were permitted/required. Pressing Ctrl+T displays status information.
Commands
The following commands are supported by the TOPS-20 Command Processor. 
ACCESS ADVISE APPEND ARCHIVE ASSIGN ATTACH BACKSPACE BLANK BREAK BUILD CANCEL CLOSE COMPILE CONNECT CONTINUE COPY CREATE CREF CSAVE DAYTIME DDT DEASSIGN DEBUG DEFINE DELETE DEPOSIT DETACH DIRECTORY DISABLE DISCARD DISMOUNT EDIT ENABLE END-ACCESS EOF ERUN EXAMINE EXECUTE EXPUNGE FDIRECTORY FORK FREEZE GET HELP INFORMATION KEEP LOAD LOGIN LOGOUT MERGE MODIFY MOUNT PERUSE PLOT POP PRINT PUNCH PUSH R RECEIVE REENTER REFUSE REMARK RENAME RESET RETRIEVE REWIND RUN SAVE SEND SET SET HOST SKIP START SUBMIT SYSTAT TAKE TALK TDIRECTORY TERMINAL TRANSLATE TYPE UNATTACH UNDELETE UNKEEP UNLOAD VDIRECTORY JSYS features JSYS stands for Jump to SYStem. Operands were at times memory addresses. "TOPS-20 allows you to use 18-bit or 30-bit addresses. Some monitor calls require one kind, some the other; some calls accept either kind. Some monitor calls use only 18 bits to hold an address. These calls interpret 18-bit addresses as locations in the current section." Internally, files were first identified, using a GTJFN (Get Job File Number) JSYS, and then that JFN number was used to open (OPENF) and manipulate the file's contents. PCL (Programmable Command Language) PCL (Programmable Command Language) is a programming language that runs under TOPS-20. PCL source programs are, by default, stored with Filetype .PCL, and enable extending the TOPS-20 EXEC via a verb named DECLARE. Newly compiled commands then become functionally part of the EXEC. PCL language features PCL includes: flow control: DO While/Until, CASE/SELECT, IF-THEN-ELSE, GOTO character string operations (length, substring, concatenation) access to system information (date/time, file attributes, device characteristics) TOPS-20 today Paul Allen maintained several publicly accessible historic computer systems before his death, including an XKL TOAD-2 running TOPS-20. See also SDF Public Access Unix System. See also Time-sharing system evolution References "DIGITAL Computing Timeline". Further reading Storage Organization and Management in TENEX. Daniel L. Murphy. AFIPS Proceedings, 1972 FJCC. Implementation of TENEX on the KI10. Daniel L. Murphy. TENEX Panel Session, NCC 1974. Origins and Development of TOPS-20. Daniel L. Murphy, 1989. "TOPS-20 User's Guide." 1988. "DECSYSTEM-20 Assembly Language Guide." Frank da Cruz and Chris Ryland, 1980. "Running TOPS-20 V4.1 under the SIMH Emulator." External links Origins and Development of TOPS-20 is an excellent longer history. Panda TOPS-20 distribution. SDF Public Access TWENEX. SIMH Simulator capable of simulating the PDP-10 and running TOPS-20. Manuals for DEC 36-bit computers. PDP-10 Software Archive. 36-bits Forever. Request a login to Living Computers: Museum + Labs TOAD-2 running TOPS-20. DEC operating systems Time-sharing operating systems 1969 software
63049259
https://en.wikipedia.org/wiki/Centric%20Software
Centric Software
Centric Software is a Silicon Valley-based software company headquartered in Campbell, California. The company designs software, in particular Product Lifecycle Management (PLM) systems, for the fashion, retail, footwear, outdoor, luxury, home décor and consumer goods industries, including formulated products (Food & Beverage and Cosmetic & Personal Care).
History
Centric Software was founded in 1998. The company began as a PLM vendor and then developed an enterprise mobility applications suite with visual, touch-screen-based digital board solutions. The company has provided PLM solutions to retailers and manufacturers including Volcom, Tesco, Louis Vuitton, SIPLEC, Balenciaga, and Calvin Klein, among others. Centric's PLM was also purchased by LIME, a Russian fashion company, as part of Centric's efforts to sell its products outside of the USA. As of January 2020, the company has 15 offices around the world and 4 virtual centres. Its investors include Dassault Systèmes, Oak Investment Partners, Masthead Venture Partners, and Fung Capital USA.
Products
Centric Software has integrated 3D, mobility, AI, cloud, SaaS, and Agile Development to develop its products.
See also
Dassault Systèmes
Product life-cycle management (marketing)
References
1998 establishments in California
Companies based in Silicon Valley
Companies based in Campbell, California
Software companies based in California
Software companies established in 1998
American companies established in 1998
553720
https://en.wikipedia.org/wiki/Australian%20Communications%20and%20Media%20Authority
Australian Communications and Media Authority
The Australian Communications and Media Authority (ACMA) is an Australian government statutory authority within the Communications portfolio. ACMA was formed on 1 July 2005 with the merger of the Australian Broadcasting Authority and the Australian Communications Authority. ACMA is responsible for regulating Australian media and communications. It does this through various legislation, regulations, standards and codes of practice. ACMA is a converged regulator, created to oversee the convergence of telecommunications, broadcasting, radio communications and the internet. Organisation ACMA is an independent agency composed of a Chair, Deputy Chair, three Full-time Members (which includes the Chair and Deputy Chair), and three Associate Members. It is managed by an executive team comprising the Chair (who is also the Agency Head), Deputy Chair (who is also the chief executive officer), four general managers and ten executive managers. The corporate structure comprises four divisions – Communications Infrastructure, Content, Consumer and Citizen, Corporate and Research, and Legal Services. ACMA has responsibilities under four principal Acts – the Broadcasting Services Act 1992, the Telecommunications Act 1997, the Telecommunications (Consumer Protection and Service Standards) Act 1999 and the Radiocommunications Act 1992. There are another 22 Acts to which the agency responds in such areas as spam, the Do Not Call Register and interactive gambling. The ACMA also creates and administers more than 523 legislative instruments including radiocommunications, spam and telecommunications regulations; and licence area plans for free-to-air broadcasters. ACMA collects revenue on behalf of the Australian Government through broadcasting, radiocommunications and telecommunications taxes, charges and licence fees. It also collects revenue from price-based allocation of spectrum. ACMA's main offices are located in Canberra, Melbourne and Sydney. Convergence and change Communications convergence is the merging of the previously distinct services by which information is communicated – telephone, television (free-to-air and subscription) radio and newspapers – over digital platforms. ACMA also works with industry and citizens to solve new concerns and mitigate risks arising in the evolving networked society and information economy, recognising that Australians are interacting with digital communications and content in changing ways. Not only does ACMA address a wide and disparate range of responsibilities, it does so against a backdrop of rapid and disruptive change. Many of the controls on the production and distribution of content and the provision of telecommunications services through licensing or other subsidiary arrangements, or by standards and codes (whether co-regulatory or self-regulatory) are subject to revision and adaptation to the networked society and information economy. Moreover, there are new platforms, applications, business models, value chains and forms of social interaction available with more to come in what is a dynamic, innovative environment. Other challenges for regulators include cross-jurisdictional issues and the need for engagement and collaboration with stakeholders locally, regionally and internationally. 
The ACMA's response to these pressures is to remain constantly relevant by delivering on its mandated outcomes and its statutory obligations, and by transforming itself into a resilient, e-facing, learning organisation, responsive to the numerous pressures for change that confront it. ACMA has developed a 'converged communications regulator' framework which seeks to bring to the global discussion a 'common ground' which can capture the fundamental tasks any regulator in a convergent environment will engage with to deliver outcomes in the public interest. The four cornerstone parts to the framework, each divided into two sub-streams, are outlined below along with the main functions of ACMA under each task. Bridging to the future – active engagement with the currents of change and proactive development of responses through thought leadership and regulatory development: reviewing industry standards and codes of practice developing more flexible licensing updating spectrum management tools for spectrum sharing technologies research and analysis to examine the effectiveness of current regulation and to provide evidence-informed regulatory development Transforming the agency – adapting the organisation to the changing world of convergence by ensuring a structural fit with convergence and a focus on agency innovation: creating resilience through transformational capacity/capability training evidence-based reporting on industry performance, service offerings, consumer benefits, levels of adoption and use development and administration of spam intelligence database developing and implementing an evidence-based approach to tracking industry performance during digital TV transition Major program delivery – undertaking major development work or program implementation through resource and program management with fully effective corporate governance: development and implementation of a national cybersafety education program administering the Do Not Call Register administering contracts for phone services for people who are deaf or have a hearing or speech impediment developing and implementing a corporate governance framework and ICT strategic plan Effective regulation – doing the 'day job' of the regulatory agency with effective and efficient regulatory administration and operations coupled with extensive stakeholder engagement: regulating telecommunications and broadcasting services, internet content and datacasting services managing access to the radiofrequency spectrum bands through radiocommunications licensing, including amateur radio licensing resolving competing demands for spectrum through broadcasting licence arrangements and price-based allocation methods regulating use of the radio-frequency spectrum and helping in minimising radio communications interference regulating compliance with the relevant legislation, licence conditions, codes of practice, standards, service guarantees and other safeguards promoting and facilitating industry self-regulatory and co-regulatory solutions representing Australia's interests internationally (see International Telecommunication Union) informing industry and consumers about communications regulation. The Convergence Review Committee set up by the Government in 2011 was independent of the ACMA and its final report in 2012 suggested the ACMA be replaced with a new regulator to implement a different approach to regulation. 
These changes were not enacted by the Labor Government and the new Coalition Government has not made major decisions on the future of the ACMA. The ACMA Hotline for reporting offensive or illegal online content The ACMA administers a complaints mechanism for Australian residents and law enforcement agencies to report prohibited online content, including child sexual abuse material. Within the scheme, which operates under Schedules 5 and 7 of the Broadcasting Services Act 1992, content is assessed with reference to the same criteria within the National Classification Scheme that applies to films and computer games in Australia. The ACMA Hotline is one of a global network of international bodies within INHOPE – the International Association of Internet Hotlines – that exchange information on child abuse images, pinpointing the hosting countries to help eradicate them from the web. INHOPE consists of 44 members in 38 countries, with members including the Internet Watch Foundation (UK), the National Centre for Missing and Exploited Children (NCMEC), Cybertip (Canada), Friendly Runet Foundation (Russian Federation) and the Internet Hotline Center Japan. If prohibited online content is found in Australia, it is issued with a take-down notice after being formally classified; if it is hosted overseas it is notified to optional end-user Family Friendly Filters that are accredited by industry through the Internet Industry Association (these are available at cost from ISPs). All potentially illegal content is reported by the ACMA to law enforcement in Australia, or, in the case of child sexual abuse material hosted overseas, through INHOPE for rapid police notification and take-down in the host country. The ACMA publishes comprehensive statistics and information about the ACMA Hotline on its website. The majority of investigations the ACMA conducts concern online child sexual abuse material. Complaints to the ACMA Hotline are usually made via a webform on the ACMA's website. Popularly held misconceptions about the ACMA's regulatory role include that it investigates and takes action on whole websites (it investigates specific URLs, images or files) and that the ACMA causes blocking of content at an ISP level (it notifies overseas hosted content to optional end-user filters). In February 2013, the ACMA and Australian Federal Police announced a new agreement for sharing of information about serious child abuse material, including an arrangement whereby the ACMA can report content through INHOPE based on where content may be produced, as well as where it is hosted. During National Child Protection Week 2013, the ACMA Hotline referred 418 investigations involving over 4,700 images of abused children to Australian police agencies or through the INHOPE international network for action overseas. During the week, the ACMA announced it was now working more closely with CrimeStoppers in Australia to make it easier to report illegal online content. The ACMA's online role is not connected to ISP blocking of 'worst of the worst' child abuse material, which was operated by ISPs and the Australian Federal Police. In July 2015, this function moved to the Office of the Children's eSafety Commissioner. Do Not Call Register ACMA operates Australia's Do Not Call Register, which is a scheme to reduce unsolicited telemarketing calls and marketing faxes to individuals who have indicated they do not want to receive such calls by registering their private and domestic telephone (including mobile) and fax numbers on the Register.
The scheme has been in operation since May 2007. Since mid-2013, Salmat has managed the Register on behalf of ACMA. Spam Act ACMA is responsible for enforcing the Spam Act 2003, which prohibits the sending of unsolicited commercial electronic messages with an Australian link. A message has an Australian link if it originates, or was authorised, in Australia, or if the message was accessed in Australia. Anyone who sends commercial email, SMS, or instant messages must ensure that the message is sent with consent, contains sender identification and contact information and includes a functional unsubscribe facility. Some exemptions apply. Members of the public are able to make complaints and reports about commercial electronic messages to ACMA, which may conduct formal investigations and take enforcement actions. The Australian Internet Security Initiative and malware The ACMA developed the Australian Internet Security Initiative (AISI) to help address the problem of computers being compromised by the surreptitious installation of malicious software. 'Malware' enables a computer to be controlled remotely for illegal and harmful activities without the owner's knowledge. Malware can: access sensitive personal information stored on the computer such as resumes, sensitive documents, photographs/videos, and banking and other login or password details; gain remote access to the computer's camera and microphone; and form part of a larger group of computers known as 'botnets'. Among other things, botnets are used to help with the mass distribution of spam and other malware, the hosting of phishing sites and distributed denial of service (DDoS) attacks on websites. The AISI collects data from various sources on computers exhibiting 'bot' behaviour in the Australian internet space. Using this data, the ACMA provides daily reports to internet service providers (ISPs) identifying IP addresses on their networks that have generally been supplied to the ACMA in the previous 24-hour period. ISPs can then inform the customer associated with that IP address that their computer appears to be compromised and provide advice on how they can fix it. The ACMA does not know who the user of an IP address is, so the ISP is a critical link in the process of customer notification. In July 2017, this function moved to the Computer Emergency Response Team (CERT). Telecommunications Sector Security Reform (TSSR) The Telecommunications Sector Security Reform (TSSR) commenced on 18 September 2018. TSSR introduces four new measures: a security obligation, which requires carriers and carriage service providers to protect their networks and facilities against threats to national security from unauthorized access or interference; a notification requirement, which requires carriers and nominated carriage service providers to tell Government of any proposed changes to their telecommunications systems or services that are likely to have a material adverse effect on their capacity to comply with their security obligation; the ability for Government to obtain more detailed information from carriers and carriage service providers in certain circumstances to support the work of the Critical Infrastructure Centre; and the ability for Government to intervene and issue directions in cases where there are significant national security concerns that cannot be addressed through other means.
Internet censorship and criticisms Since January 2000, internet content considered offensive or illegal has been subject to a statutory scheme administered by the ACMA. Established under Schedule 5 to the Broadcasting Services Act 1992, the online content scheme evolved from a tradition of Australian content regulation in broadcasting and other entertainment media. This tradition embodies the principle that – while adults should be free to see, hear and read what they want – children should be protected from material that may be unsuitable for (or harmful to) them, and everyone should be protected from material that is highly offensive. The online content scheme seeks to achieve these objectives by a number of means such as complaint investigation processes, government and industry collaboration, and community awareness and empowerment. While administration of the scheme is the responsibility of the ACMA, the principle of 'co-regulation' underpinning the scheme reflects parliament's intention that government, industry and the community each plays a role in managing internet safety issues in Australia. The ACMA has a significant cyber safety education program called CyberSmart which provides resources for youth, parents and teachers. Some people strongly disagree with this approach. They say the Australian constitution does not clearly give either the states or the Federal Government the power to censor online content, so internet censorship in Australia is typically an amalgam of various plans, laws, acts and policies. The regulator has been criticised for its role in examining internet censorship in Australia and how it is enabled and might further be enabled. Particular criticism has been leveled at the regulator's technical understanding of what is involved overall in internet regulation and censorship. On 10 March 2009, the ACMA issued the Australian web-hosting company, Bulletproof Networks, with an "interim link-deletion notice" due to its customer, the Whirlpool internet community website, not deleting a link to a page on an anti-abortion web site. The web page, the 6th in a series of pages featuring images of aborted babies, had been submitted to the ACMA, which determined it was potentially prohibited content, by the user whose post on Whirlpool containing the ACMA's reply was later subject to the link-deletion notice. This came with an A$11,000 per day fine if the take-down was not actioned after 24 hours. For other URLs contained on the same website to be 'prohibited', a separate complaint would need to be submitted and reviewed by the ACMA. As indicated above, the issuance of an "interim link-deletion notice" is a consumer and business protection mechanism that ACMA has the authority to apply to a media or telecommunications provider under the Telecommunications Act 1997, section 110, whether or not the provider has adopted it voluntarily through the Communications Alliance process. This gives ACMA access to consumer personal details such as name, phone number, address and other details. It is not well documented for what purposes this information is required by other governmental departments and law enforcement agencies, or how it is protected and used. ACMA blacklist leaked On 19 March 2009 it was reported that the ACMA's blacklist of banned sites had been leaked online, and had been published by WikiLeaks.
Julian Assange, founder of WikiLeaks, obtained the blacklist after the ACMA blocked several WikiLeaks pages following their publication of the Danish blacklist. Assange said that "This week saw Australia joining China and the United Arab Emirates as the only countries censoring WikiLeaks." Three lists purporting to be from the ACMA were published online over a seven-day period. The leaked list, which was reported to have been obtained from a manufacturer of internet filtering software, contained 2395 sites. Approximately half of the sites on the list were not related to child pornography, and included online gambling sites, YouTube pages, gay, straight, and fetish pornography sites, Wikipedia entries, euthanasia sites, websites of fringe religions, Christian sites, and even the websites of a tour operator and a Queensland dentist. Colin Jacobs, spokesman for lobby group Electronic Frontiers Australia, said that there was no mechanism for a site operator to know that they had been added to the list or to request to be removed from it. Australia's Communications Minister, Stephen Conroy, later blamed the addition of the dentist's website to the blacklist on the "Russian mob". Associate professor Bjorn Landfeldt of the University of Sydney said that the leaked list "constitutes a condensed encyclopedia of depravity and potentially very dangerous material". Stephen Conroy said the list was not the real blacklist, described its leak and publication as "grossly irresponsible", and said that it undermined efforts to improve "cyber safety". He said that ACMA was investigating the incident and considering a range of possible actions including referral to the Australian Federal Police, and that Australians involved in making the content available would be at "serious risk of criminal prosecution". Conroy initially denied that the list published on WikiLeaks and the ACMA blacklist were the same, saying "This is not the ACMA blacklist." He stated that the leaked list was alleged to be current on 6 August 2008 and contained 2,400 URLs, whereas the ACMA blacklist for the same date contained 1,061 URLs. He added that the ACMA advised that there were URLs on the leaked list that had never been the subject of a complaint or ACMA investigation, and had never been included on the ACMA blacklist. He was backed up by ISP Tech 2U, one of six ISPs involved in filtering technology trials. Conroy's denial was called into doubt by the Internet Industry Association (IIA), which publicly condemned the publishing of the list, chief executive Peter Coroneos saying, "No reasonable person could countenance the publication of links which promote access to child abuse images, irrespective of their motivation, which in this case appears to be political." Conroy later claimed the leaked blacklist published on WikiLeaks closely resembled the official blacklist, admitting that the latest list (dated 18 March) "seemed to be close" to ACMA's current blacklist. In an estimates hearing of the Australian Federal Government on 25 May 2009 it was revealed that the leak was taken so seriously that it was referred to the Australian Federal Police for investigation. It was further stated that distribution of further updates to the list had been withheld until recipients could improve their security. Ms Nerida O'Laughlin of the ACMA confirmed that the list had been reviewed and, as of 30 April, consisted of 997 URLs.
See also Australian Competition and Consumer Commission (ACCC) Australian Commercial Television Code of Practice Internet censorship in Australia List of telecommunications regulatory bodies Pirate radio in Australia International Telecommunication Union References External links Search ACMA's database 2005 establishments in Australia Commonwealth Government agencies of Australia Communications authorities Communications in Australia Consumer organisations in Australia Entertainment rating organizations Government agencies established in 2005 Mass media complaints authorities Mass media in Australia Regulatory authorities of Australia
1926819
https://en.wikipedia.org/wiki/Willie%20Wood
Willie Wood
William Vernell Wood Sr. (December 23, 1936 – February 3, 2020) was an American professional football player and coach. He played as a safety with the Green Bay Packers in the National Football League (NFL). Wood was an eight-time Pro Bowler and a nine-time All-Pro. In 1989, Wood was elected to the Pro Football Hall of Fame. Wood played college football for the USC Trojans, becoming the first African-American quarterback to play in what is now the Pac-12 Conference. Undrafted out of USC, he was granted a try-out with Green Bay. Wood changed his position to safety in his rookie year, and played for the Packers from 1960 to 1971, winning five NFL championships. He later coached in the NFL, World Football League (WFL), and Canadian Football League (CFL). College career After graduating from Armstrong High School in Washington, D.C. in 1956, Wood went west and played college football in southern California, playing his freshman year at Coalinga Junior College, where he was a junior college All-American. He transferred to the University of Southern California in Los Angeles in 1957 and played for the Trojans under first-year head coach Don Clark. While there he was the first African American quarterback in the history of the Pacific Coast Conference and its successor AAWU, now the Pac-12 Conference. Wood also played safety. As a junior in 1958, he was sidelined with an injured shoulder, and as a senior in 1959, he separated his right shoulder and missed several games. NFL career Wood was not selected in the 1960 NFL draft, and wrote a letter to head coach Vince Lombardi to request a tryout; the Packers signed him as a rookie free agent in 1960. After a few days with the quarterbacks, he requested a switch to defense and was recast as a free safety, becoming a starter that season. He remained a starter until his retirement in 1971. Wood won All-NFL honors nine times in a nine-year stretch from 1962 through the 1971 season, participated in the Pro Bowl eight times, and played in six NFL championship games, winning all except the first in 1960. He was ejected for bumping back judge Tom Kelleher while protesting a call during the third quarter of the 1962 NFL Championship Game vs. the New York Giants. Wood was the starting free safety for the Packers in Super Bowl I against the Kansas City Chiefs and Super Bowl II against the Oakland Raiders. In Super Bowl I, he recorded a key interception that helped the Packers put the game away in the second half. In Super Bowl II, he returned five punts for 35 yards, including a 31-yard return that stood as the record for longest punt return in a Super Bowl until Darrell Green's 34-yard return in Super Bowl XVIII. He led the NFL in interceptions and punt return yards in 1962. Wood finished his 12 NFL seasons with 48 interceptions, which he returned for 699 yards and two touchdowns. He also gained 1,391 yards and scored two touchdowns on 187 punt returns. He holds the record for the most consecutive starts by a safety in NFL history. Wood retired as a player after the 1971 season; he was inducted into the Pro Football Hall of Fame in 1989, and the Packers Hall of Fame in 1977. Coaching career After retiring as a player in January 1972, Wood became the defensive backs coach for the San Diego Chargers. In 1975, he was the defensive coordinator of the Philadelphia Bell of the WFL and became the first African-American head coach in professional football of the modern era in late July, days before the first game of the season.
The Bell's season lasted only 11 games when the league folded in October. Wood was later an assistant coach for the Toronto Argonauts in the CFL under Forrest Gregg, a Packer teammate. When Gregg left after the 1979 season for the Cincinnati Bengals in the NFL, Wood became the first black head coach in the CFL, but after an 0–10 start in 1981, he was fired. Personal His son, Willie Wood Jr., played (1992–1993) for, and later coached, the Indiana Firebirds in the Arena Football League, after coaching at Woodrow Wilson High School in Washington, D.C. Wood Jr. also served as the wide receiver/defensive backs coach and special teams coordinator for the Cleveland Gladiators of the Arena Football League. Wood later lived in Washington, D.C. and underwent knee replacement surgery. In his later years, he had dementia. Wood died of natural causes on February 3, 2020 at an assisted living facility in Washington, D.C. at the age of 83. In March 2012, a block of N Street NW in D.C. was named "Willie Wood Way." References External links 1936 births 2020 deaths American football quarterbacks American football return specialists American football safeties Green Bay Packers players Toronto Argonauts coaches USC Trojans football players Philadelphia Bell coaches National Conference Pro Bowl players Western Conference Pro Bowl players Pro Football Hall of Fame inductees Players of American football from Washington, D.C. African-American coaches of American football African-American coaches of Canadian football African-American players of American football San Diego Chargers coaches 20th-century African-American sportspeople 21st-century African-American people
24757070
https://en.wikipedia.org/wiki/Mary%20Allen%20Wilkes
Mary Allen Wilkes
Mary Allen Wilkes (born September 25, 1937) is a lawyer, former computer programmer and logic designer, known for her work with the LINC computer, now recognized by many as the world's first "personal computer". Career Wilkes was born in Chicago, Illinois and graduated from Wellesley College in 1959 where she majored in philosophy and theology. Wilkes planned to become a lawyer, but was discouraged by friends and mentors from pursuing law because of the challenges women faced in the field. A geography teacher in the eighth grade had told Wilkes, "Mary Allen, when you grow up, you ought to be a computer programmer." She worked in the field as one of the first programmers for a number of years before pursuing law and becoming an attorney in 1975. MIT Wilkes worked under Oliver Selfridge and Benjamin Gold on the Speech Recognition Project at MIT's Lincoln Laboratory in Lexington, Massachusetts from 1959 to 1960, programming the IBM 704 and the IBM 709. She joined the Digital Computer Group, also at Lincoln Laboratory, just as work was beginning on the LINC design under Wesley A. Clark in June 1961. Clark had earlier designed Lincoln's TX-0 and TX-2 computers. Wilkes's contributions to the LINC development included simulating the operation of the LINC during its design phase on the TX-2, designing the console for the prototype LINC and writing the operator's manual for the final console design. In January, 1963, the LINC group left Lincoln Laboratory to form the Center for Computer Technology in the Biomedical Sciences at MIT's Cambridge, Massachusetts campus, where, in the summer of 1963 it trained the first participants in the LINC Evaluation Program, sponsored by the National Institutes of Health. Wilkes taught participants in the program and wrote the early LINC Assembly Programs (LAP) for the 1024-word LINC. She also co-authored the LINC's programming manual, Programming the LINC with Wesley A. Clark. Washington University In the summer of 1964 a core group from the LINC development team left MIT to form the Computer Systems Laboratory at Washington University in St. Louis. Wilkes, who had spent 1964 traveling around the world, rejoined the group in late 1964, but lived and worked from her parents' home in Baltimore until late 1965. She worked there on a LINC provided by the Computer Systems Laboratory and is usually considered to be the first user of a personal computer in the home. By 1965 the LINC team had doubled the size of the LINC memory to 2048 12-bit words, which enabled Wilkes, working on the LINC at home, to develop the more sophisticated operating system, LAP6. LAP6 incorporated a scroll editing technique which made use of an algorithm proposed by her colleagues, Mishell J. Stucki and Severo M. Ornstein. LAP6, which has been described as "outstandingly well human engineered", provided the user the ability to prepare, edit, and manipulate documents (usually LINC programs) interactively in real time, using the LINC's keyboard and display, much like later personal computers. The LINC tapes performed the function of the scroll, and also provided interactive filing capabilities for documents and programs. Program documents could be converted to binary and run. Users could integrate their own programs with LAP6 using a link provided by the system, and swap the small LINC tapes around to share programs, an early "open source" capability. The Computer Systems Laboratory's next project, also headed by Clark, was the design of "Macromodules", computer building blocks. 
Wilkes designed the multiply macromodule, the most complex of the set. Law career Wilkes left the computer field in 1972 to attend the Harvard Law School. She practiced as a trial lawyer for many years, both in private practice and as head of the Economic Crime and Consumer Protection Division of the Middlesex County District Attorney's Office in Massachusetts. She taught in the Trial Advocacy Program at the Harvard Law School from 1983 to 2011, and sat as a judge for the school's first- and second-year Ames (moot court) competition for 18 years. In 2001 she became an arbitrator for the American Arbitration Association, sitting primarily on cases involving computer science and information technology. From 2005 through 2012, she served as a judge of the Annual Willem C. VIS International Commercial Arbitration Moot competition in Vienna, Austria, organized by Pace University Law School. Notability She is noted in the field of computer science for: Design of the interactive operating system LAP6 for the LINC, one of the earliest such systems for a personal computer. Being the first person to use a personal computer in the home. Her work has been recognized in Great Britain's National Museum of Computing's 2013 exhibition "Heroines of Computing" at Bletchley Park, and by the Heinz Nixdorf Museums Forum in Paderborn, Germany, in its 2015-16 exhibition, Am Anfang war Ada: Frauen in der Computergeschichte (In the beginning was Ada: Women in Computer History). Quotes "I'll bet you don't have a computer in your living room." "Doubling a 1024-word memory produces another small memory." (Preface, LAP6 Handbook.) "We had the quaint notion at the time that software should be completely, absolutely free of bugs. Unfortunately it's a notion that never really quite caught on." "To promise the System is a serious thing." (LAP6 Handbook, quoting Søren Kierkegaard, Philosophical Fragments.) Selected publications "LAP5: LINC Assembly Program", Proceedings of the DECUS Spring Symposium, Boston, May 1966. (LAP5 was the "Beta" version of LAP6.) LAP6 Handbook, Washington Univ. Computer Systems Laboratory Tech. Rept. No. 2, May 1967. Programming the Linc, Washington Univ. Computer Systems Laboratory, 2nd ed., January 1969, with W. A. Clark. "Conversational Access to a 2048-word Machine", Comm. of the ACM 13, 7, pp. 407–14, July 1970. (Description of LAP6.) "Scroll Editing: an on-line algorithm for manipulating long character strings", IEEE Trans. on Computers 19, 11, pp. 1009–15, November 1970. The Case for Copyright, Washington Univ. Computer Systems Laboratory Technical Memo., May 1971. "China Diary", Washington Univ. Magazine 43, 1, Fall 1972. Describes the trip six American computer scientists (and their wives, including Wilkes) made to China for 18 days in July 1972 at the invitation of the Chinese government to visit and give seminars to Chinese computer scientists in Canton, Shanghai, and Peking. References Living people American computer scientists Wellesley College alumni 1937 births American women computer scientists Harvard Law School alumni MIT Lincoln Laboratory people Scientists from Chicago
3855083
https://en.wikipedia.org/wiki/Self-service
Self-service
Self-service is the practice of serving oneself, usually when making purchases. Aside from Automatic Teller Machines, which are not limited to banks, and customer-operated supermarket check-out (labor-saving practices that have been described as self-sourcing), there is the latter's subset, selfsourcing, and a related pair of concepts: End-user development and End-user computing. Note has been made of how paid labor has been replaced with unpaid labor, and of how reduced professionalism and distractions from primary duties have reduced the value obtained from employees' time. Over a period of decades, laws have been passed both facilitating and preventing self-pumping of gas and other self-service. Overview Self-service is the practice of serving oneself, usually when purchasing items. Common examples include many gas stations, where the customer pumps their own gas rather than having an attendant do it (full service is required by law in New Jersey, urban parts of Oregon, most of Mexico, and Richmond, British Columbia, but is the exception rather than the rule elsewhere). Automatic Teller Machines (ATMs) in the banking world have also revolutionized how people withdraw and deposit funds; other examples are most stores in the Western world, where the customer uses a shopping cart, placing the items they want to buy into the cart and then proceeding to the checkout counter/aisles, and buffet-style restaurants, where the customer serves their own plate of food from a large, central selection. Patentable business method In 1917, the US Patent Office awarded Clarence Saunders a patent for a "self-serving store." Saunders invited his customers to collect the goods they wanted to buy from the store and present them to a cashier, rather than having the store employee consult a list presented by the customer, and collect the goods. Saunders licensed the business method to independent grocery stores; these operated under the name "Piggly Wiggly." Electronic commerce Self-service is offered over the phone, web, and email to facilitate customer service interactions using automation. Self-service software and self-service apps (for example online banking apps, web portals with shops, self-service check-in at the airport) are becoming increasingly common. Self-sourcing is a term describing informal and often unpaid labor that benefits the owner of the facility where it is done by replacing paid labor with unpaid labor. Selfsourcing (without a dash) is a subset thereof, and refers to developing computer software intended for use by the person doing the development. Both situations have aspects of self-service, and where permitted involve benefits to the person doing the work, such as job and personal satisfaction, even though tradeoffs are frequently involved, including long-term losses to the company. Doing someone else's job When a loan officer is asked to "self-source", they're taking on a responsibility that's not one of the top seven "Loan Officer Job Duties" listed by a major job placement service. A situation where no payment is made is self-service, such as airport check-in kiosks and checkout machines. International borders have also experimented with traveler-assisted fingerprint verification. Another situation is where a company's Human resources department is partially bypassed by departments that "source talent themselves." History of self-sourcing An early use of the term is a 2005 HRO Today article titled "Insourcing, Outsourcing? How about Self-sourcing?"
that mined Wikipedia's history of a pair of banks that merged decades ago to form Standard Chartered and, after September 11, rebuilt its personnel department in an innovative way. The concept is similar to self-service, and one USA example is pumping gas: New Jersey banned customers from doing this in 1949; now NJ is the only state "where drivers are not allowed to pump their own gasoline." Self-service In 1994 it was considered a radical change to propose permitting self-service at the gas pumps in Japan, and the New York Times reported that "the push .. (came) from Japanese big business ... trying to cut costs." Automatic Teller Machines are another example, and their expansion beyond banks has led to signs saying Access To Money, which refers to a company with that name; the technology began over half a century ago. Selfsourcing Selfsourcing is the internal development and support of IT systems by knowledge workers with minimal contribution from IT specialists, and has been described as essentially outsourcing development effort to the end user. At times they use in-house Data warehouse systems, which often run on mainframes. Various terms have been used to describe end user self service, when someone who is not a professional programmer programs, codes, scripts, writes macros, and in other ways uses a computer in a user-directed data processing accomplishment, such as End user computing and End user development. In the 1990s, Windows versions of mainframe packages were already available. Data sourcing When desktop personal computers became nearly as widely distributed as having a work phone, in companies having a data processing department, the PC was often unlinked to the corporate mainframe, and data was keyed in from printouts. Software was for do-it-yourself/selfsourcing, including spreadsheets and programs written in DOS-BASIC or, somewhat later, dBASE. Use of spreadsheets, the most popular End-user development tool, was estimated in 2005 to be done by 13 million American employees. Some data became siloed. Once terminal emulation arrived, more data was available, and it was more current. Techniques such as Screen scraping and FTP reduced rekeying. Mainframe products such as FOCUS were ported to the PC, and Business Intelligence (BI) software became more widespread. Companies large enough to have mainframes and use BI, having departments with analysts and other specialists, have people doing this work full-time. Selfsourcing, in such situations, takes people away from their main job (such as designing ads, creating surveys, planning advertising campaigns); pairs of people, one from an analysis group and another from a "user" group, are the way the company wants to operate. Selfsourcing is not viewed as an improvement. Data warehouse was an earlier term in this space. Issues It is crucial for the system's purposes and goals to be aligned with those of the organization. Developing a system that contradicts organizational goals will most likely lead to a reduction in sales and customer retention. As well, due to the large amount of time it may take for development, it is important to allocate time efficiently, as time is valuable. Knowledge workers must also determine what kind of external support they will require. In-house IT specialists can be a valuable commodity and are often included in the planning process. It is important to document how the system works, to ensure that if the developing knowledge workers move on, others can use it and even attempt to make needed updates.
Advantages Knowledge workers are often strongly aware of their immediate needs, and can avoid formalizations and time needed for "project cost/benefit analysis" and delays due to chargebacks. Additional benefits are: Improved requirement determination: Since knowledge workers are specifying what they themselves want, rather than relaying it to someone else, the need to explain requirements to an IT specialist is eliminated. There is a greater chance for user short-term satisfaction. Increased participation and sense of ownership: Pride and self-motivation add to the desire for completion, a sense of ownership and higher worker morale. Increased morale can be infectious and lead to great benefits in several other areas. Facilitates speed of systems development: Since step-by-step details preclude formal documentation, time and resources are concentrated, whereas working through an IT specialist's formal analysis would add delay. Selfsourcing is usually faster for smaller projects that do not require the full process of development. Disadvantages Inadequate expertise Many knowledge workers involved in selfsourcing do not have experience or expertise with IT tools, resulting in problems such as the following. Pride of ownership has been found to be a major cause of overlooking errors. A 1992 study showed that because Excel "tends to produce output even in the presence of errors" there is "user over-confidence in program correctness." Lost hours and potential: potentially good ideas are lost. These incomplete projects, after consuming many hours, often draw workers away from their primary duties. Lack of organizational focus: These often form a privatized IT system, with poor integration with corporate systems. Data silos may violate policy and even privacy/HIPAA laws. Uncontrolled and duplicate information can become stale, leading to more problems than benefits. Lack of design alternative analysis: Hardware and software opportunities are not analyzed sufficiently, and efficient alternatives may not be noticed and utilized. This can lead to inefficient and costly systems. Lack of security: End users, as a group, do not understand how to build secure applications. Lack of documentation: Knowledge workers may not have supervisors who are aware that, as time goes on, changes will be needed and these compartmentalized systems will require the help of IT specialists. Knowledge workers will usually lack experience with planning for these changes and the ability to adapt their work for the future. Shadow IT Although departmental computing has decades of history, one-person-show situations either suffer from inability to interact with a helpdesk or fail to benefit from wheels already invented. Self-service tools Although self-service tools are also used by professionals, among the basic members of various categories from a more detailed list of self-service tools are: simple office equipment - even in a "paperless office" individual office workers use scotch tape dispensers, staplers and staple-removers. The New York Times mentions their applicability to Home Office businesses. hand-operated tools - screwdrivers, pliers, wrenches, hammers, handsaws mechanized/power hand-held tools - power drill, power saw office software - Microsoft Word, PowerPoint, Excel and dBase (or Access) represent areas of functionality used for knowledge management, both in finding stored information and in entering new content. Versions of these exist both as locally stored (desktop computer) programs and as internet/cloud-based services.
Human resource departments have enabled Employee self-service, including providing employees with tools for skill building and career planning. See also Automated retail Automated teller machine Decision support system Expert system Insourcing Interactive kiosk Self checkout Shadow work Smart card Ticket machine Unmanned store Vending machine External links https://oemkiosks.com/?page=qsr Stephen Haag, Maeve Cummings, Donald McCubbrey, Alain Pinsonneault and Richard Donovan Third Canadian Edition Management Information Systems for the Information Age Mcgraw-Hill Ryerson, Canada, 2006 References Software distribution Information systems Decision support systems Retail formats Outsourcing Business terms
25018966
https://en.wikipedia.org/wiki/Epsychology
Epsychology
Epsychology is a form of psychological intervention delivered via information and communication technology. Epsychology interventions have most commonly been applied in areas of health; examples are depression, adherence to medication, and smoking cessation. Future applications of epsychology interventions are likely to become increasingly common in information, organization, and management sciences (e.g. organizational change, conflict management and negotiation skills). Recently, several meta-analyses have documented the effects of epsychology interventions. In general, it appears that intensive theory-based interventions that include multiple behaviour change techniques and modes of delivery (e.g. mobile phones and the Internet) are the most effective. More specifically, interventions based on the theory of planned behaviour and cognitive-behavioural therapy seem to provide the most promising results. These findings should, however, be interpreted with caution as many research articles fail to report the theoretical underpinnings of epsychology interventions adequately. Business and commercialization Lifestyle and non-communicable diseases, such as excessive alcohol consumption, depression, and physical inactivity, are the leading causes of morbidity and premature mortality. Thus, there is a great potential for utilizing epsychology to reach out and deliver prevention and treatment to the public by means of information technology. Information technology is highly scalable and, given the usage and population statistics on, for example, Internet technology, researchers argue that we simply cannot afford to ignore information technology as a viable approach to public health. Among the first companies to take advantage of the new technological opportunities combined with state-of-the-art psychological research were Health Media in the US (later acquired by Johnson & Johnson) and the privately held Changetech AS in Norway. Epsychology interventions are considered a supplement to existing treatments rather than a substitute, although such interventions can be used as a stand-alone treatment given that they are more cost-effective than standard treatment. Pharmaceutical companies Janssen-Cilag and Novartis were also early to offer patient-support programs that came with the patients' medication. The purpose of such programs is primarily to help patients take their medication as prescribed. A lack of medical compliance is a serious health problem even among patients diagnosed with severe and potentially fatal diseases such as cancer or HIV/AIDS. In fact, in one study about 70% of hospital visits for adverse drug reactions were caused by inadequate medical compliance. Although patient-support programs may lack a theoretical orientation, it is clear that they try to help patients manage an inherent psychological problem. See also eHealth Persuasive technology Psychology Cyberpsychology References Further reading Andersson, G., Bergström, J., Holländare, Carlbring, P., Kaldo, V. & Ekselius, L. (2005). "Internet-based self-help for depression: Randomised controlled trial". British Journal of Psychiatry, 187, 456–461. Brendryen, H. & Kraft, P. (2008). "Happy Ending: A randomized controlled trial of a digital multi-media smoking cessation intervention". Addiction, 103, 478–484. Chiauzzi, E., Green, T.C., Lord, S., Thum, C. & Goldstein, M. (2005). "My Student Body: A high-risk drinking prevention web site for college students". Journal of American College Health, 53, 263–274.
Christensen, H., Griffiths, K.M. & Jorm, A.F. (2004). "Delivering interventions for depression by using the Internet: Randomised controlled trial". British Medical Journal, 328, 265–270. Cox, D.J., Gonder-Frederick, L., Ritterband, L., Patel, K., Schächinger, H., et al., (2006). "Blood glucose awareness training: What is it, where is it, and where is it going?" Diabetes Spectrum, 19, 43–49. Etter, J.-F. (2005). "Comparing the efficacy of two Internet-based, computer-tailored smoking cessation programs: A randomized trial". Journal of Medical Internet Research, 7: e2. Hayward, L., MacGregor, A.D., Peck, D.F. & Wilkes, P. (2007). "The feasibility and effectiveness of computer-guided CBT (FearFighter) in a rural area". Behavioural and Cognitive Psychotherapy, 35, 409–419. Hester, R.K., Squires, D.D., & Delayne, H.D. (2005). "The Drinker's Check-up: 12-month outcomes of a controlled clinical trial of a stand-alone software program for problem drinkers". Journal of Substance Abuse Treatment, 28, 159–169. Hurling, R., Catt, M., Boni, M.D., Fairley, B.W., Hurst, T., Murray, P., Richardson, A. & Sodhi, J.S. (2007). "Using internet and mobile phone technology to deliver an automated physical activity program: Randomized controlled trial". Journal of Medical Internet Research, 9: e7. Klingberg, T., Fernell, E., Olesen, P.J., Johnson, M., Gustafsson, P., et al. (2005). "Computerized training of working memory in children with ADHD: A randomized, controlled trial". Journal of the American Academy of Child & Adolescent Psychiatry, 44, 177–186. Lange, A., van de Ven, J.-P. & Schrieken, B. (2003). "Interapy: Treatment of post-traumatic stress via the Internet". Cognitive Behaviour Therapy, 32, 110–124. Linke, S., Brown, A., & Wallace, P. (2004). "Down your drink: a web-based intervention for people with excessive alcohol consumption". Alcohol and Alcoholism, 39, 29–32. Steiner, J., Woodall, W.G., & Yeagley, J.A. (2005). The E-Chug: A randomized, controlled study of a web-based binge drinking intervention with college freshman. Poster Presentation, Society for Prevention Research. URL (12.11.2004): https://web.archive.org/web/20070306052319/http://www.e-chug.com/docs/SPR_2005.ppt#256,1,Slide 1. Swartz, L.H.G., Noell, J.W., Schroeder, S.W. & Ary, D.V. (2006). "A randomised control study of a fully automated internet based smoking cessation programme". Tobacco Control, 15, 7–12. Psychotherapies
445234
https://en.wikipedia.org/wiki/StepMania
StepMania
StepMania is a cross-platform rhythm video game and engine. It was originally developed as a clone of Konami's arcade game series Dance Dance Revolution, and has since evolved into an extensible rhythm game engine capable of supporting a variety of rhythm-based game types. Released under the MIT License, StepMania is open-source free software. Several video game series use StepMania as their game engine. This includes In the Groove, Pump It Up Pro, Pump It Up Infinity, and StepManiaX. StepMania was included in a video game exhibition at New York's Museum of the Moving Image in 2005. Development StepMania was originally developed as an open source clone of Konami's arcade game series Dance Dance Revolution (DDR). During the first three major versions, the interface was based heavily on DDR's. New versions were released relatively quickly at first, culminating in version 3.9 in 2005. In 2010, after almost 5 years of work without a stable release, StepMania creator Chris Danford forked a 2006 build of StepMania, paused development on the bleeding edge branch, and labeled the new branch StepMania 4 beta. A separate development team called the Spinal Shark Collective forked the bleeding edge branch and continued work on it, branding it sm-ssc. On 30 May 2011, sm-ssc gained official status and was renamed StepMania 5.0. Development of the next version, 5.1, has stalled in recent years after a couple of betas were released on GitHub. Project OutFox (formerly known as StepMania 5.3 and initially labeled FoxMania) is a currently closed-source fork of the 5.0 and 5.1 codebase. It was originally planned to be reintegrated into StepMania, but later in development it became an independent project because of its larger scope of goals, while still sharing codebase improvements with future versions of StepMania. These improvements include modernizing the original codebase to improve performance and graphical fidelity, refurbishing aspects of the engine that have been neglected, and improving and expanding its support for other game types and styles. Gameplay The primary game type features the following game play: as arrows scroll upwards on the screen, they meet a normally stationary set of target arrows. When they do, the player presses the corresponding arrows on their keyboard or dance mat. The moving arrows meet the targets based on the beat of the song. The game is scored based upon how accurately the player can trigger the arrows in time to the beat of the song. The player's efforts are given a letter grade and a number score that tell how well they have done. An award of AAA+ (triple A plus, formerly AAAA or quadruple A) is the highest possible award available on a standard installation and indicates that a player has triggered all arrows with "Flawless" timing (within 0.0225 seconds under official settings), avoided all mines and completed all hold (freeze) arrows. An E indicates that the player failed to survive the length of the song without completely draining their life gauge. Default scoring and grading for StepMania is similar to scoring in Dance Dance Revolution; however, timing and scoring settings can easily be changed. During a song, if the player successfully triggers all arrows with "great" or better timing, the player will receive the message "Full combo" alongside their grade.
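The timing-window judgment described above can be illustrated with a short sketch. The code below is only an illustration under stated assumptions, not StepMania's actual implementation: the 0.0225-second "Flawless" window is the figure quoted in the text, while the remaining window values, the judgment names and the judge_tap function itself are hypothetical examples chosen for clarity.

```python
# Illustrative sketch of timing-window judgment; not StepMania's actual code.
# Only the 0.0225 s "Flawless" window comes from the text above; the other
# windows and names below are hypothetical example values.

JUDGMENT_WINDOWS = [
    ("Flawless", 0.0225),  # official "Flawless" window quoted in the text
    ("Perfect", 0.045),    # hypothetical
    ("Great", 0.090),      # hypothetical
    ("Good", 0.135),       # hypothetical
]

def judge_tap(note_time: float, press_time: float) -> str:
    """Return the judgment for one tap, given the chart's note time and the
    moment the player hit the corresponding panel (both in seconds)."""
    error = abs(press_time - note_time)
    for name, window in JUDGMENT_WINDOWS:
        if error <= window:
            return name
    return "Miss"  # outside every window

# A press 18 ms early still lands inside the "Flawless" window.
print(judge_tap(note_time=10.000, press_time=9.982))  # Flawless
```

A real engine additionally tracks combos, hold and roll states and life-gauge changes per judgment, and, as noted above, allows the timing and scoring settings themselves to be changed.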
Players can also achieve a "Full perfect combo" for completing a song with all arrows triggered with perfect timing or better, and a "Full flawless combo" if all arrows are triggered with "flawless" timing. StepMania allows for several input options. Specialized adapters that connect console peripherals like PS2 and Xbox controllers or dance pads to one's computer can be used. Alternatively, the keyboard can be used to tap out the rhythms using arrow or other keys. Many song charts designed for keyboard are unable to be passed using a pad. In addition, the game possesses the capability to emulate other music games, such as Beatmania itself, o2Jam and DJMax's 7-key arrangement, Pump It Up and TechnoMotion - however, scoring remains similar to old DDR-style play by default (i.e. more weight is given to later notes). Features Custom Songs ("Stepfiles"), also known as "Simfiles": StepMania allows users to create their own custom dance patterns for any song in .ogg or .mp3 format. The program includes a comprehensive Step editor to aid the creation of these stepfiles. Many Simfile websites exist where users share and distribute Simfiles for songs. Additionally, official DDR and In The Groove songs with their original steps are commonly available for StepMania. Background animations: Support for many types of animations behind the arrows onscreen, including sprite-based animation sequences, a single full-motion video or multiple FMV visualization overlays, though these are disabled if the song contains an exclusive video. Modifiers: Visual mods that affect the scroll of arrows and either increase or decrease difficulty. StepMania includes multiple modifiers featured in Dance Dance Revolution as well as dozens of additional modifiers created exclusively for StepMania, including custom SPEED options. Multiple arrow types: Mines ("Shock" arrows in DDR X): An object that scrolls onto the screen along with the arrows. If a player triggers a mine, they are penalized by having their dance gauge reduced and, depending on the theme, the current combo chain that the player had going is broken. However, the mines in StepMania are different from the Shock Arrows in DDR X in that the latter also turn the notes invisible for a brief period of time and break the current combo chain that the player had going. This step type was developed for the StepMania-based arcade game In The Groove, and was ported into StepMania itself during development of that title. There are several variations of these objects that affect scoring in different ways. Holds (also called Freeze Arrows): A long arrow that requires the player to keep their foot or finger on the corresponding panel for its duration. Rolls: A special hold arrow which requires rapid tapping to keep alive. This step type was developed for the sequel to In The Groove, In the Groove 2. Lift: a special type of arrow (colored gray by default) which requires the key (or panel) to be held down before the note passes and released when the note passes the target arrows. This is different from freeze arrows in that the timing of the press is not important, only when the note is released. Multiple game types, including partial simulation of other rhythm games like Pump It Up, ParaParaParadise and beatmania IIDX. Real-time lyrics, which display on the opposite side of the screen for stepfiles that have accompanying lyric data. Custom themes: users can create their own skins for StepMania.
Themes can vary from simple replacement of images to drastic changes that can be implemented by scripting its Lua backend. Dancing characters: 2-dimensional and 3-dimensional character models that dance in the background according to a pre-defined routine. Infinite BPMs: an official implementation in StepMania 4 of a bug in the 3.9 series that could be exploited to create "warps" in stepcharts using negative speeds. Network play: support for lobby-based online play, dubbed StepMania Online. Typically, users connect through the StepMania Online centralized server. Support for network play was added to the StepMania tree in 2005 and is available in all later builds. All players must have a copy of the song chosen by the host in order to play. Availability Some versions of StepMania will run on most common operating systems (Microsoft Windows 98/Me/2000/XP/Vista/7/8, Linux, FreeBSD, Mac OS X), as well as the Xbox console. It has also been used as the base engine in a variety of free software and proprietary products for various platforms. Use in products Several StepMania-based commercial games have been released due to its open nature: In the Groove (ITG) is an arcade dance game series developed by the core StepMania developers, and is based on 3.9 and a CVS build of StepMania often known as version 3.95. To prevent unauthorized copying, StepMania was re-licensed under a more permissive license (changed from GPL to the MIT License with the agreement of all coders, in exchange for their names appearing on the ITG credits screen), not requiring source code to be published for derivative works, and thus allowing ITG's copy control to remain proprietary and closed source. Pump It Up Pro is a spinoff of the Pump it Up series headed by former ITG developers and musicians. The game utilizes a build of StepMania 4 for its engine, which also led to improved Pump support in StepMania itself. Pump It Up Infinity is another spinoff of the Pump it Up series aimed primarily at North American audiences. Unlike the Pro series, however, it is managed directly by Andamiro. The game is based on StepMania 5. StepManiaX is a spiritual successor to In The Groove, with the addition of the Center panel and other features. StepMix The StepMania developers conducted the StepMix contest for step builders to create stepcharts/stepfiles that can be played using StepMania. StepMix 1, 2, 3, and 4 were run successfully. Participants needed to have a song to be used in the stepchart/stepfile. The song had to be under a compatible license for distribution or be authorized for use in StepMix 4, or the entry was automatically disqualified. Additionally, if the graphics used in an entry were found to have been copied from another artist and used without their authorization (as happened once in StepMix 2), the entry could be disqualified. The scoring was determined by the overall quality of the song, steps and graphics. Reception StepMania became quite a popular free software game; it was downloaded over 6.3 million times from SourceForge alone between 2001 and May 2017. See also List of open source games Project OutFox Frets on Fire osu! UltraStar References External links Cross-platform software Dance video games Fangames Free game engines Free software programmed in C++ Open-source video games Linux games Lua (programming language)-scripted video games MacOS games Music video games Software using the MIT license Video games developed in the United States Video games with custom soundtrack support Windows games
1776684
https://en.wikipedia.org/wiki/System%20Packet%20Interface
System Packet Interface
The System Packet Interface (SPI) family of Interoperability Agreements from the Optical Internetworking Forum specifies chip-to-chip, channelized, packet interfaces commonly used in synchronous optical networking and Ethernet applications. A typical application of such a packet level interface is between a framer (for optical network) or a MAC (for IP network) and a network processor. Another application of this interface might be between a packet processor ASIC and a traffic manager device. Context There are two broad categories of chip-to-chip interfaces. The first, exemplified by PCI-Express and HyperTransport, supports reads and writes of memory addresses. The second broad category carries user packets over 1 or more channels and is exemplified by the IEEE 802.3 family of Media Independent Interfaces and the Optical Internetworking Forum family of System Packet Interfaces. Of these last two, the family of System Packet Interfaces is optimized to carry user packets from many channels. The family of System Packet Interfaces is the most important packet-oriented, chip-to-chip interface family used between devices in Packet over SONET and the Optical Transport Network, which are the principal protocols used to carry the internet between cities. Specifications The agreements are: SPI-3 – Packet Interface for Physical and Link Layers for OC-48 (2.488 Gbit/s) SPI-4.1 – System Physical Interface Level 4 (SPI-4) Phase 1: A System Interface for Interconnection Between Physical and Link Layer, or Peer-to-Peer Entities Operating at an OC-192 Rate (10 Gbit/s). SPI-4.2 – System Packet Interface Level 4 (SPI-4) Phase 2: OC-192 System Interface for Physical and Link Layer Devices. SPI-5 – Packet Interface for Physical and Link Layers for OC-768 (40 Gbit/s) SPI-S – Scalable System Packet Interface - useful for interfaces starting with OC-48 and scaling into the Terabit range History of the specifications These agreements grew out of the donation to the OIF by PMC-Sierra of the POS-PHY interface definitions PL-3 and PL-4, which themselves came from the ATM Forum's Utopia definitions. These earlier definitions included: Utopia Level 1, an 8 bit, 25 MHz interface supporting OC-3 and slower links (or multiple links aggregating to less than 200 Mbit/s). Utopia Level 2, a 16 bit, 50 MHz interface supporting OC-12 or multiple links aggregating to less than 800 Mbit/s. System Packet Interface, or SPI as it is widely known, is a protocol for packet and cell transfers between PHY and LINK layer devices in multi-gigabit applications. The protocol was developed by the Optical Internetworking Forum (OIF) and became one of the more important integration standards in telecommunications and data networking. Devices implementing SPI are typically specified with line rates of 700–800 Mbit/s and in some cases up to 1 Gbit/s. The latest version, SPI 4 Phase 2, also known as SPI 4.2, delivers bandwidth of up to 16 Gbit/s over a 16-bit interface. The Interlaken protocol, a close variant of SPI-5, replaced the System Packet Interface in the marketplace. Technical details SPI 4.2 The SPI 4.2 interface is composed of high speed clock, control, and data lines and lower speed FIFO buffer status lines. The high speed data lines include a 16-bit data bus, a 1-bit control line and a double data rate (DDR) clock. The clock can run up to 500 MHz, supporting up to 1 GigaTransfer per second. The FIFO buffer status portion consists of a 2-bit status channel and a clock.
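The headline figures quoted in this section fit together with simple arithmetic: a 500 MHz clock sampled on both edges (double data rate) yields one gigatransfer per second, and each transfer carries 16 bits, which is where the 16 Gbit/s aggregate bandwidth number comes from. The snippet below is only a back-of-the-envelope check using the values from the text; it says nothing about usable throughput once control words and protocol overhead are accounted for.

```python
# Back-of-the-envelope check of the SPI-4.2 figures quoted above.
clock_hz = 500e6        # maximum interface clock rate from the text
edges_per_cycle = 2     # DDR: data is transferred on both clock edges
bus_width_bits = 16     # 16-bit data bus

transfers_per_second = clock_hz * edges_per_cycle          # 1 GT/s
raw_bandwidth_bps = transfers_per_second * bus_width_bits  # 16 Gbit/s

print(f"{transfers_per_second / 1e9:.1f} GT/s")
print(f"{raw_bandwidth_bps / 1e9:.1f} Gbit/s raw bandwidth")
```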
SPI 4.2 supports a data width of 16 bits and can be used for PHY-link, link-link, link-PHY or PHY-PHY connections. The SPI 4.2 interface supports up to 256 port addresses with independent flow control for each. To ensure optimal use of the rx/tx buffers in devices connected over an SPI interface, the RBUF/TBUF element size in those devices should match the SPI-4.2 data burst size. See also SerDes Framer Interface Common Electrical I/O Interlaken References Network protocols
19667085
https://en.wikipedia.org/wiki/Abas%20%28mythology%29
Abas (mythology)
In Greek mythology, the name Abas (; Ancient Greek: Ἄβας; gen.: Ἄβαντος means "guileless" or "good-hearted") is attributed to several individuals: Abas, king of Argos. Abas, son of Poseidon and Arethusa. A Thracian by birth, Abas founded a tribe known as the Abantians or Abantes. Abas and his Abantian followers migrated to the island of Euboea, where he subsequently reigned as king. He was father of Canethus and Chalcodon, and through the latter grandfather of Elephenor, who is known to have accidentally killed him. In some accounts, Abas was also called the father of Canthus (alternatively the son of Canethus and thus, his grandson). Abas the father of Alcon, Dias, and Arethusa. His son Dias was said to be the founder of the city of Athens in Euboea, naming it after his fatherland. Abas, son of Metaneira who was changed by Demeter into a lizard, because he mocked the goddess when she had come on her wanderings into the house of his mother, and drank eagerly to quench her thirst. Other traditions relate the same story of a boy, Ascalabus, and call his mother Misme. Abas, an Argive seer, son of Melampus and Iphianeira. He was the father of Coeranus, Idmon, and Lysimache. Abas, companion of Perseus. Abas, a Centaur who attended the wedding of Pirithous and Hippodamia. Abas, defender of Thebes in the war of the Seven against Thebes. He and his sons Cydon and Argus were killed in the battle. Abas, a Theban charioteer during the war of the Seven against Thebes. At the beginning of the battle, he is pierced by Pheres with a spear and left groaning for his life. Abas, son of the Trojan Eurydamas and brother of Polyidus; he fought in the Trojan War and was killed by Diomedes. Abas, servant of King Lycomedes on the island of Scyros. His job was to keep an eye on shipping traffic from the watchtower and to report directly to the king whether ships arrive at the port. When Odysseus came to the island with his ship to persuade Achilles, who was concealed as a girl, to take part in the War against Troy, the dutiful Abas was the first to report to the king that unknown sails were approaching the coast. Abas, another defender of Troy, was killed by Sthenelus. Abas, one of Diomedes' companions, whom Aphrodite turned into a swan. In the Aeneid, the name Abas belongs to two companions of Aeneas: Abas, captain whose ship was routed in the storm off Carthage. Abas, an Etruscan ally from Populonia in the war against the Rutulians and the Latians. He was later on killed by Lausus, the man who led one thousand soldiers from the town of Agylla. Notes References Antoninus Liberalis, The Metamorphoses of Antoninus Liberalis translated by Francis Celoria (Routledge 1992). Online version at the Topos Text Project. Apollodorus, The Library with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. ISBN 0-674-99135-4. Online version at the Perseus Digital Library. Greek text available from the same website. Apollonius Rhodius, Argonautica translated by Robert Cooper Seaton (1853–1915), R. C. Loeb Classical Library Volume 001. London, William Heinemann Ltd, 1912. Online version at the Topos Text Project. Apollonius Rhodius, Argonautica. George W. Mooney. London. Longmans, Green. 1912. Greek text available at the Perseus Digital Library. Gaius Julius Hyginus, Fabulae from The Myths of Hyginus translated and edited by Mary Grant. University of Kansas Publications in Humanistic Studies. 
Online version at the Topos Text Project. Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. . Online version at the Perseus Digital Library. Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. . Greek text available at the Perseus Digital Library. Parada, Carlos, Genealogical Guide to Greek Mythology, Jonsered, Paul Åströms Förlag, 1993. . Pausanias, Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. . Online version at the Perseus Digital Library Pausanias, Graeciae Descriptio. 3 vols. Leipzig, Teubner. 1903. Greek text available at the Perseus Digital Library Publius Ovidius Naso, Metamorphoses translated by Brookes More (1859–1942). Boston, Cornhill Publishing Co. 1922. Online version at the Perseus Digital Library. Publius Ovidius Naso, Metamorphoses. Hugo Magnus. Gotha (Germany). Friedr. Andr. Perthes. 1892. Latin text available at the Perseus Digital Library. Publius Papinius Statius, The Achilleid translated by Mozley, J H. Loeb Classical Library Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1928. Online version at the theoi.com Publius Papinius Statius, The Achilleid. Vol. II. John Henry Mozley. London: William Heinemann; New York: G.P. Putnam's Sons. 1928. Latin text available at the Perseus Digital Library. Publius Papinius Statius, The Thebaid translated by John Henry Mozley. Loeb Classical Library Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1928. Online version at the Topos Text Project. Publius Papinius Statius, The Thebaid. Vol I-II. John Henry Mozley. London: William Heinemann; New York: G.P. Putnam's Sons. 1928. Latin text available at the Perseus Digital Library. Publius Vergilius Maro, Aeneid. Theodore C. Williams. trans. Boston. Houghton Mifflin Co. 1910. Online version at the Perseus Digital Library. Publius Vergilius Maro, Bucolics, Aeneid, and Georgics. J. B. Greenough. Boston. Ginn & Co. 1900. Latin text available at the Perseus Digital Library. Quintus Smyrnaeus, The Fall of Troy translated by Way. A. S. Loeb Classical Library Volume 19. London: William Heinemann, 1913. Online version at theio.com Quintus Smyrnaeus, The Fall of Troy. Arthur S. Way. London: William Heinemann; New York: G.P. Putnam's Sons. 1913. Greek text available at the Perseus Digital Library. Stephanus of Byzantium, Stephani Byzantii Ethnicorum quae supersunt, edited by August Meineike (1790-1870), published 1849. A few entries from this important ancient handbook of place names have been translated by Brady Kiesling. Online version at the Topos Text Project. Children of Poseidon Metamorphoses into animals in Greek mythology Trojans Kings in Greek mythology Centaurs Argive characters in Greek mythology Euboean characters in Greek mythology Characters in Greek mythology
2907443
https://en.wikipedia.org/wiki/Interactive%20Systems%20Corporation
Interactive Systems Corporation
Interactive Systems Corporation (styled INTERACTIVE Systems Corporation, abbreviated ISC) was a US-based software company and the first vendor of the Unix operating system outside AT&T, operating from Santa Monica, California. It was founded in 1977 by Peter G. Weiner, a RAND Corporation researcher who had previously founded the Yale University computer science department and had been the Ph. D. advisor to Brian Kernighan, one of Unix's developers at AT&T. Weiner was joined by Heinz Lycklama, also a veteran of AT&T and previously the author of a Version 6 Unix port to the LSI-11 computer. ISC was acquired by the Eastman Kodak Company in 1988, which sold its ISC Unix operating system assets to Sun Microsystems on September 26, 1991. Kodak sold the remaining parts of ISC to SHL Systemhouse Inc in 1993. Several former ISC staff founded Segue Software which partnered with Lotus Development to develop the Unix version of Lotus 1-2-3 and with Peter Norton Computing to develop the Unix version of the Norton Utilities. Products ISC's 1977 offering, IS/1, was a Version 6 Unix variant enhanced for office automation running on the PDP-11. IS/3 and IS/5 were enhanced versions of Unix System III and System V for PDP-11 and VAX. ISC Unix ports to the IBM PC included a variant of System III, developed under contract to IBM, known as PC/IX (Personal Computer Interactive eXecutive, also abbreviated PC-IX), with later versions branded 386/ix and finally INTERACTIVE UNIX System V/386 (based on System V Release 3.2). ISC was AT&T's "Principal Publisher" for System V.4 on the Intel platform. ISC was also involved in the development of VM/IX (Unix as a guest OS in VM/370) and enhancements to IX/370 (a TSS/370-based Unix system that IBM originally developed jointly with AT&T ca. 1980). They also developed the AIX 1.0 (Advanced Interactive eXecutive) for the IBM RT PC, again under contract to IBM, although IBM awarded the development contract for AIX version 2 of AIX/386 and AIX/370 to the competing Locus Computing Corporation. PC/IX Although observers in the early 1980s expected that IBM would choose Microsoft Xenix or a version from AT&T Corporation as the Unix for its microcomputer, PC/IX was the first Unix implementation for the IBM PC XT available directly from IBM. According to Bob Blake, the PC/IX product manager for IBM, their "primary objective was to make a credible Unix system - [...] not try to 'IBM-ize' the product. PC-IX is System III Unix." PC/IX was not, however, the first Unix port to the XT: Venix/86 preceded PC/IX by about a year, although it was based on the older Version 7 Unix. The main addition to PC/IX was the INed screen editor from ISC. INed offered multiple windows and context-sensitive help, paragraph justification and margin changes, although it was not a fully fledged word processor. PC/IX omitted the System III FORTRAN compiler and the tar file archiver, and did not add BSD tools like vi or the C shell. One reason for not porting these was that in PC/IX, individual applications were limited to a single segment of 64 kB of RAM. To achieve good filesystem performance, PC/IX addressed the XT hard drive directly, rather than doing this through the BIOS, which gave it a significant speed advantage compared to MS-DOS. Because of the lack of true memory protection in the 8088 chips, IBM only sold single-user licenses for PC/IX. The PC/IX distribution came on 19 floppy disks and was accompanied by a 1,800-page manual. Installed, PC/IX took approximately 4.5 MB of disk space. 
An editorial by Bill Machrone in PC Magazine at the time of PC/IX's launch flagged the $900 price as a show stopper given its lack of compatibility with MS-DOS applications. PC/IX was not a commercial success although BYTE in August 1984 described it as "a complete, usable single-user implementation that does what can be done with the 8088", noting that PC/IX on the PC outperformed Venix on the PDP-11/23. INTERACTIVE UNIX System PC/IX was succeeded by 386/ix in 1985, a System VR3 derivative. Later versions were termed INTERACTIVE UNIX System V/386 and based on System V 3.2, though with elements of BSD added. Its SVR3.2 kernel meant diminished compatibility with other Unix ports in the early nineties, but the INTERACTIVE UNIX System was praised by a PC Magazine reviewer for its stability. After its acquisition of Interactive, Sun Microsystems continued to maintain INTERACTIVE UNIX System, offering it as a low-end alternative to its System V.4-based Solaris, even when the latter had been ported to x86-based desktop machines. The last version was "System V/386 Release 3.2 Version 4.1.1", released in July 1998. Official support ended on July 23, 2006, five years after Sun withdrew the product from sale. Until version ISA 3.0.1, INTERACTIVE UNIX System supported only 16 MB of RAM. In the next versions, it supported 256 MB RAM and the PCI bus. EISA versions always supported 256 MB RAM. See also Coherent (operating system) Notes References Further reading Covers and compares PC/IX, Xenix, and Venix. Maurice J. Bach, The Design of the UNIX Operating System, , Prentice Hall, 1986. IBM has snubbed both Microsoft's multimillion dollar investment in Xenix and AT&T's determination to establish System V as the dominant version on Unix. (InfoWorld 20 Feb 1984) IBM's latest hot potato (PC Mag 20 Mar 1984) External links Interactive Unix Documentation Defunct software companies of the United States Unix history Unix variants
3116572
https://en.wikipedia.org/wiki/Non-finite%20clause
Non-finite clause
In linguistics, a non-finite clause is a dependent or embedded clause that represents a state or event in the same way no matter whether it takes place before, during, or after text production. In this sense, a non-finite dependent clause represents one process as a circumstance for another without specifying the time when it takes place as in the following examples: Non-Finite Dependent Clauses I'm going to Broadway to watch a play. I went to Broadway to watch a play. Finite Dependent Clauses I'm going to Broadway so I can watch a play. I went to Broadway so I could watch a play. Similarly, a non-finite embedded clause represents a qualification for something that is being represented as in the following examples: Non-Finite Embedded Clauses I'm on a street called Bellevue Avenue. I was on a street called Bellevue Avenue. Finite Embedded Clauses I'm on a street that is called Bellevue Avenue. I'm on a street that used to be called Bellevue Avenue. I was on a street that is called Bellevue Avenue. I was on a street that used to be called Bellevue Avenue. In meaning-independent descriptions of language, a non-finite clause is a clause whose verbal chain is non-finite; for example, using Priscian's categories for Latin verb forms, in many languages we find texts with non-finite clauses containing infinitives, participles and gerunds. In such accounts, a non-finite clause usually serves a grammatical role – commonly that of a noun, adjective, or adverb – in a greater clause that contains it. Structure A typical finite clause consists of a finite form of the verb together with its objects and other dependents (i.e. a verb phrase or predicate), along with its subject (although in certain cases the subject is not expressed). A non-finite clause is similar, except that the verb must be in a non-finite form (such as an infinitive, participle, gerund or gerundive), and it is consequently much more likely that there will be no subject expressed, i.e. that the clause will consist of a (non-finite) verb phrase on its own. Some examples are given below. Finite clauses Kids play on computers. (an independent clause) I know that kids play on computers. (a dependent (subordinate) clause, but still finite) Play on your computer! (an imperative sentence, an example of an independent finite clause lacking a subject) Non-finite clauses Kids like to play on computers. (an infinitival clause using the English to-infinitive) It's easy for kids to play on computers. (an infinitival clause containing periphrastic expression of the subject) Playing on computers, they whiled the day away. (a participial clause, using a present participle) With the kids playing on their computers, we were able to enjoy some time alone. (a participial clause with a subject) Having played on computers all day, they were pale and hungry. (a participial clause using a past participle) Playing on computers is fun. (a gerund-participial clause) … he be playing on computers all the time. (a gerund-participial subjunctive clause) Some types of non-finite clause have zero in one of the object or complement positions; the gap is usually understood to be filled by a noun from the larger clause in which the non-zero clause appears (as is the subject "gap" in most non-finite clauses). These clauses are also called hollow non-finite clauses. Some examples: He is the man to beat. (infinitival clause with zero object; the man is understood as the object) That car wants looking at straight away. 
(gerund-participial clause with zero preposition complement after at) The building was given a new lease of life. (past-participial clause with zero indirect object) For more examples of such constructions in English, see English passive voice and . Use As a dependent clause, a non-finite clause plays some kind of grammatical role within a larger clause that contains it. What this role can be, and what the consequent meaning is, depends on the type of non-finite verb involved, the constructions allowed by the grammar of the language in question, and the meanings of those constructions in that language. Some examples are noted below: To live is to suffer in silence. (infinitival clauses used as subject and object) We went there to collect our computers. (infinitival clause used as an adverbial of purpose) They were sitting quietly. (participial clause used as verb complement to express progressive aspect) The man sitting quietly is the man to watch. (participial clause used as noun modifier) Well beaten, we slumped back to the dressing room. (participial clause used as nominative absolute) I like rescuing wasps. (gerund-participial clause used as a noun phrase) Carthago delenda est ("Carthage must be destroyed"; Latin gerundive used as a predicative expression) Different traditions According to Priscian, delenda is a participle because it agrees in number, case, and gender with a noun, namely Carthago, the subject. In Priscian's theory of POS, words are classified according to the inflectional paradigms that are created independent of the grammatical context the word is in. A misapplication of Priscian's verb categories for the modern notion of non-finite clause might thus result on the recognition of clauses where there are none. In linguistics, both Generative Theory and Systemic Functional Theory of Language do not support analyses of Carthago delenda est in the way it is proposed above. For instance, the French active non-finite verbs sorti(e) and entré(e) as in il est sorti/entré and elle est sortie/entrée agree in number and gender with the subject in the same way as delenda does, but these words are not considered a non-finite sentence in Generative Theory nor a non-finite clause in Systemic Functional Theory on their own. In the example Carthago delenda est/Carthago must be destroyed, the verb est is a modal voice auxiliary because it functions both as modal and as voice. A syntactic tree for the clause Carthago must be destroyed is shown below: For more details of the use of such clauses in English, see , and English passive voice. See also English clause syntax Supine Verbal noun Balancing and deranking References Syntactic entities Grammatical construction types
50644682
https://en.wikipedia.org/wiki/Toast%2C%20Inc.
Toast, Inc.
Toast, Inc. is a cloud-based restaurant software company based in Boston, Massachusetts. The company provides a restaurant management and point of sale (POS) system built on the Android operating system. Toast was founded in Cambridge, Massachusetts, in 2012 by Steve Fredette, Aman Narang, and Jonathan Grimm. In February 2020, Toast received $400 million in a round of Series F funding including Bessemer Venture Partners and TPG, at a valuation of $4.9 billion. Toast is used in thousands of restaurants, from small cafes to nationwide enterprises, across the U.S. History Toast founders Steve Fredette, Aman Narang, and Jonathan Grimm met at software company Endeca after graduating from MIT in Cambridge, MA. After Endeca was acquired by Oracle in 2011, they left to start Toast. Grimm, Narang, and Fredette initially created a consumer app centered for mobile payments, customer loyalty, promotions, and social aspects that integrated with restaurants’ existing POS systems. The company concluded that restaurants had uses for a mobile platform beyond being able to accept payments using a phone or tablet. The company pivoted to a full restaurant technology platform, with applications for POS, online ordering, gift cards, analytics and other features. Two years later, the company signed over 1,000 merchants across the United States and grew to more than 120 employees, requiring the company to upgrade its space from 7,000 square feet in Cambridge to 40,000 square feet at the Hatch Fenway co-working space. In 2013, Toast received investments from several Boston executives, including former CEO of Endeca Steve Papa, who now serves as a board member for the company. In early 2015, Toast announced Chris Comparato as its chief executive officer. Comparato previously served as senior vice president of "customer success" at Acquia and senior vice president of "worldwide customer solutions" at Endeca. In August, Toast announced that they signed 1,000 customers in under two years, revealing 550% year-over-year growth. In November, Toast released its first restaurant report, which analyzed the market for restaurant technology as well as restaurant owners’ challenges and preferred technology features. In January 2016, Toast received $30 million in a round of Series B funding from Bessemer Venture Partners and GV, previously known as Google Ventures and an Alphabet company. The funding was led by BVP partner Kent Bennett and general partner and Android co-founder Rich Miner. Several undisclosed investors also participated in the round. In July 2017, Toast raised $101 million in a Series C round of funding that was led by Lead Edge Capital and Generation Investment Management. In February 2018, Needham-based TripAdvisor partnered with Toast Inc. In July, 2018 the company became a unicorn startup company after a Series D round of financing raising $115 million, and valuing the company at $1.4 billion. In the first quarter of 2019, the company raised $250 million based on a valuation of $2.7 billion. The equity raise was led by TCV and Tiger Global Management. In July 2019, Toast acquired the Chicago-based human resources and payroll software company StratEx. The company also said they would add an additional 1,000 employees by the end of 2019. In February 2020, Toast announced a $400 million Series F funding round at a $4.9 billion valuation, led by Bessemer Venture Partners, TPG, Greenoaks Capital, and Tiger Global Management. 
In April 2020, Toast laid off 50% of its workforce due to the COVID-19 pandemic and its economic impact on the restaurant industry. In November 2020, Toast had a secondary sale that valued the company at around $8 billion, despite having laid off half of its employees in April. On September 22, 2021, Toast went public with an initial public offering under the stock symbol TOST. The company offered shares at $40 initially, with a market capitalization of roughly $20 billion, making it one of 2021's largest American IPOs. Products Toast's restaurant management system operates on the Android operating system. The backend can be managed via mobile device or via Web browser in real time. Associated hardware includes a receipt printer, cash drawer, kitchen display screen, and magstripe card reader. Recognition In 2015, Merchant Maverick named Toast one of the best POS systems for pizza parlors. In 2016, Toast was recognized as a "notable player in the restaurant technology landscape" by PYMNTS.com. In May 2016, the New England Venture Capital Association (NEVCA) named Toast the winner of the Hottest Startup: A+ at the 2016 NEVY awards. See also Point of Sale References Companies based in Boston Companies listed on the New York Stock Exchange Payment systems Point of sale companies Android (operating system) software Mobile technology Business software Cloud computing providers Technology companies of the United States Customer loyalty programs
4668411
https://en.wikipedia.org/wiki/Previsualization
Previsualization
Previsualization (also known as previs, previz, pre-rendering, preview or wireframe windows) is the visualizing of complex scenes in a movie before filming. It is also a concept in still photography. The term also describes techniques such as storyboarding, whether in the form of charcoal sketches or digital technology, used in the planning and conceptualization of movie scenes. Description The advantage of previsualization is that it allows a director, cinematographer or VFX Supervisor to experiment with different staging and art direction options—such as lighting, camera placement and movement, stage direction and editing—without having to incur the costs of actual production. On larger-budget projects, directors work with actors in the visual effects department or in dedicated rooms. Previsualizations can add music, sound effects and dialogue to closely emulate the look of fully produced and edited sequences, and are most often encountered in scenes that involve stunts and special effects (such as chroma key). Previsualization typically combines digital video, photography, hand-drawn art, clip art and 3D animation. Origins Visualization is a central topic in Ansel Adams' writings about photography, where he defines it as "the ability to anticipate a finished image before making the exposure". The term previsualization has been attributed to Minor White, who divided visualization into previsualization, referring to visualization while studying the subject, and postvisualization, referring to remembering the visualized image at printing time. However, White himself said that he learned the idea, which he called a "psychological concept", from Ansel Adams and Edward Weston. Storyboards, the earliest planning technique, have been used in one form or another since the silent era. The term “storyboard” first came into use at Disney Studios between 1928 and the early 1930s, where the typical practice was to present drawn panels of basic action and gags, usually three to six sketches per vertical page. By the 1930s, storyboarding for live action films was common and a regular part of studio art departments. Disney Studios also created what became known as the Leica reel by filming storyboards and editing them to a soundtrack of the completed film. This technique was essentially the predecessor of modern computer previsualization. Other prototyping techniques used in the 1930s were miniature sets often viewed with a “periscope”, a small optical device with deep depth of field that a director could insert into a miniature set to explore camera angles. Set designers were also using a scenic technique called camera angle projection to create perspective drawings from a plan and elevation blueprint. This allowed them to accurately depict the set as seen by a lens of a specific focal length and film format. In the 1970s, with the arrival of cost-effective video cameras and editing equipment, most notably Sony's ¾-inch video and U-Matic editing systems, animatics came into regular use at ad agencies as a sales tool for television commercials and as a guide to the actual production of the work. An animatic is a video recorded version of a hand-drawn storyboard with very limited motion added to convey camera movement or action, accompanied by a soundtrack. Similar to the Leica reel, animatics were primarily used for live action commercials. The making of the first three Star Wars films, beginning in the mid-'70s, introduced low-cost innovations in pre-planning to refine complex visual effects sequences.
George Lucas, working with visual effects artists from the newly established Industrial Light & Magic, used footage of aerial dogfights shots from World War II Hollywood movies to cut together a template for the X-wing space battles in the first Star Wars film. Another innovation included shooting video of toy figures attached to rods; these were hand-manipulated in a miniature set to previsualize the chase through the forest on speeder bikes in Return of the Jedi. The most comprehensive and revolutionary use of new technology to plan movie sequences came from Francis Ford Coppola, who in making his 1982 musical feature One From the Heart, developed the process he called “electronic cinema”. Through electronic cinema Coppola sought to provide the filmmaker with on-set composing tools that would function as an extension of his thought processes. For the first time, an animatic would be the basis for an entire feature film. The process began with actors performing a dramatic "radio-style" voice recording of the entire script. Storyboard artists then drew more than 1800 individual storyboard frames. These drawings were then recorded onto analog videodisks and edited according to the voice recordings. Once production began, video taken from the video tap of the 35 mm camera(s) shooting the actual movie was used to gradually replace storyboarded stills to give the director a more complete vision of the film's progress. Instead of working with the actors on set, Coppola directed while viewing video monitors in the "Silverfish" (nickname) Airstream trailer, outfitted with then state-of-the-art video editing equipment. Video feeds from the five stages at the Hollywood General Studios were fed into the trailer, which also included an off-line editing system, switcher, disk-based still store, and Ultimatte keyers. The setup allowed live and/or taped scenes to be composited with both full size and miniature sets. Before desktop computers were widely available, pre-visualization was rare and crude, yet still effective. For example, Dennis Muren of Industrial Light and Magic used toy action figures and a lipstick camera to film a miniature version of the Return of the Jedi speeder bike chase. This allowed the film's producers to see a rough version of the sequence before the costly full-scale production started. 3D computer graphics was relatively unheard of until the release of Steven Spielberg's Jurassic Park in 1993. It included revolutionary visual effects work by Industrial Light and Magic (winning them an Oscar), one of the few companies in the world at the time to use digital technology to create imagery. In Jurassic Park, Lightwave 3D was used for previsualization running on an Amiga computer with a Video Toaster card. As a result, computer graphics lent themselves to the design process, when visual effects supervisor (and Photoshop creator) John Knoll asked artist David Dozoretz to do one of the first ever previsualizations for an entire sequence (rather than just the odd shot here and there) in Paramount Pictures' Mission: Impossible. Producer Rick McCallum showed this sequence to George Lucas, who hired Dozoretz in 1995 for work on the new Star Wars prequels. This represented an early but significant change as it was the first time that previsualization artists reported to the film's director rather than visual effects supervisor. 
Since then, previsualization has become an essential tool for large-scale film productions, and has been essential for the Matrix trilogy, The Lord of the Rings trilogy, Star Wars Episodes II and III, War of the Worlds, X-Men, and others. One of the largest recent films to rely heavily on the technique is Superman Returns, which used a large crew of artists to create elaborate pre-visualizations. While visual effects companies can offer previsualization services, today many studios hire companies which cater solely to previsualization for large projects. These companies use common software packages for previs, such as Newtek's Lightwave 3D, Autodesk Maya, MotionBuilder and Softimage XSI. Some directors prefer to do previsualization themselves using inexpensive, general-purpose 3D programs that are less technically challenging to use, such as iClone, Poser, Daz Studio, Vue, and Real3d, while others rely on dedicated but user-friendly 3D previsualization programs such as FrameForge 3D Studio, which (along with Avid's Motion Builder) won a Technical Achievement Emmy for representing "an improvement on existing methods [that] are so innovative in nature that they materially have affected the transmission, recording, or reception of television".
While teaching previsualization at the American Film Institute in 1993, Katz suggested to producer Ralph Singleton that a fully animated digital animatic of a seven-minute sequence for the Harrison Ford action movie Clear and Present Danger would solve a variety of production problems encountered when the location in Mexico became unavailable. This was the first fully produced use of computer previsualization that was created for a director outside of a visual effects department and solely for the use of determining the dramatic impact and shot flow of a scene. The 3D sets and props were fully textured and built to match the set and location blueprints of production designer Terrence Marsh and storyboards approved by director Phillip Noyce. The final digital sequence included every shot in the scene including dialog, sound effects and a musical score. Virtual cameras accurately predicted the composition achieved by actual camera lenses as well as the shadow position for the time of day of the shoot. The Clear and Present Danger sequence was unique at the time in that it included both long dramatic passages between virtual actors in addition to action shots in a complete presentation of all aspects of a key scene from the movie. It also signaled the beginning of previsualization as a new category of production apart from the visual effects unit. In 1994, Colin Green began work on previsualization for Judge Dredd (1995). Green had been part of the Image Engineering department at Ride Film, Douglas Trumball's VFX company in the Berkshires of Massachusetts, where he was in charge of using CAD systems to create miniature physical models (rapid prototyping). Judge Dredd required many miniature sets and Green was hired to oversee a new Image Engineering department. However, Green changed the name of the department to Previsualization and shifted his interest to making 3D animatics. The majority of the previsualization for Judge Dredd was a long chase sequence used as an aid to the visual effects department. In 1995, Green started the first dedicated previsualization company, Pixel Liberation Front. By the mid-1990s, digital previsualization was becoming an essential tool in the production of large budget feature film. In 1996, David Dozoretz, working with Photoshop co-creator John Knoll, used scanned-in action figures to create digital animatics for the final chase scene for Mission: Impossible (1996). When Star Wars prequel producer Rick McCallum saw the animatics for Mission: Impossible, he tapped Dozoretz to create them for the pod race in Star Wars: Episode I – The Phantom Menace (1999). The previsualization proved so useful that Dozoretz and his team ended up making an average of four to six animatics of every F/X shot in the film. Finished dailies would replace sections of the animatic as shooting progressed. At various points, the previsualization would include diverse elements including scanned-in storyboards, CG graphics, motion capture data and live action. Dozoretz and previsualization effects supervisor Dan Gregoire then went on to do the previsualization for Star Wars: Episode II – Attack of the Clones (2002) and Gregoire finished with the final prequel, Star Wars: Episode III – Revenge of the Sith (2005). The use of digital previsualization became affordable in the 2000s with the development of digital film design software that is user-friendly and available to any filmmaker with a computer. 
Borrowing technology developed by the video game industry, today's previsualization software gives filmmakers the ability to compose electronic 2D storyboards on their own personal computer and also to create 3D animated sequences that can predict with remarkable accuracy what will appear on the screen. More recently, Hollywood filmmakers use the term pre-visualization (also known as pre-vis, pre vis, pre viz, pre-viz, previs, or animatics) to describe a technique in which digital technology aids the planning and efficiency of shot creation during the filmmaking process. It involves using computer graphics (even 3D) to create rough versions of the more complex (visual effects or stunt) shots in a movie sequence. The rough graphics might be edited together along with temporary music and even dialogue. Some pre-viz can look like simple grey shapes representing the characters or elements in a scene, while other pre-vis can be sophisticated enough to look like a modern video game. Nowadays many filmmakers turn to quick yet optically accurate 3D software to help with the task of previsualization, in order to ease budget and time constraints and to gain greater control over the creative process by generating the previs themselves. Previs software One of the popular tools for directors, cinematographers and VFX Supervisors is FrameForge 3D Studio, which won an Emmy from the National Academy of Television Arts & Sciences for the program's "proven track record of saving productions time and money through virtual testing" in addition to a Lumiere Statuette for Technical Achievement from the Advanced Imaging Society. Another product is ShotPro for the iPad and iPhone, which combines basic 3D modeling with a simplified process for creating 3D scenes and outputs them as storyboards, a feature available with most previs products. Shot Designer animates floor plans in 2D. Toonboom Storyboard Pro handles 2D objects and allows sketching and exporting in storyboard format. Moviestorm works with 3D animation and creates realistic previews, similar to iClone, which offers realistic 3D scenes and animation. See also Animation Screenplay Storyboard List of motion picture-related topics References External links Interview with Colin Green Superman Returns Previsualization Interview Animation techniques Film production Infographics
3685647
https://en.wikipedia.org/wiki/Leroy%20Sibbles
Leroy Sibbles
Leroy Sibbles (born Leroy Sibblies, 29 January 1949) is a Jamaican reggae musician and producer. He was the lead singer for The Heptones in the 1960s and 1970s. In addition to his work with The Heptones, Sibbles was a session bassist and arranger at Clement "Coxsone" Dodd's Jamaica Recording and Publishing Studio and the associated Studio One label during the prolific late 1960s. He was described as "the greatest all-round talent in reggae history" by Kevin O'Brien Chang and Wayne Chen in their 1998 book Reggae Routes. Biography The son of a grocer, Sibbles began singing in the 1950s and also played guitar, having been taught by Trench Town Rastas Brother Huntley and "Carrot". Barry Llewellyn and Earl Morgan had formed The Heptones in 1958, and Sibbles was in a rival group along with two friends. Sibbles joined The Heptones in 1965 after the two groups competed in a street-corner contest. The trio made their first recordings for Ken Lack in 1966 with "School Girls" and "Gun Man Coming to Town", the latter the A-side of their début single. Though the songs did not achieve hit status, the latter composition made the playlists at Radio Jamaica Rediffusion (RJR). They moved on to Clement "Coxsone" Dodd's Studio One where they stayed until 1971. The Heptones were among the most influential groups of the rock steady era, along with The Pioneers, The Gaylads, The Paragons, The Uniques, and The Techniques. Signature Heptones songs included "Baby", "Get in the Groove", "Ting a Ling", "Fattie Fattie", "Got to Fight On (To the Top)", "Party Time", and "Sweet Talking". The group's Studio One output has been collected on albums The Heptones, On Top, Ting a Ling, Freedom Line, and the Heartbeat Records anthology, Sea of Love. Studio One Beyond his work as a singer-songwriter, Sibbles contributed to the collective output of Studio One as a bass player during the late 1960s. Keyboardist and arranger Jackie Mittoo encouraged Sibbles to play the bass when he needed a bassist for his Jazz trio. When Mittoo left full-time duties at Studio One, Sibbles auditioned singers, arranged sessions, sang harmony, and played bass as a part of the studio group variously known as the Sound Dimension and Soul Vendors. These musicians, with engineering supervision Sylvan Morris, played backing tracks used by vocalists Bob Andy, Alton Ellis, Horace Andy, Carlton Manning, The Abyssinians, The Gladiators, Willi Williams, Ken Boothe, John Holt, Burning Spear, Dennis Brown, Slim Smith, and scores of others. Sibbles was a contributor to tracks including "Freedom Blues" (which evolved into the Jamaican rhythm known as "MPLA") by Roy Richards, "Love Me Forever" by Carlton & The Shoes, "Satta Massagana" and "Declaration of Rights" by the Abyssinians, "Stars" and "Queen of the Minstrels" by The Eternals, "Ten to One" by the Mad Lads, "Door Peep (Shall Not Enter)" by Burning Spear, and the instrumental "Full Up", which was used by Musical Youth for their huge worldwide hit "Pass the Dutchie". Because of the Jamaican process of versioning and the liberal recycling of rhythms in subsequent years, many of the songs, rhythms, and melodies written and recorded during the rocksteady era, the aforementioned in particular, continue to be referenced today. The most frequently referenced of Sibbles' bass lines is that found on the instrumental "Full Up", popularised internationally by Musical Youth's recording of "Pass the Dutchie", an adaptation of Mighty Diamonds' "Pass the Kouchie". 
Sibbles' legacy also endures in Horace Andy's tribute to him, "Mr. Bassie". (While Sibbles has been credited with the original "Real Rock" bassline, this was more likely performed by Boris Gardiner). The bass parts Sibbles and others developed in rocksteady used a rhythmic space found in later roots reggae, where the notes were not necessarily played or sustained on each downbeat of a 4/4 measure. Sibbles has explained that his style was to lag the downbeat slightly. Other musicians involved in the Studio One rock steady sessions included Richard Ace and Robbie Lyn on keyboards; Bunny Williams, Joe Isaacs, and Fil Callendar on drums; Eric Frater and Ernest Ranglin on guitar; and the horn section of Felix "Deadly Headley" Bennett on saxophone and Vin Gordon (a.k.a. "Don D. Jr.") on trombone. Work with other producers After Studio One, Sibbles and the Heptones recorded for other producers including Lee Perry, Harry J, JoJo Hoo Kim, Niney The Observer, Clive Chin, Gussie Clarke, Lloyd Campbell, Prince Buster, Ossie Hibbert, Phil Pratt, Harry Mudie, Geoffrey Chung, Danny Holloway, Rupie Edwards, and Joe Gibbs. Other Heptones releases from the early 1970s were Book of Rules (Trojan Records) and the Harry Johnson-produced album Cool Rasta (Trojan), recorded just before the group benefited from the internationalisation of reggae via Island Records. Danny Holloway produced Night Food and Lee "Scratch" Perry-produced Party Time were the fruit of the association with Island. As a solo artist, Sibbles worked with Lloyd "Bullwackie" Barnes, Lloyd Parks, Sly & Robbie, Augustus Pablo, Bruce Cockburn, and Lee Perry, but primarily produced himself. Sibbles moved to Canada in 1973, where he married and remained for twenty years, and won a U-Know Award for best male vocalist in 1983, and a Juno Award for best reggae album in 1987. He left the Heptones in 1976, midway through a US tour. Also in Canada, he recorded an album for A&M and licensed several albums to Pete Weston's Micron label, including Now and Strictly Roots. In 1990 he collaborated on the one-off single "Can't Repress the Cause", a plea for greater inclusion of hip hop music in the Canadian music scene, with Dance Appeal, a supergroup of Toronto-area musicians that included Devon, Maestro Fresh Wes, Dream Warriors, B-Kool, Michie Mee, Lillian Allen, Eria Fachin, HDV, Dionne, Thando Hyman, Carla Marshall, Messenjah, Jillian Mendez, Lorraine Scott, Lorraine Segato, Self Defense, Zama and Thyron Lee White. Sibbles continued to visit Jamaica, and performed at Reggae Sunsplash in 1980, 1981, 1983, 1986, and 1990. He returned to the Heptones in 1991. In 1996 he recorded "Original Full Up" with Beenie Man. Sibbles is featured in the 2009 documentary Rocksteady: The Roots of Reggae. He continued to perform and record into 2010. Production work Sibbles moved into production in 2009, and set up the Bright Beam record label. He has produced records by singer Sagitar and deejay Chapter, as well as his own recordings, including a successful cover version of "Harry Hippy". 
Solo discography Now (1980), Micron Strictly Roots (1980), Micron On Top (1982), Micron The Champions Clash (1985), Kingdom – with Frankie Paul Selections (1985), Leggo Sounds – also released as Mean While (1986), Attic It's Not Over (1995), VP Come Rock With Me (1999), Heartbeat Reggae Hit Bass Lines (2009), Ernie B Notes References Barrow, Steve & Dalton, Peter (2004), The Rough Guide to Reggae, 3rd edn., Rough Guides, Bradley, Lloyd (2000), This is Reggae Music: The Story of Jamaica's Music, Grove Press, Cooke, Mel (2006), "Voice, bass put Sibbles on top", Jamaica Gleaner, 3 February 2006, retrieved 2010-10-31 Katz, David (2000), People Funny Boy: The Genius of Lee "Scratch" Perry, Payback Press, Kenner, Rob (1997), "Boom Shots", Vibe, June–July 1997, p. 163 Keyes, Cheryl L. (2004), Rap Music and Street Consciousness, University of Illinois Press, O'Brien, Kevin & Chen, Wayne (1998), Reggae Routes: The Story of Jamaican Music, Ian Randle Publishers, Quill, Greg (2009), "Sibbles sees rocksteady revival", Jamaica Star, 25 July 2009, retrieved 2010-10-31 Thompson, Dave (2002), Reggae & Caribbean Music, Backbeat Books, Walker, Klive (2006), Dubwise: Reasoning from the Reggae Underground, Insomniac Press, External links Leroy Sibbles at Roots Archives 1949 births 20th-century Black Canadian male singers Canadian reggae musicians Jamaican reggae musicians Jamaican emigrants to Canada Living people Trojan Records artists Juno Award for Reggae Recording of the Year winners 20th-century Jamaican male singers 21st-century Jamaican male singers 21st-century Black Canadian male singers VP Records artists A&M Records artists
67418850
https://en.wikipedia.org/wiki/Cryptojacking
Cryptojacking
Cryptojacking is the act of hijacking a computer to mine cryptocurrencies against the user's will or while the user is unaware, often through websites. One notable piece of software used for cryptojacking was Coinhive, which was used in over two-thirds of cryptojacking incidents before its March 2019 shutdown. The cryptocurrencies most often mined are privacy coins (coins with hidden transaction histories) such as Monero and Zcash. Cryptojacking malware Cryptojacking malware is malware that infects computers in order to use them to mine cryptocurrencies, usually without the user's knowledge. Cryptojacking (also called malicious cryptocurrency mining) is an Internet threat that hides itself on a computer or mobile device and uses the machine's resources to "mine" various forms of digital currencies known as cryptocurrencies. It can take over web browsers and compromise all types of devices, from desktops and laptops to smartphones and even network servers. As with most malicious attacks, the motive is profit, but unlike many other threats, cryptojacking is designed to remain completely hidden from the user. Cryptojacking malware can lead to slowdowns and crashes by straining computational resources. Notable events Microsoft Exchange Server In 2021, multiple zero-day vulnerabilities were found in Microsoft Exchange Server, allowing remote code execution. References Cryptocurrencies Malware Security breaches Computer programming Cybercrime
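Because cryptojacking typically shows up as sustained, otherwise unexplained processor load, a coarse resource-usage check is one way to notice it. The sketch below is only an illustration of that idea, not a detection method described above nor a substitute for anti-malware tooling; it assumes the third-party psutil library is available, and the 90% threshold and one-minute window are arbitrary illustrative values.

```python
# Minimal sketch: flag sustained high CPU load as a possible cryptojacking hint.
# Assumes the third-party psutil package (pip install psutil). The threshold
# and sampling window are illustrative, not established detection criteria.
import psutil

THRESHOLD = 90.0   # percent CPU considered suspicious (illustrative)
SAMPLES = 12       # 12 samples of 5 seconds each, i.e. a one-minute window
INTERVAL = 5       # seconds per sample

def looks_cryptojacked(threshold=THRESHOLD, samples=SAMPLES, interval=INTERVAL):
    """Return True if system-wide CPU usage stays above `threshold` for the whole window.

    Sustained load is only a hint, not proof: video encoding, games, or a
    legitimate miner the user installed produce the same signal.
    """
    readings = []
    for _ in range(samples):
        # cpu_percent(interval=n) blocks for n seconds and averages usage over them
        readings.append(psutil.cpu_percent(interval=interval))
    return min(readings) >= threshold

if __name__ == "__main__":
    if looks_cryptojacked():
        print("Sustained high CPU load: inspect running processes and open browser tabs.")
    else:
        print("CPU load looked normal over the sampled window.")
```

In practice, security products pair resource heuristics like this with blocklists of known mining domains and signatures of mining scripts, since CPU load alone cannot distinguish cryptojacking from legitimate heavy workloads.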
753418
https://en.wikipedia.org/wiki/DYNIX
DYNIX
DYNIX (DYNamic UnIX) was a Unix-like operating system developed by Sequent Computer Systems, based on 4.2BSD and modified to run on Intel-based symmetric multiprocessor hardware. The third major version (DYNIX 3.0) was released in May 1987; by 1992 DYNIX had been succeeded by DYNIX/ptx, which was based on UNIX System V. IBM obtained rights to DYNIX/ptx in 1999, when it acquired Sequent for $810 million. IBM's subsequent Project Monterey was an attempt, circa 1999, "to unify AIX with Sequent's Dynix/ptx operating system and UnixWare." By 2001, however, "the explosion in popularity of Linux ... prompted IBM to quietly ditch" this effort. A version was named Dynix 4.1.4. References Berkeley Software Distribution Discontinued operating systems
3896290
https://en.wikipedia.org/wiki/KansasFest
KansasFest
KansasFest (also known as KFest) is an annual event for Apple II computer enthusiasts. Held every July at Rockhurst University in Kansas City, Missouri, KansasFest typically lasts five days and features presentations from Apple II experts and pioneers, as well as games, fun events, after-hours hallway chatter, late-night (or all-night) runs out to movies or restaurants, and more. A number of important new products have been released at KansasFest or developed through collaborations between individuals who likely would not have gotten together. Some of the most notable have been the introduction of the LANceGS Ethernet Card, and the Marinetti TCP/IP stack for the Apple IIGS. Due to COVID-19, the 32nd and 33rd annual KansasFests were virtual-only, held July 24–25, 2020, and July 23–24, 2021, respectively. The 34th annual KansasFest is currently planned to be held July 19–24, 2022, at Rockhurst University. History and organization Resource Central Vendor fairs were part of the earliest days of the microcomputer revolution. The Apple II had its debut at the first West Coast Computer Faire in April 1977. The popularity of this faire spawned other similar computer events elsewhere in the country. In the early 1980s, some of these vendor fairs became more computer-specific. For the Apple II computer, it began with AppleFest '81, sponsored by the Apple group in the Boston Computer Society. These festivals spread to be held in various places in the country, and Apple Computer became involved, even to the point of sending executives to give keynote addresses, and holding sessions for developers. After the introduction of the Apple III, Lisa and Macintosh computers, Apple II users and developers were feeling increasingly isolated and ignored by Apple Computer. Tom Weishaar had started a newsletter, Open-Apple (later renamed to A2-Central) about the Apple II, and in it he provided information about the computer, how to use it, product reviews, and more. With time, he created a company named Resource Central to oversee the newsletter and other products available to sell to subscribers. Frustrated by Apple's diminishing emphasis on the Apple II, Weishaar planned a developer's conference that would specifically focus on the Apple II and Apple IIGS. The first event was held in July 1989, and was called the A2-Central Developer Conference, billed as a chance to "meet the people who will make the Apple II's future". The conference brought together programmers, hardware developers, and Apple sent out a number of members of its Apple II group to participate in the meeting. What made it different from many similar meetings was the way in which the accommodations were handled. Resource Central, which was based in Overland Park, Kansas, arranged for the meeting and housing for many of the attendees at Avila College, a Catholic institution located in Kansas City, Missouri, not far from Overland Park. One of the unanticipated effects of this arrangement was that the college dorm environment encouraged interaction between participants in a way that would not have happened in a hotel. Nearly all who made the trip to the conference found it a significant and positive experience, and were more than ready to do it again the following year. Resource Central continued to host these annual summer meetings, changing the name to the A2-Central Summer Conference. 
By the third meeting in 1991, its attendees had informally given it the name, "KansasFest", a portmanteau of "Kansas" and the "AppleFest" events held elsewhere in the country. Resource Central's sponsorship and management lasted through six KansasFest July conferences, the last being held in 1994. KansasFest continues Due to Apple Computer's decision to discontinue production of the Apple IIGS in late 1992 and the Apple IIe in late 1993, and the rise of the Macintosh and of computers running MS-DOS, the Apple II market began to rapidly diminish. At Resource Central, finances became a problem during 1994, and a crisis hit the company at the start of 1995. Declining renewals of the A2-Central newsletter and other products the company sold could no longer sustain the business, and it was necessary to shut down in February of that year. This put into doubt the prospects of continuing the annual KansasFest meeting. To rescue it, a committee was formed amongst previous attendees, coordinated online via GEnie. By spring of 1995 they had secured Avila for a two-day meeting, and had enough who had committed to come that KansasFest 1995 could be held. From 1995 through 2004, KansasFest continued to be held at Avila (which changed its name to Avila University in 2002). In the earlier years, it served as an annual rallying point for the Apple II community, as it found itself in a world shrinking in resources that would support it. Like Resource Central, other businesses that dealt with the Apple II also found it difficult to survive. The online homes for direct-dial Apple II access (GEnie, CompuServe, Delphi, and America Online) were having problems with either Y2K or transition to the World Wide Web, and were phasing out their text-based access. Although the annual KansasFest event was coordinated on those online services, the physical meeting provided a recurring connection point. By the time its second decade began in 1999, KansasFest was becoming as much about preservation of the past as it was about advancing the Apple II platform. The conference began to also have sessions covering computing on the Macintosh, Newton, and Palm computers. Attendees were often not programmers or developers, but increasingly were those who enjoyed retrocomputing or had a nostalgic connection with the Apple II. It was also a venue for demonstration of new uses for the Apple II that had never been previously considered. For example, Michael Mahon showed off his AppleCrate parallel processing Apple II in 2007, with a number of Apple IIe boards connected together, updated to a seventeen board system by the following year. A programmer, David Schmenk wrote a first-person maze game in 2007, "Escape From The Homebrew Computer Club" using 16-color lo-res graphics, something that could have been run on an Apple II in 1977 if anyone had thought of it. He demonstrated this game at KansasFest in 2011. Furthermore, the committee began to seek out keynote speakers from outside of the immediate community, to increase interest in the event. This trend began in 2003, when Steve Wozniak agreed to speak to a turnout that was double that of the previous year. Other speakers have included David Sztela (of Nibble magazine, later employed at Apple), Lane Roathe (early Apple II game programmer), Jason Scott (digital preservationist), Mark Simonsen (of Beagle Bros), and Bob Bishop (programmer and Apple employee). 
Starting in 2005, the event began to be held at a new venue, Rockhurst University, nine miles to the north of Avila, and still in Kansas City, Missouri. Though attendance reached an all-time low of 28 in 2006, it has been steadily climbing since. Fans of the Apple II computer come from all over the United States, and have come from Canada, Australia and Great Britain. Committee / Corporation From 1995 through 2014, a volunteer group each year took it upon themselves to arrange the facility for the following year's event, send out invitations, promote the event, and make sure that there were speakers, sessions, contests, and places to go outside of the meeting area. In 2015, the committee officially incorporated KansasFest, better defining the organization in order to continue to steer the event into the future. In 2020, KansasFest became a 501(c)(3) organization (tax ID 47-3514247). Events Advancing the platform Important contributions to the Apple II have made their appearance at KansasFest. In 1996, a meeting between Richard Bennett of Australia, Ewen Wannop of Great Britain, and Geoff Weiss of the United States set the groundwork for the announcement at the 1997 meeting of the Marinetti control panel for the Apple IIGS. This system extension made possible TCP/IP connections to the Internet (something that Apple had never designed the computer to do). Also in 1997, some of the first Apple II web sites began to appear, and by the following year KansasFest had its own web site. In the next several years, it was common to see release of a CD-ROM collection of Apple II files of various kinds. In 2000, an Ethernet card called LANceGS for the Apple IIGS was demonstrated, and plans were made for a post-Delphi text-based contact point for Apple II users on the Internet, Syndicomm Online. Recurring Most years have one or more contests. These have included: HackFest - participants are given a focused period of time while at the event to create from scratch a program that does something cool. Tie One On - wear the most unusual or crazy tie at the banquet Door Decoration - being a college dorm, the doors can be decorated any interesting way desired Bite The Bag - a contest of agility in picking up a paper bag by biting it, with only one extremity touching the floor Games - Contestants attempt to achieve the highest score on classic Apple II games, such as GShisen or Lode Runner. Exhibits - demonstrating products or retro Apple II-related items Another popular event held for many years was a "celebrity" roast of prominent members of the Apple II community. Dates and milestones Apple II Forever awards Starting in 2010, the KansasFest Committee began to award members of the Apple II community who had made significant contributions to the Apple II, either in promoting or developing for the platform during its active years, or in helping to advance or preserve the Apple II since its production had been discontinued. External links KansasFest on Twitter KansasFest on YouTube Notes Apple Inc. conferences Apple II family Apple II periodicals
2541921
https://en.wikipedia.org/wiki/Free%20Geek
Free Geek
Free Geek is a technology-related non-profit organization based in Portland, Oregon, launched on April 22, 2000. It started as a public event at Pioneer Courthouse Square. In September 2000, it opened a permanent facility as a drop-off site for electronic waste. In January 2001, the local newspaper The Oregonian ran an article advertising its free computer program for volunteers, which became so successful that a waiting list had to be started. It currently has over 2,000 active volunteers per year.
History
During the COVID-19 pandemic, the organization received between $350,000 and $1 million in a federally backed small business loan from Columbia State Bank as part of the Paycheck Protection Program. The organization stated the loan would allow it to retain 47 jobs.
Activities
Free Geek provides free classes and work programs to its volunteers and the general public. Free Geek also offers phone and drop-in technical support for the computers it provides.
Build program
Volunteers are trained to build and refurbish computers using parts recovered from donations. These computers are then sold online or at a store, donated through many of Free Geek's programs, or given to volunteers completing 24 hours of service.
Recycling, reuse, and resale
Raw materials, such as electronics, are processed by volunteers, and approximately 40% of the material is reused. Some of it is sold, either online or in the store, and the proceeds are used to support educational and outreach programs. Any materials which cannot be reused are recycled as safely and sustainably as possible, in order to prevent them from entering the waste stream and damaging the environment. Free Geek also donates refurbished computers and technology directly back into the community; in 2017, for example, Free Geek was able to give away six laptop computers for every ten sold in its store. In 2016, Free Geek donated 4,400 items of technology to low-income individuals, schools, and nonprofits.
Hardware Grant Program
The Hardware Grant Program provides qualifying nonprofits and schools with refurbished desktop computers, laptops, printers, and other equipment. Since its inception, it has granted more than 10,500 items to over 2,000 nonprofits. 60% of grantees are based in the Portland Metro area.
Volunteers and internships
Volunteers do the majority of the work at Free Geek. Since its founding, over 20,500 people have volunteered. In 2016, over 2,000 active volunteers and interns contributed over 47,500 hours. The Volunteer Adoption Program offers a free computer to every volunteer after they have worked 24 hours of volunteer time. Each year Free Geek gives around 550 computers and necessary peripherals to volunteers who have completed 24 hours of service. Free Geek also offers 3–6 month internship programs for skilled volunteers aged 16 and up, designed to develop job skills that help them pursue tech-sector careers and make connections in the community.
Plug Into Portland
Plug Into Portland is a partnership between Free Geek and Portland Public Schools. It started in 2014 and expanded to other school districts in 2017. It attempts to reduce the digital divide, which hinders low-income students' learning when they do not have access to a computer at home. Students who volunteer for a total of 24 hours at any nonprofit organization in their community receive a free computer. It served approximately 100 low-income students and families in 2016.
Community education
Free Geek offers free educational programs about technology to the community.
Classes and workshops include such topics as basic digital literacy, "Anatomy of a Computer", programming with JavaScript and Python, web development, social media for organizations, data science, digital privacy and safety, graphic design, digital art, and workplace readiness. In 2016, Free Geek served nearly 1,700 students with over 4,000 classroom hours of instruction.
Free software
Free Geek's refurbished computers run Linux Mint and other free and open-source software. The use of free software provides a wide range of applications without the need to manage licenses or payments. Free Geek was a joint winner of the first Chris Nicol FOSS Prize, awarded by the Association for Progressive Communications (APC) in 2007.
Locations
In addition to Portland, a number of other cities have started their own independent Free Geek organizations:
Fayetteville, Arkansas (Free Geek of Arkansas)
Athens, GA, USA (Free I.T. Athens)
Chicago, IL (Free Geek Chicago)
Detroit, MI (Motor City Free Geek)
Minneapolis-Saint Paul, MN (Free Geek Twin Cities)
Oslo, Norway (Free Geek Norway)
Ephrata, Pennsylvania (Free Geek Penn)
Toronto, ON (Free Geek Toronto)
Vancouver, BC (Free Geek Vancouver)
See also
Empower Up
World Computer Exchange
Digital divide in the United States
Global digital divide
Computer recycling
Electronic waste in the United States
References
External links
Free Geek
BoingBoing article
another BoingBoing article
Organizations established in 2000 Organizations based in Portland, Oregon Non-profit organizations based in Oregon Recycling organizations Information technology charities Electronic waste in the United States 2000 establishments in Oregon Charities
40147414
https://en.wikipedia.org/wiki/The%20Hacker%20Ethic
The Hacker Ethic
The Hacker Ethic may refer to: Hacker ethic The Hacker Ethic and the Spirit of the Information Age
1829947
https://en.wikipedia.org/wiki/KITT
KITT
KITT or K.I.T.T. is the short name of two fictional characters from the adventure franchise Knight Rider. While having the same acronym, the KITTs are two different entities: one known as the Knight Industries Two Thousand, which appeared in the original TV series Knight Rider, and the other as the Knight Industries Three Thousand, which appeared first in the two-hour 2008 pilot film for a new Knight Rider TV series and then the new series itself. In both instances, KITT is an artificially intelligent electronic computer module in the body of a highly advanced, very mobile, robotic automobile: the original KITT as a 1982 Pontiac Firebird Trans Am, and the second KITT as a 2008–2009 Ford Shelby GT500KR. During filming, KITT was voiced by a script assistant, with voice actors recording KITT's dialog later. David Hasselhoff and original series voice actor William Daniels first met each other six months after the series began filming. KITT's evil twin is KARR, whose name is an acronym of Knight Automated Roving Robot. KARR was voiced first by Peter Cullen and later by Paul Frees in seasons one and three, respectively, of the NBC original TV series Knight Rider. A 1991 sequel film, Knight Rider 2000, is centered on KITT's original microprocessor unit transferred into the body of the vehicle intended to be his successor, the Knight Industries Four Thousand (Knight 4000), voiced by Carmen Argenziano and William Daniels. Val Kilmer voiced KITT in the 2008–2009 Knight Rider series. Knight Industries Two Thousand (KITT) In the original Knight Rider series, the character of KITT (Knight Industries Two Thousand) was physically embodied as a modified 1982 Pontiac Trans Am. KITT was designed by customizer Michael Scheffe. The convertible and super-pursuit KITTs were designed and built by George Barris. Development In the history of the television show, the first KITT, voiced by William Daniels, was said to have been designed by the late Wilton Knight, a brilliant but eccentric billionaire, who established the Foundation for Law And Government (FLAG) and its parent Knight Industries. The 2008 pilot film implied that Charles Graiman, creator of the Knight Industries Three Thousand, also had a hand in designing the first KITT. An unknown number of KITT's systems were designed at Stanford University. KITT's total initial production cost was estimated at $11,400,000 in 1982 (Episode 5, "Just My Bill"). The 1991 movie Knight Rider 2000 saw the first KITT (Knight Industries Two Thousand) in pieces, and Michael Knight himself reviving the Knight 2000 microprocessor unit, which is eventually transferred into the body of the vehicle intended to be the original KITT's direct successor, the Knight 4000. The new vehicle was a modified 1991 Dodge Stealth, appearing similar to the Pontiac Banshee prototype. In the 1997–1998 spin-off series Team Knight Rider, KITT is employed as a shadow advisor. It is later revealed that "The Shadow" is actually a hologram run by KITT. In "Knight of the Living Dead", Graiman states a third KITT exists as a backup. When KITT is about to die, his memories are downloaded so the third KITT can use them. However, the third AI is not used in the end. While the 2008 pilot movie, and then the new series, appears to be a revamp of the original series, it offers some continuity from the original one. The "new" or "second" KITT (Knight Industries Three Thousand) is a different vehicle and microprocessor unit. 
In Knight Rider 2000, it is stated that most of the Knight 2000 parts had been sold off. However, Graiman's garage in the 2008 pilot shows a more complete collection of parts than in the boxes recovered by Michael Knight in Knight Rider 2000. The original Knight Industries Two Thousand is also shown in the pilot movie (although in pieces) in the scene where the garage of Charles Graiman (creator of the Knight Industries Three Thousand and implied co-designer of the original KITT) is searched by antagonists. A Trans-Am body (without its hood) is partially covered by a tarp, on which rests the rear spoiler. The famous KITT steering wheel (labelled "Knight Two Thousand") and "KNIGHT" license plate are also shown, along with numerous black muscle car body parts. When the camera shows a full scene of the garage, there are three cars in the garage: the 3000, a 2000 under a tarp, and a complete 2000. AI personality and communication According to the series, the original KITT's main cybernetic processor was first installed in a mainframe computer used by the US government in Washington, D.C. However, Wilton saw better use for "him" in the Foundation's crime-fighting crusade and eventually this AI system was installed in the vehicle. KITT is an advanced supercomputer on wheels. The "brain" of KITT is the Knight 2000 microprocessor, which is the centre of a "self-aware" cybernetic logic module. This allows KITT to think, learn, communicate and interact with humans. He is also capable of independent thought and action. He has an ego that is easy to bruise and displays a very sensitive, but kind and dryly humorous personality. According to Episode 55, "Dead of Knight", KITT has 1,000 megabits of memory with one nanosecond access time. According to Episode 65, "Ten Wheel Trouble", KITT's future capacity is unlimited. KITT's serial number is AD227529, as mentioned in Episode 31, "Soul Survivor". KITT's Voice (Anharmonic) Synthesizer (for speech) and Etymotic Equalizer (audio input) allow his logic module to speak and communicate. With it, KITT can also simulate other sounds. KITT's primary spoken language was English; however, by accessing his language module, he can speak fluently in Spanish, French and much more. The module can be adjusted, giving KITT different accents such as in Episode 82, "Out of the Woods", where KITT uses a "New York City" accent and called Michael "Micky". During the first season, KITT's "mouth" in the interior of the vehicle was indicated by a flashing red square. In episode 14 "Heart of Stone", this was changed to three sectioned vertical bars, as this design proved popular with fans as part of KARR. KITT can also project his voice as a loudspeaker or as a form of ventriloquism (First used in Episode 48, "Knight of the Drones, Pt. 2"). KITT is in constant contact with Michael via a comlink through a two-way communication wristwatch (a modified '80s LCD AM radio watch) Michael wore. The watch also has a micro camera and scanner that KITT can access to gather information. In an emergency, Michael can activate a secret homing beacon hidden inside a gold pendant he wears around his neck. The beacon sends a priority signal that can remotely activate KITT, even if KITT were deactivated, and override his programming so that he rushes to Michael's aid. Used in Episode 42, "A Good Knight's Work" and in "Knights of the Fast Lane". Physical features Dashboard equipment KITT has two CRT video display monitors on his dash. 
KITT later only has one when his dash was redesigned by Bonnie for the show's third season. Michael can contact home base and communicate with Devon and others by way of a telephone comlink using KITT's video display. The video display is also used for the Graphic Translator system (which sketches likenesses from verbal input to create a Facial composite), as well as for scanning or analysis results. KITT can also print hard copies of data on a dashboard-mounted printer (First used in Episode 15, "The Topaz Connection"). KITT also has an in-dash entertainment system that can play music and video, and run various computer programs including arcade games. KITT can dispense money to Michael when he needed it (First used in Episode 59, "Knight by a Nose"). KITT has an Ultraphonic Chemical Analyzer scanning tray which can analyze the chemical properties of various materials. It can even scan fingerprints and read ballistic information off bullets and compare these with a police database. The system can also analyze chemical information gathered from KITT's exterior sensors (First used in Episode 17, "Chariot of Gold"). KITT can release oxygen into his driver compartment and provide air to passengers if he was ever submerged in water or buried in earth. This is also used to overcome the effects of certain drugs (First used in Episode 5, "Slammin' Sammy's Stunt Show Spectacular".) Scanning and microwave jamming KITT has a front-mounted scanner bar called the Anamorphic Equalizer. The device is a fibre-optic array of electronic eyes. The scanner can see in all visual wavelengths, as well as X-ray and infrared. Its infrared Tracking Scope can monitor the position of specific vehicles in the area within 10 miles. The scanner is also KITT's most vulnerable area. Occasionally, the bar can pulse in different patterns and sweep rapidly or very slowly. Glen A. Larson, the creator of both Knight Rider and Battlestar Galactica has stated that the scanner is a nod to the Battlestar Galactica characters, the Cylons, and even used the iconic Cylon eye scanner audio to that effect. He stated that the two shows have nothing else in common and to remove any fan speculation, stated in the Season One Knight Rider DVD audio-comments, that he simply reused the scanning light for KITT because he liked the effect. KITT also has an array of tiny audio and visual microscanners and sensors threaded throughout his interior and exterior which allows for the tracking of anything around the car. KITT can also "smell" via an atmospheric sampling device mounted in his front bumper. When scanning in Surveillance Mode: KITT could detect people and vehicles and track their movements and discern proximity. KITT could gather structural schematics of buildings, vehicles, or other devices and help Michael avoid potential danger when he was snooping. KITT could monitor radio transmissions and telephone communications within a location and trace those calls. KITT could tap into computer systems to monitor, or upload and download information as long as he could break the access codes. KITT's other sensors include: a medical scanner that includes an electrocardiograph (EKG). The medical scanner can monitor the vital signs of individuals and display them on his monitors. It can indicate such conditions as if they were injured, poisoned, undergoing stress or other emotional behavior (First used in Episode 1, "Knight of the Phoenix (Pt. 
2)"); a Voice Stress Analyzer which can process spoken voices and determine if someone may be lying (First used in Episode 26, "Merchants of Death"); and a bomb sniffer module that can detect explosives within a few yards of the vehicle (First used in Episode 25, "Brother's Keeper"); KITT has a microwave jamming system that plays havoc on electrical systems. This lets him take control of electronic machines, allowing things like cheating at slot machines, breaking electronic locks, fouling security cameras, and withdrawing money from ATMs. KITT can also use microwaves to heat a vehicle's brake fluid, causing it to expand and thus apply the brakes of the car. In Episode 26, "Merchants of Death", the microwave system's power has been increased 3 times its normal strength, strong enough to bring down a helicopter at a limited distance. Features KITT is armored with "Tri-Helical Plasteel 1000 MBS" (Molecular Bonded Shell) plating which protects him from almost all forms of conventional firearms and explosive devices. He can only be harmed by heavy artillery and rockets, and even then, the blast usually left most of his body intact and only damaged internal components. This makes KITT's body durable enough to act as a shield for explosives, ram through rigid barriers of strong material without suffering damage himself and sustain frequent long jumps on turbo boost. The shell also protected him from fire. However, it was vulnerable to electricity, as seen in the episode "Lost Knight" (season 3 episode 10), when a surge of electricity shorted out his memory. The shell was also vulnerable to some potent acids and, in episode 70 "Knight Of The Juggernaut", a formula was made (with knowledge of the shell's chemical base) to neutralize it completely. The shell offers little to almost no protection from lasers in certain episodes. The shell is a combination of three secret substances together referred to as the Knight Compound, developed by Wilton Knight, who entrusted parts of the formula to three separate people, who each know only two pieces of the formula. The shell provided a frame tolerance of 223,000 lb (111.5 tons) and a front and rear axle suspension load of 57,000 lb (28.5 tons). In the pilot, "Knight of the Phoenix", the shell is described as the panels of the car itself; in later episodes, especially from season two onward, the idea of the shell being applied to a base vehicle chemically is used. KITT is also protected by a thermal-resistant Pyroclastic lamination coating that can withstand sustained temperatures of up to 800 degrees Fahrenheit (426 °C). First used in Episode 32, "Ring of Fire". KITT can tint the windshield and windows to become opaque (First seen in Episode 14, "Give Me Liberty... or Give Me Death") and can also deflate and re-inflate his tires (First used in Episode #5 "Slammin' Sammy's Stunt Show Spectacular"). KITT's tires can produce traction spikes that allow KITT to overcome steep terrain. First seen in Episode 86 "Hills of Fire". KITT has two front ejection seats, mostly used when Michael needed a boost to fire escapes or rooftops. First used in Episode 1, "Knight of the Phoenix (Pt. 1)". KITT also has a hidden winch and grappling hook system. Most often the hook is connected by a strong cable, but a metal arm has also been seen. The grappling hook is first used in Episode 6, "Not a Drop to Drink"; the winch is first used in Episode 13, "Forget Me Not". 
KITT has a hidden switch and setting dial under the dash that either completely shuts down his AI module or deactivates certain systems should the need arise. First used in Episode 17, "Chariot of Gold". He also has a function which can be activated to completely lock the AI out of all the vehicle controls, for instance to prevent KITT from activating Auto Cruise or anyone inside the car from doing something that would probably hurt them. KITT is still able to protest such actions vocally. First used in Episode 8, "Trust Doesn't Rust". KITT's headlights can flash red and blue as police lights and he has a siren. First used in Episode 38, "Race for Life". KITT is equipped with a parachute. First used in Episode 23, "Goliath Returns (Pt. 1)".
Equipment for attack and defense
From under the rear bumper, KITT can spray a jet of oil, creating an oil slick, or emit a plume of smoke, creating a smoke screen (both were first used in Episode 1, "Knight of the Phoenix"). KITT can also dispense a cloud of tear gas along with his smoke screen (First used in Episode 13, "Hearts of Stone"). KITT has an induction coil that he can extend from under his front bumper; when placed on a metal object, it lets KITT remotely induce an electrical voltage or current in that object. First used in "Knight of the Drones (Part I)" to electrify a fence in order to incapacitate two thugs without seriously harming them. KITT has flame throwers mounted under his bumpers. First used in Episode 2, "Deadly Maneuvers". KITT can launch magnesium flares, which can also be used to divert heat-seeking missiles fired at him. First used in Episode 26, "Merchants of Death". KITT can fire a high-powered ultra-frequency modulated resonating laser, capable of burning through steel plating. First used in Episode 9, "Trust Doesn't Rust", where it was used to try to destroy KARR by hitting KARR's only weak spot. Until the laser was calibrated, KITT could not fire it himself and it could only be fired by KITT's technician Bonnie. Also, as pointed out in "Trust Doesn't Rust", if it was fired more than twice at that time, it would drain KITT's batteries. Later, in "Goliath Part 2", KITT was fitted with a more user-friendly laser power pack, which proved very useful in disabling the monstrous 18-wheeler. KITT can automatically open and close his doors, windows, hood, trunk, and T-tops. He could also lock his doors to prevent unauthorized entry into his driver compartment. KITT can also rotate his "KNIGHT" license plate to reveal a fictitious one reading "KNI 667". Michael used this to evade police when an APB was placed on him. First used in Episode 25, "Brother's Keeper". KITT could put out small fires with a CO2 sprayer in his bumpers. He could spray a gas into the driver compartment that could render an unwanted occupant unconscious. KITT could also expel all breathable air from the driver compartment; however, only KARR ever threatened to use it to harm someone. KITT used this to rid the compartment of smoke after bombs were detonated in his trunk.
Engine and driving
KITT is powered by the Knight Industries turbojet engine with modified afterburners and a computer-controlled 8-speed turbodrive transmission. This allows him to accelerate from 0–60 mph in 2 seconds (1.37 g) and cover the standing quarter mile in 4.286 seconds. His electromagnetic hyper-vacuum disc brakes give a braking distance from 70–0 mph (112–0 km/h) of 14 feet (4.25 m), about 11.7 g. KITT primarily uses hydrogen fuel. However, his complex fuel processor allows him to run on any combustible liquid, even regular gasoline.
In one episode, KITT mentioned his fuel economy was at least 65 miles per gallon. However, when operating on fuels other than liquid hydrogen, KITT's fuel efficiency and power output may be lowered. Used in most episodes, KITT can employ a "turbo boost". This is a pair of rocket boosters mounted just behind the front tires. These lift the car, allowing KITT to jump into the air and pass over obstacles in the road. Occasionally, Turbo Boost was also used to allow KITT to accelerate to incredible speeds in excess of 200 mph (322 km/h). The boosters could fire forward or backward, although the backward booster was never used. In later seasons, a passive laser restraint system helped protect Michael and any passengers from the shock of sudden impacts and hard stopping. It is speculated that this is a primitive form of an inertial damping device. First used in Episode 47, "Knight of the Drones". KITT has four main driving modes:
Normal cruise – On "Normal", Michael has control of the car. In an emergency, KITT can still take over and activate Auto Cruise mode.
Auto cruise – KITT has an "Alpha Circuit" as part of his main control system, which allows the CPU to drive the car utilizing an advanced Auto Collision Avoidance system. KARR's Alpha Circuit was damaged due to being submerged in water for a time, which required him to have an operator to control his Turbo Boost function.
Pursuit mode – "Pursuit" is used during high-speed driving and is a combination of manual and computer-assisted operation. KITT could respond to road conditions faster than Michael's reflexes could; however, Michael was technically in control of the vehicle and KITT helped guide certain maneuvers.
Silent mode – This feature dampens his engine noise and allows him to sneak around. First used in Episode 37, "White-Line Warriors".
Other vehicle modes included: a two-wheel ski drive, which allowed KITT to "ski" (driving up on two wheels) on either the left or right side (First used in Episode 1, "Knight of the Phoenix"); an aquatic synthesizer which allows KITT to hydroplane, effectively "driving" on water, using his wheels and turbo system for propulsion (First used in Episode 28, "Return to Cadiz"), but which was removed by the end of the episode because it was faulty; and a High Traction Drop Downs (HTDD) system which hydraulically raises KITT's chassis for better traction when driving off-road (First used in Episode 39, "Speed Demons").
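The acceleration and braking figures quoted above for the fictional car imply the g-loads given in the text; the following minimal Python sketch (an illustrative check assuming constant acceleration, with function and variable names chosen for this example only) reproduces them:

```python
# Illustrative check of the g-loads implied by KITT's stated performance
# figures (0-60 mph in 2 s; 70 mph to 0 in 14 ft), assuming constant acceleration.

MPH_TO_MS = 0.44704   # metres per second in one mile per hour
FT_TO_M = 0.3048      # metres in one foot
G = 9.80665           # standard gravity, m/s^2

def accel_from_time(speed_mph: float, seconds: float) -> float:
    """Average acceleration (in g) to reach speed_mph from rest in `seconds`."""
    return (speed_mph * MPH_TO_MS / seconds) / G

def decel_from_distance(speed_mph: float, feet: float) -> float:
    """Average deceleration (in g) to stop from speed_mph within `feet`, using v^2 = 2as."""
    v = speed_mph * MPH_TO_MS
    return (v ** 2 / (2 * feet * FT_TO_M)) / G

print(f"0-60 mph in 2 s   -> {accel_from_time(60, 2):.2f} g")        # ~1.37 g
print(f"70-0 mph in 14 ft -> {decel_from_distance(70, 14):.2f} g")   # ~11.7 g
```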
Fourth season update
During the first episode of the fourth season, "Knight of the Juggernaut, Part I", KITT's Molecular Bonded Shell is intentionally neutralized by a sprayed combination of chemicals, and KITT is nearly destroyed by the Juggernaut, a custom-designed armored vehicle. KITT is redesigned, and is repaired and rebuilt in "Knight of the Juggernaut, Part II". One main feature of the redesign is the addition of Super-Pursuit Mode, consisting of improved rocket boosters for enhanced acceleration, retractable spoilers for aerodynamic stability, and movable air inlets for increased cooling. Super-Pursuit Mode provided a 40% boost in speed beyond the car's original top speed of 300 mph. When Super-Pursuit Mode is used at night, parts of the exterior and the wheel arches glow red. The redesign also included an emergency braking system which slows KITT down from Super-Pursuit speeds by using a forward braking booster and air panels that pop out to create air friction (air brakes). While KITT's initial roof was a T-top, the redesigned KITT now has a convertible roof. Michael can bring the top down by pressing the "C" button on KITT's dash.
Screen-used cars
A total of 23 KITT cars were made for use in filming the series. All except one of these cars survived until the show was cancelled; all except five of the remaining 22 cars were destroyed at the end of filming. Of the five that escaped that fate: one stunt car (originally at the Universal theme park) was shipped to a theme park in Australia for World Expo '88 in Brisbane, Queensland, but is now believed to be back in the US. Universal kept one 'hero' and one stunt car for use in the Entertainment Center display – the two originals have since been sold to a private collector in the US. Another, a convertible, disappeared for a while before being sold to the former Cars of the Stars Motor Museum in Keswick, Cumbria, England; this convertible was sold to the Dezer Collection, Orlando, Florida, when Cars of the Stars closed. The fifth car is believed to be in private hands in the UK. Press releases regularly appear claiming 'original screen-used' cars are being sold. For example, on April 4, 2007, "one of the four KITT cars used in production of the television series" was reputedly put up for sale for $149,995 by Johnny Verhoek of Kassabian Motors, Dublin, California. A story in USA Today from December 2007 states that slain real estate developer and car aficionado Andrew Kissel was in possession of one of the surviving cars. Some reports say that Michael Jackson bought an original KITT, and former NSYNC band member Joey Fatone also claims to have purchased one of these authentic original KITTs at auction. There have been more 'original' cars auctioned than were built in total for the show. The September 25, 2014 fifth episode of the Dutch TV programme Syndroom, featuring people with Down syndrome who wish to fulfill a dream, features Twan Vermeulen, a Knight Rider fan who wishes to meet David Hasselhoff and KITT. Together with the show's presenter, they fly to L.A. and go searching for Hasselhoff's house. They "find" Hasselhoff on the driveway in front of his house, dusting off KITT. After KITT speaks a personal message to Twan, Hasselhoff offers to take him for a spin in KITT to "Freak out some people on the freeway", which they do, to the great pleasure of everyone involved. The right-hand-drive KITT, known as the "Official Right Hand Drive KITT" and used in the video "Jump In My Car" by David Hasselhoff, is owned by a company called Wilderness Studios Australia.
Knight Industries Three Thousand (KITT)
The 2008 update to Knight Rider includes a new KITT – the acronym now standing for Knight Industries Three Thousand. The KITT platform is patterned on a Shelby GT500KR and differs from the original Two Thousand unit in several ways. For example, the 2008 KITT utilizes nanotechnology, allowing the car's outer shell to change colors and morph itself into similar forms temporarily. The nanotech platform is written as needing the AI active in order to produce any of these effects, unlike the original car's gadgets and "molecular bonded shell", which allowed it to endure extreme impacts. These downsides to the use of nanotech have been demonstrated when villains are able to cause significant damage, such as shooting out windows, while the AI is deactivated.
It can also turn into two different versions of a Ford F-150 4x4 truck (one completely stock and the other with some modifications), a Ford E-150 van, a Ford Crown Victoria Police Interceptor, a special edition Warriors In Pink Mustang (in support of breast cancer awareness month), a Ford Flex, and a 1969 Ford Mustang Mach 1, either for disguise or to use the alternate modes' capabilities (such as off-road handling). The car can engage an "Attack Mode", featuring scissor/conventional hybrid doors, which allows it to increase speed and use most of its gadgets (including turbo boost). It had a different-looking attack mode in the pilot, which was used whenever the car needed to increase speed. Its downside, however, is that it only seats two. KITT is also capable of functioning submerged, maintaining life support and system integrity while underwater. While the original series stated the original KITT was designed by Wilton Knight, the 2008 TV movie implies Charles Graiman may have co-designed the car and the AI for Wilton Knight, was subsequently relocated to protect him and his family, and later designed the Knight Industries Three Thousand. KITT's weapons include a grappling hook located in the front bumper, usable in normal and attack modes; two gatling-style guns that deploy from the hood; a laser; and missile launchers usable only in attack mode, which were first used in "Knight of the Hunter". In the Halloween episode "Knight of the Living Dead", KITT demonstrates the ability to cosmetically alter his appearance, becoming a black Mustang convertible with pink trim as a Halloween costume. This configuration had the scanner bar relocated to behind the grille. Dr. Graiman also reveals in this episode that a backup neural network exists when he suggests downloading KITT's files and reuploading them to the backup, to which KITT replies, "The Backup is not me." In the pilot, KITT had shown himself capable of similarly altering his external appearance—changing his color and licence plate. In "Knight of the Zodiac", KITT uses a dispenser located in his undercarriage to spread black ice, and a fingerprint generator in the glovebox to overlay the fingerprints of a captured thief over Mike's.
KITT has numerous other features:
An olfactory sensor that allows KITT to "smell" via an atmospheric sampling device mounted in his front bumper
Turbo boost
A voice stress analyzer used to process spoken voices and determine whether someone may be lying
A dashboard-mounted printer that can produce hard copies of data
A backup mainframe processor
A windshield projection, used in place of the center console screen in the pilot for displaying extra information as well as the video communication link with the SSC
A bio-matrix scanner used to detect the health status of persons in the immediate area
A hood surface screen
An electromagnetic pulse projector that can disable any electronic circuit or device within a given area
The ability to fire disk-like objects that produce an intense heat source to deter heat-seeking projectiles
The ability to fill the cabin with tear gas to incapacitate thieves
A 3D object printer that allows for the creation of small 3D objects (such as keys) based on available electronic data
A standard printer for documents and incoming faxes, located in the passenger-side dash
A small arms cache, accessible via the glove box area, that usually contains two 9 mm handguns with extra clips for the occupants' protection outside KITT
A first aid kit inside the glove box that allows for field mending of physical wounds such as lost appendages
A software program secretly built into KITT that, when activated by the SSC, turns KITT into a bomb, using his fuel as the charge and his computer as the detonator
Knight Industries Four Thousand (Knight 4000)
A 1991 made-for-TV movie sequel to the 1982 series, Knight Rider 2000, saw KITT's original microprocessor unit transferred into the body of the vehicle intended to be his successor, the Knight 4000 (referred to as "KIFT" by fans). The vehicle had numerous 21st-century technological improvements over the 1980s Pontiac Trans-Am version of KITT, such as an amphibious mode (which allows the car to travel across water like a speedboat), a virtual reality heads-up display (or VR-HUD, which utilized the entire windshield as a video display), and a microwave stun device that could remotely incapacitate a human target. Other additions included a remote target asset that allows the pilot to aim and fire with complete accuracy, voice-activated controls, a fax machine, an infrared scanner that could identify laser-scope rifles as well as hidden objects giving off heat, a more complex olfactory scanner, a voice sampler that could simulate any voice recorded into the Knight 4000's memory, a microwave projector that could cause the temperature of targeted objects to rise quickly and either ignite or explode them, and a thermal sensor that allows the Knight 4000 to watch and record what is happening in a particular place. However, no acknowledgement is made of this spin-off in the 2008–2009 series revival. The studio was unable to use the real Pontiac Banshee IV concept car for the movie, so instead it hired Jay Ohrberg Star Cars Inc. to customize a 1991 Dodge Stealth for the Knight 4000. After filming wrapped, the custom car was used on other TV productions of the time and can also be seen, albeit briefly, as a stolen supercar in CHiPs '99, as repainted future police vehicles in Power Rangers Time Force, in an episode of the television series Black Scorpion in March 2001, and in a hidden camera TV series called Scare Tactics.
After being abandoned and unmaintained for 10 years, one of the screen-used cars was offered for sale in January 2021 by Bob's Prop Shop in Las Vegas.
KARR
KARR (Knight Automated Roving Robot) is the name of a fictional, automated prototype vehicle featured as a major antagonist of KITT (Knight Industries Two Thousand) in two episodes of the 1982 original series, and was part of a multi-episode story arc in the 2008 revived series. KARR (voiced by Peter Cullen) first appeared in "Trust Doesn't Rust", which aired on NBC on November 19, 1982, and in which he seemingly met his demise at the end. However, he was so popular with viewers that he was brought back for a second time in "K.I.T.T. vs. K.A.R.R." (voiced by Paul Frees), which aired on NBC on November 4, 1984. Trust Doesn't Rust was also printed in book form, written by Roger Hill and Glen A. Larson, following the story and general script of the original television episode, expanding some areas of the plot and adding several extra secondary characters. KARR was brought back for a third time in 2009 for "Knight to King's Pawn" in the new Knight Rider series of 2008–2009 (marking him as one of the very few villains in either the original series or the new series to make a return appearance).
KARR design and development
KARR was originally designed by Wilton Knight and built by Knight Industries for military purposes for the Department of Defense. After the completion of the vehicle, the KARR processor was installed and activated. However, a programming error caused the computer to be unstable and potentially dangerous. KARR was programmed for self-preservation, but this proved to be dangerous to the Foundation's humanitarian interests. The project was suspended and KARR was stored until a solution could be found. Once KITT was constructed, it was presumed that his prototype KARR had been deactivated and dismantled. However, the latter did not occur and KARR was placed in storage and forgotten following the death of Wilton Knight. KARR was later unwittingly reactivated by thieves in the original episode Trust Doesn't Rust, and was thought destroyed, but then reappeared in the episode K.I.T.T. vs. K.A.R.R. and was seen to be finally destroyed by Michael and KITT. Originally, KARR's appearance was entirely identical to KITT's (all black with a red scan bar), but when KARR returns in "K.I.T.T. vs. K.A.R.R.", his scan bar is now amber/yellow; he otherwise remains very similar to KITT, still 100% black for the first half of the episode. KARR later gets a brand new black and silver two-tone paint job, incorporating a silver lower body into his familiar all-black finish. KARR's scanner originally made a low droning noise, and KARR's engine originally sounded rough and "fierce", but in the return episode the scanner and the engine both sound similar to KITT's, with a slight reverb audio effect added. In "Trust Doesn't Rust," KARR had no license plates, but in KARR's second appearance, he had a California license plate that read "KARR." KARR's voice modulator showed as greenish-yellow on his dash display, a different colour and design than the various incarnations of KITT's red display.
Personality
Unlike KITT, whose primary directive is to protect human life, KARR was programmed for self-preservation, making him a ruthless and unpredictable threat. He does not appear as streetwise as KITT, being very naïve and inexperienced and having a childlike perception of the world.
This has occasionally allowed people to take advantage of his remarkable capabilities for their own gain; however, due to his ruthless nature, he sometimes uses people's weaknesses and greed as a way to manipulate them for his own goals. Despite this, he ultimately considers himself superior (always referring to KITT as "the inferior production line model") as well as unstoppable, and due to his programming, the villains don't usually get very far. KARR demonstrates a complete lack of respect or loyalty – on one occasion ejecting his passenger to reduce weight and increase his chances of escape. KARR's evil personality is also somewhat different in the comeback episode. His childlike perceptions are diminished into a more devious personality, completely cold and bent on revenge. His self-preservation directive is no longer in play: when KARR is close to exploding after receiving severe damage, he willingly turbo-jumps into a mid-air collision with KITT, hoping that his own destruction would also spell his counterpart's. Even KARR's modus operandi is different; already self-serving in the first episode, he now actively makes use of other people to serve his own needs. One explanation of this change could be the damage he received after falling over the cliff at the end of "Trust Doesn't Rust", causing further malfunctions in his programming. Indeed, KITT himself is seen to malfunction and suffer a change of personality as a result of damage in several other episodes.
KARR 2.0
To mirror the original series, the nemesis and prototype of the second KITT (Knight Industries Three Thousand) is also designated KARR in the new series. KARR 2.0 (Peter Cullen) is mentioned in the new Knight Rider series episode "Knight of the Living Dead", and is said to be a prototype of KITT (Knight Industries Three Thousand). The new KARR acronym was changed to "Knight Auto-cybernetic Roving Robotic-exoskeleton". KARR's visual identity has also had similar changes for the new series. Instead of an automobile, a schematic display shows a heavily armed humanoid-looking robot with wheeled legs that converts into an ambiguous off-road vehicle. KARR has the ability to transform from vehicle mode into a large wheeled robotic exoskeleton, instead of KITT's "Attack Mode". The vehicle mode of KARR is a 2008–2009 Shelby GT500KR with the license plate initials K.R. KARR is once again voiced by Peter Cullen, who also voiced the first appearance of KARR in "Trust Doesn't Rust". KARR was originally designed for military combat. Armed with twin machine guns on each shoulder and missiles, the exoskeleton combines with a human being for easier control. KARR is visually identical to KITT in this iteration, lacking the two-tone black and silver paint job of the 1980s version of KARR. The only difference is the scanner and voice box, which are yellow compared to KITT's red. Once again, similar to the original character, this entirely different "KARR" project (2.0) had an A.I. that was programmed for self-preservation, and he was deactivated and placed in storage after he reprogrammed himself and killed seven people. When KARR finally appears again in the episode "Knight to King's Pawn", he takes a form once again similar to KITT as a 2008 Ford Shelby Mustang GT500KR, and is once again 100% black like the KITT 3000; the only differences are his yellow scanner bar and all-yellow voice module.
In the original series, his scan bar was more amber/yellow, and his voice module was originally yellow-green. KARR's scanner sounds much lower, with much more of an echo. The sound is especially noticeable when KARR is chasing down KITT while he is still in Ford Mustang mode.
F.L.A.G. Mobile Command Center
KITT has access to a mobile "garage" called the F.L.A.G. Mobile Command Center, a semi-trailer truck owned by the Foundation. In most episodes, it is a GMC General. The trailer has an extendable ramp that drops down and allows KITT to drive inside even when the truck is in motion. The trailer was loaded with spare parts and equipment for KITT. It also had a computer lab where technicians Bonnie or April would work and conduct repairs and maintenance while in transit. In "KITTnap", KITT is kidnapped and Michael and RC3 use the tractor (which has been disconnected from the trailer) to go and find him.
Reception and significance
KITT, despite being just an AI without a body and with only a rudimentary personality, has proven to be a popular character. One of the reasons for KITT's attractiveness was the fact that he "domesticated" then-powerful technology (computers), making it "accessible, flexible and portable" in a way that was also "reliable and secure". Nickianne Moody has argued that through KITT, Knight Rider became one of "the first popular texts to visualize and narrativize the potential of [computer] technologies to transform daily life". On the other hand, scholars have also looked at the relationship between KITT and its driver, Knight, in the context of the relationship between "man" and "technology", criticizing it for being a metaphor for "masculinity", and even a "classic example of penis-extension". Nonetheless, Moody argues that the relationship between Knight and KITT was more complex and nuanced than many "buddy-ship" relationships of other "Cold War warriors" in the Hollywood works of its era. KITT has also been discussed in the context of human-robot (or human-AI) interaction. KITT has also proven to be influential on the design of real-world computers for vehicles, with a number of studies noting that the science-fiction vision of the 1980s portrayed in the show is coming to be realized in real life as of the early 21st century. Shaked and Winter noted that it was "one of the most appealing multimodal mobile interfaces of the 1980s", although talking to computers in a way similar to humans is still in its early stages of maturing as a technology as of 2019. Various toy versions of KITT have been released. Among the best-known Knight Rider memorabilia are the remote-controlled KITT, the Knight Rider lunch box, and the deluxe version of KITT. The deluxe model of KITT, sold by Kenner Toys and dubbed the "Knight 2000 Voice Car", spoke electronically (with the actual voice of William Daniels) and featured a detailed interior and a Michael Knight action figure. ERTL released die-cast toys of KITT in three different sizes—the common miniature-sized model, a 'medium'-sized model, and a large-sized model. These toys featured red reflective holograms on the nose to represent the scanner. Also, in late 2004, 1/18 scale die-cast models of KITT and KARR were produced by ERTL, complete with detailed interiors and a light-up moving scanner just like in the series.
In September 2006, Hitari, a UK-based company that produces remote-control toy cars, released a Knight Rider KITT remote-control car in 1/15 scale, complete with the working red scanner lights, KITT's voice from the TV show, and the car's turbine engine sound with the "cylon" scanner sound effect. In December 2012, Diamond Select Toys released a talking electronic 1/15 scale KITT which features a light-up dashboard, scanner, foglights and tail lights, along with the original voice of KITT, William Daniels, all at the push of a button. Mattel has released two die-cast metal models of KARR: a 1:18 scale model as part of the Hot Wheels Elite collection and a 1:64 scale model as part of the Hot Wheels Retro Nostalgia Entertainment collection. They both resemble KARR's appearance from KITT vs. KARR, with silver paint around the bottom half of the vehicle. The smaller one, however, lacks the amber scanner light and instead retains the red scanner from KARR's appearance in Trust Doesn't Rust; there is also a KITT model which is completely identical to KARR as he appeared in his first episode, Trust Doesn't Rust. KITT and KARR are both in Knight Rider: The Game and its sequel. They also appear in the Knight Rider World in Lego Dimensions. Featuring the iconic voice of William Daniels, the Knight Rider GPS was a fully working GPS using Mio navigational technology. The GPS featured custom recorded voices so that the unit could "speak to" its owner using their own name, if it was among the recorded set of names.
References
Text was copied/adapted from K.I.T.T. (2000) at Knight Rider Wiki, which is released under a Creative Commons Attribution-Share Alike 3.0 (Unported) (CC-BY-SA 3.0) license.
External links
Bringing KITT Back! as detailed in Project: K.I.T.T.
Fictional cars Knight Rider characters Fictional artificial intelligences Fictional computers Television characters introduced in 1982 One-off cars Pontiac Fictional characters who can move at superhuman speeds de:Knight Rider#K.I.T.T.
1327393
https://en.wikipedia.org/wiki/A%C3%A9rospatiale%20SA%20321%20Super%20Frelon
Aérospatiale SA 321 Super Frelon
The Aérospatiale (formerly Sud Aviation) SA 321 Super Frelon ("Super Hornet") is a three-engined heavy transport helicopter produced by aerospace manufacturer Sud Aviation of France. It held the distinction of being the most powerful helicopter to be built in Europe at one point, as well as being the world's fastest helicopter. The Super Frelon was a more powerful development of the original SE.3200 Frelon, which had failed to enter production. On 7 December 1962, the first prototype conducted the type's maiden flight. On 23 July 1963, a modified Super Frelon flew a record-breaking flight, setting the new FAI absolute helicopter world speed record with a recorded speed of . Both civilian and military versions of the Super Frelon were produced; the type was predominantly sold to military customers. In 1981, Aerospatiale, Sud Aviation's successor company, chose to terminate production due to a lack of orders. The Super Frelon was most heavily used by naval air arms, such as the French Navy and the People's Liberation Army Naval Air Force. On 30 April 2010, the type was retired by the French Navy, having been replaced by a pair of Eurocopter EC225 helicopters as a stopgap measure pending the availability of the NHIndustries NH90 helicopter. The Super Frelon was in use for an extended period within China, where it was manufactured under license and sold by the Harbin Aircraft Industry Group as the Harbin Z-8. A modernised derivative of the Z-8, marketed as the Avicopter AC313, performed its first flight on 18 March 2010. Development The SA.3210 Super Frelon was developed by French aerospace company Sud Aviation from the original SE.3200 Frelon. During the type's development, Sud Aviation had risen to prominence as a major helicopter manufacturer, having exported more rotorcraft than any other European rival. Having produced the popular Aérospatiale Alouette II and Aérospatiale Alouette III, the firm was keen to establish a range of helicopters fulfilling various roles, functions, and size requirements; two of the larger models in development by the early 1960s were the Super Frelon and what would become the Aérospatiale SA 330 Puma. The Super Frelon was the largest helicopter in development by the firm, being substantially increased over the earlier Frelon, and was considered to be an ambitious design at the time. The earlier Frelon had been developed to meet the requirements of both the French Navy and the German Navy, which both had released details on its anticipated demands for a heavy helicopter; however, these requirements were revised upwards by the customer, leading to the redesign and emergence of the Super Frelon. Changes included the adoption of much more powerful engines, using three Turbomeca Turmo IIIC turboshaft engines, each capable of generating on the prototypes (later uprated to on production models) in place of the Frelon's Turbomeca Turmo IIIB engines; these drove a six-bladed main rotor, instead of the Frelon's four-bladed one, and a five-bladed (instead of four-bladed) tail rotor. Overall, the modified design provided for a greatly increased gross weight, from , whilst improving the rotorcraft's aerodynamic efficiency and handling qualities. Additional external changes between the Frelon and Super Frelon had been made, such as the original stubby tail boom having been replaced by a more conventional one, albeit with a crank in it to raise the tail rotor clear of vehicles approaching the rear loading ramp. 
Taking note of American experiments with amphibious helicopters, Sud Aviation redesigned the Super Frelon's fuselage into a hull, featuring a bow, planing bottom and watertight bilge compartments. Various foreign manufacturers participated in the development and manufacturing of the type; American helicopter company Sikorsky was contracted to supply the design of a new six-bladed main rotor and five-bladed tail rotor, while Italian manufacturer Fiat supplied the design for a new main transmission. On 7 December 1962, the first prototype Super Frelon conducted the type's maiden flight. On 28 May 1963, it was followed by the second prototype. The first prototype was tailored towards meeting the needs of the French Air Force, while the second was fully navalised, including lateral stabilising floats fixed to the undercarriage. On 23 July 1963, a modified prototype Super Frelon was used to break the FAI absolute helicopter world speed record. Flown by Jean Boulet and Roland Coffignot, the aircraft broke a total of three international records: speed over 3 km at low altitude; speed at any altitude over 15 and 25 km; and speed over a 100 km closed circuit. By April 1964, the two prototypes had accumulated 388 flying hours, which included 30 hours of seaworthiness trials performed with the second prototype. In January 1964, the third Super Frelon prototype made its first flight; the fourth first flew during May 1964, and a pair of pre-production models were completed during the latter half of 1964. The third prototype participated in a series of accelerated wear trials to establish component endurance and overhaul lifespan, while the fourth prototype was assigned to further tests of equipment for the naval environment. By July 1964, the French Government had placed an initial order for the Super Frelon, intended to perform logistic support duties at the Centre Experimental du Pacifique; a further order for the naval version, which was to be equipped for anti-submarine duties, was already being negotiated. However, West German support for the Super Frelon programme had already declined by this point, partially due to interest in the rival Sikorsky SH-3 Sea King, which was evaluated against the type. Both civilian and military versions of the Super Frelon were built, with the military variants being by far the most numerous, entering service with the French military as well as being exported to Israel, South Africa, Libya, China and Iraq. Three military variants were produced: military transport, anti-submarine and anti-ship. The transport version is able to carry 38 equipped troops, or alternatively 15 stretchers for casualty evacuation tasks.
Design
The Aérospatiale SA 321 Super Frelon is a large, heavy-lift single-rotor helicopter, furnished with a relatively atypical three-engine configuration; these are Turboméca Turmo IIIC turboshaft engines set on top of the fuselage, a pair of turbines positioned side by side at the front and one located aft of the main rotor. The naval anti-submarine and anti-ship variants are usually equipped with navigation and search radar (ORB-42), and a 50-metre rescue cable. They are most often fitted with a 20 mm cannon, countermeasures, night vision, a laser designator and a Personal Locator System. The Super Frelon can also be fitted for inflight refueling.
The front engines have simple individual ram intakes, while the rear one is fitted with a semi-circular scoop to provide air; all three bifurcated exhausts are near to the rotor head. The three engines and the reduction gearbox are mounted on a horizontal bulkhead and firewall which forms the roof of the cabin and upper structural member of the fuselage. The engines are isolated by multiple firewalls, including transverse firewalls separating front and rear engines from the rotor gearbox, and zonal engine firewalls. Eight sturdy hinged doors provide access to the compact Turmo engines, which have ample space around them to enable ground crew to service them without using external platforms. The fuselage is actually a hull, which makes use of a semi-monocoque light alloy construction; according to aerospace publication Flight International, the hull design was "reminiscent of flying-boat engineering". The main cabin lacks any transverse bracing, except for a single bulkhead between the cockpit and cabin. Substantial built-up frames connect the strengthened roof structure with the floor/planing-bottom of transverse under-floor bulkheads and outer skin. A conventional exterior skin is used, employing longitudinal stiffeners as well as two lines of deep channel members, while the under-floor cross members are reinforced with vertical stiffeners. There is no keel; at floor level there are horizontal members between frames, stiffened by transverse shear angles. Flexible fuel cells are stored in four watertight under-floor compartments lying fore and aft of the rotor axis, while the floor itself is fitted with removable panels. A hatch set into the floor, positioned approximately underneath the rotor axis, is used for sling-load operations. At the rear of the cabin is a tapered section of simple semi-monocoque construction, which is closed by a robust hinged rear loading ramp that serves as the main entrance for bulky loads or equipment. The loading ramp is jettisonable in emergency situations. Additionally, there is a sliding door located on the forward starboard side, while a small hinged emergency door is set on the aft port side. The tail boom uses conventional semi-monocoque construction, supported by closely spaced notched channel-section frames and continuous stringers, absent of any major longitudinal sections or longerons. The cranked section carrying the tail rotor and trim plane is more robust, strengthened by a solid-web spar, frames, and stiffeners. The juncture of the main boom and cranked section is hinged in order to reduce the rotorcraft's folded length to . Along the top of the boom, the shaft for the tail rotor is covered by a fairing. The fixed landing gear has twin wheels on each of the three vertical shock absorber-equipped struts. The main landing gear units are mounted on triangulated tubular structures, while the nose gear is bracketed to the cockpit bulkhead via a watertight seal in the planing bottom. The main wheels have hydraulic brakes operated from the pedals, complete with a parking hand brake, while the nose unit is fully castoring. The nose, which is covered by large glazed panels, has a bow chine and planing bottom built as a unit with the flight deck, which is higher than the main cabin floor. Operational history China From December 1975 to April 1977, China took delivery of a batch of 12 SA 321 Super Frelon navalised helicopters. These helicopters came in two variants: anti-submarine warfare (ASW) and search and rescue (SAR) versions. 
The Super Frelon was the first helicopter of the People's Liberation Army (PLA) to be capable of operating from the flight deck of surface vessels. China has also manufactured a number of Super Frelons locally, where the type is known under the designation Z-8 (a land- or ship-based ASW/SAR helicopter). The Super Frelon remains operational with the PLA Navy as of 2014. Since the early 1980s, the Super Frelons have been frequently used by the People's Liberation Army Navy (PLAN) for conducting shipborne ASW and SAR operations. For ASW missions, the Z-8 is equipped with surface search radar and a French HS-12 dipping sonar while carrying a Whitehead A244S torpedo under the starboard side of the fuselage. The rotorcraft were also used to ferry supplies from replenishment ships to surface combatants, and to transport marines from landing ships to the shore. A naval SAR version, designated the Z-8S and outfitted with upgraded avionics, a searchlight, a FLIR turret and a hoist, made its first flight in December 2004. Another rescue variant, furnished with dedicated medevac equipment on board, was also developed for the Navy, designated as the Z-8JH. The Z-8A version was developed as an army transport variant and received certification in February 1999. In 2001, a pair of Z-8As was delivered to the Army for evaluation; however, the Army ultimately decided to procure additional Mi-17V5s instead. Only a single batch of about six Z-8As was delivered to the Army in November 2002; these still retained the nose weather radar and side floats. Starting in 2007, the People's Liberation Army Air Force (PLAAF) also acquired dozens of upgraded Z-8Ks and Z-8KAs for conducting SAR missions; these were equipped with a FLIR turret and a searchlight underneath the cabin, plus a hoist and a flare dispenser. China has also developed a domestic civil helicopter variant of the Z-8, which is marketed as the Avicopter AC313. The AC313 has a maximum takeoff weight of 13.8 tonnes, is capable of carrying up to 27 passengers, and has a maximum range of 900 km (559 miles). After the 2008 Sichuan earthquake, Z-8 helicopter production received a massive boost as the event had proved the helicopter's value in humanitarian missions. New engine acquisitions and design changes were implemented in order to iron out some of the known issues which had affected the Z-8 for decades. The Chinese People's Armed Police ordered 18 Z-8 helicopters; by 2013, at least five helicopters had been delivered, the majority of these having been assigned to forestry fire fighting units. During subsequent earthquake relief operations, Z-8 helicopters have been deployed to perform rescue and logistical missions. In 2018, the PLA Army Aviation announced that it would begin phasing out its fleet of Z-8 helicopters due to low performance and high maintenance requirements, even though some examples had been in service for only six years; the Z-8s are likely to be replaced by the Harbin Z-20 medium-lift helicopter. France In October 1965, the SA 321G ASW helicopter joined the French Naval Aviation (Aeronavale). Apart from ship-based ASW missions, the SA321G also carried out sanitisation patrols in support of Redoutable-class ballistic missile submarines. Some aircraft were modified with nose-mounted targeting radar for 'Exocet' anti-ship missiles. Five SA321GA freighters, originally used in support of the Pacific nuclear test centre, were transferred to assault support duties. 
In 2003, the surviving Aeronavale Super Frelons were assigned to transport duties, including commando transport, VertRep and SAR. The SA321G Super Frelon served with Flotille 32F of the French Aviation navale, operating from Lanvéoc-Poulmic in Brittany in the Search and Rescue role. They were retired on 30 April 2010, replaced by two Eurocopter EC225 helicopters purchased as stop-gaps until the NHIndustries NH90 came into service in 2011–12. Iraq Starting in 1977, a total of 16 Super Frelons were delivered to the Iraqi Air Force; equipped with radar and Exocet missiles, the Iraqi models were designated as the SA 321H. These rotorcraft were deployed in the lengthy Iran–Iraq War and during the 1991 Gulf War, in which at least one example was destroyed. During the Iran–Iraq War, Iraq started using Super Frelon and its other newly purchased Exocet-equipped fighters to target Iranian shipping in Persian Gulf in an event now known as the Tanker War. Two of the Iraqi Super Frelons were downed by Iranian fighters, one by a long-range shot of AIM-54A Phoenix by an F-14 Tomcat (during Operation Pearl) while under way over Persian Gulf, and one by an AGM-65A Maverick fired from an Iranian F-4 Phantom in July 1986, while attempting to take off from an oil rig. Israel During 1965, Israel placed an order for six SA 321K Super Frelons to equip the Israeli Air Force with a heavy lift transport capability. On 20 April 1966, the first of these rotorcraft arrived, enabling the inauguration of 114 Squadron, which operated the type out of Tel Nof. An additional six Super Frelons were ordered during the following year. The Israeli military had initially hoped to use the Super Frelons for deploying Panhard AML-90 light armoured cars in support of airborne operations, but this concept was dropped when tests revealed the helicopter was incapable of handling the vehicle's combat weight. A total of four helicopters had arrived by the start of the 1967 Six-Day War, during which they flew 41 sorties. Israeli Super Frelons saw extensive service during the War of Attrition, participating in operations such as Helem, Tarnegol 53 and Rhodes. The type was once again in service during the Yom Kippur War, following which Israel replaced the original Turbomeca Turmo engines with the General Electric T58-GE-T5D engine. The Super Frelons also took part in the Israeli invasion of Lebanon in June 1982. Due to their relatively high maintenance cost and poor performance capabilities compared to the IAF CH-53s, they were eventually retired in 1991. Libya During 1980–1981, six radar-equipped SA 321GM helicopters and eight SA 321M SAR/transports were delivered to Libya. South Africa South Africa ordered sixteen SA 321L helicopters in 1965, which were delivered by 1967 and adopted by the South African Air Force (SAAF)'s 15 Squadron. At least two were deployed to Mozambique in support of Rhodesian military operations against insurgents of the Zimbabwe African National Liberation Army between 1978 and 1979. Others were mobilised for evacuating South African paratroops from Angola during Operation Reindeer. In August 1978, the South West African People's Organization sparked a major border incident between South Africa and Zambia when its guerrillas fired on an SAAF Super Frelon landing at Katima Mulilo from Zambian soil. The South Africans retaliated with an artillery strike, which struck a Zambian Army position. 
Syria In October 1975, it was widely reported that Syria had ordered fifteen unspecified Super Frelons from France as part of an arms deal funded by Saudi Arabia. While the Syrian Air Force did issue a requirement for fifteen of the specific aircraft, and recommended purchasing up to fifty, by 1984 the sale had still not materialized. Variants SE.3200 Frelon Prototype transport helicopter powered by three 597 kW (800 hp) Turbomeca Turmo IIIB engines driving a four-bladed rotor of 15.2 m (50 ft) diameter. Two built, the first flying on 10 June 1958. SA 321 Preproduction aircraft. Four built. SA 321G Anti-submarine warfare version for the French Navy, it was powered by three Turbomeca IIIC-6 turboshaft engines; 26 built. SA 321Ga Utility and assault transport helicopter for the French Navy. SA 321GM Export version for Libya, fitted with Omera ORB-32WAS radar. SA 321H Export version for Iraq, it was powered by three Turbomeca Turmo IIIE turboshaft engines, fitted with Omera ORB-31D search radar, and armed with Exocet anti-ship missiles. SA 321F Commercial airline helicopter, powered by three Turbomeca IIIC-3 turboshaft engines, with accommodation for 34 to 37 passengers. SA 321J Commercial transport helicopter and accommodation for 27 passengers. SA 321Ja Improved version of the SA 321J. SA 321K Military transport version for Israel. SA 321L Military transport version for South Africa, fitted with air inlet filters. SA 321M Search and rescue, utility transport helicopter for Libya. Changhe Z-8 Chinese built version with three Changzhou Lan Xiang WZ6 turboshaft engines. Changhe Z-8A Army transport. Changhe Z-8F Chinese built version with Pratt & Whitney Canada PT6B-67A turboshaft engines. Changhe Z-8AEW Chinese AEW helicopter with retractable radar antenna, AESA radar, 360 degree coverage, redesigned nose similar to the AC313, and FLIR. Changhe Z-8L Chinese variant with wide-body fuselage and enlarged fuel sponsons, first spotted in January 2019. The internal width of the load area has been increased from 1.8m to 2.4 m, making it larger than old Z-8 and SA321 variants. Operators People's Liberation Army Air Force People's Liberation Army Navy French Naval Aviation Olympic Airways Indonesian Air Force Pelita Air Service Iraqi Air Force Israeli Air Force Libyan Air Force Libyan Navy South African Air Force Air Force of Zaire Specifications (Naval Super Frelon) See also References Citations Bibliography Donald, David and Jon Lake. Encyclopedia of World Military Aircraft. London: Aerospace Publishing, Single Volume Edition, 1996. . Grolleau, Henri-Paul. "French Navy Super Hornets". Air International, May 2009, Vol 76 No. 5. Stamford, UK:Key Publishing. ISSN 0306-5634. pp. 56–60. Grolleau, Henri-Pierre. "Hello EC225, Goodbye Super Frelon". Air International, June 2010, Vol 78 No. 6. UK:Key Publishing. ISSN 0306-5634. p. 12. Stevens, James Hay. "Super Frelon: Western Europe's Most Powerful Helicopter". Flight International, 9 July 1964. pp. 55–59. Taylor, John W.R. Jane's All The World's Aircraft 1966–1967, London: Sampson Low, Marston & Company, 1966. Taylor, J.W.R. Jane's All the World's Aircraft 1976–77. London:Macdonald and Jane's, 1976. . 1960s French military transport aircraft Aérospatiale aircraft Changhe aircraft Amphibious helicopters 1960s French helicopters Military helicopters Three-turbine helicopters Aircraft first flown in 1962
7152797
https://en.wikipedia.org/wiki/Alfred%20Spector
Alfred Spector
Alfred Zalmon Spector is an American computer scientist and research manager. He was most recently CTO of Two Sigma Investments and previously Vice President of Research and Special Initiatives at Google. Education Spector received his Bachelor of Arts degree in Applied Mathematics from Harvard University, and his PhD in computer science from Stanford University in 1981. His research explored communication architectures for building multiprocessors out of network-linked computers and included measurements of remote procedure call operations on experimental Ethernet. His dissertation was titled Multiprocessing Architectures for Local Computer Networks, and his advisor was Forest Baskett III. Career Spector was an associate professor of computer science at Carnegie Mellon University (CMU). While there, he served as doctoral advisor to Randy Pausch, Jeff Eppinger and Joshua Bloch and seven others. Spector was a founder of Transarc Corporation in 1989 which built and sold distributed transaction processing and wide area file systems software, commercializing the Andrew File System developed at CMU. After Transarc was acquired by IBM, he became a software executive and then vice president of global software and services research for IBM and finally vice president of strategy and technology within IBM's Software Group. Spector joined Google as vice president of research in November 2007 and retired in early 2015. In October 2015 he was hired by technology-driven hedge fund Two Sigma Investments to serve as the CTO, which he did until mid-2020. Advisory committees Spector is involved with academic computer science and has served on numerous advisory committees, including chairing the NSF CISE Advisory Committee from 2004–2005; various university advisory committees including at City College of New York, Carnegie Mellon University, Harvard, Rice University and Stanford. He has served on the National Academy Computer Science and Telecommunication Board from 2006 to 2013 and chaired the Computer Science and Engineering Section of the National Academy of Engineering. Speaking/writing Spector has written and spoken on diverse topics related to computer science and engineering. In 2004, he described the expanding sphere of Computer Science and proposed the need to infuse computer science into all disciplines using the phrase CS+X. He and his co-authors Peter Norvig and Slav Petrov proposed a model for computer science research in industry, based on their experience at Google in their paper, Google’s Hybrid Approach to Research. Since 2016 Spector advocated for a balanced and critical perspective on data science, and in the presentation Opportunities and Perils in Data Science, he argued for a trans-disciplinary study of data science that includes the humanities and social sciences. As a Phi Beta Kappa Visiting Scholar in the 2018–19 academic year, Spector has presented these positions at various universities around the United States. Awards and recognition In 2001, Spector received the IEEE Computer Society's Tsutomu Kanai Award for his contributions to distributed computing systems and applications. He and other researchers at Carnegie Mellon University won the 2016 ACM Software systems Award for developing the Andrew File System (AFS). He was elected to the National Academy of Engineering in 2004. He was inducted as a Fellow of the Association for Computing Machinery in 2006 and the American Academy of Arts and Sciences in 2009, and serves on its council. 
Alfred appears in the Institutional Investor 2017 Tech 40 and was a Phi Beta Kappa Visiting Scholar during the 2018–19 academic year. References External links American computer scientists Stanford University School of Engineering alumni Google employees Fellows of the Association for Computing Machinery Living people Carnegie Mellon University faculty Harvard School of Engineering and Applied Sciences alumni IBM Research computer scientists IBM employees American chief technology officers 1954 births
2255936
https://en.wikipedia.org/wiki/Unitrends
Unitrends
Unitrends Inc., a Kaseya company, is an American company specializing in backup and business continuity. Products Unitrends produces a number of physical appliances ranging from small desktop backup appliances to large rack-mounted backup appliances. Unitrends also produces virtual appliances for VMware and Hyper-V marketed as Unitrends Backup. The physical appliances typically include some level of component redundancy, most notably RAID in the form of RAID 1, RAID 6, and RAID 10. The larger appliances also have tiered flash storage in the form of SSD drives. These appliances perform on-premises backup, off-premises disaster recovery, or, through a technology known as cross-vaulting (a form of replication), both concurrently. Unitrends also offers a multi-tenant public cloud-based service, Unitrends Cloud. The company states that Unitrends Cloud is based on its previous product offering, which used its backup appliances to perform off-premises disaster recovery in a single-tenant private cloud deployment. Technology Unitrends uses file- and image-based backup techniques coupled with storage-based and in-flight encryption, compression, data deduplication, and ransomware detection. The technology supports bare-metal restore as well as file-based recovery. Unitrends supports a form of disk staging which it terms D2D2x (Disk-to-Disk-to-Any), where "x" can be a disk, a tape, a private single-tenant cloud, or a public multi-tenant cloud. The company markets its support for heterogeneous storage, operating systems, and applications, claiming compatibility with over 250 versions of operating systems and applications. History Since its inception in 1989, Unitrends Software Corporation has developed backup and crash-recovery technology. Unitrends originated from a sole proprietorship called Medflex, which was founded in 1985 by Steve W. Schwartz to help fund medical missions. The first product, CTAR (Compressing Tape Archiver), was originally developed to handle the backup problems Schwartz encountered in his own medical office. After perfecting the program, it was sold commercially. In 1988, Unitrends developed the first complete crash-recovery product for Santa Cruz Operation’s Xenix systems, originally called Jet RestoreEase. Unitrends started as a stand-alone Unix backup software company and provided BareMetal recovery for platforms like the SCO offerings. BareMetal was later ported to SCO Unix and renamed System Crash Air-Bag. From 1988 to 1991, ten other software products were written for the Xenix and Unix environments, and in 1999 Unitrends released their Backup Professional client/server backup software. In September 2002, Unitrends shipped their first hardware-based backup appliance. Initially based in Myrtle Beach, South Carolina, the company relocated its headquarters further inland to Columbia in November 2003, while support and development remained on the coast until 2005. In 2014, headquarters were moved again to Burlington, Massachusetts, where they remain today. Growth and acquisition On October 31, 2013, Unitrends was acquired by Insight Venture Partners, a large global private equity and venture capital firm. Shortly after this acquisition, Unitrends obtained PHD Virtual on December 16, 2013. Unitrends acquired another company on May 29, 2014, taking over Australian-based Yuruware. On May 3, 2018, Unitrends merged with Kaseya. 
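As a generic illustration of the data deduplication mentioned in the Technology section above (this is not Unitrends' actual implementation; the fixed 4 KB block size, in-memory store, and function names are assumptions made for the example), a minimal block-level deduplication scheme can be sketched in Python:

    import hashlib

    def dedup_backup(data: bytes, store: dict, block_size: int = 4096):
        """Split data into fixed-size blocks, store each unique block once
        (keyed by its SHA-256 digest), and return the ordered list of digests
        needed to reconstruct the data."""
        recipe = []
        for offset in range(0, len(data), block_size):
            block = data[offset:offset + block_size]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in store:      # only previously unseen blocks consume space
                store[digest] = block
            recipe.append(digest)
        return recipe

    def restore(recipe, store: dict) -> bytes:
        """Reassemble the original data from its block recipe."""
        return b"".join(store[d] for d in recipe)

    if __name__ == "__main__":
        store = {}
        first = b"A" * 10_000 + b"first backup tail"
        second = b"A" * 10_000 + b"second backup tail"   # mostly identical data
        r1 = dedup_backup(first, store)
        r2 = dedup_backup(second, store)
        assert restore(r1, store) == first and restore(r2, store) == second
        print(len(store), "unique blocks stored for two backups")

Production backup products typically build on this basic idea with variable-size (content-defined) chunking, persistent indexes, compression, and encryption.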
References Companies based in Middlesex County, Massachusetts Software companies based in Massachusetts Software companies established in 1989 Software companies of the United States
3544223
https://en.wikipedia.org/wiki/Network%20Utility
Network Utility
Network Utility was an application included with macOS up to macOS Catalina that provided a variety of tools for computer network information gathering and analysis. Starting with macOS Big Sur, the application is no longer included and was replaced with a message stating that it has been deprecated. Network Utility showed information about each of your network connections, including the MAC address of the interface, the IP address assigned to it, its speed and status, a count of data packets sent and received, and a count of transmission errors and collisions. Services The available services or tools found in the Network Utility: Network interfaces Netstat ping Lookup Traceroute Whois Finger Port scan Actionable Items Examples of what the Network Utility can help with: Check your network connection View network routing tables and statistics Test whether you can contact another computer Test your DNS server Trace the paths of your network traffic Check for open TCP ports Port scan information Network Utility uses the tools supplied in the Unix directories for most of its functions; however, for the port scan it uses a Unix executable in its resources folder, stroke, found at . How To Open It In OS X Mavericks and macOS, Network Utility is in /System/Library/CoreServices/Applications. In OS X Mountain Lion, Lion, and Snow Leopard, Network Utility is in the Utilities folder of your Applications folder. Network Utility can also be found using Spotlight. Gallery References MacOS-only software made by Apple Inc. MacOS
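As a rough illustration of what the Port scan tool listed above does (this sketch is not how Apple's bundled stroke executable is implemented; the host, port list, and timeout are arbitrary assumptions), a basic TCP connect scan can be written in Python as follows:

    import socket

    def tcp_connect_scan(host, ports, timeout=0.5):
        """Try a full TCP connection to each port; ports that accept the
        connection are reported as open."""
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                sock.settimeout(timeout)                  # fail fast on closed/filtered ports
                if sock.connect_ex((host, port)) == 0:    # 0 means the connect succeeded
                    open_ports.append(port)
        return open_ports

    if __name__ == "__main__":
        # Scan a handful of well-known ports on the local machine.
        print(tcp_connect_scan("127.0.0.1", [22, 80, 443, 8080]))

A port that accepts the connection is reported as open; ports that refuse the connection or time out are skipped.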
35771514
https://en.wikipedia.org/wiki/Practice%20management%20software
Practice management software
Practice management software may refer to software used for the management of a professional office: Law practice management software Medical practice management software Veterinary practice management software There is also practice management software for accounting, architecture, veterinary, dental, optometry, and other practices.
17064793
https://en.wikipedia.org/wiki/Pushd%20and%20popd
Pushd and popd
In computing, pushd and popd are commands used to work with the command line directory stack. They are available on command-line interpreters such as 4DOS, Bash, C shell, tcsh, Hamilton C shell, KornShell, cmd.exe, and PowerShell for operating systems such as DOS, Microsoft Windows, ReactOS, and Unix-like systems. Overview The pushd command saves the current working directory in memory so it can be returned to at any time, and then changes to the directory given as its argument. The popd command returns to the path at the top of the directory stack. This directory stack is accessed by the command dirs in Unix or Get-Location -stack in Windows PowerShell. The first Unix shell to implement a directory stack was Bill Joy's C shell. The syntax for pushing and popping directories is essentially the same as that used now. Both commands are available in FreeCOM, the command-line interface of FreeDOS. In Windows PowerShell, pushd is a predefined command alias for the Push-Location cmdlet and popd is a predefined command alias for the Pop-Location cmdlet. Both serve basically the same purpose as the pushd and popd commands. Syntax Pushd pushd [path | ..] Arguments: path This optional command-line argument specifies the directory to make the current directory. If path is omitted, the path at the top of the directory stack is used, which has the effect of toggling between two directories. Popd popd Examples Unix-like
[user@server /usr/ports] $ pushd /etc
/etc /usr/ports
[user@server /etc] $ popd
/usr/ports
[user@server /usr/ports] $
Microsoft Windows and ReactOS
C:\Users\root>pushd C:\Users
C:\Users>popd
C:\Users\root>
DOS batch file
@echo off
rem This batch file deletes all .txt files in a specified directory
pushd %1
del *.txt
popd
echo All text files deleted in the %1 directory
See also List of DOS commands List of Unix commands References Further reading External links pushd | Microsoft Docs popd | Microsoft Docs Internal DOS commands Microcomputer software ReactOS commands Windows administration Computing commands
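The same last-in, first-out directory stack can be emulated outside the shell. The following minimal Python sketch is an illustration only, not part of any shell's implementation; the example path /etc is an assumption about a Unix-like system:

    import os

    _dir_stack = []   # the most recently pushed directory is at the end of the list

    def pushd(path):
        """Save the current working directory on the stack, then change to path."""
        _dir_stack.append(os.getcwd())
        os.chdir(path)

    def popd():
        """Return to the directory on top of the stack."""
        os.chdir(_dir_stack.pop())

    if __name__ == "__main__":
        start = os.getcwd()
        pushd("/etc")                # assumes a Unix-like system where /etc exists
        print(os.getcwd())           # now inside /etc
        popd()
        assert os.getcwd() == start  # back where we started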
62226328
https://en.wikipedia.org/wiki/Emily%20Ducote
Emily Ducote
Emily Ducote (born January 1, 1994) is an American mixed martial artist, currently competing in the strawweight division of Invicta FC, where she is the incumbent Invicta FC Strawweight Champion. Background The daughter of John Ducote and Yvette Bradstreet, Emily was born in Los Angeles, California. Ducote is Taekwondo Black Belt. She wrestled for Los Gatos High School, ending as a runner-up in state championships in her senior year. With an aspiration to wrestle in the best college team, she moved to the Great Plains in 2012 to study kinesiology in Oklahoma City University. In 2013, she started training Brazilian Jiu Jitsu under Giulliano Gallupi, Ricardo Liborio's Black Belt. Mixed martial arts career Bellator Ducote was scheduled to make her Bellator debut against Bruna Vargas at Bellator 159 on July 22, 2016. She won the fight by a second-round rear-naked choke submission. Ducote was scheduled to face Kenya Miranda at Bellator 161 on September 16, 2016. She won the fight by a second-round armbar submission. Ducote was scheduled to face Ilima-Lei Macfarlane at Bellator 167 on December 3, 2016. Macfarlane won the fight by unanimous decision, with scores of 30-27, 29-28, 29-28. Ducote was scheduled to face Katy Collins at Bellator 174 on March 3, 2017. She won the fight by a first-round rear-naked choke submission. Ducote was scheduled to face Jessica Middleton at Bellator 181 on July 14, 2017. She won the fight by unanimous decision, with scores of 29-27, 29-28, 29-28. Ducote was scheduled to face Ilima-Lei Macfarlane for the inaugural Bellator Women's Flyweight World Championship at Bellator 186 on November 3, 2017. Macfarlane won the fight by a fifth-round armbar submission. Ducote was scheduled to face Kristina Williams at Bellator 196 on March 2, 2018. Williams won the fight by split decision in a fight where Ducote was dominant and most part of the media called it a controversial decision. Two of the judges scored the fight 29-28 for Williams, while the third judge scored the fight 30-27 for Ducote. Ducote was scheduled to face Veta Arteaga at Bellator 202 on July 13, 2018. Arteaga won the fight by unanimous decision, extending Ducote's losing streak to three fights. After she was released by Bellator, Ducote moved down to strawweight for her next bout against Kathryn Paprocki at Xtreme Fight Night 356 on February 1, 2019. She won the fight by a third-round rear-naked choke submission. Invicta FC Ducote was scheduled to make her promotional debut against Janaisa Morandin at Invicta FC 38: Murata vs. Ducote on August 9, 2019. Morandin weighed in three pounds over the strawweight limit, at 119 lbs. Ducote won the fight by a first-round knockout. In her second promotional appearance, Ducote was scheduled to face Kanako Murata for the vacant Invicta FC Strawweight Championship in the main event of Invicta FC 38: Murata vs. Ducote on November 1, 2019. Murata won the fight by split decision, with two judges scoring the bout 48-47 and 49-46 in her favor. The third judge scored the fight 48-47 for Ducote. Ducote was scheduled to face Juliana Lima in the main event of Invicta FC 40: Ducote vs. Lima on July 2, 2020. She won the fight by unanimous decision, with all three judges awarding her a 29-28 scorecard. Ducote was scheduled to face Montserrat Ruiz at Invicta FC 43: King vs. Harrison on November 20, 2020. The fight was later cancelled due to “enhanced COVID-19 safety protocols”. Ducote was scheduled to face Liz Tracy at Invicta FC on AXS TV: Rodríguez vs. Torquato on May 21, 2021. 
The fight was later cancelled. Invicta FC strawweight champion Ducote was scheduled to face UFC veteran Danielle Taylor for the vacant Invicta FC Strawweight Championship at Invicta FC 44: A New Era on August 27, 2021. Her bout with Taylor headlined the first pay-per-view in Invicta FC history. Ducote won the fight by a first-round knockout. She first stopped Taylor in her tracks with a right straight, before flooring her with a head kick. Championships and accomplishments Invicta Fighting Championships Invicta FC Strawweight Championship (One time, current) Mixed martial arts record
Win (10–6): defeated Danielle Taylor by KO (punch & head kick) at Invicta FC 44: A New Era; round 1, 2:51; Kansas City, Kansas, United States.
Win (9–6): defeated Juliana Lima by unanimous decision at Invicta FC 40: Ducote vs. Lima; round 3, 5:00; Kansas City, Kansas, United States.
Loss (8–6): lost to Kanako Murata by split decision at Invicta FC 38: Murata vs. Ducote; round 5, 5:00; Kansas City, Kansas, United States.
Win (8–5): defeated Janaisa Morandin by KO (punches) at Invicta FC 36: Sorenson vs. Young; round 1, 4:03; Kansas City, Kansas, United States.
Win (7–5): defeated Kathryn Paprocki by submission (rear-naked choke) at Xtreme Fight Night 356; round 3, 3:31; Tulsa, Oklahoma, United States. Strawweight debut.
Loss (6–5): lost to Veta Arteaga by unanimous decision at Bellator 202; round 3, 5:00; Thackerville, Oklahoma, United States.
Loss (6–4): lost to Kristina Williams by split decision at Bellator 196; round 3, 5:00; Thackerville, Oklahoma, United States.
Loss (6–3): lost to Ilima-Lei Macfarlane by submission (triangle armbar) at Bellator 186; round 5, 3:42; University Park, Pennsylvania, United States.
Win (6–2): defeated Jessica Middleton by unanimous decision at Bellator 181; round 3, 5:00; Thackerville, Oklahoma, United States.
Win (5–2): defeated Katy Collins by submission (rear-naked choke) at Bellator 174; round 1, 4:53; Thackerville, Oklahoma, United States.
Loss (4–2): lost to Ilima-Lei Macfarlane by unanimous decision at Bellator 167; round 3, 5:00; Thackerville, Oklahoma, United States.
Win (4–1): defeated Kenya Miranda by submission (armbar) at Bellator 161; round 2, 4:37; Cedar Park, Texas, United States.
Win (3–1): defeated Bruna Vargas by submission (rear-naked choke) at Bellator 159; round 2, 0:29; Mulvane, Kansas, United States.
Win (2–1): defeated Jianna Denizard by unanimous decision at Xtreme Fighting League; round 5, 3:00; Tulsa, Oklahoma, United States.
Win (1–1): defeated Ronnie Nanney by unanimous decision at OKC Charity Fight Night; round 5, 3:00; Oklahoma City, Oklahoma, United States.
Loss (0–1): lost to Emily Whitmire by unanimous decision at FCF 50; round 3, 5:00; Shawnee, Oklahoma, United States. Flyweight debut.
See also List of female mixed martial artists List of current mixed martial arts champions List of current Invicta FC fighters References External links Emily Ducote at Invicta FC 1994 births Living people American female mixed martial artists Strawweight mixed martial artists Flyweight mixed martial artists Mixed martial artists utilizing Muay Thai Mixed martial artists utilizing Brazilian jiu-jitsu American Muay Thai practitioners Female Muay Thai practitioners American practitioners of Brazilian jiu-jitsu Female Brazilian jiu-jitsu practitioners 21st-century American women
8521
https://en.wikipedia.org/wiki/Doom%20%281993%20video%20game%29
Doom (1993 video game)
Doom is a 1993 first-person shooter (FPS) game developed by id Software for MS-DOS. Players assume the role of a space marine, popularly known as Doomguy, fighting their way through hordes of invading demons from Hell. The first episode, comprising nine levels, was distributed freely as shareware and played by an estimated 15–20 million people within two years; the full game, with two further episodes, was sold via mail order. An updated version with an additional episode and more difficult levels, The Ultimate Doom, was released in 1995 and sold at retail. Doom is one of the most significant games in video game history, frequently cited as one of the greatest games ever made. Along with its predecessor Wolfenstein 3D, it helped define the FPS genre and inspired numerous similar games, often called Doom clones. It pioneered online distribution and technologies including 3D graphics, networked multiplayer gaming, and support for custom modifications via packaged WAD files. Its graphic violence and supposed hellish imagery drew controversy. Doom has been ported to numerous platforms. The Doom franchise continued with Doom II: Hell on Earth (1994) and expansion packs including Master Levels for Doom II (1995). The source code was released in 1997 under a proprietary license, and then later in 1999 under the GNU General Public License v2.0 or later. Doom 3, a horror game built with the id Tech 4 engine, was released in 2004, followed by a 2005 Doom film. id returned to the fast-paced action of the classic games with the 2016 game Doom and the 2020 sequel Doom Eternal. Gameplay Doom is a first-person shooter presented with early 3D graphics. The player controls an unnamed space marine—later termed "Doomguy"—through a series of levels set in military bases on the moons of Mars and in Hell. To finish a level, the player must traverse through the area to reach a marked exit room. Levels are grouped together into named episodes, with the final level focusing on a boss fight with a particularly difficult enemy. While the environment is presented in a 3D perspective, the enemies and objects are instead 2D sprites presented from several preset viewing angles, a technique sometimes referred to as 2.5D graphics with its technical name called ray casting. Levels are often labyrinthine, and a full screen automap is available which shows the areas explored to that point. While traversing the levels, the player must fight a variety of enemies, including demons and possessed undead humans, while managing supplies of ammunition, health, and armor. Enemies often appear in large groups, and the game features five difficulty levels which increase the quantity and damage done by enemies, with enemies respawning upon death and moving faster than normal on the hardest difficulty setting. The monsters have very simple behavior, consisting of either moving toward their opponent, or attacking by throwing fireballs, biting, using magic abilities, and clawing. They will reactively fight each other if one monster inadvertently harms another, though most monsters are immune to attacks from their own kind. The environment can include pits of toxic waste, ceilings that lower and crush everything, and locked doors requiring a keycard or a remote switch. The player can find weapons and ammunition throughout the levels or can collect them from dead enemies, including a pistol, a chainsaw, a plasma rifle, and the BFG 9000. 
Power-ups include health or armor points, a mapping computer, partial invisibility, a safety suit against toxic waste, invulnerability, or a super-strong melee berserker status. The main campaign mode is single-player mode, in an episodic succession of missions. Two multiplayer modes are playable over a network: cooperative, in which two to four players team up to complete the main campaign, and deathmatch, in which two to four players compete. Four-player online multiplayer mode via dialup was made available one year after launch through the DWANGO service. Cheat codes give the player instant super powers including invulnerability, all weapons, and walking through walls. Plot Doom is divided into three episodes: "Knee-Deep in the Dead", "The Shores of Hell", and "Inferno". A fourth episode, "Thy Flesh Consumed", was added in an expanded version of the game, The Ultimate Doom, released on April 30, 1995, two years later and one year after Doom II. The campaign contains very few plot elements, with the minimal story instead given in the instruction manual and in short text segues between episodes. In 2022, an unnamed marine (known as the "Doom marine" or "Doom guy") is posted to a dead-end assignment on Mars after assaulting a superior officer who ordered his unit to fire on civilians. The Union Aerospace Corporation, which operates radioactive waste facilities there, allows the military to conduct secret teleportation experiments that go terribly wrong. A base on Phobos urgently requests military support, while Deimos disappears entirely, and the marine joins a combat force to secure Phobos. He waits at the perimeter as ordered while the entire assault team is wiped out. With no way off the moon, and armed with only a pistol, he enters the base intent on revenge. In "Knee-Deep in the Dead", the marine fights demons and possessed humans in the military and waste processing facilities on Phobos. The episode ends with the marine defeating two powerful Barons of Hell guarding a teleporter to the Deimos base. Emerging from the teleporter, he is overwhelmed and comes to with only a pistol again. In "The Shores of Hell", he fights on through Deimos research facilities that are corrupted with satanic architecture and kills a gigantic cyberdemon. From an overlook he discovers that the moon is floating above Hell and rappels down to the surface. In "Inferno", the marine takes on Hell itself and destroys a cybernetic spider-demon that masterminded the invasion of the moons. A portal to Earth opens and he steps through, only to find that Earth has also been invaded. "Thy Flesh Consumed" follows the marine's initial assault on the Earth invasion force, setting the stage for Doom II: Hell on Earth. Development Concept In May 1992, id Software released Wolfenstein 3D, later called the "grandfather of 3D shooters", specifically first-person shooters, because it established the fast-paced action and technical prowess commonly expected in the genre and greatly increased the genre's popularity. Immediately following its release most of the id Software team began work on a set of episodes for the game, titled Spear of Destiny, while id co-founder and lead programmer John Carmack instead focused on technology research for the company's next game. Following the release of Spear of Destiny in September 1992, the team began to plan their next game. They wanted to create another 3D game using a new engine Carmack was developing, but were largely tired of Wolfenstein. 
They initially considered making another game in the Commander Keen series, as proposed by co-founder and lead designer Tom Hall, but decided that the platforming gameplay of the series was a poor fit for Carmack's fast-paced 3D engines. Additionally, the other two co-founders of id, designer John Romero and lead artist Adrian Carmack, wanted to create something in a darker style than the Keen games. John Carmack then came up with his own concept: a game about using technology to fight demons, inspired by the Dungeons & Dragons campaigns the team played, combining the styles of Evil Dead II and Aliens. The concept originally had a working title of "Green and Pissed", but Carmack soon renamed the proposed game "Doom" after a line in the film The Color of Money: What you got in there?' / 'In here? Doom. The team agreed to pursue the Doom concept, and development began in November 1992. The initial development team was composed of five people: programmers John Carmack and Romero, artists Adrian Carmack and Kevin Cloud, and designer Hall. They moved offices to a dark office building, which they named "Suite 666", and drew inspiration from the noises coming from the dentist's office next door. They also decided to cut ties with Apogee Software, their previous publisher, and to instead self-publish Doom. Development Early in development, rifts in the team began to appear. At the end of November, Hall delivered a design document, which he named the Doom Bible, that described the plot, backstory, and design goals for the project. His design was a science fiction horror concept wherein scientists on the Moon open a portal from which aliens emerge. Over a series of levels, the player discovers that the aliens are demons while hell steadily infects the level design over the course of the game. John Carmack not only disliked the idea but dismissed the idea of having a story at all: "Story in a game is like story in a porn movie; it's expected to be there, but it's not that important." Rather than a deep story, he wanted to focus on the technological innovations of the game, dropping the levels and episodes of Wolfenstein in favor of a fast, continuous world. Hall disliked the idea, but the rest of the team sided with Carmack. Hall spent the next few weeks reworking the Doom Bible to work with Carmack's technological ideas. Hall was forced to rework it again in December, however, after the team decided that they were unable to create a single, seamless world with the hardware limitations of the time, which contradicted much of the document. At the start of 1993, id put out a press release, touting Hall's story about fighting off demons while "knee-deep in the dead". The press release proclaimed the new game features that John Carmack had created, as well as other features, including multiplayer gaming features, that had not yet even been designed. Early versions of the game were built to match the Doom Bible; a "pre-alpha" version of the first level includes Hall's introductory base scene. Initial versions of the game also retain "arcade" elements present in Wolfenstein 3D, like score points and score items, but those were removed early in development as they were out of tone. Other elements, such as a complex user interface, an inventory system, a secondary shield protection, and lives were modified and slowly removed over the course of development. Soon, however, the Doom Bible as a whole was rejected. 
Romero wanted a game even "more brutal and fast" than Wolfenstein, which did not leave room for the character-driven plot Hall had created. Additionally, the team believed it emphasized realism over entertaining gameplay, and they did not see the need for a design document at all. Some ideas were retained, but the story was dropped and most of the game design was removed. By early 1993, levels were being created for the game and a demo was produced. John Carmack and Romero, however, disliked Hall's military base-inspired level design. Romero especially believed that the boxy, flat level designs were uninspiring, too similar to Wolfenstein, and did not show off the engine's capabilities. He began to create his own, more abstract levels for the game, which the rest of the team saw as a great improvement. Hall was upset with the reception to his designs and how little impact he was having as the lead designer. He was also upset with how much he was having to fight with John Carmack in order to get what he saw as obvious gameplay improvements, such as flying enemies, and began to spend less time at work. In July the other founders of id fired Hall, who went to work for Apogee. He was replaced in September, ten weeks before the game was released, by game designer Sandy Petersen. The team also added a third programmer, Dave Taylor. Petersen and Romero designed the rest of Doom levels with different aims: the team believed that Petersen's designs were more technically interesting and varied, while Romero's were more aesthetically interesting. In late 1993, after the multiplayer component was coded, the development team began playing four-player multiplayer games matches, which Romero termed "deathmatch". According to Romero, the game's deathmatch mode was inspired by fighting games such as Street Fighter II, Fatal Fury, and Art of Fighting. Engine Doom was programmed largely in the ANSI C programming language, with a few elements in assembly language, targeting the IBM PC and MS-DOS platform by compiling with Watcom C/C++ and using the included royalty-free 80386 DOS-extender. id developed on NeXT computers running the NeXTSTEP operating system. The data used by the game engine, including level designs and graphics files, are stored in WAD files, short for "Where's All the Data?". This allows for any part of the design to be changed without needing to adjust the engine code. Carmack designed this system so fans could easily modify the game; he had been impressed by the modifications made by fans of Wolfenstein 3D, and wanted to support that with an easily swappable file structure along with releasing the map editor online. Unlike Wolfenstein, which had flat levels with walls at right angles, the Doom engine allows for walls and floors at any angle or height, though two traversable areas cannot be on top of each other. The lighting system was based on adjusting the color palette of surfaces directly: rather than calculating how light traveled from light sources to surfaces using ray tracing, the game calculates the light level of a small area based on its distance from light sources. It then modifies the color palette of that section's surface textures to mimic how dark it would look. This same system is used to cause far away surfaces to look darker than close ones. Romero came up with new ways to use Carmack's lighting engine such as strobe lights. He programmed engine features such as switches and movable stairs and platforms. 
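The WAD container format described above is simple to read: a 12-byte header holds a four-character identification ("IWAD" or "PWAD"), the number of lumps, and the offset of the lump directory, whose 16-byte entries each give a lump's file offset, size, and eight-character name. A minimal, illustrative reader in Python might look like this (a sketch only; the file name doom1.wad is a placeholder and error handling is omitted):

    import struct

    def read_wad_directory(path):
        """Parse a WAD file and return its type ("IWAD" or "PWAD") and a list of
        (lump_name, offset, size) tuples from the directory."""
        with open(path, "rb") as f:
            wad_type, num_lumps, dir_offset = struct.unpack("<4sii", f.read(12))
            f.seek(dir_offset)
            lumps = []
            for _ in range(num_lumps):
                offset, size, raw_name = struct.unpack("<ii8s", f.read(16))
                name = raw_name.rstrip(b"\x00").decode("ascii", errors="replace")
                lumps.append((name, offset, size))
        return wad_type.decode("ascii"), lumps

    if __name__ == "__main__":
        kind, lumps = read_wad_directory("doom1.wad")   # placeholder file name
        print(kind, len(lumps), "lumps; first few:", [name for name, _, _ in lumps[:5]])

Tools that list or merge WAD contents start from essentially this directory-parsing step before working with the individual lumps.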
After Romero's complex level designs started to cause problems with the engine, Carmack began to use binary space partitioning to quickly select the reduced portion of a level that the player could see at a given time. Taylor programmed other features into the game, added cheat codes; some, such as , were based on ideas their fans had submitted online while eagerly awaiting the game. Adrian Carmack was the lead artist for Doom, with Kevin Cloud as an additional artist. They designed the monsters to be "nightmarish", with graphics that are realistic and dark instead of staged or rendered, so a mixed media approach was taken. The artists sculpted models of some of the enemies, and took pictures of them in stop motion from five to eight different angles so that they could be rotated realistically in-game. The images were then digitized and converted to 2D characters with a program written by John Carmack. Adrian Carmack made clay models for a few demons, and had Gregor Punchatz build latex and metal sculptures of the others. The weapons were made from combined parts of children's toys. The developers scanned themselves as well, using Cloud's arm for the marine's arm holding a gun, and Adrian's snakeskin boots and wounded knee for textures. Music and sound As with Wolfenstein 3D, id hired composer Bobby Prince to create the music and sound effects. Romero directed Prince to make the music in techno and metal styles. Many tracks were directly inspired by songs by metal bands such as Alice in Chains and Pantera. Prince believed that ambient music would be more appropriate, and produced numerous tracks in both styles in hope of convincing the team, and Romero incorporated both. Prince did not make music for specific levels, as they were composed before the levels were completed; instead, Romero assigned each track to each level late in development. Prince created the sound effects based on short descriptions or concept art of a monster or weapon, and adjusted them to match the completed animations. The monster sounds were created from animal noises, and Prince designed all the sounds to be distinct on the limited sound hardware of the time, even when many sounds were playing at once. He also designed the sound effects to play on different frequencies from those used for the MIDI music, so they would clearly cut through the music. Release With plans to self-publish, the team had to set up the systems to sell Doom as it neared completion. Jay Wilbur, who had been hired as CEO and sole member of the business team, planned the marketing and distribution of Doom. He believed that the mainstream press was uninterested in the game, and as id would make the most money off of copies they sold directly to customers—up to 85 percent of the planned price—he decided to leverage the shareware market as much as possible, buying only a single ad in any gaming magazine. Instead, he reached out directly to software retailers, offering them copies of the first Doom episode for free, allowing them to charge any price for it, in order to spur customer interest in buying the full game directly from id. Dooms original release date was the third quarter of 1993, which the team did not meet. By December 1993, the team was working non-stop on the game, with several employees sleeping at the office. Programmer Dave Taylor claimed that working on the game gave him such a rush that he would pass out from the intensity. 
Id began receiving calls from people interested in the game or angry that it had missed its planned release date, as hype for the game had been building online. At midnight on December 10, 1993, after working for 30 straight hours, the development team at id uploaded the first episode of the game to the Internet, letting interested players distribute it for them. So many users were connected to the first FTP server that they planned to upload the game to, at the University of Wisconsin–Madison, that even after the network administrator increased the number of connections while on the phone with Wilbur, id was unable to connect, forcing them to kick all other users off to allow id to upload the game. When the upload finished thirty minutes later, 10,000 people attempted to download the game at once, crashing the university's network. Within hours of Dooms release, university networks were banning Doom multiplayer games, as a rush of players overwhelmed their systems. After being alerted by network administrators the morning after release that the game's deathmatch network connection setup was crippling some computer networks, John Carmack quickly released a patch to change it, though many administrators had to implement Doom-specific rules to keep their networks from crashing due to the overwhelming traffic. In late 1995, Doom was estimated to be installed on more computers worldwide than Microsoft's new operating system, Windows 95, even with Microsoft's million-dollar advertising campaigns. In 1995, an expanded version of the game, The Ultimate Doom, was released, containing a fourth episode. Ports Microsoft hired id Software to port Doom to Windows with the WinG API, and Microsoft CEO Bill Gates briefly considered buying the company. Microsoft developed a Windows 95 port of Doom to promote Windows as a gaming platform. The development was led by Gabe Newell, who later founded the game company Valve. One Windows 95 promotional video had Gates digitally superimposed into the game. An unofficial port of Doom to Linux was released by id programmer Dave Taylor in 1994; it was hosted by id but not supported or made official. Official ports were released for Sega 32X, Atari Jaguar, and Mac OS in 1994, SNES and PlayStation in 1995, 3DO in 1996, Sega Saturn in 1997, Acorn Risc PC in 1998, Game Boy Advance in 2001, Xbox 360 in 2006, iOS in 2009, and Nintendo Switch in 2019. Notable exceptions in the list of official ports, as well as Linux, are AmigaOS and Symbian. Some of these were bestsellers even many years after the initial release. Doom has also been ported unofficially to numerous platforms; so many ports exist, including for esoteric devices such as smart thermostats and oscilloscopes, that variations on "It runs Doom" or "Can it run Doom?" are long-running memes. Mods The ability for user-generated content to provide custom levels and other game modifications using WAD files became a popular aspect of Doom. Gaining the first large mod-making community, Doom affected the culture surrounding first-person shooters, and also the industry. Several future professional game designers started their careers making Doom WADs as a hobby, such as Tim Willits, who later became the lead designer at id Software. The first level editors appeared in early 1994, and additional tools have been created that allow most aspects of the game to be edited. 
Although the majority of WADs contain one or several custom levels mostly in the style of the original game, others implement new monsters and other resources, and heavily alter the gameplay. Several popular movies, television series, other video games and other brands from popular culture have been turned into Doom WADs by fans, including Aliens, Star Wars, The Simpsons, South Park, Sailor Moon, Dragon Ball Z, Pokémon, Beavis and Butt-head, Batman, and Sonic the Hedgehog. Some works, like the Theme Doom Patch, combined enemies from several films, such as Aliens, Predator, and The Terminator. Some add-on files were also made that changed the sounds made by the various characters and weapons. From 1994 to 1995, WADs were primarily distributed online over bulletin board systems or sold in collections on compact discs in computer shops, sometimes bundled with editing guide books. FTP servers became the primary method in later years. A few WADs have been released commercially, including the Master Levels for Doom II, which was released in 1995 along with Maximum Doom, a CD containing 1,830 WADs that had been downloaded from the Internet. The idgames FTP archive contains more than 18,000 files, and this represents only a fraction of the complete output of Doom fans. Third-party programs were also written to handle the loading of various WADs, since all commands must be entered on the DOS command line to run. A typical launcher would allow the player to select which files to load from a menu, making it much easier to start. In 1995, WizardWorks released the D!Zone pack featuring hundreds of levels for Doom and Doom II. D!Zone was reviewed in Dragon by Jay & Dee; Jay gave the pack 1 out of 5 stars, and Dee gave the pack 1½ stars. In 2016, Romero published two new Doom levels: E1M4b ("Phobos Mission Control") and E1M8b ("Tech Gone Bad"). In 2018, for the 25th anniversary of Doom, Romero announced Sigil, an unofficial Episode Five consisting of 9 missions. It was released on May 22, 2019, with a soundtrack by Buckethead. It was then released for free on May 31, with a MIDI soundtrack by James Paddock. Reception Sales With the release of Doom, millions of users installed the Shareware version on their computer and id Software quickly began making $100,000 daily (for $9 per copy). Sandy Petersen later remarked that the game "sold a couple of hundred thousand copies during its first year or so", as piracy kept its initial sales from rising higher, and Wilbur in 1995 estimated first-year sales as 140,000. id sold 3.5 million physical copies from its release through 1999. According to PC Data, which tracked sales in the United States, by April 1998 Dooms shareware edition had yielded 1.36 million units sold and $8.74 million in revenue in the United States. This led PC Data to declare it the country's fourth-best-selling computer game for the period between January 1993 and April 1998. The Ultimate Doom SKU reached sales of 787,397 units by September 1999. At the time, PC Data ranked them as the country's eighth- and 20th-best-selling computer games since January 1993. In addition to its sales, the game's status as shareware dramatically increased its market penetration. PC Zones David McCandless wrote that the game was played by "an estimated six million people across the globe", and other sources estimate that 10–20 million people played Doom within 24 months of its launch. Doom became a problem at workplaces, both occupying the time of employees and clogging computer networks. 
Intel, Lotus Development, and Carnegie Mellon University were among many organizations reported to form policies specifically disallowing Doom-playing during work hours. At the Microsoft campus, Doom was by one account equal to a "religious phenomenon". Doom was #1 on Computer Gaming Worlds "Playing Lately?" survey for February 1994. One reader said that "No other game even compares to the addictiveness of NetDoom with four devious players! ... The only game I've stayed up 72+ straight hours to play", and another reported that "Linking four people together for a game of Doom is the quickest way to destroy a productive, boring evening of work". Contemporary reviews Although Petersen said Doom was "nothing more than the computer equivalent of Whack-A-Mole", Doom received critical acclaim and was widely praised in the gaming press, broadly considered to be one of the most important and influential titles in gaming history. Upon release, GamesMaster gave it a 90% rating. Dragon gave it five stars, praising the improvements over Wolfenstein 3D, the "fast-moving arcade shoot 'em up" gameplay, and network play. Computer and Video Games gave the game a 93% rating, praising its atmosphere and stating that "the level of texture-mapped detail and the sense of scale is awe inspiring", but criticized the occasionally repetitive gameplay and considered the violence excessive. A common criticism of Doom was that it was not a true 3D game, since the game engine did not allow corridors and rooms to be stacked on top of one another (room-over-room), and instead relied on graphical trickery to make it appear that the player character and enemies were moving along differing elevations. Computer Gaming World stated in February 1994 that Wolfenstein 3D fans should "look forward to a delight of insomnia", and "Since networking is supported, bring along a friend to share in the visceral delights". A longer review in March 1994 said that Doom "was worth the wait ... a wonderfully involved and engaging game", and its technology "a new benchmark" for the gaming industry. The reviewer praised the "simply dazzling" graphics", and reported that "DeathMatches may be the most intense gaming experience available today". While criticizing the "ho-hum endgame" with a too-easy end boss, he concluded that Doom "is a virtuoso performance". Edge praised the graphics and levels but criticized the "simple 3D perspective maze adventure/shoot 'em up" gameplay. The review concluded: "You’ll be longing for something new in this game. If only you could talk to these creatures, then perhaps you could try and make friends with them, form alliances... Now, that would be interesting." The review attracted mockery, and "if only you could talk to these creatures" became a running joke in video game culture. A 2016 piece in the International Business Times defended the sentiment, saying it anticipated the dialogue systems of games such as Skyrim, Mass Effect and Undertale. In 1994, PC Gamer UK named Doom the third-best computer game of all time. The editors wrote: "Although it's only been around for a couple of months, Doom has already done more to establish the PC's arcade clout than any other title in gaming history." In 1994 Computer Gaming World named Doom Game of the Year. The various Doom console ports have received generally favorable reviews. Retrospective reception In 1995, Next Generation said it was "The most talked about PC game ever – and with good reason. 
Running on a 486 machine (essential for maximum effect), Doom took PC graphics to a totally new level of speed, detail, and realism, and provided a genuinely scary degree of immersion in the gameworld." In 1996, Computer Gaming World named it the fifth best video game of all time, and the third most-innovative game. In 1998, PC Gamer declared it the 34th-best computer game ever released, and the editors called it "Probably the most imitated game of all time, Doom continued what Wolfenstein 3D began and elevated the fledgling 3D-shooter genre to blockbuster status". In 2001, Doom was voted the number one game of all time in a poll among over 100 game developers and journalists conducted by GameSpy. In 2003, IGN ranked it as the 44th top video game of all time and also called it "the breakthrough game of 1993", adding: "Its arsenal of powerful guns (namely the shotgun and BFG), intense level of gore and perfect balance of adrenaline-soaked action and exploration kept this gamer riveted for years." PC Gamer proclaimed Doom the most influential game of all time in its ten-year anniversary issue in April 2004. In 2004, readers of Retro Gamer voted Doom as the ninth top retro game, with the editors commenting: "Only a handful of games can claim that they've changed the gaming world, and Doom is perhaps the most qualified of them all." In 2005, IGN ranked it as the 39th top game. On March 12, 2007, The New York Times reported that Doom was named to a list of the ten most important video games of all time, the so-called game canon. The Library of Congress took up this video game preservation proposal and began with the games from this list. In 2009, GameTrailers ranked Doom as the number one "breakthrough PC game". That year Game Informer put Doom sixth on the magazine's list of the top 200 games of all time, stating that it gave "the genre the kick start it needed to rule the gaming landscape two decades later". Game Informer staff also put it sixth on their 2001 list of the 100 best games ever. IGN included Doom at 2nd place in the Top 100 Video Game Shooters of all Time, just behind Half-Life, citing the game's "feel of running and gunning", memorable weapons and enemies, pure and simple fun, and its spreading on nearly every gaming platform in existence. In 2012, Time named it one of the 100 greatest video games of all time as "it established the look and feel of later shooters as surely as Xerox PARC established the rules of the virtual desktop", adding that "its impact also owes a lot to the gonzo horror sensibility of its designers, including John Romero, who showed a bracing lack of restraint in their deployment of gore and Satanic iconography". Including Doom on the list of the greatest games of all time, GameSpot wrote that "despite its numerous appearances in other formats and on other media, longtime fans will forever remember the original 1993 release of Doom as the beginning of a true revolution in action gaming". In 2018, Complex listed the game #47 in their "The Best Super Nintendo Games of All Time." In 2021, Kotaku listed Doom as the third best game in the series, behind Doom II and Doom (2016). They said that the gameplay "still holds up", but argued it was inferior to Doom II due to the latter's improved enemy variety. Controversies Doom was notorious for its high levels of graphic violence and satanic imagery, which generated controversy from a broad range of groups. 
Doom for the Genesis 32X was one of the first video games to be given an M for Mature rating from the Entertainment Software Rating Board due to its violent gore and nature. Yahoo! Games listed it as one of the top ten most controversial games of all time. It was criticized by religious organizations for its diabolic undertones and was dubbed a "mass murder simulator" by critic and Killology Research Group founder David Grossman. Doom prompted fears that the then-emerging virtual reality technology could be used to simulate extremely realistic killing. The game again sparked controversy in the United States when it was found that Eric Harris and Dylan Klebold, who committed the Columbine High School massacre on April 20, 1999, were avid players of the game. While planning for the massacre, Harris said in his journal that the killing would be "like playing Doom", and "it'll be like the LA riots, the Oklahoma bombing, World War II, Vietnam, Duke Nukem and Doom all mixed together", and that his shotgun was "straight out of the game". A rumor spread afterwards that Harris had designed a Doom level that looked like the high school, populated with representations of Harris's classmates and teachers, and that he practiced for the shootings by playing the level repeatedly. Although Harris did design custom Doom levels (which later became known as the "Harris levels"), none have been found to be based on Columbine High School. In the earliest release versions, the level E1M4: Command Control contains a swastika-shaped structure, which was put in as a homage to Wolfenstein 3D. The swastika was removed in later versions; according to Romero, the change was done out of respect after id Software received a complaint from a military veteran. Legacy Doom franchise Doom has appeared in several forms in addition to video games, including a Doom comic book, four novels by Dafydd Ab Hugh and Brad Linaweaver (loosely based on events and locations in the games), a Doom board game and a live-action film starring Karl Urban and The Rock released in 2005. The game's development and impact on popular culture is the subject of the book Masters of Doom: How Two Guys Created an Empire and Transformed Pop Culture by David Kushner. The Doom series remained dormant between 1997 and 2000, when Doom 3 was finally announced. A retelling of the original Doom using entirely new graphics technology and a slower paced survival horror approach, Doom 3 was hyped to provide as large a leap in realism and interactivity as the original game and helped renew interest in the franchise when it was released in 2004, under the id Tech 4 game engine. The series again remained dormant for 10 years until a reboot, simply titled Doom and running on the new id Tech 6, was announced with a beta access to players that had pre-ordered Wolfenstein: The New Order. The game held its closed alpha multiplayer testing in October 2015, as closed and open beta access ran during March to April 2016. Returning to the series' roots in fast-paced action and minimal storytelling, the full game eventually released worldwide on May 13, 2016. The project initially started as Doom 4 in May 2008, set to be a remake of Doom II: Hell on Earth and ditching the survival horror aspect of Doom 3. Development completely restarted as id's Tim Willits remarked that Doom 4 was "lacking the personality of the long-running shooter franchise". 
Clones Doom was influential and dozens of new first-person shooter games appeared following Doom's release, often referred to as "Doom clones". The term was initially popular but after 1996 was gradually replaced by "first-person shooter", which had firmly superseded it by around 1998. Some of these were cheap clones, hastily assembled and quickly forgotten, while others explored new ground in the genre to high acclaim. Doom's most closely imitated features include the selection of weapons and the cheat codes. Some successors include Apogee's Rise of the Triad (based on the Wolfenstein 3D engine) and Looking Glass Studios's System Shock. The popularity of Star Wars-themed WADs is rumored to have been the factor that prompted LucasArts to create their first-person shooter Dark Forces. The Doom game engine id Tech 1 was licensed by id Software to several other companies, who released their own games using the technology, including Heretic, Hexen: Beyond Heretic, Strife: Quest for the Sigil, and Hacx: Twitch 'n Kill. A Doom-based game called Chex Quest was released in 1996 by Ralston Foods as a promotion to increase cereal sales, and the United States Marine Corps produced Marine Doom as a training tool, later released to the public. When 3D Realms released Duke Nukem 3D in 1996, a tongue-in-cheek science fiction shooter based on Ken Silverman's technologically similar Build engine, id Software had nearly finished developing Quake, its next-generation game, which mirrored Doom's success for much of the remainder of the 1990s and reduced interest in its predecessor. Community In addition to the thrilling nature of the single-player game, the deathmatch mode was an important factor in the game's popularity. Doom was not the first first-person shooter with a deathmatch mode; Maze War, an FPS released in 1974, was running multiplayer deathmatch over Ethernet on Xerox computers by 1977. The widespread distribution of PC systems and the violence in Doom made deathmatching particularly attractive. Two-player multiplayer was possible over a phone line by using a modem, or by linking two PCs with a null-modem cable. Because of its widespread distribution, Doom became the game that introduced deathmatching to a large audience and was also the first game to use the term "deathmatch". Although the popularity of the Doom games dropped with the release of more modern first-person shooters, the game retains a strong fan base that continues to play competitively and create WADs to this day, and Doom-related news is still tracked at multiple websites such as Doomworld. Interest in Doom was renewed in 1997, when the source code for the Doom engine was released (it was also placed under the GNU GPL-2.0-or-later on October 3, 1999). Fans then began porting the game to various operating systems, even to previously unsupported platforms such as the Dreamcast. On the PC, over 50 different Doom source ports have been developed. New features such as OpenGL rendering and scripting allow WADs to alter the gameplay more radically. Devoted players have spent years creating speedruns for Doom, competing for the quickest completion times of individual levels and the whole game and sharing knowledge about routes through the levels and ways to exploit bugs in the Doom engine for shortcuts. Doom was one of the first games to have a speedrunning community, which has remained active to the present day. 
A record speedrun on E1M1, the first level in the game, was achieved in September 1998, and took 20 years and "tens of thousands of futile attempts" in order to be surpassed. Achievements include the completion of both Doom and Doom II on the "Ultra-Violence" difficulty setting in less than 30 minutes each. In addition, a few players have also managed to complete Doom II in a single run on the difficulty setting "Nightmare!", on which monsters are more aggressive, launch faster projectiles (or, in the case of the Pinky Demon, simply move faster), and respawn roughly 30 seconds after they have been killed (level designer John Romero characterized the idea of such a run as "[just having to be] impossible"). Movies of most of these runs are available from the COMPET-N website. Online co-op and deathmatch play are still continued on fan-created services. References Sources External links Doom (franchise) 1993 video games 3DO Interactive Multiplayer games Acorn Archimedes games Amiga games Amiga 1200 games AmigaOS 4 games Android (operating system) games AROS software Atari Jaguar games Censored video games Commercial video games with freely available source code Cooperative video games Video games about demons Doom engine games DOS games First-person shooters First-person shooter multiplayer online games Game Boy Advance games Games commercially released with DOSBox GT Interactive Software games Video games set in hell Horror video games Id Software games Imagineer games IOS games IRIX games Linux games Classic Mac OS games Video games set on Mars Fiction set on Mars' moons Mobile games MorphOS games Multiplayer and single-player video games Multiplayer null modem games Nintendo Switch games Obscenity controversies in video games PlayStation (console) games PlayStation 3 games PlayStation 4 games Science fantasy video games Sega 32X games Sega Saturn games Split-screen multiplayer games Super Nintendo Entertainment System games Symbian games Video games developed in the United States Video games scored by Bobby Prince Video games with 2.5D graphics Video games with alternative versions Video games with digitized sprites Williams video games Windows games Xbox 360 games Xbox 360 Live Arcade games Xbox Cloud Gaming games Xbox One games Sprite-based first-person shooters
42113668
https://en.wikipedia.org/wiki/2014%20Troy%20Trojans%20football%20team
2014 Troy Trojans football team
The 2014 Troy Trojans football team represented Troy University during the 2014 NCAA Division I FBS football season. They were led by 24th-year head coach Larry Blakeney and played their home games at Veterans Memorial Stadium as a member of the Sun Belt Conference. They finished the season 3–9 overall and 3–4 in Sun Belt play, tying for seventh place. On October 5, Blakeney announced that he would retire at the end of the 2014 season. He finished his career with a 24-year record of 178–112–1. Schedule References Troy Troy Trojans football seasons Troy Trojans football
644223
https://en.wikipedia.org/wiki/Computer-assisted%20translation
Computer-assisted translation
Computer-aided translation (CAT), also referred to as machine-assisted translation (MAT) or machine-aided human translation (MAHT), is the use of software to assist a human translator in the translation process. The translation is created by a human, and certain aspects of the process are facilitated by software; this is in contrast with machine translation (MT), in which the translation is created by a computer, optionally with some human intervention (e.g. pre-editing and post-editing). CAT tools are typically understood to mean programs that specifically facilitate the actual translation process. Most CAT tools have (a) the ability to translate a variety of source file formats in a single editing environment without needing to use the file format's associated software for most or all of the translation process, (b) translation memory, and (c) integration of various utilities or processes that increase productivity and consistency in translation. Range of tools Computer-assisted translation is a broad and imprecise term covering a range of tools. These can include: Translation memory tools (TM tools), consisting of a database of text segments in a source language and their translations in one or more target languages. Spell checkers, either built into word processing software, or available as add-on programs. Grammar checkers, either built into word processing software, or available as add-on programs. Terminology managers, which allow translators to manage their own terminology bank in an electronic form. This can range from a simple table created in the translator's word processing software or spreadsheet, a database created in a program such as FileMaker Pro or, for more robust (and more expensive) solutions, specialized software packages such as SDL MultiTerm, LogiTerm, Termex, TermWeb, etc. Electronic dictionaries, either unilingual or bilingual Terminology databases, either on the host computer or accessible through the Internet, such as TERMIUM Plus or Grand dictionnaire terminologique from the Office québécois de la langue française Full-text search tools (or indexers), which allow the user to query already translated texts or reference documents of various kinds. Some such indexers are ISYS Search Software, dtSearch Desktop and Naturel Concordancers, which are programs that retrieve instances of a word or an expression and their respective context in a monolingual, bilingual or multilingual corpus, such as a bitext or a translation memory Bitext aligners: tools that align a source text and its translation which can then be analyzed using a full-text search tool or a concordancer Project management software that allows linguists to structure complex translation projects in a form of chain of tasks (often called "workflow"), assign the various tasks to different people, and track the progress of each of these tasks Concepts Translation memory software Translation memory programs store previously translated source texts and their equivalent target texts in a database and retrieve related segments during the translation of new texts. Such programs split the source text into manageable units known as "segments". A source-text sentence or sentence-like unit (headings, titles or elements in a list) may be considered a segment. Texts may also be segmented into larger units such as paragraphs or small ones, such as clauses. 
As the translator works through a document, the software displays each source segment in turn, and provides a previous translation for re-use if it finds a matching source segment in its database. If it does not, the program allows the translator to enter a translation for the new segment. After the translation for a segment is completed, the program stores the new translation and moves on to the next segment. In the dominant paradigm, the translation memory is, in principle, a simple database of fields containing the source language segment, the translation of the segment, and other information such as segment creation date, last access, translator name, and so on. Another translation memory approach does not involve the creation of a database, relying on aligned reference documents instead. Some translation memory programs function as standalone environments, while others function as an add-on or macro for commercially available word-processing or other business software programs. Add-on programs allow source documents from other formats, such as desktop publishing files, spreadsheets, or HTML code, to be handled using the TM program. Language search-engine software New to the translation industry, Language search-engine software is typically an Internet-based system that works similarly to Internet search engines. Rather than searching the Internet, however, a language search engine searches a large repository of Translation Memories to find previously translated sentence fragments, phrases, whole sentences, even complete paragraphs that match source document segments. Language search engines are designed to leverage modern search technology to conduct searches based on the source words in context to ensure that the search results match the meaning of the source segments. Like traditional TM tools, the value of a language search engine rests heavily on the Translation Memory repository it searches against. Terminology management software Terminology management software provides the translator a means of automatically searching a given terminology database for terms appearing in a document, either by automatically displaying terms in the translation memory software interface window or through the use of hot keys to view the entry in the terminology database. Some programs have other hotkey combinations allowing the translator to add new terminology pairs to the terminology database on the fly during translation. Some of the more advanced systems enable translators to check, either interactively or in batch mode, if the correct source/target term combination has been used within and across the translation memory segments in a given project. Independent terminology management systems also exist that can provide workflow functionality, visual taxonomy, work as a type of term checker (similar to spell checker, terms that have not been used correctly are flagged) and can support other types of multilingual term facet classifications such as pictures, videos, or sound. Alignment software Alignment programs take completed translations, divide both source and target texts into segments, and attempt to determine which segments belong together in order to build a translation memory or other reference resource with the content. Many alignment programs allow translators to manually realign mismatched segments. The resulting bitext (also known as parallel text) alignment can then be imported into a translation memory program for future translations or used as a reference document. 
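The retrieve-and-reuse step at the heart of translation memory tools can be illustrated with a short sketch. The following Python fragment is a minimal, hypothetical example — the segment pairs, the 0.75 threshold, and the difflib-based similarity score are illustrative assumptions, not the implementation of any particular CAT tool — showing how a stored translation is returned for an exact or fuzzy match against a new source segment.

```python
from difflib import SequenceMatcher

# A toy translation memory: each entry pairs a source-language segment
# with its stored translation (illustrative data, not from a real TM).
translation_memory = {
    "The printer is out of paper.": "L'imprimante n'a plus de papier.",
    "Press the power button to start.": "Appuyez sur le bouton d'alimentation pour démarrer.",
}

def lookup(segment, threshold=0.75):
    """Return the best stored translation whose source segment is
    sufficiently similar to `segment`, or None if nothing matches."""
    best_score, best_pair = 0.0, None
    for source, target in translation_memory.items():
        score = SequenceMatcher(None, segment.lower(), source.lower()).ratio()
        if score > best_score:
            best_score, best_pair = score, (source, target)
    if best_pair is not None and best_score >= threshold:
        source, target = best_pair
        return {"match": source, "translation": target, "score": round(best_score, 2)}
    return None

# An exact match scores 1.0; a near match ("fuzzy match") scores below 1.0
# but above the threshold and is offered to the translator for editing.
print(lookup("The printer is out of paper."))
print(lookup("The printer was out of paper."))
```

Production tools keep the segment pairs in a database along with metadata such as creation date and translator name, and report the similarity to the translator as a fuzzy-match percentage, but the lookup loop is essentially the one sketched here.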
Interactive machine translation Interactive machine translation is a paradigm in which the automatic system attempts to predict the translation the human translator is going to produce by suggesting translation hypotheses. These hypotheses may either be the complete sentence, or the part of the sentence that is yet to be translated. Augmented translation Augmented translation is a form of human translation carried out within an integrated technology environment that provides translators access to subsegment adaptive machine translation (MT) and translation memory (TM), terminology lookup (CAT), and automatic content enrichment (ACE) to aid their work, and that automates project management, file handling, and other ancillary tasks. Based on the concept of augmented reality, augmented translation seeks to make translators more productive by providing them with relevant information on an as-needed basis. This information adapts to the habits and style of individual translators in order to accelerate their work and increase productivity. It differs from classical postediting of MT, which has linguists revise entire texts translated by machines, in that it provides machine translation and information as suggestions that can be adopted in their entirety, edited, or ignored, as appropriate. Augmented translation extends principles first developed in the 1980s that made their way into CAT tools. However, it integrates several functions that have previously been discrete into one environment. For example, translators historically have had to leave their translation environments to do terminology research, but in an augmented environment, an ACE component would automatically provide links to information about terms and concepts found in the text directly within the environment. As of May 2017, no full implementations of an augmented translation environment exist, although individual developers have created partial systems. See also Comparison of computer-assisted translation tools Computational linguistics Computer-assisted reviewing Fuzzy matching Translation Computer-assisted interpreting References External links Machine Translation and Computer-Assisted Translation:a New Way of Translating? Language software Translation
36924601
https://en.wikipedia.org/wiki/McPixel
McPixel
McPixel is an independently produced puzzle video game created by Polish developer Mikołaj Kamiński (also known as Sos Sosowski) and released in 2012. Gameplay The game centers on the title character, McPixel, who is a parody of both MacGyver and his other parody, MacGruber. The game features numerous references to popular culture characters. McPixel's objective is to defuse bombs or otherwise "save the day" within 20 seconds in each level. There are four chapters in the game, each with three levels and an unlockable level. Each level contains six sequences. Most of the time McPixel defuses bombs in absurd ways, mainly by kicking almost everything in the level. Release and reception The game is available on Android and iPhone and as a computer game. McPixel received positive reviews, with critic scores on Metacritic of 76/100 for the PC version and 83/100 for the iOS version. The Verge gave the game a score of 8 out of 10, stating "McPixel is the step further, a parody of a parody. But it's stranger, grosser, funnier and far more blasphemous." The game's creator and developer, Mikolaj "Sos" Kamiński, said: "The largest force driving attention to McPixel at that time were 'Let's Play' videos. Mostly by Jesse Cox and PewDiePie." Sos promoted the distribution of his game on The Pirate Bay to market it, after learning from a Reddit post that McPixel was being torrented. Due to this, McPixel became the first game ever to be endorsed by The Pirate Bay. As of September 2012, McPixel had sold 3,056 copies. The game was also the first game to be released via Steam Greenlight. During August 15–22, 2013, McPixel featured alongside four other games in the Humble Bundle Weekly Sale ("Hosted by PewDiePie"), which sold 189,927 units. As of October 2013, a Linux version existed but was not yet available on Steam. Kamiński stated on the Steam Forums that this was because the Adobe AIR runtime could not be distributed via Steam. To fix this and other issues, Kamiński stated that he intended to rewrite the game engine so that it no longer used Adobe AIR. He announced the rewrite in June 2013, writing that he hoped to be done by September 2013, though there had been no news as of September 2014. As of June 2019, the Linux version is not on Steam; however, Proton can be used to run the game. Sequel A sequel titled McPixel 3 was announced on 17 February 2022. References External links McPixel official website 2012 video games Android (operating system) games Indie video games IOS games Linux games MacOS games Parody video games Point-and-click adventure games Retro-style video games Steam Greenlight games Video games about bomb disposal Video games developed in Poland Windows games
1442043
https://en.wikipedia.org/wiki/Pixar%20RenderMan
Pixar RenderMan
Pixar RenderMan (formerly PhotoRealistic RenderMan) is proprietary photorealistic 3D rendering software produced by Pixar Animation Studios. Pixar uses RenderMan to render their in-house 3D animated movie productions and it is also available as a commercial product licensed to third parties. In 2015, a free non-commercial version of RenderMan became available. Name To speed up rendering, Pixar engineers performed experiments with parallel rendering computers using Transputer chips inside a Pixar Image Computer. The name comes from the nickname of a small circuit board (2.5 × 5 inches or 6.4 × 13 cm) containing one Transputer that engineer Jeff Mock could put in his pocket. During that time the Sony Walkman was very popular and Jeff Mock called his portable board Renderman, leading to the software name. Technology RenderMan defines cameras, geometry, materials, and lights using the RenderMan Interface Specification. This specification facilitates communication between 3D modeling and animation applications and the render engine that generates the final high quality images. In the past RenderMan used the Reyes Rendering Architecture. The Renderman standard was first presented at 1993 SIGGRAPH, developed with input from 19 companies and 6 or 7 big partners, with Pat Hanrahan taking a leading role. Ed Catmull said no software product met the RenderMan Standard in 1993. RenderMan met it after about two years. Additionally RenderMan supports Open Shading Language to define textural patterns. When Pixar started development, Steve Jobs described the original goal for RenderMan in 1991: During this time, Pixar used the C language for developing Renderman, which allowed them to port it to many platforms. Historically, RenderMan used the Reyes algorithm to render images with added support for advanced effects such as ray tracing and global illumination. Support for Reyes rendering and the RenderMan Shading Language were removed from RenderMan in 2016. RenderMan currently uses Monte Carlo path tracing to generate images. Awards RenderMan has been used to create digital visual effects for Hollywood blockbuster movies such as Beauty and the Beast, Aladdin, The Lion King, Terminator 2: Judgment Day, Toy Story, Jurassic Park, Avatar, Titanic, the Star Wars prequels, and The Lord of the Rings. RenderMan has received four Academy Scientific and Technical Awards. The first was in 1993 honoring Pat Hanrahan, Anthony A. Apodaca, Loren Carpenter, Rob Cook, Ed Catmull, Darwyn Peachey, and Tom Porter. The second was as part of the 73rd Scientific and Technical Academy Awards ceremony presentation on March 3, 2001: the Academy of Motion Picture Arts and Sciences' Board of Governors honored Ed Catmull, Loren Carpenter and Rob Cook with an Academy Award of Merit "for significant advancements to the field of motion picture rendering as exemplified in Pixar’s RenderMan". The third was in 2010 honoring "Per Christensen, Christophe Hery, and Michael Bunnell for the development of point-based rendering for indirect illumination and ambient occlusion." The fourth was in 2011 honoring David Laur. It has also won the Gordon E. Sawyer Award in 2009 and The Coons Award. It is the first software product awarded an Oscar. See also List of 3D rendering software References External links 3D computer graphics software for Linux 3D graphics software IRIX software Pixar Proprietary commercial software for Linux Rendering systems RenderMan 3D rendering software for Linux
43509183
https://en.wikipedia.org/wiki/Microservices
Microservices
A microservice architecture – a variant of the service-oriented architecture (SOA) structural style – arranges an application as a collection of loosely-coupled services. In a microservices architecture, services are fine-grained and the protocols are lightweight. The goal is that teams can develop and deploy their services independently of others. Loose coupling reduces dependencies and the complexity around them, as service developers do not need to know who consumes their service and do not force their changes onto those consumers. This allows organizations developing software to grow quickly and to adopt off-the-shelf services more easily, and it reduces the need for communication between teams. Maintaining the decoupling comes at a cost, however: interfaces need to be designed carefully and treated as a public API, and techniques such as exposing multiple interfaces on the same service, or running multiple versions of the same service in parallel, are used to avoid breaking the code of existing consumers. Introduction There is no single definition for microservices. A consensus view has evolved over time in the industry. Some of the defining characteristics that are frequently cited include: Services in a microservice architecture are often processes that communicate over a network to fulfill a goal using technology-agnostic protocols such as HTTP. Services are organized around business capabilities. Services can be implemented using different programming languages, databases, hardware and software environments, depending on what fits best. Services are small in size, messaging-enabled, bounded by contexts, autonomously developed, independently deployable, decentralized and built and released with automated processes. A microservice is not a layer within a monolithic application (for example, the web controller or the backend-for-frontend). Rather, it is a self-contained piece of business functionality with clear interfaces, and may, through its own internal components, implement a layered architecture. From a strategy perspective, microservice architecture essentially follows the Unix philosophy of "Do one thing and do it well". Martin Fowler describes a microservices-based architecture as having the following properties: Lends itself to a continuous delivery software development process. A change to a small part of the application requires rebuilding and redeploying only one or a small number of services. Adheres to principles such as fine-grained interfaces (to independently deployable services) and business-driven development (e.g. domain-driven design). It is common for microservices architectures to be adopted for cloud-native applications, serverless computing, and applications using lightweight container deployment. According to Fowler, because of the large number of services (when compared to monolithic application implementations), decentralized continuous delivery and DevOps with holistic service monitoring are necessary to effectively develop, maintain, and operate such applications. A consequence of (and rationale for) following this approach is that the individual microservices can be individually scaled. In the monolithic approach, an application supporting three functions would have to be scaled in its entirety even if only one of these functions had a resource constraint. With microservices, only the microservice supporting the function with resource constraints needs to be scaled out, thus providing resource and cost optimization benefits. 
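As a concrete illustration of these characteristics — a small, independently deployable service exposing one business capability over a technology-agnostic protocol such as HTTP — the following Python sketch implements a hypothetical price-quote service using only the standard library. The endpoint path, port, and data are invented for illustration and are not taken from any particular framework or from the sources cited here.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical business capability owned by one small team: quoting a price
# for a product ID. Other services would reach it only through this HTTP
# interface, never through shared code or a shared database.
PRICES = {"sku-001": 19.99, "sku-002": 4.50}

class QuoteHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expected path: /quote/<sku>
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "quote" and parts[1] in PRICES:
            body = json.dumps({"sku": parts[1], "price": PRICES[parts[1]]}).encode()
            self.send_response(200)
        else:
            body = json.dumps({"error": "unknown sku"}).encode()
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Each service runs in its own process and is scaled by running more copies.
    HTTPServer(("0.0.0.0", 8080), QuoteHandler).serve_forever()
```

Because the service's only contract is its HTTP interface, the team owning it could rewrite it in another language or change its internal data store without coordinating a release with its consumers, which is the independent deployability described above.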
History There are numerous claims as to the origin of the term microservices. Whilst vice president of ThoughtWorks in 2004, Fred George began working on prototype architectures based on what he called the "Baysean Principals" named after Jeff Bay. As early as 2005, Peter Rodgers introduced the term "Micro-Web-Services" during a presentation at the Web Services Edge conference. Against conventional thinking and at the height of the SOAP SOA architecture hype curve he argued for "REST-services" and on slide #4 of the conference presentation, he discusses "Software components are Micro-Web-Services". He goes on to say "Micro-Services are composed using Unix-like pipelines (the Web meets Unix = true loose-coupling). Services can call services (+multiple language run-times). Complex service-assemblies are abstracted behind simple URI interfaces. Any service, at any granularity, can be exposed." He described how a well-designed microservices platform "applies the underlying architectural principles of the Web and REST services together with Unix-like scheduling and pipelines to provide radical flexibility and improved simplicity in service-oriented architectures. Rodgers' work originated in 1999 with the Dexter research project at Hewlett Packard Labs, whose aim was to make code less brittle and to make large-scale, complex software systems robust to change. Ultimately this path of research led to the development of resource-oriented computing (ROC), a generalized computation abstraction in which REST is a special subset. In 2007, Juval Löwy in his writing and speaking called for building systems in which every class was a service. Löwy realized this required the use of a technology that can support such granular use of services, and he extended Windows Communication Foundation (WCF) to do just that, taking every class and treating it as a service while maintaining the conventional programming model of classes. A workshop of software architects held near Venice in May 2011 used the term "microservice" to describe what the participants saw as a common architectural style that many of them had been recently exploring. In May 2012, the same group decided on "microservices" as the most appropriate name. James Lewis presented some of those ideas as a case study in March 2012 at 33rd Degree in Kraków in Micro services - Java, the Unix Way, as did Fred George about the same time. Adrian Cockcroft, former director for the Cloud Systems at Netflix, described this approach as "fine grained SOA", pioneered the style at web scale, as did many of the others mentioned in this article - Joe Walnes, Dan North, Evan Bottcher, and Graham Tackley. Microservices is a specialization of an implementation approach for service-oriented architectures (SOA) used to build flexible, independently deployable software systems. The microservices approach is a first realisation of SOA that followed the introduction of DevOps and is becoming more popular for building continuously deployed systems. In February 2020, the Cloud Microservices Market Research Report predicted that the global microservice architecture market size will increase at a CAGR of 21.37% from 2019 to 2026 and reach $3.1 billion by 2026. Service granularity A key step in defining a microservice architecture is figuring out how big an individual microservice has to be. There is no consensus or litmus test for this, as the right answer depends on the business and organizational context. 
For instance, Amazon uses a service-oriented architecture where a service often maps 1:1 with a team of 3 to 10 engineers. In general terminology, services dedicated to a single task, such as calling a particular backend system or performing a particular type of calculation, are called atomic services, while services that call such atomic services in order to consolidate an output are called composite services. It is considered bad practice to make the service too small, as the runtime overhead and the operational complexity can then overwhelm the benefits of the approach. When things get too fine-grained, alternative approaches must be considered, such as packaging the function as a library or moving it into other microservices. If domain-driven design is being employed in modeling the domain for which the system is being built, then a microservice could be as small as an aggregate or as large as a bounded context. Benefits The benefits of decomposing an application into smaller services are numerous: Modularity: This makes the application easier to understand, develop, and test, and makes it more resilient to architecture erosion. This benefit is often argued in comparison to the complexity of monolithic architectures. Scalability: Since microservices are implemented and deployed independently of each other, i.e. they run within independent processes, they can be monitored and scaled independently. Integration of heterogeneous and legacy systems: Microservices are considered a viable means of modernizing existing monolithic software applications. There are experience reports of several companies who have successfully replaced (parts of) their existing software with microservices, or are in the process of doing so. The software modernization of legacy applications is done using an incremental approach. Distributed development: It parallelizes development by enabling small autonomous teams to develop, deploy and scale their respective services independently. It also allows the architecture of an individual service to emerge through continuous refactoring. Microservice-based architectures facilitate continuous integration, continuous delivery and deployment. Criticism and concerns The microservices approach is subject to criticism for a number of issues: Services form information barriers. Inter-service calls over a network have a higher cost in terms of network latency and message processing time than in-process calls within a monolithic service process. Testing and deployment are more complicated. Moving responsibilities between services is more difficult. It may involve communication between different teams, rewriting the functionality in another language or fitting it into a different infrastructure. However, microservices can be deployed independently from the rest of the application, while teams working on monoliths need to synchronize to deploy together. Viewing the size of services as the primary structuring mechanism can lead to too many services, when the alternative of internal modularization may lead to a simpler design. This requires understanding the overall architecture of the applications and the interdependencies between components. Two-phase commits are regarded as an anti-pattern in microservices-based architectures, as they result in a tighter coupling of all the participants within the transaction. 
However, lack of this technology causes awkward dances which have to be implemented by all the transaction participants in order to maintain data consistency. Development and support of many services is more challenging if they are built with different tools and technologies - this is especially a problem if engineers move between projects frequently. The protocol typically used with microservices (HTTP) was designed for public-facing services, and as such is unsuitable for working internal microservices that often must be impeccably reliable. While not specific to microservices, the decomposition methodology often uses functional decomposition, which does not handle changes in the requirements while still adds the complexity of services. The very concept of microservice is misleading, since there are only services. There is no sound definition of when a service starts or stops being a microservice. Cognitive load The architecture introduces additional complexity and new problems to deal with, such as network latency, message format design, Backup/Availability/Consistency (BAC), load balancing and fault tolerance. All of these problems have to be addressed at scale. The complexity of a monolithic application does not disappear if it is re-implemented as a set of microservices. Some of the complexity gets translated into operational complexity. Other places where the complexity manifests itself is in increased network traffic and resulting slower performance. Also, an application made up of any number of microservices has a larger number of interface points to access its respective ecosystem, which increases the architectural complexity. Various organizing principles (such as HATEOAS, interface and data model documentation captured via Swagger, etc.) have been applied to reduce the impact of such additional complexity. Technologies Computer microservices can be implemented in different programming languages and might use different infrastructures. Therefore, the most important technology choices are the way microservices communicate with each other (synchronous, asynchronous, UI integration) and the protocols used for the communication (RESTful HTTP, messaging, GraphQL ...). In a traditional system, most technology choices like the programming language impact the whole system. Therefore, the approach for choosing technologies is quite different. The Eclipse Foundation has published a specification for developing microservices, Eclipse MicroProfile. Service mesh In a service mesh, each service instance is paired with an instance of a reverse proxy server, called a service proxy, sidecar proxy, or sidecar. The service instance and sidecar proxy share a container, and the containers are managed by a container orchestration tool such as Kubernetes, Nomad, Docker Swarm, or DC/OS. The service proxies are responsible for communication with other service instances and can support capabilities such as service (instance) discovery, load balancing, authentication and authorization, secure communications, and others. In a service mesh, the service instances and their sidecar proxies are said to make up the data plane, which includes not only data management but also request processing and response. The service mesh also includes a control plane for managing the interaction between services, mediated by their sidecar proxies. 
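The division of labour between a service instance and its sidecar can be sketched as follows. This Python example is a toy, hypothetical stand-in for a sidecar proxy — real meshes use dedicated proxies such as Envoy or Linkerd's proxy, and the port numbers and header name here are illustrative assumptions — that accepts traffic addressed to a co-located service, records a simple latency measurement, and forwards the request to the service on localhost.

```python
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

SERVICE_URL = "http://127.0.0.1:8080"  # the co-located service instance (illustrative port)

class SidecarProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        started = time.time()
        # Forward the incoming request to the local service instance
        # (error handling and non-GET methods omitted for brevity).
        with urllib.request.urlopen(SERVICE_URL + self.path) as upstream:
            body = upstream.read()
            status = upstream.status
            content_type = upstream.headers.get("Content-Type", "application/octet-stream")
        # A cross-cutting concern handled outside the service's own code:
        # a crude latency metric, exposed here as a response header.
        self.send_response(status)
        self.send_header("Content-Type", content_type)
        self.send_header("X-Proxy-Latency-Ms", str(round((time.time() - started) * 1000)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Other services address the proxy, not the service itself (port 9090 is illustrative).
    HTTPServer(("0.0.0.0", 9090), SidecarProxy).serve_forever()
```

In an actual mesh the same interception point also carries service discovery, retries, mutual TLS and authorization policy, and the control plane pushes that configuration to every sidecar in the data plane.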
There are several options for service mesh architecture: Open Service Mesh, Istio (a joint project among Google, IBM, and Lyft), Linkerd (a CNCF project led by Buoyant), Consul (a HashiCorp product) and many others in the service mesh landscape. The service mesh management plane, Meshery, provides lifecycle, configuration, and performance management across service mesh deployments. A comparison of platforms Implementing a microservice architecture is very difficult. There are many concerns (see table below) that any microservice architecture needs to address. Netflix developed a microservice framework to support their internal applications, and then open-sourced many portions of that framework. Many of these tools have been popularized via the Spring Framework – they have been re-implemented as Spring-based tools under the umbrella of the Spring Cloud project. The table below shows a comparison of an implementing feature from the Kubernetes ecosystem with an equivalent from the Spring Cloud world. One noteworthy aspect of the Spring Cloud ecosystem is that they are all Java-based technologies, whereas Kubernetes is a polyglot runtime platform. See also Conway's law Cross-cutting concern Data mesh, a domain-oriented data architecture DevOps Fallacies of distributed computing GraphQL gRPC Representational state transfer (REST) Service-oriented architecture (SOA) Software modernization Unix philosophy Self-contained system (software) Serverless computing Web-oriented architecture (WOA) References Further reading Special theme issue on microservices, IEEE Software 35(3), May/June 2018, https://ieeexplore.ieee.org/xpl/tocresult.jsp?isnumber=8354413 I. Nadareishvili et al., Microservices Architecture – Aligning Principles, Practices and Culture, O'Reilly, 2016, S. Newman, Building Microservices – Designing Fine-Grained Systems, O'Reilly, 2015 Wijesuriya, Viraj Brian (2016-08-29) Microservice Architecture, Lecture Notes - University of Colombo School of Computing, Sri Lanka Christudas Binildas (June 27, 2019). Practical Microservices Architectural Patterns: Event-Based Java Microservices with Spring Boot and Spring Cloud. Apress. . Architectural pattern (computer science) Service-oriented (business computing)
22680262
https://en.wikipedia.org/wiki/Gilmore%20Avenue%2C%20Quezon%20City
Gilmore Avenue, Quezon City
Gilmore Avenue is a main road in Quezon City, Metro Manila, the Philippines. It runs one-way from Eulogio Rodriguez Sr. Avenue in New Manila, Quezon City and terminates at Nicanor Domingo Street, continuing on as the two-way Granada Street until it reaches the city border with San Juan, where it becomes Ortigas Avenue. The road is named for Eugene Allen Gilmore, Vice Governor-General of the Philippines from 1922 to 1929, who twice served as acting Governor-General. The one-way, two-lane road segment from Eulogio Rodriguez Sr. Avenue to Aurora Boulevard within New Manila is notorious for vehicular accidents, attributed to speeding encouraged by the wide lanes and to poor visibility at night. History Gilmore Avenue was originally constructed sometime before 1945 as Pacifica Avenue, which served as a north-south thoroughfare for the New Manila Subdivisions established a few decades prior. South of the New Manila area, new subdivisions and a shopping center were being developed in what would become the Greenhills area in the municipality of San Juan del Monte (now San Juan City) in the 1960s and 1970s, and so the road became known as a passageway for motorists to Greenhills and Ortigas Avenue itself. Commercial development The Greenhills Shopping Center became known as a hub for computer parts and accessories at affordable prices for computer hobbyists and IT enthusiasts alike, until the computer boom of the 1990s made computers mainstream and increased the demand for computer retail. As the Greenhills Shopping Center had become too crowded for the increasing demand of the computer boom, a computer retail store owner decided in 1997 to set up shop along the once-desolate Gilmore Avenue. The first computer retail store to open in the area was PC Options, whose name, contrary to popular belief, does not refer to personal computers but to the initials of the shop's founder and owner. As the area was developed commercially, PC Options became popular for pioneering the do-it-yourself concept for computer customization in the local market, serving as the catalyst for other computer retail shops to open in the area. As several computer shops in Greenhills had to close down due to renovations at the shopping center itself, Gilmore became established as a major IT hub in Metro Manila, with the Gilmore name becoming synonymous with computer retail. Landmarks Saint Paul University Quezon City is located on Gilmore Avenue at the corner of Aurora Boulevard. The office of SYKES Asia Inc (K Pointe) is located just opposite St. Paul. Broadway Centrum is also located within the vicinity. Computer and IT hub The Aurora Boulevard intersection of Gilmore Avenue is a popular hub for IT-related products and services. Both southern corners of the Gilmore–Aurora intersection are filled with stores selling different kinds of computers and their accessories, both secondhand and brand-new. Transport Bus Route 11 (Gilmore-Taytay) serves the commercialized section of Gilmore Avenue, with its namesake northern terminus located at the intersection of Gilmore Avenue and Nicanor Domingo Street. It is also served by Bus Route 10 (Doroteo Jose-Cubao), which stops at the intersection of Gilmore Avenue and Aurora Boulevard. The nearest mass transport station from Gilmore Avenue is the Gilmore station of the Manila LRT Line 2, which was named after the road itself, and the future N. 
Domingo station of the MRT-4 monorail line. Intersections References Streets in Quezon City
35962585
https://en.wikipedia.org/wiki/Flame%20%28malware%29
Flame (malware)
Flame, also known as Flamer, sKyWIper, and Skywiper, is modular computer malware discovered in 2012 that attacks computers running the Microsoft Windows operating system. The program is used for targeted cyber espionage in Middle Eastern countries. Its discovery was announced on 28 May 2012 by the MAHER Center of the Iranian National Computer Emergency Response Team (CERT), Kaspersky Lab and CrySyS Lab of the Budapest University of Technology and Economics. The last of these stated in its report that Flame "is certainly the most sophisticated malware we encountered during our practice; arguably, it is the most complex malware ever found." Flame can spread to other systems over a local network (LAN). It can record audio, screenshots, keyboard activity and network traffic. The program also records Skype conversations and can turn infected computers into Bluetooth beacons which attempt to download contact information from nearby Bluetooth-enabled devices. This data, along with locally stored documents, is sent on to one of several command and control servers that are scattered around the world. The program then awaits further instructions from these servers. According to estimates by Kaspersky in May 2012, Flame had initially infected approximately 1,000 machines, with victims including governmental organizations, educational institutions and private individuals. At that time 65% of the infections happened in Iran, Israel, Palestine, Sudan, Syria, Lebanon, Saudi Arabia, and Egypt, with a "huge majority of targets" within Iran. Flame has also been reported in Europe and North America. Flame supports a "kill" command which wipes all traces of the malware from the computer. The initial infections of Flame stopped operating after its public exposure, and the "kill" command was sent. Flame is linked to the Equation Group by Kaspersky Lab. However, Costin Raiu, the director of Kaspersky Lab's global research and analysis team, believes the group only cooperates with the creators of Flame and Stuxnet from a position of superiority: "Equation Group are definitely the masters, and they are giving the others, maybe, bread crumbs. From time to time they are giving them some goodies to integrate into Stuxnet and Flame." In 2019, researchers Juan Andres Guerrero-Saade and Silas Cutler announced their discovery of the resurgence of Flame. The attackers used 'timestomping' to make the new samples look like they were created before the 'suicide' command. However, a compilation error included the real compilation date (circa 2014). The new version (dubbed 'Flame 2.0' by the researchers) includes new encryption and obfuscation mechanisms to hide its functionality. History Flame (a.k.a. Da Flame) was identified in May 2012 by the MAHER Center of the Iranian National CERT, Kaspersky Lab and CrySyS Lab (Laboratory of Cryptography and System Security) of the Budapest University of Technology and Economics when Kaspersky Lab was asked by the United Nations International Telecommunication Union to investigate reports of a virus affecting Iranian Oil Ministry computers. As Kaspersky Lab investigated, they discovered an MD5 hash and filename that appeared only on customer machines from Middle Eastern nations. After discovering more pieces, researchers dubbed the program "Flame" after one of the main modules inside the toolkit . According to Kaspersky, Flame had been operating in the wild since at least February 2010. CrySyS Lab reported that the file name of the main component was observed as early as December 2007. 
However, its creation date could not be determined directly, as the creation dates for the malware's modules are falsely set to dates as early as 1994. Computer experts consider it the cause of an attack in April 2012 that caused Iranian officials to disconnect their oil terminals from the Internet. At the time the Iranian Students News Agency referred to the malware that caused the attack as "Wiper", a name given to it by the malware's creator. However, Kaspersky Lab believes that Flame may be "a separate infection entirely" from the Wiper malware. Due to the size and complexity of the program—described as "twenty times" more complicated than Stuxnet—the Lab stated that a full analysis could require as long as ten years. On 28 May, Iran's CERT announced that it had developed a detection program and a removal tool for Flame, and had been distributing these to "select organizations" for several weeks. After Flame's exposure in news media, Symantec reported on 8 June that some Flame command and control (C&C) computers had sent a "suicide" command to infected PCs to remove all traces of Flame. According to estimates by Kaspersky in May 2012, initially Flame had infected approximately 1,000 machines, with victims including governmental organizations, educational institutions and private individuals. At that time the countries most affected were Iran, Israel, the Palestinian Territories, Sudan, Syria, Lebanon, Saudi Arabia, and Egypt. A sample of the Flame malware is available at GitHub Operation Flame is an uncharacteristically large program for malware at 20 megabytes. It is written partly in the Lua scripting language with compiled C++ code linked in, and allows other attack modules to be loaded after initial infection. The malware uses five different encryption methods and an SQLite database to store structured information. The method used to inject code into various processes is stealthy, in that the malware modules do not appear in a listing of the modules loaded into a process and malware memory pages are protected with READ, WRITE and EXECUTE permissions that make them inaccessible by user-mode applications. The internal code has few similarities with other malware, but exploits two of the same security vulnerabilities used previously by Stuxnet to infect systems. The malware determines what antivirus software is installed, then customises its own behaviour (for example, by changing the filename extensions it uses) to reduce the probability of detection by that software. Additional indicators of compromise include mutex and registry activity, such as installation of a fake audio driver which the malware uses to maintain persistence on the compromised system. Flame is not designed to deactivate automatically, but supports a "kill" function that makes it eliminate all traces of its files and operation from a system on receipt of a module from its controllers. Flame was signed with a fraudulent certificate purportedly from the Microsoft Enforced Licensing Intermediate PCA certificate authority. The malware authors identified a Microsoft Terminal Server Licensing Service certificate that inadvertently was enabled for code signing and that still used the weak MD5 hashing algorithm, then produced a counterfeit copy of the certificate that they used to sign some components of the malware to make them appear to have originated from Microsoft. 
A successful collision attack against a certificate was previously demonstrated in 2008, but Flame implemented a new variation of the chosen-prefix collision attack. Deployment Like the previously known cyber weapons Stuxnet and Duqu, it is employed in a targeted manner and can evade current security software through rootkit functionality. Once a system is infected, Flame can spread to other systems over a local network or via USB stick. It can record audio, screenshots, keyboard activity and network traffic. The program also records Skype conversations and can turn infected computers into Bluetooth beacons which attempt to download contact information from nearby Bluetooth enabled devices. This data, along with locally stored documents, is sent on to one of several command and control servers that are scattered around the world. The program then awaits further instructions from these servers. Unlike Stuxnet, which was designed to sabotage an industrial process, Flame appears to have been written purely for espionage. It does not appear to target a particular industry, but rather is "a complete attack toolkit designed for general cyber-espionage purposes". Using a technique known as sinkholing, Kaspersky demonstrated that "a huge majority of targets" were within Iran, with the attackers particularly seeking AutoCAD drawings, PDFs, and text files. Computing experts said that the program appeared to be gathering technical diagrams for intelligence purposes. A network of 80 servers across Asia, Europe and North America has been used to access the infected machines remotely. Origin On 19 June 2012, The Washington Post published an article claiming that Flame was jointly developed by the U.S. National Security Agency, CIA and Israel's military at least five years prior. The project was said to be part of a classified effort code-named Olympic Games, which was intended to collect intelligence in preparation for a cyber-sabotage campaign aimed at slowing Iranian nuclear efforts. According to Kaspersky's chief malware expert, "the geography of the targets and also the complexity of the threat leaves no doubt about it being a nation-state that sponsored the research that went into it." Kaspersky initially said that the malware bears no resemblance to Stuxnet, although it may have been a parallel project commissioned by the same attackers. After analysing the code further, Kaspersky later said that there is a strong relationship between Flame and Stuxnet; the early version of Stuxnet contained code to propagate via USB drives that is nearly identical to a Flame module that exploits the same zero-day vulnerability. Iran's CERT described the malware's encryption as having "a special pattern which you only see coming from Israel". The Daily Telegraph reported that due to Flame's apparent targets—which included Iran, Syria, and the West Bank—Israel became "many commentators' prime suspect". Other commentators named China and the U.S. as possible perpetrators. Richard Silverstein, a commentator critical of Israeli policies, claimed that he had confirmed with a "senior Israeli source" that the malware was created by Israeli computer experts. The Jerusalem Post wrote that Israel's Vice Prime Minister Moshe Ya'alon appeared to have hinted that his government was responsible, but an Israeli spokesperson later denied that this had been implied. Unnamed Israeli security officials suggested that the infected machines found in Israel may imply that the virus could be traced to the U.S. or other Western nations. 
The U.S. has officially denied responsibility. A leaked NSA document mentions that dealing with Iran's discovery of Flame was an event worked jointly by the NSA and GCHQ. See also Cyber electronic warfare Cyber security standards Cyberterrorism Operation High Roller Notes References 2012 in computing Rootkits Privilege escalation exploits Cryptographic attacks Cyberwarfare Espionage scandals and incidents Exploit-based worms Cyberwarfare in Iran Cyberattacks on energy sector Spyware Hacking in the 2010s
4107623
https://en.wikipedia.org/wiki/Rupie%20Edwards
Rupie Edwards
Rupert Lloyd "Rupie" Edwards (born 4 July 1945) is a Jamaican reggae singer and record producer. Biography Rupie Edwards was born in Goshen, in Saint Ann Parish. The family moved to Kingston in 1958, where he sang in talent contests, including those run by Vere Johns. He was spotted by producer S.L. Smith, for whom he recorded his debut single, "Guilty Convict" b/w "Just Because", released on Smith's Hi=Lite label nd licensed to Blue Beat Records in 1962. After recording a few further singles, he formed the Ambassadors in 1965 with Paragons singer Junior Menz and guitarist Eric Frater, becoming the Virtues with the addition of Dobby Dobson. They recorded several singles for Harry J, as well as Edwards' first self-production, "Burning Love", credited to Rupie Edwards and the Virtues. The Virtues broke up in 1968, and Edwards started to focus mainly on his work as a producer, although he continued to release his own records in the late 1960s and early 1970s. By the beginning of the 1970s, he had recorded artists like The Heptones, The Mighty Diamonds, Bob Andy, Johnny Clarke, Joe Higgs, Gregory Isaacs ("Lonely Man") and The Ethiopians on his own record labels 'Success' and 'Opportunity', based at his Success record shop in Orange Street, and on the Trojan Records sub-labels Big Records and Cactus. He also worked with DJs such as U-Roy, Dennis Alcapone and I-Roy, and released some instrumental versions with his studio band, The Rupie Edwards All Stars. The group included musicians such as saxophonist Tommy McCook, trombone player Vin Gordon, drummer Carlton 'Santa' Davis, guitarist Hux Brown, pianist Gladstone Anderson, bassist Clifton 'Jackie' Jackson and organist Winston Wright. In 1974, he released an album (Yamaha Skank) containing solely of tracks based on the Uniques' "My Conversation" riddim, credited as the first single-riddim album. In 1974 and 1975, he scored hits in the UK Singles Chart with "Ire Feelings" and "Leggo Skanga". Both tracks were based on the same riddim, first used for Johnny Clarke's "Everyday Wondering", and the Ire Feelings album followed in 1975. Another one-riddim album based on these tracks, Ire Feelings - Chapter and Version, was released by Trojan in 1990. After these successes, Edwards moved to London, and since then has continued producing and recording, working with artists such as Jah Woosh, Gladstone Anderson, Errol Dunkley, Dobby Dobson, and Shorty the President, and releasing a series of Dub Basket albums culled from his earlier productions. He now mainly records Gospel music. 
Discography Albums Yamaha Skank (1974), Success Ire Feelings (1975), Cactus Jamaica Serenade (1976), Cactus Conversation Stylee (1980), Tad's Lovers Roots (198?), Success - split with Dobby Dobson Pleasure and Pain (1987), Success Sweet Gospel Volume Four, Rupie Edwards Bible Music Citation (2007), Success Compilations Various Artists - Rupie's Gems - 1972-1974 (1974), Cactus Various Artists - Yamaha Skank (1974), Success Rupie Edwards & Various Artists - Hit Picks (1974), Horse Rupie Edwards All Stars - Dub Basket (1975), Cactus - also issued as Dub Classic (1977), Success Rupie Edwards All Stars - Dub Basket Chapter 2 (1976), Cactus Various Artists - Rupie's Gems Volume 2 (1976), Cactus Hit Picks Volume 1 (1977), Success Various Artists - Ire Feelings, Chapter & Version 1973-1975 (1990), Trojan Rupie Edwards & Friends - Let There Be Version (1990), Trojan Rupie Edwards All Stars & Various Artists - Pure Gold - Success Various Artists - House of Lovers Various Artists - Rupie's Scorchers - 1969-1971 - Trybute (2002) Success Archives vols. 1-8 (2006-2007), Success Best of Sweet Gospel, Reggae And Soul - Vols. 1 - 7 (2006), Success Rupie Edwards Presents Success Archives - From Kingston Jamaica to London UK (2013), Rupie Edwards See also List of Jamaican record producers List of reggae musicians References Further reading Edwards, R.L. "Rupie" (2016) Some People, CreateSpace Independent Publishing Platform, External links Rupie Edwards Discography from Roots Archives 1945 births Living people Jamaican record producers Jamaican reggae musicians Jamaican songwriters People from Saint Ann Parish Trojan Records artists
4469698
https://en.wikipedia.org/wiki/Georgia%20Air%20National%20Guard
Georgia Air National Guard
Active: 20 August 1946 – present. Role: "To meet state and federal mission responsibilities." Part of: Air National Guard and the Georgia National Guard. Garrison: 1693 Glynco Parkway, Brunswick, Georgia 31525. Civilian leadership: President Joe Biden (Commander-in-Chief), Frank Kendall III (Secretary of the Air Force) and Governor Brian Kemp (Governor of the State of Georgia). State military leadership: Major General Thomas F. Grabowski. Aircraft flown: E-8 Joint STARS (battle management) and C-130H Hercules (transport). Emblem: Shield of the Georgia Air National Guard. The Georgia Air National Guard (GA ANG) is the aerial militia of the State of Georgia, United States of America. It is, along with the Georgia Army National Guard, an element of the Georgia National Guard. As state militia units, the units in the Georgia Air National Guard are not in the normal United States Air Force chain of command. They are under the jurisdiction of the Governor of Georgia through the office of the Georgia Adjutant General unless they are federalized by order of the President of the United States. The Georgia Air National Guard is headquartered in Atlanta, GA, and its commander is Major General Thomas F. Grabowski. Overview Under the "Total Force" concept, Georgia Air National Guard units are considered to be Air Reserve Components (ARC) of the United States Air Force (USAF). Georgia ANG units are trained and equipped by the Air Force and are operationally gained by a Major Command of the USAF if federalized. In addition, the Georgia Air National Guard forces are assigned to Air Expeditionary Forces and are subject to deployment tasking orders along with their active duty and Air Force Reserve counterparts in their assigned cycle deployment window. Along with their federal reserve obligations, as state militia units the elements of the Georgia ANG are subject to being activated by order of the Governor to provide protection of life and property, and preserve peace, order and public safety. State missions include disaster relief in times of earthquakes, hurricanes, floods and forest fires, search and rescue, protection of vital public services, and support to civil defense. Components The Georgia Air National Guard has 3,000 airmen and officers assigned to two flying wings and six geographically separated units (GSUs) throughout Georgia. Major units of the Georgia ANG are: 116th Air Control Wing Established 30 July 1940 (as: 128th Observation Squadron); operates: E-8C Joint STARS Stationed at: Robins Air Force Base, Warner-Robins Gained by: Air Combat Command The 116th ACW is the only Air National Guard unit operating the E-8C Joint Surveillance Target Attack Radar System (Joint STARS), an advanced ground surveillance and battle management system. 
165th Airlift Wing Established 20 August 1946 (as: 158th Fighter Squadron); operates: C-130 Hercules Stationed at: Savannah International Airport, Savannah Gained by: Air Mobility Command The mission of the 165th Airlift Wing is to provide tactical airlift of personnel, equipment and supplies worldwide. Support Unit Functions and Capabilities: 117th Air Control Squadron, Hunter Army Air Field, Savannah Control of the highly charged and congested airspace over a given combat zone is the responsibility of the Georgia Air National Guard's unique 117th Air Control Squadron (117th ACS). 139th Intelligence Squadron, Fort Gordon, Augusta The primary mission of the 139th Intelligence Squadron (IS) is to execute cryptologic intelligence operations to satisfy strategic, operational and tactical intelligence requirements of national decision makers, combatant commands, combat operations, plans and forces. 165th Air Support Operations Squadron, Savannah IAP Deploys with, advise, and assist joint force commanders in planning, requesting, coordinating and controlling close air support, reconnaissance, and tactical airlift missions 224th Joint Communications Support Squadron, Brunswick The 224th Joint Communications Support Squadron (224th JCSS) provides general tactical communications support to a myriad of missions. 283rd Combat Communications Squadron, Dobbins ARB, Marietta Is responsible for "first-in" rapid deployment and "build-up" of an integrated force with state-of-the-art communications equipment and multi-skilled personnel. 530th Air Force Band (Band of the South), Dobbins ARB, Marietta Supports global Air Force and Air National Guard missions by fostering patriotism and providing musical services for the military community as well as the general public. Savannah Combat Readiness Training Center (CRTC), Savannah IAP, Savannah Provide the most realistic training environment possible for today’s war fighter. History The Militia Act of 1903 established the present National Guard system, units raised by the states but paid for by the Federal Government, liable for immediate state service. If federalized by Presidential order, they fall under the regular military chain of command. On 1 June 1920, the Militia Bureau issued Circular No.1 on organization of National Guard air units. The Georgia Air National Guard origins date to 1 May 1948 with the establishment of the 128th Observation Squadron and is oldest unit of the Georgia Air National Guard. The squadron is a descendant organization of the World War I 840th Aero Squadron, established on 1 February 1918. The 840th was a non-flying Air Service support unit, formed in Texas. Deployed to England in May 1918, then serving in the rear area behind the Western Front in France as an aircraft repair squadron beginning in August. Remained in France after the November 1918 Armistice, returning to Langley Field, Virginia in March 1919 and was demobilized. The 128th Observation Squadron was one of the 29 original National Guard Observation Squadrons of the United States Army National Guard formed before World War II. The 128th Observation Squadron was ordered into active service on 15 September 1941 as part of the buildup of the Army Air Corps prior to the United States entry into World War II. On 24 May 1946, the United States Army Air Forces, in response to dramatic postwar military budget cuts imposed by President Harry S. Truman, allocated inactive unit designations to the National Guard Bureau for the formation of an Air Force National Guard. 
These unit designations were allotted and transferred to various State National Guard bureaus to provide them unit designations to re-establish them as Air National Guard units. The modern Georgia ANG received federal recognition on 20 August 1946 as the 128th Fighter Squadron at Marietta Army Airfield. The 128th was equipped with F-47N Thunderbolts, and its mission was the air defense of the state. Also on 20 August, the 158th Fighter Squadron was activated at Chatam Army Airfield, Pooler, also equipped with F-47Ns. Also, on 20 August 1946, the 54th Fighter Wing at Marietta Army Airfield. The 54th Fighter Wing was a command and control organization for units in the Southeastern region of the United States. The 54th controlled Air National Guard units in Tennessee, North Carolina, South Carolina, Mississippi, Alabama, Florida and Georgia. On 9 September 1946, the 116th Fighter Group, also at Marietta AAF was activated, becoming an intermediate Command and Control organization for the 54th FW. The 116th assumed control of both the 128th and 158th Fighter Squadrons. 18 September 1947, however, is considered the Georgia Air National Guard's official birth concurrent with the establishment of the United States Air Force as a separate branch of the United States military under the National Security Act. At the end of October 1950, the Air National Guard converted to the wing-base Hobson Plan organization. As a result, the wing was withdrawn from the Georgia ANG and was inactivated on 31 October 1950. The 116th Fighter Wing was established by the National Guard Bureau, allocated to the state of Georgia, recognized and activated 1 November 1950; assuming the personnel, equipment and mission of the inactivated 54th Fighter Wing. The 116th Fighter Wing was federalized on 10 October 1950 due to the Korean War. Controlling ANG squadrons from Georgia, Florida and California, the 116th deployed to Japan in August 1950 and was engaged in combat operations from Taegu AB (K-2), South Korea from December 1950 until July 1952 when it returned to the United States. On 10 July 1958, the 158th Fighter-Interceptor Squadron at Travis Field (formerly Chatam AFB), Pooler, was authorized to expand to a group level, and the 165th Fighter-Interceptor Group''' was established by the National Guard Bureau. The 158th FIS becoming the group's flying squadron. In 1962, the 165th began operating C-97F Stratofreighters, and has remained an airlift squadron ever since. Today, the 165th Airlift Wing flies the C-130H Hercules. On 1 April 1996, the 116th Fighter Wing was moved from Marietta to Robins Air Force Base, near Warner-Robins in central Georgia. The 116th became a B-1B Lancer Bomb Wing. However, in order to save money, in 2002 the USAF agreed to reduce its fleet of B-1Bs from 92 to 60 aircraft. The 116th Bomb Wing, having older aircraft was ordered to send its aircraft to "active storage" which meant that they could be quickly returned to service should circumstances dictate. Its first B-1B was flown to AMARC storage at Davis-Monthan AFB, Arizona on 20 August. The 116th was re-designated as the 116th Air Control Wing. The 116th ACW was a blend of active-duty and national guard Airmen into a single unit. The 116th ACW was equipped with the new E-8C Joint STARS airborne battle management aircraft. Its mission is command and control, intelligence, surveillance and reconnaissance. 
Its primary mission is to provide theater ground and air commanders with ground surveillance to support attack operations and targeting that contributes to the delay, disruption and destruction of enemy forces. The E-8C evolved from Army and Air Force programs to develop, detect, locate and attack enemy armor at ranges beyond the forward area of troops. Starting in 2001, elements of every Air National Guard unit in Georgia were activated in support of the Global War on Terror. Flight crews, aircraft maintenance personnel, communications technicians, air controllers, intelligence analysts and air security personnel deployed to Iraq, Afghanistan, Qatar, Uzbekistan and other locations throughout the Southwest Asia. In 2014, Andrea Lewis became Georgia Air National Guard's first African American female pilot. See also Georgia State Defense Force Georgia Wing Civil Air Patrol References Gross, Charles J (1996), The Air National Guard and the American Military Tradition, United States Dept. of Defense, Georgia Department of Defense website External links 116th Air Control Wing 165th Airlift Wing Aviation Tragedy historical marker United States Air National Guard Military in Georgia (U.S. state) Georgia National Guard
36201332
https://en.wikipedia.org/wiki/Chrome%20Remote%20Desktop
Chrome Remote Desktop
Chrome Remote Desktop is a remote desktop software tool, developed by Google, that allows a user to remotely control another computer's desktop through a proprietary protocol also developed by Google, internally called Chromoting. The protocol transmits the keyboard and mouse events from the client to the server, relaying the graphical screen updates back in the other direction over a computer network. This feature therefore consists of a server component for the host computer, and a client component on the computer accessing the remote server. Note that Chrome Remote Desktop uses a unique protocol, as opposed to using the common Remote Desktop Protocol (developed by Microsoft). Software The Chrome Remote Desktop client was originally a Chrome extension from the Chrome Web Store requiring Google Chrome; the extension is deprecated, and a web portal is available at remotedesktop.google.com. The browser must support WebRTC and other unspecified "modern web platform features". The client software is also available on Android and iOS. If the computer hosts remote access, such as for remote support and system administration, a server package is downloaded. A Chromium-based browser that supports Chromium extensions such as Google Chrome or Microsoft Edge must be used. This is available for Microsoft Windows, OS X, Linux and Chrome OS. The Chrome Remote Desktop allows a permanent, pre-authorized connection to a remote computer, designed to allow a user to connect to another one of their own machines remotely. In contrast, Remote Assistance is designed for short-lived remote connections, and requires an operator on the remote computer to participate in authentication, as remote assistance login is via PIN passwords generated by the remote host's human operator. This method of connection will also periodically block out the control from the connecting user, requiring the person on the host machine to click a button to "Continue sharing" with the connected client. The protocol uses VP8 video encoding to display the remote computer's desktop to the user with high performance over low bandwidth connections. Under Windows, it supports copying and pasting across the two devices and real-time audio feed as well, but lacks an option to disable sharing and transmission of the audio stream. The software is limited to 100 clients. Attempting to add further PCs after reaching 100 will result in a "failed to register computer" error. See also Comparison of remote desktop software Remote Desktop Protocol Chromebook (Chrome OS) References External links Remote desktop Remote desktop software for Linux Google Chrome extensions Google software
2310624
https://en.wikipedia.org/wiki/Extensible%20Metadata%20Platform
Extensible Metadata Platform
The Extensible Metadata Platform (XMP) is an ISO standard, originally created by Adobe Systems Inc., for the creation, processing and interchange of standardized and custom metadata for digital documents and data sets. XMP standardizes a data model, a serialization format and core properties for the definition and processing of extensible metadata. It also provides guidelines for embedding XMP information into popular image, video and document file formats, such as JPEG and PDF, without breaking their readability by applications that do not support XMP. Therefore, the non-XMP metadata have to be reconciled with the XMP properties. Although metadata can alternatively be stored in a sidecar file, embedding metadata avoids problems that occur when metadata is stored separately. The XMP data model, serialization format and core properties is published by the International Organization for Standardization as ISO 16684-1:2012 standard. Data model The defined XMP data model can be used to store any set of metadata properties. These can be simple name/value pairs, structured values or lists of values. The data can be nested as well. The XMP standard also defines particular namespaces for defined sets of core properties (e.g. a namespace for the Dublin Core Metadata Element Set). Custom namespaces can be used to extend the data model. An instance of the XMP data model is called an XMP packet. Adding properties to a packet does not affect existing properties. Software to add or modify properties in an XMP packet should leave properties that are unknown to it untouched. For example, it is useful for recording the history of a resource as it passes through multiple processing steps, from being photographed, scanned, or authored as text, through photo editing steps (such as cropping or color adjustment), to assemble into a final document. XMP allows each software program or device along the workflow to add its own information to a digital resource, which carries its metadata along. The prerequisite is that all involved editors either actively support XMP, or at least do not delete it from the resource. Serialization The abstract XMP data model needs a concrete representation when it is stored or embedded into a file. As serialization format, a subset of the W3C RDF/XML syntax is most commonly used. It is a syntax to express a Resource Description Framework graph in XML. There are various equivalent ways to serialize the same XMP packet in RDF/XML. The most common metadata tags recorded in XMP data are those from the Dublin Core Metadata Initiative, which include things like title, description, creator, and so on. The standard is designed to be extensible, allowing users to add their own custom types of metadata into the XMP data. XMP generally does not allow binary data types to be embedded. This means that any binary data one wants to carry in XMP, such as thumbnail images, must be encoded in some XML-friendly format, such as Base64. XMP metadata can describe a document as a whole (the "main" metadata), but can also describe parts of a document, such as pages or included images. This architecture makes it possible to retain authorship and rights information about, for example, images included in a published document. Similarly, it permits documents created from several smaller documents to retain the original metadata associated with the parts. Example This is an example XML document for serialized XMP metadata in a JPEG photo: <?xpacket begin="?" 
id="W5M0MpCehiHzreSzNTczkc9d"?> <x:xmpmeta xmlns:x="adobe:ns:meta/" x:xmptk="Adobe XMP Core 5.4-c002 1.000000, 0000/00/00-00:00:00 "> <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"> <rdf:Description rdf:about="" xmlns:xmp="http://ns.adobe.com/xap/1.0/"> <xmp:CreatorTool>Picasa</xmp:CreatorTool> </rdf:Description> <rdf:Description rdf:about="" xmlns:mwg-rs="http://www.metadataworkinggroup.com/schemas/regions/" xmlns:stDim="http://ns.adobe.com/xap/1.0/sType/Dimensions#" xmlns:stArea="http://ns.adobe.com/xmp/sType/Area#"> <mwg-rs:Regions rdf:parseType="Resource"> <mwg-rs:AppliedToDimensions rdf:parseType="Resource"> <stDim:w>912</stDim:w> <stDim:h>687</stDim:h> <stDim:unit>pixel</stDim:unit> </mwg-rs:AppliedToDimensions> <mwg-rs:RegionList> <rdf:Bag> <rdf:li rdf:parseType="Resource"> <mwg-rs:Type></mwg-rs:Type> <mwg-rs:Area rdf:parseType="Resource"> <stArea:x>0.680921052631579</stArea:x> <stArea:y>0.3537117903930131</stArea:y> <stArea:h>0.4264919941775837</stArea:h> <stArea:w>0.32127192982456143</stArea:w> <stArea:unit>normalized</stArea:unit> </mwg-rs:Area> </rdf:li> </rdf:Bag> </mwg-rs:RegionList> </mwg-rs:Regions> </rdf:Description> <rdf:Description rdf:about="" xmlns:exif="http://ns.adobe.com/exif/1.0/"> <exif:PixelXDimension>912</exif:PixelXDimension> <exif:PixelYDimension>687</exif:PixelYDimension> <exif:ExifVersion>0220</exif:ExifVersion> </rdf:Description> </rdf:RDF> </x:xmpmeta> <!-- whitespace padding --> <?xpacket end="w"?> This metadata describes various properties of the image like the creator tool, image dimension or a face region within the image. Embedding Embedding metadata in files allows easy sharing and transfer of files across products, vendors, platforms, without metadata getting lost; embedding avoids a multitude of problems coming from proprietary vendor-specific metadata databases. XMP can be used in several file formats such as PDF, JPEG, JPEG 2000, JPEG XR, GIF, PNG, WebP, HTML, TIFF, Adobe Illustrator, PSD, MP3, MP4, Audio Video Interleave, WAV, RF64, Audio Interchange File Format, PostScript, Encapsulated PostScript, and proposed for DjVu. In a typical edited JPEG file, XMP information is typically included alongside Exif and IPTC Information Interchange Model data. Location in file types For more details, the XMP Specification, Part 3, Storage in Files listed below has details on embedding in specific file formats. TIFFTag 700 JPEGApplication segment 1 (0xFFE1) with segment header "http://ns.adobe.com/xap/1.0/\x00" JPEG 2000"uuid" atom with UID of 0xBE7ACFCB97A942E89C71999491E3AFAC PNGinside an "iTXt" text block with the keyword "XML:com.adobe.xmp" GIFas an Application Extension with identifier "XMP Data" and authentication code "XMP" MP3inside the ID3 block as a "PRIV" frame with an owner identifier of "XMP". MP4top-level "UUID" box with the UUID 0xBE7ACFCB97A942E89C71999491E3AFAC (Same as JPEG 2000) MOV (QuickTime)"XMP_" atom within a "udta" atom, within a top level "moov" atom. PDFembedded in a metadata stream contained in a PDF object WebPinside the files XMP chunk For file formats that have no support for embedded XMP data, this data can be stored in external .xmp sidecar files. Support and acceptance XMP Toolkit The XMP Toolkit implements metadata handling in two libraries: XMPCore for creation and manipulation of metadata that follows the XMP Data Model. XMPFiles for embedding serialized metadata in files, and for retrieving embedded metadata. Adobe provides the XMP Toolkit free of charge under a BSD license. 
The Toolkit includes specification and usage documents (PDFs), API documentation (doxygen/javadoc), C++ source code (XMPCore and XMPFiles) and Java source code (currently only XMPCore). XMPFiles is currently available as a C++/Java implementation in Windows, Mac OS, Unix/Linux. Free software and open-source tools (read/write support) Alfresco - open source CMS, DAM component can read/write XMP (Microsoft Windows, Linux) CC PDF Converter - A free open source (GPL) program to convert documents to PDF with embedded Creative-Commons license (Microsoft Windows). darktable - RAW developer, can read/write XMP in supported file formats (Linux, Mac OS X, Microsoft Windows, BSD) digiKam - open source (GPL) image tagger and organiser (Linux, Mac OS X, Microsoft Windows) ExifTool by Phil Harvey, open source Perl module or command line. Can read/write XMP, supports custom XMP schema (platform independent) F-Spot - Linux/GNOME photo manager and editor Geeqie - Lightweight Gtk+ based image manager (formerly GQView) GIMP - GNU Image Manipulation Program Gwenview - Linux/KDE photo manager and editor iText - Open Source Java library that can read and write XMP embedded in PDF files. RawTherapee - Can read "rating" tags from embedded XMP, which are then shown in the File Browser/Filmstrip using RawTherapee’s star rating system. Shotwell - Linux/GNOME photo manager, can read/write Exif, IPTC and XMP metadata TYPO3 - open source Enterprise CMS. DAM component reads XMP (PHP based) Proprietary tools (read/write support) ACDSee Pro can read and write XMP information for DNG, GIF, JPEG, PNG and TIFF files (Microsoft Windows, Mac OS X). Acrobat - can read and write XMP in PDF files (Microsoft Windows, Mac OS X, partially Linux). Aperture - Image management application and RAW developer. Reads/writes XMP sidecar files to (batch) import/export image metadata (Mac OS X). Bibble5 can read/write XMP information for RAW, JPG and TIFF files (Microsoft Windows, Mac OS X, Linux). Bridge - can read/write and batch edit XMP metadata (Microsoft Windows, Mac OS X) Capture One - Photo editing and management software. Reads and writes XMP for all supported image formats (Microsoft Windows, Mac OS X). Corel AfterShot Pro - RAW processor (Bibble successor), reads/writes XMP, uses XMP sidecar files for non-destructive image processing (Microsoft Windows, Mac OS X, Linux). Cumulus - DAM software, can read/write XMP for all supported image formats, InDesign and PDF files (Microsoft Windows, Mac OS X, Linux) DBGallery - Can read/write XMP for JPEG, PSD, RAW, TIFF, DNG, PNG, GIF, JP2, PJX, MPG, MP4, MPEG, MOV (Microsoft Windows). Multi-user, central database system. FastPictureViewer - Image viewer (Windows) with XMP embedding and/or sidecar files creation (xmp:Rating, xmp:Label, photoshop:Urgency) (Microsoft Windows) FrameMaker - publishing tool. Stores document metadata in XMP since version 7.0 (Microsoft Windows) Illustrator - illustration software, writes document metadata in XMP (Microsoft Windows, Mac OS X) Indesign - page layout software, can pass through XMP in placed objects, writes extensive XMP about document contents in layout documents and exported PDF (Microsoft Windows, Mac OS X) iOS Photos app - Saves edits made to photos on an iPhone/iPad losslessly as XMP embedded in the original JPEG. Lightroom - Image management application and RAW developer. 
Uses XMP for non-destructive image manipulation and import/export of metadata (Microsoft Windows, Mac OS X) MetaLith - can read, analyze and write Exif, IPTC and XMP metadata of multiple JPG and TIFF files Microsoft Windows Vista - Photo Gallery saves tags to XMP (Microsoft Windows) Photo Mechanic - Reads and writes XMP directly into image files or into XMP sidecar files. Photoshop - can read/write XMP in supported images. Allows embedding of non standard XMP data through 'custom XMP panels' (Microsoft Windows, Mac OS X) PicaJet - Can read XMP for JPG, TIFF and DNG formats (Microsoft Windows). Picasa - Image organizer/viewer, uses XMP for face tagging (Microsoft Windows, Mac OS X, Linux) Portfolio - DAM software, can read/write XMP in supported file formats (Microsoft Windows, Mac OS X) Stibo STEP - DAM component reads/writes XMP for all supported formats Windows Imaging Component - Microsoft library for working with and processing digital images and image metadata (Microsoft Windows) Windows Live Photo Gallery - a photo management and sharing application released as a part of Microsoft's Windows Live initiative. It is an upgraded version of Windows Photo Gallery, which is a part of Windows Vista. XnView - can read/write Exif, IPTC and XMP information. Zoner Photo Studio - can read/write Exif, IPTC and XMP information for DNG, JPEG, TIFF, HDP and various RAW files (Microsoft Windows). The mainstream IPTC Information Interchange Model editing tools also support editing of XMP data. Licensing XMP is a registered trademark of Adobe Systems Incorporated. The XMP specification became an ISO standard and is not proprietary anymore. Initially, Adobe released source code for the XMP SDK under a license called the ADOBE SYSTEMS INCORPORATED — OPEN SOURCE LICENSE. The compatibility of this license with the GNU General Public License has been questioned. The license is not listed on the list maintained by the Open Source Initiative and is different from the licenses for most of their open source software. On May 14, 2007, Adobe released the XMP Toolkit SDK under a standard BSD license. On August 28, 2008, Adobe posted a public patent license for the XMP. Adobe continues to distribute these documents under the XMP Specification Public Patent License. History XMP was first introduced by Adobe in April 2001 as part of the Adobe Acrobat 5.0 software product. On June 21, 2004, Adobe announced its collaboration with the International Press Telecommunications Council. In July 2004, a working group led by Adobe Systems' Gunar Penikis and IPTC's Michael Steidl was set up, and volunteers were recruited from AFP (Agence France-Presse), Associated Press, ControlledVocabulary.com, IDEAlliance, Mainichi Shimbun, Reuters, and others, to develop the new schema. The "IPTC Core Schema for XMP" version 1.0 specification was released publicly on March 21, 2005. A set of custom panels for Adobe Photoshop CS can be downloaded from the IPTC. The package includes a User's Guide, example photos with embedded XMP information, the specification document, and an implementation guide for developers. The "User's Guide to the IPTC Core" goes into detail about how each of the fields should be used and is also available directly as a PDF. The next version of the Adobe Creative Suite (CS2) included these custom panels as part of its default set. The Windows Photo Gallery, released with Windows Vista, offers support for the XMP standard, the first time Microsoft has released metadata compatibility beyond Exif. 
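Because an XMP packet is plain RDF/XML, once it has been extracted from a file (for example with the JPEG sketch earlier, or with a tool such as ExifTool) it can be read with any namespace-aware XML parser. The following minimal sketch pulls the xmp:CreatorTool value out of a packet like the one shown in the Example section; the namespace URIs are the standard ones quoted there, and the attribute fallback covers the shorthand serialization in which simple properties appear as attributes of rdf:Description.

```python
import xml.etree.ElementTree as ET

NS = {
    "x":   "adobe:ns:meta/",
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "xmp": "http://ns.adobe.com/xap/1.0/",
}

def creator_tool(packet: bytes):
    """Return xmp:CreatorTool from a serialized XMP packet, or None if not present."""
    root = ET.fromstring(packet)                      # the <x:xmpmeta> element
    elem = root.find(".//xmp:CreatorTool", NS)        # property serialized as an element
    if elem is not None:
        return elem.text
    for desc in root.iter("{%s}Description" % NS["rdf"]):
        value = desc.get("{%s}CreatorTool" % NS["xmp"])  # property serialized as an attribute
        if value is not None:
            return value
    return None
```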
See also IPTC Information Interchange Model Resource Description Framework (RDF) Astronomy Visualization Metadata (AVM) Comparison of metadata editors Exchangeable Image File Format (Exif) References External links Adobe XMP Main Page XMP Specification XMP Information on coverpages.org Creative Commons XMP Recommendation Metadata section in the PDF 1.6 Language Reference IPTC4XMP (IPTC Core) standard Metadata Working Group provides guidance on metadata interoperability ISO standards Digital photography Metadata Open formats
8587760
https://en.wikipedia.org/wiki/Thames%20Valley%20College%20%28London%2C%20Ontario%29
Thames Valley College (London, Ontario)
Thames Valley College of Business & IT, also known as Information Technology Business College Inc., is a private career college in London, Ontario, Canada. It reportedly opened in 2003 and is accredited by the Ministry of Training, Colleges and Universities, under the Private Career Colleges Act of Ontario. Some references indicate that Information Technology Business College was operational earlier than 2003 Thames Valley College offers the following accredited Diploma Programs: Accounting & Payroll Administrator, Marketing Assistant, Police Foundations, Medical Office Administrator, Office Administration, Information Technology Technician, Network Administrator, Network & Internet Security Specialist, Legal Administrative Assistant, Computer Business Applications Specialist, Website Designer (E-Commerce), and Web Developer (E-Commerce). Student loan default rates The Ontario Student Assistance Program reports default rates for private career colleges. For Thames Valley College, the percentage of student loans in default was 35.3% in 2005 and 41.7% in 2004, compared to 22.2% and 25.4% for all private career colleges. References Strategis: Canadian Company profiles - Thames Valley College Thames Valley College Vocational education in Canada Education in London, Ontario 2003 establishments in Ontario
972745
https://en.wikipedia.org/wiki/Electra%20%28Euripides%20play%29
Electra (Euripides play)
Electra is a tragedy by Euripides, written in Ancient Greek, with its premiere at the City Dionysia. The play is set in Argos, at the house of Electra's husband, and its chorus is made up of Argive women. Characters: the Farmer (husband of Electra), Electra, Orestes, an Old servant, a Messenger, Clytemnestra and Castor; Pylades, Polydeuces and servants appear as mute roles. (Illustration: Orestes, Electra and Hermes at Agamemnon's tomb, side A of a Lucanian red-figure pelike, c. 380–370 BC.) Euripides' Electra (Ēlektra) is a play probably written in the mid 410s BC, likely before 413 BC. It is unclear whether it was first produced before or after Sophocles' version of the Electra story. Background Years before the start of the play, near the start of the Trojan War, the Greek general Agamemnon sacrificed his daughter Iphigeneia in order to appease the goddess Artemis. While his sacrifice allowed the Greek army to set sail for Troy, it led to a deep resentment in his wife, Clytemnestra. Upon Agamemnon's return from the Trojan War ten years later, Clytemnestra and her lover Aegisthus murdered him. Plot The play begins with the introduction of Electra, the daughter of Clytemnestra and the late Agamemnon. Several years after Agamemnon's death, suitors began requesting Electra's hand in marriage. Out of fear that Electra's child might seek revenge, Clytemnestra and Aegisthus married her off to a peasant of Mycenae. The peasant is kind to her and has respected her family name and her virginity. In return for his kindness, Electra helps her husband with the household chores. Despite her appreciation for her husband's kindness, Electra resents being cast out of her house and laments to the Chorus about her struggles with her drastic change in social status. Upon Agamemnon's murder, Clytemnestra and Aegisthus put Orestes, the other child of Clytemnestra and Agamemnon, under the care of the king of Phocis, where he became friends with the king's son, Pylades. Now grown, Orestes and Pylades travel to Electra and her husband's house. Orestes keeps his identity hidden from Electra, claiming that he and Pylades are messengers sent by Orestes. He uses his anonymity to determine Electra's loyalty to him and Agamemnon before he reveals his plans for revenge. After some time it is clear that Electra is passionate about avenging the death of their father. At this point the aged servant who brought Orestes to Phocis years before enters the play. He recognizes Orestes because of the scar on his brow and the siblings are reunited. They begin to plot how they will murder both Aegisthus and Clytemnestra. The aged servant explains that Aegisthus is currently in his stables, preparing to sacrifice oxen for a feast. Orestes goes to confront Aegisthus while Electra sends the aged servant to tell Clytemnestra that she had a son ten days ago, knowing this will bring Clytemnestra to her house. A messenger arrives and describes Orestes' successful murder of Aegisthus. Orestes and Pylades return bearing Aegisthus' body. As Clytemnestra approaches, Orestes begins to waver on his decision to murder their mother. Electra convinces Orestes that he must fulfill his duty to Agamemnon and murder their mother. When Clytemnestra arrives, Orestes and Electra lure her into the house, where they thrust a sword into her throat. The two leave the house, filled with grief and guilt. As they lament, Clytemnestra's deified brothers, Castor and Pollux, appear. 
They tell Electra and Orestes that their mother received just punishment but their matricide was still a shameful act, and they instruct the siblings on what they must do to atone and purge their souls. Aeschylean parody and Homeric allusion The enduring popularity of Aeschylus' Oresteia trilogy (produced in 458 BC) is evident in Euripides' construction of the recognition scene between Orestes and Electra, which mocks Aeschylus' play. In The Libation Bearers (whose plot is roughly equivalent to the events in Electra), Electra recognizes her brother by a series of tokens: a lock of his hair, a footprint he leaves at Agamemnon's grave, and an article of clothing she had made for him years earlier. Euripides' own recognition scene clearly ridicules Aeschylus' account. In Euripides' play (510ff.), Electra laughs at the idea of using such tokens to recognize her brother because: there is no reason their hair should match; Orestes' footprint would in no way resemble her smaller footprint; and it would be illogical for a grown Orestes to still have a piece of clothing made for him when he was a small child. Orestes is instead recognized from a scar he received on the forehead while chasing a doe in the house as a child (571-74). This is a mock-heroic allusion to a scene from Homer's Odyssey. In Odyssey 19.428-54, the nurse Eurycleia recognizes a newly returned Odysseus from a scar on his thigh that he received as a child while on his first boar hunt. In the Odyssey, Orestes' return to Argos and taking revenge for his father's death is held up several times as a model for Telemachus' behavior (see Telemachy). Euripides in turn uses his recognition scene to allude to the one in Odyssey 19. Instead of an epic heroic boar hunt, Euripides instead invents a semi-comic incident involving a fawn. Translations Edward P. Coleridge, 1891 – prose: full text Aurthur S. Way, 1896 – verse: full text Gilbert Murray, 1911 – verse: full text D. W. Lucas, 1951 – prose Emily Townsend Vermeule, 1958 – verse M. J. Cropp, 1988 – verse J. Lembke & K.J. Reckford, 1994 K. McLeish, 1997 J. Davie, 1998 J. Morwood, 1998 M. MacDonald and J. M. Walton, 2004 – verse G. Theodoridis, 2006 – prose: full text Ian C. Johnston, 2009 – verse: full text Brian Vinero, 2012: verse Emily Wilson, 2016 - verse AdaptationsElectra, 1962 film References Sources Arnott, W. G. 1993. "Double the Vision: A Reading of Euripides' Electra (1981)" In Greek Tragedy. Greece and Rome Studies, Volume II. Edited by Ian McAuslan and Peter Walcot. New York: Oxford University Press Gallagher, Robert L. 2003. "Making the Stronger Argument the Weaker: Euripides, Electra 518-41." Classical Quarterly 53.2: 401-415 Garner, R. 1990. From Homer to Tragedy: The Art of Allusion in Greek Poetry. London: Routledge. Garvie, Alexander F. 2012. "Three Different Electras in Three Different Plots." Lexis 30:283–293. Gellie, G. H. 1981. "Tragedy and Euripides’ Electra." Bulletin of the Institute of Classical Studies 28:1–12. Goff, Barbara. 1999–2000. "Try to Make it Real Compared to What? Euripides’ Electra and the Play of Genres." Illinois Classical Studies 24–25:93–105. Hammond, N. G. L. 1985. "Spectacle and Parody in Euripides’ Electra." Greek, Roman and Byzantine Studies 25:373–387. Morwood, J. H. W. 1981. "The Pattern of the Euripides Electra." American Journal of Philology 102:362–370. Mossman, Judith. 2001. "Women’s Speech in Greek Tragedy: The Case of Electra and Clytemnestra in Euripides’ Electra." Classical Quarterly n 51:374–384. Raeburn, David. 2000. 
"The Significance of Stage Properties in Euripides’ Electra." Greece & Rome 47:149–168. Solmsen, F. 1967. Electra and Orestes: Three Recognitions in Greek Tragedy. Amsterdam: Noord-Hollandsche Uitgevers Mij. Tarkow, T. 1981. "The Scar of Orestes: Observations on a Euripidean Innovation." Rheinisches Museum 124: 143-53. Wohl, Victoria. 2015. "How to Recognise a Hero in Euripides’ Electra." Bulletin of the Institute of Classical Studies'' 58:61–76. External links Textual criticism. Theatre Database (online). Plays by Euripides Trojan War literature Mythology of Argolis Plays set in ancient Greece Greek plays adapted into films
14211432
https://en.wikipedia.org/wiki/Kronos%20Incorporated
Kronos Incorporated
Kronos Incorporated was an American multinational workforce management and human capital management cloud provider headquartered in Lowell, Massachusetts, United States, which employed more than 6,000 people worldwide. In February 2020, the company announced a merger with Ultimate Software and that the combined company would be led by Aron Ain and be called Ultimate Kronos Group. The merger was officially completed on April 1, 2020. History Kronos was founded in 1977 by Massachusetts Institute of Technology (MIT) and Simon Business School alumnus Mark S. Ain. Under Mark Ain's leadership, Kronos sustained one of the longest records of growth and profitability as a public company in software industry history. In 1979, Kronos delivered the world's first microprocessor-based time clock and, in 1985, delivered its first PC-based time and attendance product. In 1992, Kronos became a publicly-traded company on NASDAQ. Aron Ain, succeeded his brother Mark Ain as chief executive officer in 2005. In March 2007, Kronos went private again, bought out for US$1.74 billion by the lead investor Hellman & Friedman and the secondary investor JMI Equity. In 2012, the Third Circuit Court of Appeals enforced a subpoena seeking production of documents by Kronos, Inc., in an administrative charge before the EEOC alleging disability discrimination. The charge was brought by an individual job applicant against Kroger Food Co., who did not hire the job applicant and used a Kronos assessment as part of its hiring process. Kronos, which was not a party to the litigation, objected to the EEOC's subpoena on the basis that the information requested was irrelevant, and production would require Kronos to disclose protected trade secret information. In 2014, private equity firms Blackstone and GIC invested in Kronos alongside Hellman & Friedman and JMI Equity. In the transaction, Kronos was valued at $4.5 billion. This was the first year that the company achieved $1 billion in annual revenue. In 2017, Kronos moved its corporate headquarters from the nearby town of Chelmsford to the Cross Point Towers in Lowell, Massachusetts, consolidating multiple offices under one roof. In February 2020, the company announced a merger with Ultimate Software and that the combined company will be led by Aron Ain. In April 2020, as a response to the COVID-19 pandemic, Kronos introduced an automated report-generating tool to aid contact tracing. The tool analyzes work and attendance records of employees who test positive (or presumed positive) for COVID-19 to generate a report of potentially affected co-workers. On December 13, 2021, Kronos announced that, on December 11, it was discovered that Kronos had been subject to a ransomware attack which disabled the functionality of the Kronos Private Cloud software for "up to several weeks". This breach forced many Kronos customers including municipalities, universities, and private employers to resort to alternative methods, including issuance of paper checks, to properly pay their employees. Products Originally a manufacturer of time clocks, the majority of Kronos' revenue is now derived from software and services. The company provides cloud applications for workforce management and human capital management, as well as consulting, education, and support services to its customers. 
Acquisitions Kronos has conducted a number of acquisitions, with some of the most notable including: Principal Decision Systems International (PDSI) The Workforce Solutions software division of SimplexGrinnell Stromberg 3i Systems SaaShr.com Time Link International Corporation (TimeLink) Empower Software Solutions Financial Management Solutions, Inc. (FMSI) Digital Instinct Pty. Limited OptiLink References External links Business software companies Software companies based in Massachusetts Companies based in Lowell, Massachusetts Software companies established in 1977 Human resource management software Cloud computing providers Multinational companies headquartered in the United States Privately held companies based in Massachusetts 1977 establishments in Massachusetts American companies established in 1977 1992 initial public offerings 2007 mergers and acquisitions 2020 mergers and acquisitions Private equity portfolio companies Software companies of the United States
149963
https://en.wikipedia.org/wiki/Framebuffer
Framebuffer
A framebuffer (frame buffer, or sometimes framestore) is a portion of random-access memory (RAM) containing a bitmap that drives a video display. It is a memory buffer containing data representing all the pixels in a complete video frame. Modern video cards contain framebuffer circuitry in their cores. This circuitry converts an in-memory bitmap into a video signal that can be displayed on a computer monitor. In computing, a screen buffer is a part of computer memory used by a computer application for the representation of the content to be shown on the computer display. The screen buffer may also be called the video buffer, the regeneration buffer, or regen buffer for short. Screen buffers should be distinguished from video memory. To this end, the term off-screen buffer is also used. The information in the buffer typically consists of color values for every pixel to be shown on the display. Color values are commonly stored in 1-bit binary (monochrome), 4-bit palettized, 8-bit palettized, 16-bit high color and 24-bit true color formats. An additional alpha channel is sometimes used to retain information about pixel transparency. The total amount of memory required for the framebuffer depends on the resolution of the output signal, and on the color depth or palette size. History Computer researchers had long discussed the theoretical advantages of a framebuffer, but were unable to produce a machine with sufficient memory at an economically practicable cost. In 1947, the Manchester Baby computer used a Williams tube, later the Williams-Kilburn tube, to store 1024 bits on a cathode-ray tube (CRT) memory and displayed on a second CRT. Other research labs were exploring these techniques with MIT Lincoln Laboratory achieving a 4096 display in 1950. A color scanned display was implemented in the late 1960s, called the Brookhaven RAster Display (BRAD), which used a drum memory and a television monitor. In 1969, A. Michael Noll of Bell Labs implemented a scanned display with a frame buffer, using magnetic-core memory. Later on, the Bell Labs system was expanded to display an image with a color depth of three bits on a standard color TV monitor. In the early 1970s, the development of MOS memory (metal-oxide-semiconductor memory) integrated-circuit chips, particularly high-density DRAM (dynamic random-access memory) chips with at least 1kb memory, made it practical to create, for the first time, a digital memory system with framebuffers capable of holding a standard video image. This led to the development of the SuperPaint system by Richard Shoup at Xerox PARC in 1972. Shoup was able to use the SuperPaint framebuffer to create an early digital video-capture system. By synchronizing the output signal to the input signal, Shoup was able to overwrite each pixel of data as it shifted in. Shoup also experimented with modifying the output signal using color tables. These color tables allowed the SuperPaint system to produce a wide variety of colors outside the range of the limited 8-bit data it contained. This scheme would later become commonplace in computer framebuffers. In 1974, Evans & Sutherland released the first commercial framebuffer, the Picture System, costing about $15,000. It was capable of producing resolutions of up to 512 by 512 pixels in 8-bit grayscale, and became a boon for graphics researchers who did not have the resources to build their own framebuffer. 
The New York Institute of Technology would later create the first 24-bit color system using three of the Evans & Sutherland framebuffers. Each framebuffer was connected to an RGB color output (one for red, one for green and one for blue), with a Digital Equipment Corporation PDP 11/04 minicomputer controlling the three devices as one. In 1975, the UK company Quantel produced the first commercial full-color broadcast framebuffer, the Quantel DFS 3000. It was first used in TV coverage of the 1976 Montreal Olympics to generate a picture-in-picture inset of the Olympic flaming torch while the rest of the picture featured the runner entering the stadium. The rapid improvement of integrated-circuit technology made it possible for many of the home computers of the late 1970s to contain low-color-depth framebuffers. Today, nearly all computers with graphical capabilities utilize a framebuffer for generating the video signal. Amiga computers, created in the 1980s, featured special design attention to graphics performance and included a unique Hold-And-Modify framebuffer capable of displaying 4096 colors. Framebuffers also became popular in high-end workstations and arcade system boards throughout the 1980s. SGI, Sun Microsystems, HP, DEC and IBM all released framebuffers for their workstation computers in this period. These framebuffers were usually of a much higher quality than could be found in most home computers, and were regularly used in television, printing, computer modeling and 3D graphics. Framebuffers were also used by Sega for its high-end arcade boards, which were also of a higher quality than on home computers. Display modes Framebuffers used in personal and home computing often had sets of defined modes under which the framebuffer can operate. These modes reconfigure the hardware to output different resolutions, color depths, memory layouts and refresh rate timings. In the world of Unix machines and operating systems, such conveniences were usually eschewed in favor of directly manipulating the hardware settings. This manipulation was far more flexible in that any resolution, color depth and refresh rate was attainable – limited only by the memory available to the framebuffer. An unfortunate side-effect of this method was that the display device could be driven beyond its capabilities. In some cases this resulted in hardware damage to the display. More commonly, it simply produced garbled and unusable output. Modern CRT monitors fix this problem through the introduction of protection circuitry. When the display mode is changed, the monitor attempts to obtain a signal lock on the new refresh frequency. If the monitor is unable to obtain a signal lock, or if the signal is outside the range of its design limitations, the monitor will ignore the framebuffer signal and possibly present the user with an error message. LCD monitors tend to contain similar protection circuitry, but for different reasons. Since the LCD must digitally sample the display signal (thereby emulating an electron beam), any signal that is out of range cannot be physically displayed on the monitor. Color palette Framebuffers have traditionally supported a wide variety of color modes. Due to the expense of memory, most early framebuffers used 1-bit (2-colors per pixel), 2-bit (4-colors), 4-bit (16-colors) or 8-bit (256-colors) color depths. The problem with such small color depths is that a full range of colors cannot be produced. 
The solution to this problem was indexed color, which adds a lookup table to the framebuffer. Each value stored in framebuffer memory acts as a color index. The lookup table serves as a palette containing the limited set of colors that can be displayed, while the values in the framebuffer act as indices into that palette. A typical example is an indexed 256-color image displayed alongside its own 256-entry palette, shown as a rectangle of swatches. In some designs it was also possible to write data to the LUT (or switch between existing palettes) on the fly, allowing the picture to be divided into horizontal bars, each with its own palette, and thus to render an image that had a far wider palette. For example, in an outdoor photograph, the picture could be divided into four bars: the top one with emphasis on sky tones, the next with foliage tones, the next with skin and clothing tones, and the bottom one with ground colors. This required each palette to have overlapping colors but, carefully done, allowed great flexibility.

Memory access
While framebuffers are commonly accessed via a memory mapping directly to the CPU memory space, this is not the only method by which they may be accessed. Framebuffers have varied widely in the methods used to access memory. Some of the most common are:
Mapping the entire framebuffer to a given memory range.
Port commands to set each pixel, range of pixels or palette entry.
Mapping a memory range smaller than the framebuffer memory, then bank switching as necessary.
The framebuffer organization may be packed pixel or planar. The framebuffer may be all points addressable or have restrictions on how it can be updated.

RAM on the video card
Video cards always have a certain amount of RAM. This RAM is where the bitmap of image data is "buffered" for display. The term frame buffer is thus often used interchangeably when referring to this RAM. The CPU sends image updates to the video card. The video processor on the card forms a picture of the screen image and stores it in the frame buffer as a large bitmap in RAM. The bitmap in RAM is used by the card to continually refresh the screen image.

Virtual framebuffers
Many systems attempt to emulate the function of a framebuffer device, often for reasons of compatibility. The two most common virtual framebuffers are the Linux framebuffer device (fbdev) and the X Virtual Framebuffer (Xvfb). Xvfb was added to the X Window System distribution to provide a method for running X without a graphical framebuffer. The Linux framebuffer device was developed to abstract the physical method for accessing the underlying framebuffer into a guaranteed memory map that is easy for programs to access. This increases portability, as programs are not required to deal with systems that have disjointed memory maps or require bank switching.

Page flipping
A frame buffer may be designed with enough memory to store two frames' worth of video data. In a technique known generally as double buffering or, more specifically, as page flipping, the framebuffer uses half of its memory to display the current frame. While that memory is being displayed, the other half of memory is filled with data for the next frame. Once the secondary buffer is filled, the framebuffer is instructed to display the secondary buffer instead. The primary buffer becomes the secondary buffer, and the secondary buffer becomes the primary.
This switch is often done after the vertical blanking interval to avoid screen tearing, where half the old frame and half the new frame are shown together. Page flipping has become a standard technique used by PC game programmers.

Graphics accelerators
As the demand for better graphics increased, hardware manufacturers created a way to decrease the amount of CPU time required to fill the framebuffer. This is commonly called graphics acceleration. Common graphics drawing commands (many of them geometric) are sent to the graphics accelerator in their raw form. The accelerator then rasterizes the results of the command to the framebuffer. This method frees the CPU to do other work. Early accelerators focused on improving the performance of 2D GUI systems. While retaining these 2D capabilities, most modern accelerators focus on producing 3D imagery in real time. A common design uses a graphics library such as OpenGL or Direct3D which interfaces with the graphics driver to translate received commands to instructions for the accelerator's graphics processing unit (GPU). The GPU uses those instructions to compute the rasterized results and the results are bit blitted to the framebuffer. The framebuffer's signal is then produced in combination with built-in video overlay devices (usually used to produce the mouse cursor without modifying the framebuffer's data) and any final special effects that are produced by modifying the output signal. An example of such final special effects was the spatial anti-aliasing technique used by the 3dfx Voodoo cards. These cards add a slight blur to the output signal that makes aliasing of the rasterized graphics much less obvious. At one time there were many manufacturers of graphics accelerators, including: 3dfx Interactive; ATI; Hercules; Trident; Nvidia; Radius; S3 Graphics; SiS and Silicon Graphics. The market for graphics accelerators for x86-based systems is dominated by Nvidia (which acquired 3dfx in 2002), AMD (which acquired ATI in 2006), and Intel (which currently produces only integrated GPUs rather than discrete video cards).

Comparisons
With a framebuffer, the electron beam (if the display technology uses one) is commanded to perform a raster scan, the way a television renders a broadcast signal. The color information for each point thus displayed on the screen is pulled directly from the framebuffer during the scan, creating a set of discrete picture elements, i.e. pixels. Framebuffers differ significantly from the vector displays that were common prior to the advent of raster graphics (and, consequently, to the concept of a framebuffer). With a vector display, only the vertices of the graphics primitives are stored. The electron beam of the output display is then commanded to move from vertex to vertex, tracing a line across the area between these points. Likewise, framebuffers differ from the technology used in early text mode displays, where a buffer holds codes for characters, not individual pixels. The video display device performs the same raster scan as with a framebuffer, but generates the pixels of each character in the buffer as it directs the beam.

See also
Bit plane
Scanline rendering
Swap chain
Tile-based video game
Tiled rendering

References

External links
Interview with NYIT researcher discussing the 24-bit system
History of Sun Microsystems' Framebuffers

Computer graphics
Computer memory
Image processing
User interfaces
293121
https://en.wikipedia.org/wiki/Education%20in%20India
Education in India
Education in India is primarily managed by state-run public education system, which fall under the command of the government at three levels: Central, state and local. Under various articles of the Indian Constitution and the Right of Children to Free and Compulsory Education Act, 2009, free and compulsory education is provided as a fundamental right to children aged 6 to 14. The approximate ratio of public schools to private schools in India is 7:5. Major policy initiatives in Indian education are numerous. Up until 1976, education policies and implementation were determined legally by each of India's constitutional states. The 42nd amendment to the constitution in 1976 made education a 'concurrent subject'. From this point on the central and state governments shared formal responsibility for funding and administration of education. In a country as large as India, now with 28 states and eight union territories, this means that the potential for variations between states in the policies, plans, programs and initiatives for elementary education is vast. Periodically, national policy frameworks are created to guide states in their creation of state-level programs and policies. State governments and local government bodies manage the majority of primary and upper primary schools and the number of government-managed elementary schools is growing. Simultaneously the number and proportion managed by private bodies is growing. In 2005-6 83.13% of schools offering elementary education (Grades 1-8) were managed by government and 16.86% of schools were under private management (excluding children in unrecognised schools, schools established under the Education Guarantee Scheme and in alternative learning centers). Of those schools managed privately, one third are 'aided' and two thirds are 'unaided'. Enrolment in Grades 1-8 is shared between government and privately managed schools in the ratio 73:27. However in rural areas this ratio is higher (80:20) and in urban areas much lower (36:66). In the 2011 Census, about 73% of the population was literate, with 81% for males and 65% for females. National Statistical Commission surveyed literacy to be 77.7% in 2017–18, 84.7% for male and 70.3% for female. This compares to 1981 when the respective rates were 41%, 53% and 29%. In 1951 the rates were 18%, 27% and 9%. India's improved education system is often cited as one of the main contributors to its economic development. Much of the progress, especially in higher education and scientific research, has been credited to various public institutions. While enrolment in higher education has increased steadily over the past decade, reaching a Gross Enrolment Ratio (GER) of 26.3% in 2019, there still remains a significant distance to catch up with tertiary education enrolment levels of developed nations, a challenge that will be necessary to overcome in order to continue to reap a demographic dividend from India's comparatively young population. Poorly resourced public schools which suffer from high rates of teacher absenteeism may have encouraged the rapid growth of private (unaided) schooling in India, particularly in urban areas. Private schools divide into two types: recognised and unrecognised schools. Government 'recognition' is an official stamp of approval and for this a private school is required to fulfil a number of conditions, though hardly any private schools that get 'recognition' actually fulfil all the conditions of recognition. 
The emergence of large numbers of unrecognised primary schools suggests that schools and parents do not take government recognition as a stamp of quality. At the primary and secondary level, India has a large private school system complementing the government run schools, with 29% of students receiving private education in the 6 to 14 age group. Certain post-secondary technical schools are also private. The private education market in India had a revenue of US$450 million in 2008, but is projected to be a US$40 billion market. As per the Annual Status of Education Report (ASER) 2012, 96.5% of all rural children between the ages of 6-14 were enrolled in school. This is the fourth annual survey to report enrolment above 96%. India has maintained an average enrolment ratio of 95% for students in this age group from year 2007 to 2014. As an outcome the number of students in the age group 6-14 who are not enrolled in school has come down to 2.8% in the academic year 2018 (ASER 2018). Another report from 2013 stated that there were 229 million students enrolled in different accredited urban and rural schools of India, from Class I to XII, representing an increase of 2.3 million students over 2002 total enrolment, and a 19% increase in girl's enrolment. While quantitatively India is inching closer to universal education, the quality of its education has been questioned particularly in its government run school system. While more than 95 per cent of children attend primary school, just 40 per cent of Indian adolescents attend secondary school (Grades 9-12). Since 2000, the World Bank has committed over $2 billion to education in India. Some of the reasons for the poor quality include absence of around 25% of teachers every day. States of India have introduced tests and education assessment system to identify and improve such schools. The Human Rights Measurement Initiative finds that India is achieving only 79.0% of what should be possible at its level of income for the right to education. Although there are private schools in India, they are highly regulated in terms of what they can teach, in what form they can operate (must be a non-profit to run any accredited educational institution) and all the other aspects of the operation. Hence, the differentiation between government schools and private schools can be misleading. However, in a report by Geeta Gandhi Kingdon entitled: The emptying of public Schools and growth of private schools in India, it is said that For sensible education policy making, it is vital to take account of the changing trends in the size of the private and public schooling sectors in India. Ignoring these trends involves the risk of poor policies/legislation, with attendant adverse consequences for children's life chances. In January 2019, India had over 900 universities and 40,000 colleges. In India's higher education system, a significant number of seats are reserved under affirmative action policies for the historically disadvantaged Scheduled Castes and Scheduled Tribes and Other Backward Classes. In universities, colleges, and similar institutions affiliated to the central government, there is a maximum 50% of reservations applicable to these disadvantaged groups, at the state level it can vary. Maharashtra had 73% reservation in 2014, which is the highest percentage of reservations in India. History Early education in India commenced under the supervision of a guru or prabhu. The education was delivered through Gurukula. 
The relationship between Guru and Shishya was very important part of the education. Takshasila (in modern-day Pakistan) was the earliest recorded centre of higher learning in India from possibly 8th century BCE, and it is debatable whether it could be regarded a university or not in modern sense, since teachers living there may not have had official membership of particular colleges, and there did not seem to have existed purpose-built lecture halls and residential quarters in Taxila, in contrast to the later Nalanda university in eastern India. Nalanda was the oldest university-system of education in the world in the modern sense of university. There all subjects were taught in Ariano -Páli language. Secular institutions cropped up along Buddhist monasteries. These institutions imparted practical education, e.g. medicine. A number of urban learning centres became increasingly visible from the period between 500 BCE to 400 CE. The important urban centers of learning were Nalanda (in modern-day Bihar) and Manassa in Nagpur, among others. These institutions systematically imparted knowledge and attracted a number of foreign students to study topics such as Buddhist Páli literature, logic, páli grammar, etc. Chanakya, a Brahmin teacher, was among the most famous teachers, associated with founding of Mauryan Empire. Sammanas and Brahmin gurus historically offered education by means of donations, rather than charging fees or the procurement of funds from students or their guardians. Later, stupas, temples also became centers of education; religious education was compulsory, but secular subjects were also taught. Students were required to be brahmacaris or celibates. The knowledge in these orders was often related to the tasks a section of the society had to perform. The priest class, the Sammanas, were imparted knowledge of religion, philosophy, and other ancillary branches while the warrior class, the Kshatriya, were trained in the various aspects of warfare. The business class, the Vaishya, were taught their trade and the working class of the Shudras was generally deprived of educational advantages. With the advent of Islam in India the traditional methods of education increasingly came under Islamic influence. Pre-Mughal rulers such as Qutb-ud-din Aybak and other Muslim rulers initiated institutions which imparted religious knowledge. Scholars such as Nizamuddin Auliya and Moinuddin Chishti became prominent educators and established Islamic monasteries. Students from Bukhara and Afghanistan visited India to study humanities and science. Islamic institution of education in India included traditional madrassas and maktabs which taught grammar, philosophy, mathematics, and law influenced by the Greek traditions inherited by Persia and the Middle East before Islam spread from these regions into India. A feature of this traditional Islamic education was its emphasis on the connection between science and humanities. British rule and the subsequent establishment of educational institutions saw the introduction of English as a medium of instruction. Some schools taught the curriculum through vernacular languages with English as a second language. The term "pre-modern" was used for three kinds of schools – the Arabic and Sanskrit schools which taught Muslim or Hindu sacred literature and the Persian schools which taught Persian literature. The vernacular schools across India taught reading and writing the vernacular language and arithmetic. 
British education became solidified in India as missionary schools were established during the 1820s.

Educational stages
The new National Education Policy 2020 (NEP 2020) introduced by the central government is expected to bring profound changes to education in India. The policy, approved by the Cabinet of India on 29 July 2020, outlines the vision of India's new education system. The new policy replaces the 1986 National Policy on Education. It is a comprehensive framework covering elementary education to higher education as well as vocational training in both rural and urban India, and aims to transform India's education system by 2021. The National Education Policy 2020 has emphasised the use of the mother tongue or local language as the medium of instruction till Class 5, while recommending its continuance till Class 8 and beyond. It also states that no language will be imposed on the students. The language policy in the NEP is a broad guideline and advisory in nature; it is up to the states, institutions, and schools to decide on the implementation. Education in India is a Concurrent List subject. As per NEP 2020, the "10+2" structure is replaced with the "5+3+3+4" model. 5+3+3+4 refers to a 5-year foundational stage, covering three years of pre-school (whether in an anganwadi, pre-school or balvatika) together with classes 1 and 2. This is followed by 3 years of preparatory learning covering classes 3 to 5, then a middle stage that is 3 years in length, and finally a 4-year secondary stage till class 12, or 18 years of age. This model will be implemented as follows: instead of exams being held every academic year, school students will sit exams only in classes 2, 5 and 8. Board exams are held for classes 10 and 12. Standards for board exams are established by an assessment body, PARAKH (Performance Assessment, Review and Analysis of Knowledge for Holistic Development). To make them easier, these exams would be conducted twice a year, with students being offered up to two attempts. The exam itself would have two parts, namely the objective and the descriptive. NEP's higher education policy proposes a 4-year multi-disciplinary bachelor's degree in an undergraduate programme with multiple exit options. These will include professional and vocational areas and will be implemented as follows:
A certificate after completing 1 year of study (vocational)
A diploma after completing 2 years of study (vocational)
A Bachelor's degree after completion of a 3-year program (professional)
A 4-year multidisciplinary Bachelor's degree (the preferred option) (professional)

School education
The central board and most of the state boards uniformly follow the "10+2" pattern of education. In this pattern, 10 years of study are done in schools and 2 years in junior colleges (as in Mumbai, Maharashtra), followed by 3 years of study for a bachelor's degree at a college. The first 10 years are further subdivided into 4 years of primary education and 6 years of high school, followed by the 2 years of junior college. This pattern originated from the recommendation of the Education Commission of 1964–66.
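To make the two groupings described above easier to compare, here is a small illustrative sketch mapping a class number to the stage it falls under in the older "10+2" pattern and in the NEP 2020 "5+3+3+4" model. The helper names and label wording are assumptions made for this example rather than official terminology, and the three unnumbered pre-school years counted in the foundational stage are noted only in a comment.

```c
#include <stdio.h>

/* Illustrative mapping only: class/grade number -> stage name.
   Pre-school years (roughly ages 3-6) have no class number, but NEP 2020
   counts them as part of the foundational stage. */
static const char *ten_plus_two_stage(int c)
{
    if (c <= 4)  return "Primary (first 10 years)";
    if (c <= 10) return "High school (first 10 years)";
    return "Junior college (+2)";
}

static const char *nep_2020_stage(int c)
{
    if (c <= 2)  return "Foundational (plus 3 pre-school years)";
    if (c <= 5)  return "Preparatory";
    if (c <= 8)  return "Middle";
    return "Secondary";
}

int main(void)
{
    for (int c = 1; c <= 12; c++)
        printf("Class %2d: 10+2 -> %-26s  5+3+3+4 -> %s\n",
               c, ten_plus_two_stage(c), nep_2020_stage(c));
    return 0;
}
```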
There are two types of educational institutions in India: 1) recognised institutions – primary schools, secondary schools, special schools, intermediate schools, colleges and universities which follow courses prescribed by the D.P.I., universities or boards, and which are also open to inspection by these authorities; and 2) unrecognised institutions, which do not follow the conditions laid down for the recognised ones.

Adult and youth literacy rates

Administration
Policy
Education policy is prepared by the Central Government and State Governments at national and state levels respectively. The National Policy on Education (NPE), 1986, has provided for environment awareness, science and technology education, and introduction of traditional elements such as Yoga into the Indian secondary school system. A significant feature of India's secondary school system is the emphasis on inclusion of the disadvantaged sections of society. Professionals from established institutes are often called upon to support vocational training. Another feature of India's secondary school system is its emphasis on profession-based vocational training to help students attain skills for finding a vocation of their choosing. A significant new feature has been the extension of SSA to secondary education in the form of the Rashtriya Madhyamik Shiksha Abhiyan. Rashtriya Madhyamik Shiksha Abhiyan (RMSA) is the most recent initiative of the Government of India to achieve the goal of universalisation of secondary education (USE). It is aimed at expanding and improving the standards of secondary education up to class X.

Curriculum and school education boards
The National Skill Development Agency (NSDA)'s National Skills Qualification Framework (NSQF) is a quality assurance framework which grades and recognises levels of skill based on the learning outcomes acquired through both formal and informal means. School boards set the curriculum and conduct board-level exams, mostly at the 10th and 12th levels, to award school diplomas. Exams at the remaining levels (also called standard, grade or class, denoting the years of schooling) are conducted by the schools.
National Council of Educational Research and Training (NCERT): The NCERT is the apex body, located in New Delhi, the capital city of India. It handles curriculum-related matters for school education across India. The NCERT provides support, guidance and technical assistance to a number of schools in India and oversees many aspects of enforcement of education policies. There are other curriculum bodies governing the school education system, especially at the state level.
State government boards of education: Most of the state governments have at least one "State board of secondary school education". However, some states like Andhra Pradesh have more than one. Also, the union territories do not have a board: Chandigarh, Dadra and Nagar Haveli, Daman and Diu, Lakshadweep and Puducherry share the services with a larger state. The boards set the curriculum from Grades 1 to 12, and the curriculum varies from state to state, with more local appeal and examinations conducted in regional languages in addition to English – often considered less rigorous than central curriculums such as CBSE or ICSE/ISC. Most of these conduct exams at the 10th and 12th level, but some even at the 5th, 6th and 8th level.
Central Board of Secondary Education (CBSE): The CBSE sets the curriculum from Grades 1 to 12 and conducts examinations at the 10th and 12th standards that are called board exams.
Students studying the CBSE curriculum take the All India Secondary School Examination (AISSE) at the end of grade 10 and the All India Senior School Certificate Examination (AISSCE) at the end of grade 12. Examinations are offered in Hindi and English.
Council for the Indian School Certificate Examinations (CISCE): CISCE sets the curriculum from Grades 1 to 12 and conducts three examinations, namely, the Indian Certificate of Secondary Education (ICSE - Class/Grade 10); the Indian School Certificate (ISC - Class/Grade 12) and the Certificate in Vocational Education (CVE - Class/Grade 12). The CISCE English level has been compared to the UK's A-Levels; this board offers more choices of subjects. CBSE exams at grades 10 and 12 have often been compared with ICSE and ISC examinations. ICSE is generally considered to be more rigorous than the CBSE AISSE (grade 10), but the CBSE AISSCE and ISC examinations are almost on par with each other in most subjects, with ISC including a slightly more rigorous English examination than the CBSE 12th grade examination. The CBSE and ISC are recognised internationally and most universities abroad accept the final results of CBSE and ISC exams for admissions purposes and as proof of completion of secondary school.
National Institute of Open Schooling (NIOS): The NIOS conducts two examinations, namely, the Secondary Examination and the Senior Secondary Examination (All India), and also some courses in Vocational Education. This national board of education is run by the Government of India's HRD Ministry to provide education in rural areas and to challenged groups in open and distance education mode. A pilot project started by the CBSE to provide affordable, high-quality education provides education up to the 12th standard. The choice of subjects is highly customisable and equivalent to CBSE. Home-schooled students usually take NIOS or international curriculum examinations as they are ineligible to write CBSE or ISC exams.
Islamic madrasah: Their boards are controlled by local state governments, or autonomous, or affiliated with Darul Uloom Deoband or Darul Uloom Nadwtul Ulama.
Autonomous schools: Such as Woodstock School, Sri Aurobindo International Centre of Education Puducherry, Patha Bhavan and Ananda Marga Gurukula.
International Baccalaureate (IB) and Cambridge International Examinations (CIB): These are generally private schools that have dual affiliation with one of the school education boards of India as well as being affiliated to the International Baccalaureate (IB) Programme and/or the Cambridge International Examinations (CIB).
International schools, which offer 10th and 12th standard examinations under the International Baccalaureate, Cambridge Senior Secondary Examination systems or under their home nations' school boards (such as those run by foreign embassies or the expat communities).
Special education: A special Integrated Education for Disabled Children (IEDC) programme was started in 1974 with a focus on primary education, but was later converted into Inclusive Education at Secondary Stage.
Midday Meal Scheme The Midday Meal Scheme is a school meal programme of the Government of India designed to improve the nutritional status of school-age children nationwide, by supplying free lunches on working days for children in primary and upper primary classes in government, government aided, local body, Education Guarantee Scheme, and alternative innovative education centres, Madarsa and Maqtabs supported under Sarva Shiksha Abhiyan, and National Child Labour Project schools run by the ministry of labour. Serving 120,000,000 children in over 1,265,000 schools and Education Guarantee Scheme centres, it is one of the largest in the world. With the twin objectives of improving health and education of the poor children, India has embarked upon an ambitious scheme of providing mid day meals (MDM) in the government and government-assisted primary schools. The administrative and logistical responsibilities of this scheme are enormous, and, therefore, offering food stamps or income transfer to targeted recipients is considered as an alternative. In a welcome move, Government of India made special allocations for Midday Meal Scheme during nationwide lockdown and school closure period of COVID-19 to continue nutrition delivery to children. However, many experts have differing opinions on ground level implementation of MDM amid pandemic and its actual benefit delivered to school children. Teacher Training In addition, NUEPA (National University of Educational Planning and Administration) and NCTE (National Council for Teacher Education) are responsible for the management of the education system and teacher accreditation. Levels of schooling Pre-primary education The pre-primary stage is the foundation of children's knowledge, skills and behaviour. On completion of pre-primary education, the children are sent to the primary stage but pre-primary education in India is not a fundamental right. In rural India, pre-primary schools are rarely available in small villages. But in cities and big towns, there are many established players in the pre-primary education sector. The demand for the preschools is growing considerably in the smaller towns and cities but still, only 1% of the population under age 6 is enrolled in preschool education. Play group (pre-nursery): At playschools, children are exposed to a lot of basic learning activities that help them to get independent faster and develop their self-help qualities like eating food themselves, dressing up, and maintaining cleanliness. The age limit for admission into pre-nursery is 2 to 3 years. Anganwadi is government-funded free rural childcare & Mothercare nutrition and learning program also incorporating the free Midday Meal Scheme. Nursery: Nursery level activities help children unfold their talents, thus enabling them to sharpen their mental and physical abilities. The age limit for admission in nursery is 3 to 4 years. LKG: It is also called the junior kindergarten (Jr. kg) stage. The age limit for admission in LKG is 4 to 5 years. UKG: It is also called the senior kindergarten (Sr. kg) stage. The age limit for admission in UKG is 5 to 6 years. LKG and UKG stages prepare and help children emotionally, mentally, socially and physically to grasp knowledge easily in the later stages of school and college life. A systematic process of preschool education is followed in India to impart knowledge in the best possible way for a better understanding of the young children. 
By following an easy and interesting curriculum, teachers strive hard to make the entire learning process enjoyable for the children.

Primary education
Primary education in India is divided into two parts, namely Lower Primary (Class I-IV) and Upper Primary (Middle school, Class V-VIII). The Indian government lays emphasis on primary education (Class I-VIII), also referred to as elementary education, for children aged 6 to 14 years. Because education laws are set by the states, the duration of primary schooling varies between the Indian states. The Indian government has also banned child labour in order to ensure that children do not enter unsafe working conditions. However, both free education and the ban on child labour are difficult to enforce due to economic disparity and social conditions. 80% of all recognised schools at the elementary stage are government run or supported, making it the largest provider of education in the country. However, due to a shortage of resources and lack of political will, this system suffers from massive gaps including high pupil-to-teacher ratios, shortage of infrastructure and poor levels of teacher training. Figures released by the Indian government in 2011 show that there were 5,816,673 elementary school teachers in India. There were 2,127,000 secondary school teachers in India. Education has also been made free for children from 6 to 14 years of age, or up to class VIII, under the Right of Children to Free and Compulsory Education Act 2009. There have been several efforts by the government to enhance quality. The District Education Revitalisation Programme (DERP) was launched in 1994 with an aim to universalise primary education in India by reforming and vitalising the existing primary education system. 85% of the DERP was funded by the central government and the remaining 15% was funded by the states. The DERP, which had opened 160,000 new schools including 84,000 alternative education schools delivering alternative education to approximately 3.5 million children, was also supported by UNICEF and other international programs. Corruption has also been identified as a problem affecting the delivery of education: "Corruption hurts the poor disproportionately – by diverting funds intended for development, undermining a government's ability to provide basic services, feeding inequality and injustice, and discouraging foreign investment and aid" (Kofi Annan, in his statement on the adoption of the United Nations Convention against Corruption by the General Assembly, NY, November 2003). In January 2016, Kerala became the first Indian state to achieve 100% primary education through its literacy programme Athulyam. This primary education scheme has also shown a high gross enrolment ratio of 93–95% for the last three years in some states. Significant improvement in staffing and enrolment of girls has also been made as a part of this scheme. The current scheme for universalisation of Education for All is the Sarva Shiksha Abhiyan, which is one of the largest education initiatives in the world. Enrolment has been enhanced, but the levels of quality remain low.

Secondary education
Secondary education covers children aged 14 to 18, a group comprising 88.5 million children according to the 2001 Census of India. The final two years of secondary education are often called Higher Secondary (HS), Senior Secondary, or simply the "+2" stage.
The two-halves of secondary education are each an important stage for which a pass certificate is needed, and thus are affiliated by central boards of education under HRD ministry, before one can pursue higher education, including college or professional courses. UGC, NCERT, CBSE and ICSE directives state qualifying ages for candidates who wish to take board exams. Those at least 15 years old by 30 May for a given academic year are eligible to appear for Secondary board exams, and those 17 by the same date are eligible to appear for Higher Secondary certificate board exams. It further states that upon successful completion of Higher Secondary, one can apply to higher education under UGC control such as Engineering, Medical, and Business administration. Secondary education in India is examination-oriented and not course-based: students register for and take classes primarily to prepare for one of the centrally-administered examinations. Senior school or high school is split into 2 parts (grades 9-10 and grades 11–12) with a standardised nationwide examination at the end of grade 10 and grade 12 (usually informally referred to as "board exams"). Grade 10 examination results can be used for admission into grades 11–12 at a secondary school, pre-university program, or a vocational or technical school. Passing a grade 12 board examination leads to the granting of a secondary school completion diploma, which may be used for admission into vocational schools or universities in the country or the world. Most reputable universities in India require students to pass college-administered admissions tests in addition to passing a final secondary school examination for entry into a college or university. School grades are usually not sufficient for college admissions in India. Most schools in India do not offer subject and scheduling flexibility due to budgeting constraints (for e.g.: most students in India are not allowed to take Chemistry and History in grades 11-12 because they are part of different "streams"). Private candidates (i.e. not studying in a school) are generally not allowed to register for and take board examinations but there are some exceptions such as NIOS. 10th (matriculation or secondary) exam Students taking the grade 10 examination usually take six subjects: English, mathematics, social studies, science, one language, and one optional subject depending on the availability of teachers. Elective or optional subjects often include computer applications, economics, physical education, commerce, and environmental science. 12th (senior secondary or Intermediate Course) exam Students taking the grade 12 examination usually take four or five subjects with English or the local language being compulsory. Students re-enrolling in most secondary schools after grade 10 have to make the choice of choosing a "core stream" in addition to English or the local language: science (mathematics/biology, chemistry, and physics), commerce (accounts, business studies, and economics), or humanities (any three of history, political science, sociology, psychology, geography depending on school). Students study mathematics up to single-variable calculus in grade 12. Students taking Biology have a option to give NEET for some courses like (MBBS). Students opting for Mathematics have a option to give JEE for some courses like Engineering. Types of schools Government schools The majority of students study in government schools where poor and vulnerable students study for free until the age of 14. 
An Education Ministry data, 65.2% (113 million,) of all school students in 20 states go to government schools (c. 2017). These include schools runs by the state and local government as well as the center government. Example of large center government run school systems are Kendriya Vidyalaya in urban areas, Jawahar Navodaya Vidyalaya, for the gifted students, Kasturba Gandhi Balika Vidyalaya for girls belonging to vulnerable SC/ST/OBC classes, Indian Army Public Schools run by the Indian Army for the children of soldiers. Kendriya Vidyalaya project, was started for the employees of the central government of India, who are deployed throughout the country. The government started the Kendriya Vidyalaya project in 1965 to provide uniform education in institutions following the same syllabus at the same pace regardless of the location to which the employee's family has been transferred. Government aided private schools These are usually charitable trust run schools that receive partial funding from the government. Largest system of aided schools is run by D.A.V. College Managing Committee. Private schools (unaided) According to current estimate, 29% of Indian children are privately educated. With more than 50% children enrolling in private schools in urban areas, the balance has already tilted towards private schooling in cities; and, even in rural areas, nearly 20% of the children in 2004-5 were enrolled in private schools. Most middle-class families send their children to private schools, which might be in their own city or at distant boarding schools. Private schools have been established since the British Rule in India and St George's School, Chennai is the oldest private school in India. At such schools, the medium of education is often English, but Hindi and/or the state's official language is also taught as a compulsory subject. Pre-school education is mostly limited to organised neighbourhood nursery schools with some organised chains. Montessori education is also popular, due to Maria Montessori's stay in India during World War II. In 2014, four of the top ten pre-schools in Chennai were Montessori. Many privately owned and managed schools carry the appellation "Public", such as the Delhi Public Schools, or Frank Anthony Public Schools. These are modelled after British public schools, which are a group of older, expensive and exclusive fee-paying private independent schools in England. According to some research, private schools often provide superior results at a multiple of the unit cost of government schools. The reason being high aims and better vision. However, others have suggested that private schools fail to provide education to the poorest families, a selective being only a fifth of the schools and have in the past ignored Court orders for their regulation. In their favour, it has been pointed out that private schools cover the entire curriculum and offer extra-curricular activities such as science fairs, general knowledge, sports, music and drama. The pupil teacher ratios are much better in private schools (1:31 to 1:37 for government schools) and more teachers in private schools are female. There is some disagreement over which system has better educated teachers. According to the latest DISE survey, the percentage of untrained teachers (para-teachers) is 54.91% in private, compared to 44.88% in government schools and only 2.32% teachers in unaided schools receive in-service training compared to 43.44% for government schools. 
The competition in the school market is intense, yet most schools make profit. However, the number of private schools in India is still low - the share of private institutions is 7% (with upper primary being 21% secondary 32% - source: fortress team research). Even the poorest often go to private schools despite the fact that government schools are free. A study found that 65% school-children in Hyderabad's slums attend private schools. National schools Atomic Energy Central School (established in 1969) Bal Bharati Public School (established in 1944) Bharatiya Vidya Bhavan (established in 1938) Chinmaya Vidyalaya (established in 1965) DAV Public School (established in 1886) Delhi Public School (established in 1949) Indian Army Public Schools (established in 1983) Jawahar Navodaya Vidyalaya (established in 1986) Kendriya Vidyalaya (established in 1963) Padma Seshadri Bala Bhavan (established in 1958) Railway Schools in India (established in 1873) Ramakrishna Mission Schools (established in 1922) Ryan International Schools (established in 1976) Sainik School (established in 1960) Saraswati Shishu Mandir (established in 1952) Seth M.R. Jaipuria Schools (established in 1992) Vivekananda Vidyalaya (established in 1972) Vivekananda Kendra Vidyalaya (established in 1977) Waldorf Schools (India) (established in 2002) International schools , the International Schools Consultancy (ISC) listed India as having 410 international schools. ISC defines an 'international school' in the following terms "ISC includes an international school if the school delivers a curriculum to any combination of pre-school, primary or secondary students, wholly or partly in English outside an English-speaking country, or if a school in a country where English is one of the official languages, offers an English-medium curriculum other than the country's national curriculum and is international in its orientation." This definition is used by publications including The Economist. Home-schooling Home-schooling in India is legal, though it is the less explored option, and often debated by educators. The Indian Government's stance on the issue is that parents are free to teach their children at home, if they wish to and have the means. The then HRD Minister Kapil Sibal has stated that despite the RTE Act of 2009, if someone decides not to send his/her children to school, the government would not interfere. Higher education Students may opt for vocational education or university education. Vocational education India's All India Council of Technical Education (AICTE) reported, in 2013, that there are more than 4,599 vocational institutions that offer degrees, diploma and post-diploma in architecture, engineering, hotel management, infrastructure, pharmacy, technology, town services and others. There were 1740,000 students enrolled in these schools. Total annual intake capacity for technical diplomas and degrees exceeded 3.4 million in 2012. According to the University Grants Commission (UGC) total enrolment in Science, Medicine, Agriculture and Engineering crossed 65 lakh in 2010. The number of women choosing engineering has more than doubled since 2001. Tertiary education After passing the Higher Secondary Examination (the Standard 12 examination), students may enroll in general degree programmes such as bachelor's degree (graduation) in arts, commerce or science, or professional degree programme such as engineering, medicine, nursing, pharmacy, and law graduates. 
India's higher education system is the third largest in the world, after China and the United States. The main governing body at the tertiary level is the University Grants Commission (India) (UGC), which enforces its standards, advises the government, and helps co-ordinate between the centre and the state up to Post graduation and Doctorate (PhD). Accreditation for higher learning is overseen by 12 autonomous institutions established by the University Grants Commission. , India has 152 central universities, 316 state universities, and 191 private universities. Other institutions include 33,623 colleges, including 1,800 exclusive women's colleges, functioning under these universities and institutions, and 12,748 Institutions offering Diploma Courses. The emphasis in the tertiary level of education lies on science and technology. Indian educational institutions by 2004 consisted of a large number of technology institutes. Distance learning is also a feature of the Indian higher education system. The Government has launched Rashtriya Uchchattar Shiksha Abhiyan to provide strategic funding to State higher and technical institutions. A total of 316 state public universities and 13,024 colleges will be covered under it. Some institutions of India, such as the Indian Institutes of Technology (IITs) and National Institutes of Technology (NITs) have been globally acclaimed for their standard of under-graduate education in engineering. Several other institutes of fundamental research such as the Indian Institute of Science (IISc) Indian Association for the Cultivation of Science (IACS), Tata Institute of Fundamental Research (TIFR), Harish-Chandra Research Institute (HRI), Jawaharlal Nehru Centre for Advanced Scientific Research (JNCASR), Indian Institute of Science Education and Research (IISER) are also acclaimed for their standard of research in basic sciences and mathematics. However, India has failed to produce world class universities both in the private sector or the public sector. Besides top rated universities which provide highly competitive world class education to their pupils, India is also home to many universities which have been founded with the sole objective of making easy money. Regulatory authorities like UGC and AICTE have been trying very hard to extirpate the menace of private universities which are running courses without any affiliation or recognition. Indian Government has failed to check on these education shops, which are run by big businessmen & politicians. Many private colleges and universities do not fulfil the required criterion by the Government and central bodies (UGC, AICTE, MCI, BCI etc.) and take students for a ride. For example, many institutions in India continue to run unaccredited courses as there is no legislation strong enough to ensure legal action against them. Quality assurance mechanisms have failed to stop misrepresentations and malpractices in higher education. At the same time regulatory bodies have been accused of corruption, specifically in the case of deemed-universities. In this context of lack of solid quality assurance mechanism, institutions need to step-up and set higher standards of self-regulation. The Government of India is aware of the plight of higher education sector and has been trying to bring reforms, however, 15 bills are still awaiting discussion and approval in the Parliament. 
One of the most talked about bill is Foreign Universities Bill, which is supposed to facilitate entry of foreign universities to establish campuses in India. The bill is still under discussion and even if it gets passed, its feasibility and effectiveness is questionable as it misses the context, diversity and segment of international foreign institutions interested in India. One of the approaches to make internationalisation of Indian higher education effective is to develop a coherent and comprehensive policy which aims at infusing excellence, bringing institutional diversity and aids in capacity building. Three Indian universities were listed in the Times Higher Education list of the world's top 200 universities – Indian Institutes of Technology, Indian Institutes of Management, and Jawaharlal Nehru University in 2005 and 2006. Six Indian Institutes of Technology and the Birla Institute of Technology and Science—Pilani were listed among the top 20 science and technology schools in Asia by Asiaweek. The Indian School of Business situated in Hyderabad was ranked number 12 in global MBA rankings by the Financial Times of London in 2010 while the All India Institute of Medical Sciences has been recognised as a global leader in medical research and treatment. The University of Mumbai was ranked 41 among the Top 50 Engineering Schools of the world by America's news broadcasting firm Business Insider in 2012 and was the only university in the list from the five emerging BRICS nations viz Brazil, Russia, India, China and South Africa. It was ranked at 62 in the QS BRICS University rankings for 2013 and was India's 3rd best Multi-Disciplinary University in the QS University ranking of Indian Universities after University of Calcutta and Delhi University. In April 2015, IIT Bombay launched the first U.S.-India joint EMBA program alongside Washington University in St. Louis. Technical education From the first Five-year Plan onwards, India's emphasis was to develop a pool of scientifically inclined manpower. India's National Policy on Education (NPE) provisioned for an apex body for regulation and development of higher technical education, which came into being as the All India Council for Technical Education (AICTE) in 1987 through an act of the Indian parliament. At the central level, the Indian Institutes of Technology, the Indian Institute of Space Science and Technology, the National Institutes of Technology and the Indian Institutes of Information Technology are deemed of national importance. The Indian Institutes of Technology (IITs) and National Institutes of Technology (NITs) are among the nation's premier education facilities. The UGC has inter-university centers at a number of locations throughout India to promote common research, e.g. the Nuclear Science Centre at the Jawaharlal Nehru University, New Delhi. Besides there are some British established colleges such as Harcourt Butler Technological Institute situated in Kanpur and King George Medical University situated in Lucknow which are important centre of higher education. 
In addition to above institutes, efforts towards the enhancement of technical education are supplemented by a number of recognised Professional Engineering Societies such as: Institution of Engineers (India) Institution of Civil Engineers (India) Institution of Mechanical Engineers (India) Institution of Chemical Engineering (India) Institution of Electronics and Tele-Communication Engineers (India) Indian Institute of Metals Institution of Industrial Engineers (India) Institute of Town Planners (India) Indian Institute of Architects that conduct Engineering/Technical Examinations at different levels (Degree and diploma) for working professionals desirous of improving their technical qualifications. The number of graduates coming out of technical colleges increased to over 700,000 in 2011 from 550,000 in FY 2010. However, according to one study, 75% of technical graduates and more than 85% of general graduates lack the skills needed in India's most demanding and high-growth global industries such as Information Technology. These high-tech global information technologies companies directly or indirectly employ about 2.3 million people, less than 1% of India's labour pool. India offers one of the largest pool of technically skilled graduates in the world. Given the sheer numbers of students seeking education in engineering, science and mathematics, India faces daunting challenges in scaling up capacity while maintaining quality. Open and distance learning At the school level, Board of Open Schooling and Skill Education, Sikkim (BOSSE), National Institute of Open Schooling (NIOS) provides opportunities for continuing education to those who missed completing school education. 1.4 million students are enrolled at the secondary and higher secondary level through open and distance learning. In 2012 various state governments also introduced "State Open School" to provide distance education. At higher education level, Indira Gandhi National Open University (IGNOU) co-ordinates distance learning. It has a cumulative enrolment of about 1.5 million, serviced through 53 regional centres and 1,400 study centres with 25,000 counselors. The Distance Education Council (DEC), an authority of IGNOU is co-coordinating 13 State Open Universities and 119 institutions of correspondence courses in conventional universities. While distance education institutions have expanded at a very rapid rate, but most of these institutions need an up gradation in their standards and performance. There is a large proliferation of courses covered by distance mode without adequate infrastructure, both human and physical. There is a strong need to correct these imbalances. Massive open online course are made available for free by the HRD ministry and various educational institutes. Online education Online education in India started during the COVID-19 pandemic. However, currently only a small proportion of the Indian population has access to online education. The Ministry of Human Resource Development (MHRD) recently launched the 'Bharat Padhe Online'. The Indian government has imposed one of the longest school closures globally as it suffered through multiple waves of the COVID-19 pandemic. These school closures have revealed the inequities between urban and rural populations, as well as between girls and boys, in adapting to online learning tools. Quality Literacy According to the Census of 2011, "every person above the age of 7 years who can read and write with understanding in any language is said to be literate". 
According to this criterion, the 2011 survey holds the national literacy rate to be 74.04%. The youth literacy rate, measured within the age group of 15 to 24, is 81.1% (84.4% among males and 74.4% among females), while 86% of boys and 72% of girls are literate in the 10-19 age group. Within the Indian states, Kerala has the highest literacy rate at 93.91%, whereas Bihar averaged 61.8% literacy. The 2001 statistics indicated that the total number of 'absolute non-literates' in the country was 304 million. The gender gap in literacy rate is high; for example, in Rajasthan, the state with the lowest female literacy rate in India, the average female literacy rate is 52.66% and the average male literacy rate is 80.51%, a gender gap of 27.85%.

Attainment
Enrolment rates are 58% for pre-primary, 93% for primary, 69% for secondary, and 25% for tertiary education. Despite the high overall enrolment rate for primary education among rural children of age 10, half could not read at a basic level, over 60% were unable to do division, and half dropped out by the age of 14. In 2009, two states in India, Tamil Nadu and Himachal Pradesh, participated in the international PISA exams, which are administered once every three years to 15-year-olds. Both states ranked at the bottom of the table, beating out only Kyrgyzstan in score, and falling 200 points (two standard deviations) below the average for OECD countries. While in the immediate aftermath there was a short-lived controversy over the quality of primary education in India, ultimately India decided not to participate in PISA in 2012, and again in 2015. While the quality of free, public education is in crisis, a majority of the urban poor have turned to private schools. In some cities, it is estimated that as many as two-thirds of all students attend private institutions, many of which charge a modest US$2 per month.

Public school workforce
Officially, the pupil-to-teacher ratio within the public school system for primary education is 35:1. However, teacher absenteeism in India is exorbitant, with 25% never showing up for work. The World Bank estimates that the cost in salaries alone paid to no-show teachers who have never attended work is US$2 billion per year. A study on teachers by Kremer et al. found that 25% of private sector teachers and 40% of public sector medical workers were absent during the survey. Among teachers who were paid to teach, absence rates ranged from 14.6% in Maharashtra to 41.9% in Jharkhand. Only 1 in nearly 3,000 public school head teachers had ever dismissed a teacher for repeated absence. The same study found that "only about half were teaching" during unannounced visits to a nationally representative sample of government primary schools in India.

Higher education
As per the report Higher Education in India: Issues Related to Expansion, Inclusiveness, Quality and Finance, access to higher education measured in terms of gross enrolment ratio increased from 0.7% in 1950/51 to 1.4% in 1960–61. By 2006/7 the GER had increased to about 11%. Notably, by 2012, it had crossed 20% (as mentioned in an earlier section). According to the All India Survey on Higher Education (AISHE) released by the Ministry of Human Resource Development, Tamil Nadu, which has the highest gross enrolment ratio (GER) in higher education in the country, registered an increase of 2.6% to take its GER to 46.9 per cent in 2016–17.
Vocational education An optimistic estimate from 2008 was that only one in five job-seekers in India had ever had any sort of vocational training. However, this share is expected to grow, as the CBSE has changed its curriculum to emphasise the inclusion of vocational subjects in classes 9 and 11. Although this is not mandatory, a good number of schools have voluntarily accepted the suggestion and incorporated the change in their curriculum. Extracurricular activities Extracurricular activities include sports, arts, National Service Scheme, National Cadet Corps, The Bharat Scouts and Guides, etc. Issues Facilities As per the 2016 Annual Survey of Education Report (ASER), 3.5% of schools in India had no toilet facility, while only 68.7% of schools had a usable toilet facility. 75.5% of the schools surveyed had a library in 2016, a decrease from 78.1% in 2014. The percentage of schools with a separate girls' toilet increased from 32.9% in 2010 to 61.9% in 2016. 74.1% of schools had a drinking water facility and 64.5% had a playground. Curriculum issues Modern education in India is often criticised for being based on rote learning rather than problem solving. The New Indian Express says that the Indian education system seems to be producing zombies, since in most schools students spend the majority of their time preparing for competitive exams rather than learning or playing. BusinessWeek criticises the Indian curriculum, saying it revolves around rote learning, and ExpressIndia suggests that students are focused on cramming. Preschool for Child Rights states that almost 99% of pre-schools do not have any curriculum at all. In most institutions creativity is not encouraged, or is considered merely a form of entertainment. The British "essentialist" view of knowledge in the nineteenth century emphasised the individual, scientific, universal, and moral aims of education ahead of the social and cultural. This, combined with the colonial construction of Indian society, designed to preserve the ideological lead of the Empire post-1857, helped shape the official nineteenth-century school curriculum. The rejection by the colonial administration and the English-educated, often upper-caste elite of nationalist Gopal Krishna Gokhale's Bill (1911) to make primary education free and compulsory further helped sustain a curriculum that focused on colonial objectives. Holmes and McLean (1989, 151) argue that despite tensions between the colonial view of education and the nationalist postcolonial aims of education, British essentialism grew unassailable roots in India partly because "colonial values coincided with those of indigenous traditions" (Batra P. 2015). Rural education Following independence, India viewed education as an effective tool for bringing social change through community development. Administrative control was effectively initiated in the 1950s when, in 1952, the government grouped villages under Community Development Blocks—authorities under a national programme which could control education in up to 100 villages. A Block Development Officer oversaw a geographical area which could contain a population of as many as 70,000 people. 
Setty and Ross elaborate on the role of such programmes, themselves divided further into individual-based, community-based, or individual-cum-community-based, in which microscopic levels of development are overseen at the village level by an appointed worker: Despite some setbacks the rural education programmes continued throughout the 1950s, with support from private institutions. A sizeable network of rural education had been set up by the time the Gandhigram Rural Institute was established and 5,200 Community Development Blocks existed in India. Nursery schools, elementary schools, secondary schools, and schools for adult education for women were set up. The government continued to view rural education as an agenda that could be relatively free from bureaucratic backlog and general stagnation. However, in some cases a lack of financing offset the gains made by rural education institutes of India. Some ideas failed to find acceptability among India's poor, and investments made by the government sometimes yielded few results. Today, government rural schools remain poorly funded and understaffed. Several foundations, such as the Rural Development Foundation (Hyderabad), actively build high-quality rural schools, but the number of students served is small. Education in rural India is valued differently than in an urban setting, with lower rates of completion. An imbalanced sex ratio exists within schools, with 18% of males earning a high school diploma compared with only 10% of females. The estimated number of children who have never attended school in India is near 100 million, which reflects the low completion levels. This is the largest concentration in the world of youth who have not enrolled in school. Women's education Women have a much lower literacy rate than men. Far fewer girls are enrolled in school, and many of them drop out. In the patriarchal setting of the Indian family, girls have lower status and fewer privileges than boys. Conservative cultural attitudes prevent some girls from attending school. Furthermore, educated upper-class women are less likely than uneducated lower-class women to enter the workforce; they opt to stay at home due to traditional, cultural and religious norms. The share of literate women among the female population of India was between 2% and 6% from the British Raj onward to the independence of India in 1947. Concerted efforts led to improvement from 15.3% in 1961 to 28.5% in 1981. By 2001 literacy for women had exceeded 50% of the overall female population, though these statistics were still very low compared to world standards and even to male literacy within India. Recently the Indian government launched the Saakshar Bharat Mission for Female Literacy, which aims to reduce female illiteracy to half of its present level. Sita Anantha Raman outlines the progress of women's education in India: Sita Anantha Raman also mentions that while educated Indian women maintain professionalism in the workforce, men outnumber them in most fields and, in some cases, receive higher income for the same positions. The education of women in India plays a significant role in improving living standards in the country. A higher female literacy rate improves the quality of life both at home and outside the home, by encouraging and promoting the education of children, especially female children, and by reducing the infant mortality rate. 
Several studies have shown that lower female literacy rates result in higher levels of fertility and infant mortality, poorer nutrition, lower earning potential and a lack of ability to make decisions within a household. Women's lower educational levels have also been shown to adversely affect the health and living conditions of children. A survey conducted in India showed that the infant mortality rate was inversely related to the female literacy rate and educational level. The survey also suggests a correlation between education and economic growth. In India, there is a large disparity between female literacy rates in different states. The state of Kerala has the highest female literacy rate, 91.98%, while Rajasthan has the lowest, 52.66%. This correlates with the health levels of the states: Kerala's average life expectancy at birth is 74.9 years, while Rajasthan's is 67.7 years. In India, higher education is defined as the education of the age group between 18 and 24, and is largely funded by the government. Despite women making up 24–50% of higher education enrolment, there is still a gender imbalance within higher education. Only one third of science students and 7% of engineering students are women. In comparison, however, over half the students studying education are women. Accreditation In January 2010, the Government of India decided to withdraw deemed university status from as many as 44 institutions. The Government claimed in its affidavit that academic considerations were not being kept in mind by the management of these institutions and that "they were being run as family fiefdoms". In February 2009, the University Grants Commission found 39 fake institutions operating in India. Employer training Only 10% of manufacturers in India offer in-service training to their employees, compared with over 90% in China. Teacher careers In the Indian education system, a teacher's success is loosely defined. It is based either on a student's success or on years of teaching experience, neither of which necessarily correlates with a teacher's skill set or competencies. The management of an institution may thereby be forced to promote teachers based on the grade level they teach or on their seniority, neither of which is a reliable indicator of a good teacher. This means that either a primary school teacher is promoted to a higher grade, or a teacher is promoted to take up other roles within the institution such as head of department, coordinator, vice principal or principal. However, the skills and competencies required for each of these roles vary, and a great teacher may not be a great manager. Since teachers do not see their own growth and success as being in their own hands, they often do not take up any professional development. Thus, there is a need for a framework to help teachers chart a career path based on their own competencies and understand their own development. Coaching Increased competition for admission to reputed colleges has given rise to private coaching institutes in India. They prepare students for engineering, medical, MBA, SAT, GRE and banking job entrance tests. There are also coaching institutes that teach subjects like English for employment in India and abroad. Private coaching institutes are of two types: offline coaching and online coaching. 
There are many online coaching centres and apps available in the market, and their usage is growing, especially in tier-2 cities. A 2013 survey by ASSOCHAM predicted that the private coaching industry would grow to $40 billion, or Rs 2.39 trillion (short scale), by 2015. Kota in Rajasthan is called the capital of the coaching sector for engineering and medical college entrance examinations. In Punjab, coaching institutes teach English to foreign visa aspirants so that they can obtain the IELTS score required for their applications. Mukherjee Nagar and Old Rajinder Nagar in Delhi are considered the hubs for UPSC Civil Services Examination coaching. To help students compete in these exams, the central government and some state governments also provide free coaching, especially to students from minority communities. Coaching classes have been blamed for the neglect of school education by students. Educationists such as Anandakrishnan have criticised the increasing importance being given to coaching classes, as they put students under mental stress and the coaching fees add to the financial burden on parents. These educationists opine that if a good schooling system is put in place, children should not need additional coaching to take any competitive examination. Corruption in education Corruption in the Indian education system has been eroding the quality of education and creating long-term negative consequences for society. Educational corruption in India is considered one of the major contributors to domestic black money. In 2021, Manav Bharti University, a private university, was accused of selling tens of thousands of degrees for money over a decade. "Corruption hurts the poor disproportionately – by diverting funds intended for development, undermining a government's ability to provide basic services, feeding inequality and injustice, and discouraging foreign investment and aid" (Kofi Annan, in his statement on the adoption of the United Nations Convention against Corruption by the General Assembly, NY, November 2003). Grade inflation Grade inflation has become an issue in Indian secondary education. In the CBSE, a 95 per cent aggregate is 21 times as prevalent today as it was in 2004, and a 90 per cent aggregate close to nine times as prevalent. In the ISC Board, a 95 per cent is almost twice as prevalent today as it was in 2012. CBSE called a meeting of all 40 school boards early in 2017 to urge them to discontinue "artificial spiking of marks". CBSE decided to lead by example and promised not to inflate its results. But although the 2017 results saw a small correction, the board has clearly not discarded the practice completely. Almost 6.5 per cent of mathematics examinees in 2017 scored 95 or more, 10 times higher than in 2004, and almost 6 per cent of physics examinees scored 95 or more, 35 times more than in 2004. Initiatives Central government involvement Following India's independence, a number of rules were formulated for the backward Scheduled Castes and the Scheduled Tribes of India. In 1960, a list identifying 405 Scheduled Castes and 225 Scheduled Tribes was published by the central government. An amendment was made to the list in 1975, which identified 841 Scheduled Castes and 510 Scheduled Tribes. The total percentage of Scheduled Castes and Scheduled Tribes combined was found to be 22.5%, with the Scheduled Castes accounting for 17% and the Scheduled Tribes accounting for the remaining 7.5%. 
Following the report, many Scheduled Castes and Scheduled Tribes increasingly referred to themselves as Dalit, a Marathi term used by B. R. Ambedkar which literally means "oppressed". The Scheduled Castes and Scheduled Tribes are provided for in many of India's educational programmes. Special reservations are also provided for the Scheduled Castes and Scheduled Tribes in India, e.g. a reservation of 15% in Kendriya Vidyalaya for Scheduled Castes and another reservation of 7.5% in Kendriya Vidyalaya for Scheduled Tribes. Similar reservations are held by the Scheduled Castes and Scheduled Tribes in many schemes and educational facilities in India. The remote and far-flung regions of North-East India have been provided for under the Non-Lapsable Central Pool of Resources (NLCPR) since 1998–99. The NLCPR aims to provide funds for infrastructure development in these remote areas. Women from remote, underdeveloped areas or from weaker social groups in Andhra Pradesh, Assam, Bihar, Jharkhand, Karnataka, Kerala, Gujarat, Uttar Pradesh, and Uttarakhand fall under the Mahila Samakhya Scheme, initiated in 1989. Apart from provisions for education, this programme also aims to raise awareness by holding meetings and seminars at the rural level. The government allocated funding during 2007–08 to carry out this scheme in over 83 districts covering more than 21,000 villages. Currently there are 68 Bal Bhavans and 10 Bal Kendras affiliated with the National Bal Bhavan. The scheme involves educational and social activities and recognises children with a marked talent for a particular educational stream. A number of programmes and activities are held under this scheme, which also involves cultural exchanges and participation in several international forums. India's minorities, especially the ones considered 'educationally backward' by the government, are provided for in the 1992 amendment of the Indian National Policy on Education (NPE). The government initiated the Scheme of Area Intensive Programme for Educationally Backward Minorities and the Scheme of Financial Assistance for Modernisation of Madarsa Education as part of its revised Programme of Action (1992). Both these schemes were started nationwide by 1994. In 2004 the Indian parliament passed an act which enabled minority education establishments to seek university affiliations if they met the required norms. The Ministry of Human Resource Development, Government of India, in collaboration with the Ministry of Electronics & Information Technology, has also launched a National Scholarship Portal to provide students of India access to national and state-level scholarships provided by various government authorities. As a Mission Mode Project under the National e-Governance Plan (NeGP), the online service lists more than 50 scholarship programmes every year, including the Ministry of Minority Affairs (MOMA) scholarships for post-matric and pre-matric studies. In the academic year 2017–18 the MOMA scholarships supported the studies of 116,452 students with scholarships worth ₹3,165.7 million. The National Scholarship Portal continues to list scholarship programmes managed by the AICTE (All India Council for Technical Education), the UGC (University Grants Commission) and respective state governments. 
Legislative framework Article 45 of the Constitution of India originally stated: This article was a directive principle of state policy, meaning that it belonged to a set of rules meant to be followed in spirit but that the government could not be taken to court if the letter was not followed. However, the enforcement of this directive principle became a matter of debate, since it held obvious emotive and practical value and was legally the only directive principle within the Indian constitution to have a time limit. Following initiatives by the Supreme Court of India during the 1990s, the 93rd amendment bill suggested three separate amendments to the Indian constitution: The constitution of India was amended to include a new article, 21A, which read: Article 45 was proposed to be substituted by an article which read: Another article, 51A, was to additionally have the clause: The bill was passed unanimously in the Lok Sabha, the lower house of the Indian parliament, on 28 November 2001. It was later passed by the upper house—the Rajya Sabha—on 14 May 2002. After being signed by the President of India, the Indian constitution was formally amended for the eighty-sixth time and the bill came into effect. Since then, children between the ages of 6 and 14 have had a fundamental right to education. Article 46 of the Constitution of India holds that: Other provisions for the Scheduled Castes and Scheduled Tribes can be found in Articles 330, 332, 335, 338–342. Both the 5th and the 6th Schedules of the Constitution also make special provisions for the Scheduled Castes and Scheduled Tribes. Central government expenditure on education As a part of the tenth Five-Year Plan (2002–2007), the central government of India outlined an expenditure of 65.6% of its total education budget on elementary education; 9.9% on secondary education; 2.9% on adult education; 9.5% on higher education; 10.7% on technical education; and the remaining 1.4% on miscellaneous education schemes. During the financial year 2011–12, the Central Government of India allocated ₹38,957 crore to the Department of School Education and Literacy, the main department dealing with primary education in India. Within this allocation, a major share of ₹21,000 crore is for the flagship programme 'Sarva Shiksha Abhiyan'. However, a budgetary allocation of ₹210,000 million is considered very low in view of the officially appointed Anil Bordia Committee's recommendation of ₹356.59 billion for the year 2011–12. This higher allocation was required to implement the recent legislation, the Right of Children to Free and Compulsory Education Act, 2009. In recent times, several major announcements were made for improving the poor state of affairs in the education sector in India, the most notable being in the National Common Minimum Programme (NCMP) of the United Progressive Alliance (UPA) government. The announcements are: (a) to progressively increase expenditure on education to around 6% of GDP; (b) to support this increase in expenditure, and to improve the quality of education, by imposing an education cess over all central government taxes; (c) to ensure that no one is denied education due to economic backwardness and poverty; (d) to make the right to education a fundamental right for all children in the age group 6–14 years; 
(e) to universalise education through flagship programmes such as Sarva Shiksha Abhiyan and the Midday Meal Scheme. However, even after five years of implementation of the NCMP, not much progress was seen on this front. Although the country targeted devoting a 6% share of GDP to the education sector, performance has fallen short of expectations. Expenditure on education rose steadily from 0.64% of GDP in 1951–52 to 2.31% in 1970–71 and thereafter reached a peak of 4.26% in 2000–01. However, it declined to 3.49% in 2004–05, and there is a definite need to step it up again. As a proportion of total government expenditure, it declined from around 11.1% in 2000–01 to around 9.98% during UPA rule, even though ideally it should be around 20% of the total budget. A policy brief issued by the Network for Social Accountability (NSA), titled "NSA Response to Education Sector Interventions in Union Budget: UPA Rule and the Education Sector", documents this trend. Due to the declining priority of education in the public policy paradigm in India, there has also been exponential growth in private expenditure on education. As per the available information, private out-of-pocket expenditure by the working-class population on the education of their children in India has increased by around 1,150 per cent, or around 12.5 times, over the last decade. Conceptual understandings of Inclusive Education The new National Education Policy 2020 (NEP 2020) introduced by the central government is expected to bring profound changes to education in India. The policy, approved by the Union Cabinet of India on 29 July 2020, outlines the vision of India's new education system and replaces the 1986 National Policy on Education. It is a comprehensive framework covering elementary education to higher education as well as vocational training in both rural and urban India, and aims to transform India's education system by 2021. Shortly after the release of the policy, the government clarified that no one would be forced to study any particular language and that the medium of instruction would not be shifted from English to any regional language. The language policy in the NEP is a broad guideline and advisory in nature; it is up to the states, institutions, and schools to decide on its implementation. Education in India is a Concurrent List subject. Although it may not be appropriate to judge the adoption of a northern concept in the south from a northern perspective, hasty use of such globalised terminology without engaging with the thinking behind it may present no more than empty rhetoric, whatever the context. One researcher clearly perceives inclusive education as "…a concept that has been adopted from the international discourse, but has not been engaged with in the Indian scenario." She supports this view of a lack of conceptual engagement with data collected in semi-structured interviews for her PhD research, where she found that many interviewees concurred with the opinions reflected in government documents that inclusion is about children with special needs, as reflected by a disabling condition. A handful of others argued that inclusive education should not be limited to children with disabilities, as it holds relevance for all marginalised groups, though they were quick to accept that this thinking has not yet prevailed. 
Indian understandings of disability and educational needs are demonstrated through the interchangeable use of several English terms which hold different meanings in the north. For example, children with special needs or special educational needs tend to be perceived as children with disabilities in India, as demonstrated by Mukhopadhyay and Mani's (2002) chapter on 'Education of Children with Special Needs' in a NIEPA government-funded research report, which solely pertains to children with disabilities. In contrast, the intention of Mary Warnock's term 'special educational needs', coined in the UK in 1978, was to imply that any child, with an impairment or not, may have an individual educational need at some point in their school career (e.g. dyslexia, or the language of instruction being a second language) which the teacher should adapt to. This further implies that a child with a disability may not have a special educational need while their able-bodied peers could (Giffard-Lindsay, 2006). In addition, despite the 1987 Mental Health Act finally separating the meaning of learning disability from that of mental illness in India, there is still some confusion in understanding, with the 1995 Persons with Disabilities Act listing both mental retardation and mental illness as categories of disability. Ignorance and fear of genetic inheritance add to the societal stigma of both. 'Inclusive' and 'integrated' education are also concepts that are used interchangeably, understood as the placement of children with disabilities in mainstream classrooms, with the provision of aids and appliances, and specialist training for the teacher on how to 'deal with' students with disabilities. There is little engagement with the connotations of school, curriculum, and teacher flexibility for all children. These rigid, categorical interpretations of subtly different northern concepts are perhaps a reflection not only of the government tendency to categorise and label (Julka, 2005; Singal, 2005a) but also of a cultural one, most explicitly enforced through the rigidly categorised caste system. Deaf Education in India History of education in India for the DHH population India is very diverse, with eight main religions, hundreds of ethnic groups, and 21 languages with hundreds of dialects. This diversity has made it difficult to educate deaf and hard of hearing (DHH) people in India for generations. There is a history of educating the deaf in India; however, there is no single clear approach to their education. This stems from conditions that are in some cases similar to those faced around the world, and in others unique to India. For example, prior to India's independence there were no clear laws and protections for the disabled. Since independence, advances have been made in the rights of the disabled, but this has not fully tackled the issue. Pre-independence there were only 24 schools for the deaf in India, and all of these used an oral approach. The belief was that using sign language would hinder the development of hearing and speaking in deaf children. Additionally, there was no single Indian sign language, so signs would differ depending on where a school was located. Post-independence, there are more services and resources available for DHH people; however, challenges with education remain. There are organizations around the country that work to advance the spread and quality of education for the deaf. Education for DHH children Oralism and the use of sign language are two competing approaches to education for DHH people. 
While oralism, an approach that emphasises speaking and hearing, dominates in India, it is usually not realistic for DHH children. There is an Indian Sign Language (ISL); however, it is not formally recognized by the government and is not complete or comprehensive. It varies around the country and is not encouraged by professionals and educators. Old beliefs that the use of sign language will hinder the development of hearing and speaking in DHH children remain. In recent years, there has been a movement to encourage the use of sign language in India and to teach it in schools. In 2017, the first ISL dictionary was released. Due to these challenges and beliefs associated with sign language, education for DHH people in India often focuses on teaching children to hear, speak, and read lips; this is known as an oral approach. In India there are regular schools and special schools. Special schools provide education for children with different disabilities. Special schools can be beneficial to DHH children and provide a better education than they would receive in a regular school. However, these schools are not available to every deaf child; sometimes they are located too far from a child's home. Another reason a child may have to attend a regular school is if they receive hearing technology. Since India focuses on hearing and speaking for the deaf, hearing technology is encouraged, and once a child receives it, it is believed that they can attend a regular school. Even with hearing technology, however, DHH children still need special education in order to succeed. This puts them at a significant disadvantage in regular schools and can cause them to fall behind academically, linguistically, and developmentally. For these reasons, many deaf children receive a poor education or no education at all, causing the illiteracy rate among deaf children to rise. Both regular schools and deaf schools in India have problems. Even in deaf schools, sign language is not usually taught or used. Some use a small amount of sign language, but all of the deaf schools in India use, or claim to use, an oral approach. Some deaf schools secretly teach sign language because of the stigma and beliefs surrounding sign language, and disability in general, in India. Children in deaf schools have to try to learn by hearing or by reading lips and writing. In hearing schools, the children have to do the same; there are no special accommodations. Additionally, there are hardly any teachers who use sign language in regular schools (perhaps a few in deaf schools), and there are no interpreters. There are a couple of hundred deaf schools in India, and vocational training is becoming more common for DHH people. Higher education There are no deaf colleges or universities in India. A deaf person's education typically ends with grade school, where they were likely unable to learn effectively. With this lack of education, DHH people then have a very difficult time finding a job. There is one interpreter at one college in India, Delhi University. See also Gender inequality in India Gurukula List of schools in India Macaulayism, historical background to the implementation of English education in India. National Translation Mission Open access in India Two Million Minutes, documentary film Dreams Choked, documentary film Happiness Curriculum Notes References Citations Bibliography Azam, Mehtabul, and Andreas Blom. (2008). "Progress in Participation in Tertiary Education in India from 1983 to 2004" (The World Bank, 2008) online. 
Basant, Rakesh, and Gitanjali Sen. (2014). "Access to higher education in India: an exploration of its Antecedents." Economic and Political Weekly (2014): 38-45 online.
Blackwell, Fritz (2004), India: A Global Studies Handbook, ABC-CLIO.
Elder, Joseph W. (2006), "Caste System", Encyclopedia of India (vol. 1) edited by Stanley Wolpert, 223–229, Thomson Gale.
Ellis, Catriona. "Education for All: Reassessing the Historiography of Education in Colonial India." History Compass (2009) 7#2 pp 363–375.
Dharampal (2000). The beautiful tree: Indigenous Indian education in the eighteenth century. Biblia Impex Private Limited, New Delhi 1983; reprinted by Keerthi Publishing House Pvt Ltd., Coimbatore 1995.
Suri, R.K. and Kalapana Rajaram, eds. "Infrastructure: S&T Education", Science and Technology in India (2008), New Delhi: Spectrum.
India 2009: A Reference Annual (53rd edition), New Delhi: Additional Director General (ADG), Publications Division, Ministry of Information and Broadcasting, Government of India.
Passow, A. Harry et al. The National Case Study: An Empirical Comparative Study of Twenty-One Educational Systems. (1976) online.
Prabhu, Joseph (2006), "Educational Institutions and Philosophies, Traditional and Modern", Encyclopedia of India (vol. 2) edited by Stanley Wolpert, 23–28, Thomson Gale.
Pathania, Rajni. "Literacy in India: Progress and Inequality." Bangladesh e-Journal of Sociology 17.1 (2020) online.
Raman, S.A. (2006). "Women's Education", Encyclopedia of India (vol. 4), edited by Stanley Wolpert, 235–239, Thomson Gale.
Setty, E.D. and Ross, E.L. (1987), "A Case Study in Applied Education in Rural India", Community Development Journal, 22 (2): 120–129, Oxford University Press.
Sripati, V. and Thiruvengadam, A.K. (2004), "India: Constitutional Amendment Making The Right to Education a Fundamental Right", International Journal of Constitutional Law, 2#1: 148–158.
Tilak, Jandhyala B.G. (2015). "How inclusive is higher education in India?" Social Change 45.2: 185–223 online.
Vrat, Prem (2006), "Indian Institutes of Technology", Encyclopedia of India (vol. 2) edited by Stanley Wolpert, 229–231, Thomson Gale.
Desai, Sonalde, Amaresh Dubey, B.L. Joshi, Mitali Sen, Abusaleh Shariff and Reeve Vanneman. 2010. Human Development in India: Challenges for a Society in Transition. New Delhi: Oxford University Press.
External links Ministry of Human Resource Development Education statistics from the Ministry of Statistics & Programme Implementation
42866
https://en.wikipedia.org/wiki/Jakarta%20Messaging
Jakarta Messaging
The Jakarta Messaging API (formerly Java Message Service or JMS API) is a Java application programming interface (API) for message-oriented middleware. It provides generic messaging models, able to handle the producer–consumer problem, that can be used to facilitate the sending and receiving of messages between software systems. Jakarta Messaging is a part of Jakarta EE and was originally defined by a specification developed at Sun Microsystems before being guided by the Java Community Process. General idea of messaging Messaging is a form of loosely coupled distributed communication, where in this context the term 'communication' can be understood as an exchange of messages between software components. Message-oriented technologies attempt to relax tightly coupled communication (such as TCP network sockets, CORBA or RMI) by the introduction of an intermediary component. This approach allows software components to communicate with each other indirectly. Benefits of this include message senders not needing to have precise knowledge of their receivers. The advantages of messaging include the ability to integrate heterogeneous platforms, reduce system bottlenecks, increase scalability, and respond more quickly to change.
Version history
JMS 1.0
JMS 1.0.1 (October 5, 1998)
JMS 1.0.1a (October 30, 1998)
JMS 1.0.2 (December 17, 1999)
JMS 1.0.2a (December 23, 1999)
JMS 1.0.2b (August 27, 2001)
JMS 1.1 (April 12, 2002)
JMS 2.0 (May 21, 2013)
JMS 2.0a (March 16, 2015)
JMS 2.0 is currently maintained under the Java Community Process as JSR 343. JMS 3.0 is under early development as part of Jakarta EE.
Elements
The following are JMS elements:
JMS provider: An implementation of the JMS interface for message-oriented middleware (MOM). Providers are implemented as either a Java JMS implementation or an adapter to a non-Java MOM.
JMS client: An application or process that produces and/or receives messages.
JMS producer/publisher: A JMS client that creates and sends messages.
JMS consumer/subscriber: A JMS client that receives messages.
JMS message: An object that contains the data being transferred between JMS clients.
JMS queue: A staging area that contains messages that have been sent and are waiting to be read (by only one consumer). As the name queue suggests, the messages are delivered in the order sent. A JMS queue guarantees that each message is processed only once.
JMS topic: A distribution mechanism for publishing messages that are delivered to multiple subscribers.
Models
The JMS API supports two distinct models:
Point-to-point
Publish-and-subscribe
Point-to-point model Under the point-to-point messaging system, messages are routed to individual consumers who maintain queues of incoming messages. This messaging type is built on the concept of message queues, senders, and receivers. Each message is addressed to a specific queue, and the receiving clients extract messages from the queues established to hold their messages. While any number of producers can send messages to the queue, each message is guaranteed to be delivered, and consumed by one consumer. Queues retain all messages sent to them until the messages are consumed or until the messages expire. If no consumers are registered to consume the messages, the queue holds them until a consumer registers to consume them. Publish-and-subscribe model The publish-and-subscribe model supports publishing messages to a particular message "topic". Subscribers may register interest in receiving messages published on a particular message topic. 
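As a rough, hedged illustration (not taken from the specification text), the sketch below shows a subscriber registering interest in a topic and a publisher sending a text message to it using the classic JMS 1.1-style API; the JNDI names "ConnectionFactory" and "jms/NewsTopic" are assumptions that depend on how the chosen provider is configured.

import javax.jms.*;
import javax.naming.InitialContext;

// Minimal publish-and-subscribe sketch; requires a configured JMS provider and JNDI environment.
public class JmsTopicExample {
    public static void main(String[] args) throws Exception {
        InitialContext jndi = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) jndi.lookup("ConnectionFactory"); // assumed JNDI name
        Topic topic = (Topic) jndi.lookup("jms/NewsTopic");                               // assumed JNDI name

        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // The subscriber must exist before publication to receive non-durable messages.
            MessageConsumer subscriber = session.createConsumer(topic);
            connection.start();

            // Publish a text message to the topic; every active subscriber gets a copy.
            MessageProducer publisher = session.createProducer(topic);
            publisher.send(session.createTextMessage("hello, topic"));

            // Wait synchronously (up to 2 seconds) for the message to arrive.
            TextMessage received = (TextMessage) subscriber.receive(2000);
            System.out.println("Received: " + (received != null ? received.getText() : "nothing"));
        } finally {
            connection.close();
        }
    }
}

With the simplified API introduced in JMS 2.0, the same exchange can be written more compactly using a JMSContext, but the flow (look up a connection factory and a destination, create a producer and a consumer, send and receive) is the same.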
In this model, neither the publisher nor the subscriber knows about each other. A good analogy for this is an anonymous bulletin board. Zero or more consumers will receive the message. There is a timing dependency between publishers and subscribers. The publisher has to create a message topic for clients to subscribe to. The subscriber has to remain continuously active to receive messages, unless it has established a durable subscription. In that case, messages published while the subscriber is not connected will be redistributed whenever it reconnects. JMS provides a way of separating the application from the transport layer that provides the data. The same Java classes can be used to communicate with different JMS providers by using the Java Naming and Directory Interface (JNDI) information for the desired provider. The classes first use a connection factory to connect to the queue or topic, and then populate and send or publish the messages. On the receiving side, the clients then receive or subscribe to the messages. URI scheme RFC 6167 defines a jms: URI scheme for the Java Message Service. Provider implementations To use JMS, one must have a JMS provider that can manage the sessions, queues and topics. Starting from Java EE version 1.4, a JMS provider has to be contained in all Java EE application servers. This can be implemented using the message inflow management of the Java EE Connector Architecture, which was first made available in that version. The following is a list of common JMS providers:
Amazon SQS's Java Messaging Library
Apache ActiveMQ
Apache Qpid, using AMQP
IBM MQ (formerly MQSeries, then WebSphere MQ)
IBM WebSphere Application Server's Service Integration Bus (SIBus)
JBoss Messaging and HornetQ from JBoss
JORAM from the OW2 Consortium
Open Message Queue from Oracle
OpenJMS from the OpenJMS Group
Oracle WebLogic Server and Oracle AQ
RabbitMQ from Pivotal Software
TIBCO Cloud Messaging from TIBCO Software
TIBCO Enterprise Message Service from TIBCO Software
See also
Message Driven Beans
Message queue — the concept underlying JMS
Service-oriented architecture
Event-driven SOA
Messaging technologies that do not implement the JMS API include:
Advanced Message Queuing Protocol (AMQP) — standardized message queue protocol with multiple independent implementations
Data Distribution Service (DDS) — An Object Management Group (OMG) standardized real-time messaging system with over ten implementations that have demonstrated interoperability between publishers and subscribers
Microsoft Message Queuing — similar technology, implemented for .NET Framework
References Further reading External links JSR 343: Java Message Service 2.0 API Javadoc documentation Oracle's Java EE 7 JMS tutorial A historical comparison matrix of JMS providers Java enterprise platform Java specification requests Message-oriented middleware Software architecture
84394
https://en.wikipedia.org/wiki/Ingres%20%28database%29
Ingres (database)
Ingres Database is a proprietary SQL relational database management system intended to support large commercial and government applications. Actian Corporation, which announced in April 2018 that it was being acquired by HCL Technologies, controls the development of Ingres and makes certified binaries available for download, as well as providing worldwide support. There was an open source release of Ingres, but it is no longer available for download from Actian. However, a version of the source code is still available on GitHub. In its early years, Ingres was an important milestone in the history of database development. Ingres began as a research project at UC Berkeley, starting in the early 1970s and ending in 1985. During this time Ingres remained largely similar to IBM's seminal System R in concept; it differed in more permissive licensing of source code, in being based largely on DEC machines, both under UNIX and VAX/VMS, and in providing QUEL as a query language instead of SQL. QUEL was considered at the time to run truer to Edgar F. Codd's relational algebra (especially concerning composability), but SQL was easier to parse and less intimidating for those without a formal background in mathematics. When ANSI preferred SQL over QUEL as part of the 1986 SQL standard (SQL-86), Ingres became less competitive against rival products such as Oracle until later Ingres versions also provided SQL. Many companies spun off from the original Ingres technology, including Actian itself, originally known as Relational Technology Inc., and the NonStop SQL database originally developed by Tandem Computers but now offered by Hewlett Packard Enterprise. Early history Ingres began as a research project at the University of California, Berkeley, starting in the early 1970s and ending in 1985. The original code, like that from other projects at Berkeley, was available at minimal cost under a version of the BSD license. Ingres spawned a number of commercial database applications, including Sybase, Microsoft SQL Server, NonStop SQL and a number of others. Postgres (Post Ingres), a project which started in the mid-1980s, later evolved into PostgreSQL. It is ACID compatible and fully transactional (including all DDL statements) and is part of the Lisog open-source stack initiative. 1970s In 1973, when the System R project was getting started at IBM, the research team released a series of papers describing the system they were building. Two scientists at Berkeley, Michael Stonebraker and Eugene Wong, became interested in the concept after reading the papers, and started a relational database research project of their own. They had already raised money for researching a geographic database system for Berkeley's economics group, which they called Ingres, for INteractive Graphics REtrieval System. They decided to use this money to fund their relational project instead, and used this as a seed for a new and much larger project. They decided to re-use the original project name, and the new project became University INGRES. For further funding, Stonebraker approached DARPA, the obvious funding source for computing research and development at the time, but both DARPA and the Office of Naval Research (ONR) turned them down as they were already funding database research elsewhere. 
Stonebraker then introduced his idea to other agencies, and, with help from his colleagues, he eventually obtained modest support from the NSF and three military agencies: the Air Force Office of Scientific Research, the Army Research Office, and the Navy Electronic Systems Command. Thus funded, Ingres was developed during the mid-1970s by a rotating team of students and staff. Ingres went through an evolution similar to that of System R, with an early prototype in 1974 followed by major revisions to make the code maintainable. Ingres was then disseminated to a small user community, and project members rewrote the prototype repeatedly to incorporate accumulated experience, feedback from users, and new ideas. The research project ended in 1985. Commercialization (1980s) Ingres remained largely similar to IBM's System R in concept, but it was based largely on DEC machines, both under UNIX and VAX/VMS. Unlike System R, the Ingres source code was available (on tape) for a nominal fee. By 1980 some 1,000 copies had been distributed, primarily to universities. Many students from U.C. Berkeley and other universities who used the Ingres source code worked on various commercial database software systems. Berkeley students Jerry Held and later Karel Youseffi moved to Tandem Computers, where they built a system that evolved into NonStop SQL. The Tandem database system was a re-implementation of the Ingres technology. It evolved into a system that ran effectively on parallel computers; that is, it included functionality for distributed data, distributed execution, and distributed transactions (the last being fairly difficult). Components of the system were first released in the late 1970s. By 1989, the system could run queries in parallel, and the product became fairly famous for being one of the few systems that scale almost linearly with the number of processors in the machine: adding a second CPU to an existing NonStop SQL server almost exactly doubles its performance. Tandem was later purchased by Compaq, which started a re-write in 2000, and now the product is at Hewlett-Packard. In the early 1980s, Ingres competed head-to-head with Oracle. The two products were widely regarded as the leading hardware-independent relational database implementations; they had comparable functionality, performance, market share, and pricing, and many commentators considered Ingres to be a (perhaps marginally) superior product. From around 1985, however, Ingres steadily lost market share. One reason was Oracle's aggressive marketing; another was the increasing recognition of SQL as the preferred relational query language. Ingres originally had provided a different language, QUEL, and the conversion to SQL (delivered in Ingres version 6) took about three years, losing valuable time in the race. Robert Epstein, the chief programmer on the project while he was at Berkeley, formed Britton Lee, Inc. along with other students from the Ingres Project, Paula Hawthorn and Michael Ubell; they were joined later by Eric Allman. Later, Epstein founded Sybase. Sybase had been the #2 product (behind Oracle) for some time through the 1980s and into the 1990s, before Informix came "out of nowhere" and took over in 1997. Sybase's product line had also been licensed to Microsoft in 1992, which rebranded it as Microsoft SQL Server. This relationship soured in the late 1990s, and today SQL Server outsells Sybase by a wide margin. Relational Technology Inc Several companies used the Ingres source code to produce products. 
The most successful was a company named Relational Technology, Inc. (RTI), founded in 1980 by Stonebraker and Wong, and another Berkeley professor, Lawrence A. Rowe. RTI was renamed Ingres Corporation in the late 1980s. The company ported the code to DEC VAX/VMS, which was the commercial operating system for DEC VAX computers. They also developed a collection of front-end tools for creating and manipulating databases (e.g., report writers, forms entry and update, etc.) and application development tools. Over time, much of the source was rewritten to add functionality (for example, multiple-statement transactions, SQL, a B-tree access method, date/time datatypes, etc.) and improve performance (for example, compiled queries, a multithreaded server). The company was purchased by ASK Corporation in November 1990. The founders left the company over the next several months. In 1994, ASK/Ingres was purchased by Computer Associates, who continued to offer Ingres under a variety of brand names (for example, OpenIngres, Ingres II, or Advantage Ingres). In 2004, Computer Associates released Ingres r3 under an open source license. The code includes the DBMS server and utilities and the character-based front-end and application-development tools. In essence, the code has everything except OpenROAD, the Windows 4GL GUI-based development environment. In November 2005, Garnett & Helfrich Capital, in partnership with Computer Associates, created a new company called Ingres Corporation, which provided support and services for Ingres, OpenROAD, and the connectivity products. Recent years In February 2006, Ingres Corporation released Ingres 2006 under the GNU General Public License. Ingres 9.3 was released on October 7, 2009. It was a limited release targeted at new application development on Linux and Windows only. Ingres 10 was released on October 12, 2010, as a full release, supporting upgrade from earlier versions of the product. It was available on 32-bit and 64-bit Linux, and 32-bit Microsoft Windows. Open-source community initiatives with Ingres included: Community Bundles, alliances with other open-source providers and projects, such as Alfresco, JasperSoft, Hibernate, Apache Tomcat, and Eclipse, enabling Ingres to provide its platform and technology with other open-source technologies. Established by Ingres and Carleton University, a series of Open Source Boot Camps were held in 2008 to work with other open-source communities and projects to introduce university and college students and staff to the concepts and realities of open source. Other involvement includes Global Ingres University Alliances, the Ingres Engineering Summit, the Ingres Janitors Project and several memberships in open-source initiatives. Ingres Icebreaker is an appliance that combines the Ingres Database with the Linux operating system, enabling people to simultaneously deploy and manage a database and operating system. Ingres CAFÉ (Consolidated Application Foundation for Eclipse), created by a team of developers at Carleton University, is an integrated environment that helps software architects accelerate and simplify Java application development. Ingres Geospatial was a community-based project to create industry-standards-compliant geospatial storage features in the Ingres DBMS; in other words, for storing map data and providing powerful analysis functions within the DBMS. In November 2010 Garnett & Helfrich Capital acquired the last 20% of equity in Ingres Corp that it did not already own. 
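For readers who want to experiment with one of the releases described above, the following is a minimal, hedged sketch of querying Ingres from Java over JDBC. The JDBC URL, the symbolic port "II7", the credentials and the employee table are illustrative assumptions rather than details taken from this article, and an Ingres JDBC driver must be on the classpath; the comment inside the loop also shows only the rough historical shape of the QUEL equivalent discussed earlier.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Hypothetical connection details; verify the URL form and port against your installation.
public class IngresQueryExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:ingres://localhost:II7/demodb"; // assumed host, port and database name
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement()) {

            // Modern Ingres (version 6 and later) speaks SQL:
            ResultSet rs = stmt.executeQuery(
                "SELECT name, salary FROM employee WHERE dept = 'toy'");

            // The historical QUEL equivalent looked roughly like:
            //   range of e is employee
            //   retrieve (e.name, e.salary) where e.dept = "toy"
            while (rs.next()) {
                System.out.println(rs.getString("name") + " " + rs.getDouble("salary"));
            }
        }
    }
}

The comment illustrates the contrast noted in the lead: QUEL phrases the query through an explicit tuple variable, while the SQL form is what Ingres has accepted since version 6.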
Actian On September 22, 2011, Ingres Corporation became Actian Corporation. It focused on Action Apps, which use Ingres or Vectorwise RDBMS systems. Postgres The Postgres project was started in the mid 1980s to address limitations of existing database-management implementations of the relational model. Primary among these was their inability to let the user define new domains (or "types") which are combinations of simpler domains (see relational model for an explanation of the term "domain"). The project explored other ideas including the incorporation of write-once media (e.g., optical disks), the use of massive storage (e.g., never delete data), inferencing, and object-oriented data models. The implementation also experimented with new interfaces between the database and application programs (e.g., "portals", which are sometimes referred to as "fat cursors"). The resulting project, named "Postgres", aimed at introducing the minimum number of features needed to add complete types support. These included the ability to define types, but also the ability to fully describe relationships – which up until this time had been widely used but maintained entirely by the user. In Postgres, the database "understood" relationships, and could retrieve information in related tables in a natural way using rules. In the 1990s, Stonebraker started a new company to commercialize Postgres, under the name Illustra. The company and technology were later purchased by Informix Corporation. Actian X - The new Ingres Ingres 11 was released on 18 April 2017 and is now known as Actian X Hybrid Database. See also Applications-By-Forms Comparison of relational database management systems List of relational database management systems References External links The Design and Implementation of INGRES Retrospection on a Database System Ingres FAQ (from 1997) Actian Corp. University INGRES,Version 8.9 Client-server database management systems Cross-platform software Free database management systems Relational database management systems
39398632
https://en.wikipedia.org/wiki/SWAD%20%28software%29
SWAD (software)
SWAD (originally "Sistema Web de Apoyo a la Docencia", Spanish for "Web System for Education Support"; it currently stands for "Shared Workspace At a Distance") is a web application to manage the courses, students and teachers of one or more educational institutions. History The first version of SWAD appeared in September 1999. In 2005 its use was extended to the University of Granada. The application was released as free software in January 2010 under the Affero General Public License, version 3. In 2010 the system was used by 1,100 professors and 35,000 students. In 2011 it was used by 2,000 professors and 60,000 students in 2,800 courses. SWAD is currently available in 9 languages and used at the University of Granada and on the portal OpenSWAD.org. In November 2019 the SWAD installation at the University of Granada housed 488 degrees (undergraduate and graduate) with 7,496 courses, 126,060 students and 3,514 teachers. The objectives addressed in the development of SWAD can be specified according to its potential beneficiaries. For teachers and other administrators of the platform, the objectives were to carry out over the internet the management tasks related to a course and its students, and to improve mentoring and general communication with them. For students, the objectives have been improved access to course materials and information, the possibility of self-assessment at a distance, and improved communication, both student-to-student and student-to-teacher. A fundamental criterion in the development of the platform has been to facilitate its use, emphasizing both ease of learning and use for students and teachers (usability) and the time savings and quality improvements in various teaching-related tasks. For the institution or company, SWAD has the additional advantage of being fast and efficient, consuming very few computing resources, and so being suitable for low-cost installations. Compared to other tools used for the same purpose, thanks to its implementation in the C language, SWAD does not require a large hardware and software infrastructure, even in large universities; a single server is sufficient. Technical specifications Server The SWAD core is a CGI program written in C that comprises almost all the functionality of the platform. The core is supplemented with some external programs, such as the photo-processing module and the chat module. The server runs on a Linux system with Apache and a MySQL or MariaDB database. Clients Being a web application, the client can be any modern web browser. To use the chat, a Java runtime environment is required. Besides the web client, there is an M-learning application for Android devices called SWADroid, which implements some of the most-used features of the web version. Hierarchy and roles Hierarchical organization SWAD can accommodate one or more educational organizations in a single platform. It uses the following hierarchical structure:
Countries
Institutions (universities, academies, organizations, companies, ...)
Centres (faculties, buildings, ...)
Degrees (degrees, master's, ...)
Courses
Group types (lectures, practicals, seminars, ...)
Groups (A, B, morning, afternoon, ...)
The central element of this hierarchy is the course, in which several teachers and students can be enrolled. Roles Each user has a role of student, non-editing teacher or teacher in each of the courses in which he/she is enrolled. 
In addition, some users may be administrators of one or more degrees, centres or institutions, as well as global administrators of the platform. See also Virtual Learning Environment Learning Management System ATutor Chamilo Claroline ILIAS Moodle Sakai Project References External links SWAD website C (programming language) software Spanish educational websites Free educational software Free learning management systems Free learning support software Free software programmed in C Learning management systems School-administration software Virtual learning environments Software using the GNU AGPL license
37663509
https://en.wikipedia.org/wiki/ArduPilot
ArduPilot
ArduPilot is an open-source unmanned vehicle autopilot software suite, capable of controlling autonomous:
Multirotor drones
Fixed-wing and VTOL aircraft
Helicopters
Ground rovers
Boats
Submarines
Antenna trackers
ArduPilot was originally developed by hobbyists to control model aircraft and rovers and has evolved into a full-featured and reliable autopilot used by industry, research organisations and amateurs. Software and Hardware Software suite The ArduPilot software suite consists of navigation software (typically referred to as firmware when it is compiled to binary form for microcontroller hardware targets) running on the vehicle (either Copter, Plane, Rover, AntennaTracker, or Sub), along with ground station control software including Mission Planner, APM Planner, QGroundControl, MavProxy, Tower and others. ArduPilot source code is stored and managed on GitHub, with almost 400 total contributors. The software suite is automatically built nightly, with continuous integration and unit testing provided by Travis CI, and a build and compilation environment including the GNU cross-platform compiler and Waf. Pre-compiled binaries running on various hardware platforms are available for user download from ArduPilot's sub-websites. Supported hardware Copter, Plane, Rover, AntennaTracker or Sub software runs on a wide variety of embedded hardware (including full-blown Linux computers), typically consisting of one or more microcontrollers or microprocessors connected to peripheral sensors used for navigation. These sensors include, at a minimum, MEMS gyroscopes and accelerometers, necessary for multirotor flight and plane stabilization. Sensors usually also include one or more compasses, a barometric altimeter and a GPS receiver, along with optional additional sensors such as optical flow sensors, airspeed indicators, laser or sonar altimeters or rangefinders, and monocular, stereoscopic or RGB-D cameras. Sensors may be on the same electronic board, or external. Ground station software, used for programming or monitoring vehicle operation, is available for Windows, Linux, macOS, iOS, and Android. ArduPilot runs on a wide variety of hardware platforms, including the following, listed in alphabetical order:
Intel Aero (Linux or STM32 base)
APM 2.X (Atmel ATmega microcontroller, Arduino base), designed by Jordi Munoz in 2010. APM, short for ArduPilotMega, is supported only by older versions of ArduPilot.
BeagleBone Blue and PXF Mini (BeagleBone Black cape)
The Cube, formerly called Pixhawk 2 (ARM Cortex microcontroller base), designed by ProfiCNC in 2015
Edge, a drone controller with a video streaming system, designed by Emlid
Erle-Brain (Linux base), designed by Erle Robotics
Intel Minnowboard (Linux base)
Navio2 and Navio+ (Raspberry Pi Linux based), designed by Emlid
Parrot Bebop and Parrot C.H.U.C.K., designed by Parrot, S.A.
Pixhawk (ARM Cortex microcontroller base), originally designed by Lorenz Meier and ETH Zurich, improved and launched in 2013 by PX4, 3DRobotics, and the ArduPilot development team
PixRacer (ARM Cortex microcontroller base), designed by AUAV
Qualcomm Snapdragon (Linux base)
Virtual Robotics VRBrain (ARM Cortex microcontroller base)
Xilinx Zynq SoC (Linux base, ARM and FPGA processor)
In addition to the above base navigation platforms, ArduPilot supports integration and communication with on-vehicle companion (auxiliary) computers for advanced navigation requiring more powerful processing. 
These include NVidia TX1 and TX2 ( NVidia Jetson architecture), Intel Edison and Intel Joule, HardKernel Odroid, and Raspberry PI computers. Features Common to all vehicles ArduPilot provides a large set of features, including the following common for all vehicles: Fully autonomous, semi-autonomous and fully manual flight modes, programmable missions with 3D waypoints, optional geofencing. Stabilization options to negate the need for a third party co-pilot. Simulation with a variety of simulators, including ArduPilot SITL. Large number of navigation sensors supported, including several models of RTK GPSs, traditional L1 GPSs, barometers, magnetometers, laser and sonar rangefinders, optical flow, ADS-B transponder, infrared, airspeed, sensors, and computer vision/motion capture devices. Sensor communication via SPI, I²C, CAN Bus, Serial communication, SMBus. Failsafes for loss of radio contact, GPS and breaching a predefined boundary, minimum battery power level. Support for navigation in GPS denied environments, with vision-based positioning, optical flow, SLAM, Ultra Wide Band positioning. Support for actuators such as parachutes and magnetic grippers. Support for brushless and brushed motors. Photographic and video gimbal support and integration. Integration and communication with powerful secondary, or "companion", computers Rich documentation through ArduPilot wiki. Support and discussion through ArduPilot discourse forum, Gitter chat channels, GitHub, Facebook. Copter-specific Flight modes: Stabilize, Alt Hold, Loiter, RTL (Return-to-Launch), Auto, Acro, AutoTune, Brake, Circle, Drift, Guided, (and Guided_NoGPS), Land, PosHold, Sport, Throw, Follow Me, Simple, Super Simple, Avoid_ADSB. Auto-tuning Wide variety of frame types supported, including tricopters, quadcopters, hexacopters, flat and co-axial octocopters, and custom motor configurations Support for traditional electric and gas helicopters, mono copters, tandem helicopters. Plane-specific Fly By Wire modes, loiter, auto, acrobatic modes. Take-off options: Hand launch, bungee, catapult, vertical transition (for VTOL planes). Landing options: Adjustable glide slope, helical, reverse thrust, net, vertical transition (for VTOL planes). Auto-tuning, simulation with JSBSIM, X-Plane and RealFlight simulators. Support for a large variety of VTOL architectures: Quadplanes, Tilt wings, tilt rotors, tail sitters, ornithopters. Optimization of 3 or 4 channel airplanes. Rover-specific Manual, Learning, Auto, Steering, Hold and Guided operational modes. Support for wheeled and track architectures. Submarine-specific Depth hold: Using pressure-based depth sensors, submarines can maintain depth within a few centimeters. Light Control: Control of subsea lighting through the controller. ArduPilot is fully documented within its wiki, totaling the equivalent of about 700 printed pages and divided in six top sections: The Copter, Plane, Rover, and Submarine vehicle related subsections are aimed at users. A developer subsection for advanced uses is aimed primarily at software and hardware engineers, and a Common section regrouping information common to all vehicle types is shared within the first four sections. ArduPilot use cases Hobbyists and amateurs Drone racing. Building and operation of radio control models for recreation. Professional Aerial photogrammetry Aerial photography and filmmaking. 
Remote sensing Search and rescue Robotic applications Academic research Package delivery History Early years, 2007-2012 The ArduPilot project's earliest roots date back to late 2007 when Jordi Munoz, who later co-founded 3DRobotics with Chris Anderson, wrote an Arduino program (which he called "ArduCopter") to stabilize an RC helicopter. In 2009 Munoz and Anderson released ArduPilot 1.0 (flight controller software) along with a hardware board it could run on. That same year Munoz, who had built a traditional RC helicopter UAV able to fly autonomously, won the first Sparkfun AVC competition. The project grew further thanks to many members of the DIY Drones community, including Chris Anderson, who championed the project and had founded the forum-based community earlier in 2007. The first ArduPilot version supported only fixed-wing aircraft and was based on a thermopile sensor, which relies on determining the location of the horizon relative to the aircraft by measuring the difference in temperature between the sky and the ground. Later, the system was improved to replace thermopiles with an Inertial Measurement Unit (IMU) using a combination of accelerometers, gyroscopes and magnetometers. Vehicle support was later expanded to other vehicle types, which led to the Copter, Plane, Rover, and Submarine subprojects. The years 2011 and 2012 witnessed explosive growth in the autopilot's functionality and codebase size, thanks in large part to new participation from Andrew "Tridge" Tridgell and HAL author Pat Hickey. Tridge's contributions included automatic testing and simulation capabilities for ArduPilot, along with PyMavlink and MAVProxy. Hickey was instrumental in bringing the AP_HAL library to the code base: HAL (Hardware Abstraction Layer) greatly simplified and modularized the code base by introducing and confining low-level hardware implementation specifics to a separate hardware library. The year 2012 also saw Randy Mackay taking the role of lead maintainer of Copter, after a request from former maintainer Jason Short, and Tridge taking over the role of lead Plane maintainer from Doug Weibel, who went on to earn a Ph.D. in Aerospace Engineering. Both Randy and Tridge are current lead maintainers to date. The free software approach to ArduPilot code development is similar to that of the Linux operating system and the GNU Project, and of the PX4/Pixhawk and Paparazzi projects, where low cost and availability enabled hobbyists to build autonomous small remotely piloted aircraft, such as micro air vehicles and miniature UAVs. The drone industry, similarly, progressively leveraged ArduPilot code to build professional, high-end autonomous vehicles. Maturity, 2013-2016 While early versions of ArduPilot used the APM flight controller, an AVR CPU running the Arduino open-source programming language (which explains the "Ardu" part of the project name), later years witnessed a significant re-write of the code base in C++ with many supporting utilities written in Python. Between 2013 and 2014 ArduPilot evolved to run on a range of hardware platforms and operating systems beyond the original Arduino Atmel-based microcontroller architecture, first with the commercial introduction of the Pixhawk hardware flight controller, a collaborative effort between PX4, 3DRobotics and the ArduPilot development team, and later on Parrot's Bebop2 and Linux-based flight controllers like the Raspberry Pi based NAVIO2 and the BeagleBone based ErleBrain. 
A key event within this time period was the first flight of a plane under Linux in mid-2014. Late 2014 saw the formation of DroneCode, created to bring together the leading open source UAV software projects, and most notably to solidify the relationship and collaboration of the ArduPilot and PX4 projects. ArduPilot's involvement with DroneCode ended in September 2016. 2015 was also a banner year for 3DRobotics, a heavy sponsor of ArduPilot development, with its introduction of the Solo quadcopter, an off-the-shelf quadcopter running ArduPilot. Solo's commercial success, however, was not to be. Fall of 2015 again saw a key event in the history of the autopilot, with a swarm of 50 planes running ArduPilot simultaneously flown by the Advanced Robotic Systems Engineering Laboratory (ARSENL) team at the Naval Postgraduate School. Within this time period, ArduPilot's code base was significantly refactored, to the point where it ceased to bear any similarity to its early Arduino years. Current, 2018- ArduPilot code evolution continues with support for integrating and communicating with powerful companion computers for autonomous navigation, plane support for additional VTOL architectures, integration with ROS, support for gliders, and tighter integration for submarines. The project evolves under the umbrella of ArduPilot.org, a project within the Software in the Public Interest (spi-inc.org) not-for-profit organisation. ArduPilot is sponsored in part by a growing list of corporate partners. UAV Outback Challenge In 2012, the CanberraUAV team successfully took first place in the prestigious UAV Outback Challenge. The CanberraUAV team included ArduPlane developers, and the airplane flown was controlled by an APM 2 autopilot. In 2014 the CanberraUAV team and ArduPilot took first place again, by successfully delivering a bottle to the "lost" hiker. In 2016 ArduPilot placed first in the technically more challenging competition, ahead of strong competition from international teams. Community ArduPilot is jointly managed by a group of volunteers located around the world, using the Internet (Discourse-based forum, Gitter channel) to communicate, plan, develop and support it. The development team meets weekly in a chat meeting, open to all, using Mumble. In addition, hundreds of users contribute ideas, code and documentation to the project. ArduPilot is licensed under the GPL Version 3 and is free to download and use. Customizability The flexibility of ArduPilot makes it very popular in the DIY field, but it has also gained popularity with professional users and companies. 3DRobotics' Solo quadcopter, for instance, uses ArduPilot, as have a large number of professional aerospace companies such as Boeing. The flexibility allows for support of a wide variety of frame types and sizes, different sensors, camera gimbals and RC transmitters depending on the operator's preferences. ArduPilot has been successfully integrated into many airplanes such as the Bixler 2.0. The customizability and ease of installation have allowed the ArduPilot platform to be integrated for a variety of missions. The Mission Planner (Windows) ground control station allows the user to easily configure, program, use, or simulate an ArduPilot board for purposes such as mapping, search and rescue, and surveying areas. 
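The ground stations and companion computers described above communicate with the flight controller over the MAVLink protocol, for which the project maintains the PyMavlink library and the MAVProxy ground station. As a rough illustration only (not taken from the ArduPilot documentation), the following Python sketch assumes a vehicle or SITL simulator is already streaming telemetry to local UDP port 14550, the conventional ground-station port, and simply prints a few attitude messages:

# Minimal sketch: reading telemetry from an ArduPilot vehicle (or SITL
# simulator) over MAVLink using pymavlink, the library behind MAVProxy.
# The connection string (UDP port 14550) is an assumption; adjust it to
# match your telemetry setup.
from pymavlink import mavutil

# Listen for the vehicle's telemetry stream on the local UDP port.
master = mavutil.mavlink_connection("udpin:0.0.0.0:14550")

# Block until the autopilot's heartbeat arrives; this also records the
# vehicle's system and component IDs on the connection object.
master.wait_heartbeat()
print("Heartbeat from system %u component %u"
      % (master.target_system, master.target_component))

# Print a few ATTITUDE messages (roll, pitch and yaw in radians).
for _ in range(5):
    msg = master.recv_match(type="ATTITUDE", blocking=True)
    print("roll=%.3f pitch=%.3f yaw=%.3f" % (msg.roll, msg.pitch, msg.yaw))

Sending commands (arming, mode changes, guided-mode targets) follows the same pattern over the same connection; Mission Planner, QGroundControl and the other ground stations listed earlier are, at bottom, richer front ends over this message stream.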
See also Open-source robotics Other projects for autonomous aircraft control: PX4 autopilot Paparazzi Project Slugs Other projects for ground vehicles & cars driven: OpenPilot Tesla Autopilot References External links ArduPilot.org Unmanned aerial vehicles Unmanned underwater vehicles Free software Robots Unmanned ground vehicles
530016
https://en.wikipedia.org/wiki/Timer
Timer
A timer is a specialized type of clock used for measuring specific time intervals. Timers can be categorized into two main types. A timer that counts upwards from zero for measuring elapsed time is often called a stopwatch, while a device which counts down from a specified time interval is more usually called a timer. A simple example of this type is an hourglass. By working method, timers fall into two main groups: hardware and software timers. Most timers give an indication when the time interval that has been set expires. Time switches, timing mechanisms that activate a switch, are sometimes also called "timers." Hardware Mechanical Mechanical timers use clockwork to measure time. Manual timers are typically set by turning a dial to the time interval desired; turning the dial stores energy in a mainspring to run the mechanism. They function similarly to a mechanical alarm clock; the energy in the mainspring causes a balance wheel to rotate back and forth. Each swing of the wheel releases the gear train to move forward by a small fixed amount, causing the dial to move steadily backward until it reaches zero, when a lever arm strikes a bell. The mechanical kitchen timer was invented in 1926. Simpler, cheaper mechanisms regulate their speed with a fan fly that spins against air resistance; low-precision mechanical egg-timers are sometimes of this type. The simplest and oldest type of mechanical timer is the hourglass, which is also known as "the glass of the hour", in which a fixed amount of sand drains through a narrow opening from one chamber to another to measure a time interval. Electromechanical Short-period bimetallic electromechanical timers use a thermal mechanism, with a metal finger made of strips of two metals with different rates of thermal expansion sandwiched together; steel and bronze are common. An electric current flowing through this finger heats the metals; one side expands less than the other, and an electrical contact on the end of the finger moves away from or towards an electrical switch contact. The most common use of this type is in the "flasher" units that flash turn signals in automobiles, and sometimes in Christmas lights. This is a non-electronic type of multivibrator. An electromechanical cam timer uses a small synchronous AC motor turning a cam against a comb of switch contacts. The AC motor is turned at an accurate rate by the alternating current, which power companies carefully regulate. Gears drive a shaft at the desired rate, and turn the cam. The most common application of this timer now is in washers, driers and dishwashers. This type of timer often has a friction clutch between the gear train and the cam, so that the cam can be turned to reset the time. Electromechanical timers survive in these applications because mechanical switch contacts may still be less expensive than the semiconductor devices needed to control powerful lights, motors and heaters. In the past, these electromechanical timers were often combined with electrical relays to create electro-mechanical controllers. Electromechanical timers reached a high state of development in the 1950s and 1960s because of their extensive use in aerospace and weapons systems. Programmable electromechanical timers controlled launch sequence events in early rockets and ballistic missiles. As digital electronics has progressed and dropped in price, electronic timers have become more advantageous. Electronic Electronic timers are essentially quartz clocks with special electronics, and can achieve higher precision than mechanical timers. 
Electronic timers have digital electronics, but may have an analog or digital display. Integrated circuits have made digital logic so inexpensive that an electronic timer is now less expensive than many mechanical and electromechanical timers. Individual timers are implemented as a simple single-chip computer system, similar to a watch and usually using the same, mass-produced, technology. Many timers are now implemented in software. Modern controllers use a programmable logic controller (PLC) rather than a box full of electromechanical parts. The logic is usually designed as if it were relays, using a special computer language called ladder logic. In PLCs, timers are usually simulated by the software built into the controller; each timer is just an entry in a table maintained by the software. Computer systems usually have at least one hardware timer. These are typically digital counters that either increment or decrement at a fixed frequency, which is often configurable, and which interrupt the processor when reaching zero. An alternative design uses a counter with a sufficiently large word size that it will not reach its overflow limit before the end of life of the system. More sophisticated timers may have comparison logic that compares the timer value against a specific value, set by software, and triggers some action when the timer value matches the preset value. This might be used, for example, to measure events or to generate pulse-width modulated waveforms to control the speed of motors (using a class D digital electronic amplifier). One specialist use of hardware timers in computer systems is as watchdog timers, which are designed to perform a hardware reset of the system if the software fails. Software These timers are not devices nor parts of devices; they exist only as software. They rely on the accuracy of a clock generator usually built into the hardware device that runs the software. Applications Now that mobile phones are ubiquitous, there are also timer apps that mimic the old mechanical timer but add highly sophisticated functions. These apps are easier to use because they are immediately available, with no need to purchase or carry a separate device: today a timer can simply be a software application on a phone, smartwatch, or tablet. Some of these apps are countdown timers, others are stopwatch timers, and so on. Timer apps can be used for tracking working or training time, motivating children to do tasks, replacing an hourglass-style egg timer in board games such as Boggle, or for the traditional purpose of tracking time when cooking and baking. Apps may be superior to hourglasses or to mechanical timers: hourglasses are imprecise, hard to read, and can jam, while mechanical timers lack the customization that applications support, such as alarm volume adjustments for individual needs. Most applications also offer selectable alarm sounds. Some timer applications can help children understand the concept of time, finish tasks on time, and stay motivated. These applications are used especially with children with disabilities such as ADHD or Down syndrome, but anyone else can also benefit from them. 
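To make the hardware/software distinction above concrete, here is a minimal software timer written in Python purely for illustration (the three-second interval and the ring() callback are invented for the example). Like the PLC timers described earlier, it is just bookkeeping in software: it waits out a fixed interval using the host's clock and then gives an indication that the interval has expired.

# Minimal sketch of a one-shot software countdown timer using only the
# Python standard library; accuracy depends entirely on the host clock.
import threading
import time

def make_timer(seconds, on_expire):
    # Return a one-shot timer that calls on_expire() after the interval.
    t = threading.Timer(seconds, on_expire)
    t.daemon = True   # do not keep the program alive just for the timer
    return t

def ring():
    print("Ding! Interval elapsed.")

timer = make_timer(3.0, ring)   # a three-second "egg timer"
timer.start()

# The main program keeps running while the timer counts down.
for remaining in range(3, 0, -1):
    print("%d second(s) remaining..." % remaining)
    time.sleep(1)
time.sleep(0.1)                 # give the callback a moment to fire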
See also Candle-timers Countdown Time lock Drip irrigation Egg timer Intervalometer Staircase timer Time to digital converter Water clock References External links NIST Recommended Practice Guide: Special Publication 960-12 Stopwatch and Timer Calibrations Online Time Timer Website Online Countdown Timer Digital circuits Control devices Home automation Timers
43029598
https://en.wikipedia.org/wiki/Afzal%20Upal
Afzal Upal
Muhammad Afzal Upal is a writer and a cognitive scientist with contributions to the cognitive science of religion, machine learning for planning, and agent-based social simulation. Early life and education He was born in Pakistan with two sisters and three brothers. His family emigrated to Canada because Ahmadiyya, the form of Islam they practiced, was discriminated against in Pakistan. For his PhD research, he worked under the supervision of Professor Renee Elio at the University of Alberta. In December 1999, he successfully defended his thesis on "Learning to Improve the Quality of Plans Produced by Partial-order Planners". Leadership He was chair of the First International Workshop on Cognition and Culture, the 14th Annual Conference of the North American Association for Computational, Social, and Organizational Sciences, and the AAAI-06 Workshop on Cognitive Modeling and Agent-based Social Simulation. Professional career In July 1999, Upal was hired as a tenure-track assistant professor of computer science at Dalhousie University's new Faculty of Computer Science. In 2001, he moved to Information Extraction & Transport (IET) Inc. to work as a senior scientist on various DARPA-sponsored projects to develop Bayesian network based decision-aid systems. In July 2003, he joined the University of Toledo's Electrical Engineering & Computer Science Department as a tenure-track assistant professor to teach computer science. From 2008 to 2017, he worked as a defense scientist at Defence R & D Canada's Toronto Research Centre. From 2017 to 2020, he served as the head of Computing and Information Science at Mercyhurst University. Since 2020, he has been working as the Chair of the Computer Science & Software Engineering Department at the University of Wisconsin-Platteville. Scientific contributions He has contributed to the research areas of cognition and culture and the cognitive science of religion through the development of the context-based model of minimal counterintuitiveness. In a 2005 article in the Journal of Cognition and Culture, he proposed a cognitive science of new religious movements. Upal has also pioneered a knowledge-rich agent-based social simulation technique for simulating the development of complex cultural beliefs. In 2017 his book Moderate Fundamentalists: Ahmadiyya Muslim Jama'at in the lens of cognitive science of religion was published by DeGruyter Press. The book uses the context-based model of minimal counterintuitiveness to explain counterintuitive claims of new religious movement founders such as Mirza Ghulam Ahmad, the founder of Ahmadiyya Islam. He co-edited the Brill Handbook of Islamic Sects & Movements with Professor Carole M. Cusack. References Cognitive scientists Computer scientists Artificial intelligence researchers University of Toledo faculty Living people 1970 births
42047577
https://en.wikipedia.org/wiki/Kareo
Kareo
Kareo is a company based in Irvine, California that provides software as a service for independent medical practices. The company offers cloud computing products and services for electronic health record (EHR) management, medical practice management software, managed billing services and software to help practices engage with their patients. In 2019, the company reported over 55,000 providers using its technology. History Kareo was established by former Scour Inc. founder Dan Rodrigues in 2004. The company is headquartered in Irvine, CA and has offices in Las Vegas, NV and Indianapolis, IN. In July 2013, Kareo made its first acquisition of a full-service provider of medical billing and associated solutions company, ECCO Health, LLC. Over the years, the company has received funding from a variety of venture capital firms including CNET founder Halsey Minor's Minor Ventures and OpenView Venture Partners. On March 14, 2014, Kareo EHR achieved Meaningful Use 2014 Edition Stage 2 certification by the Drummond Group, which is an Office of the National Coordinator for Health Information Technology Authorized Certification Body (ONC-ACB). Kareo acquired DoctorBase, a mobile-based patient engagement and practice marketing platform on March 10, 2015. On April 2, 2015, the company announced that it joined Commonwell Health Alliance, a not-for-profit trade association of health IT companies working together to create universal access to health care data. On November 2, 2021, Kareo announced it was merging with healthcare marketing company PatientPop to form Tebra. Awards In 2020, Kareo was ranked on the Investopedia "7 Best Medical Billing Companies" and rated as "Best for Independent Physician." In 2022, Kareo was rated #65 on the "100 Best Large Companies to Work For in Los Angeles" by BuiltIn LA. References External links Electronic health record software companies Companies based in Irvine, California 2004 establishments in California Software companies established in 2004 Software companies based in California Software companies of the United States
4650723
https://en.wikipedia.org/wiki/LaFarr%20Stuart
LaFarr Stuart
LaFarr Stuart (born July 6, 1934 in Clarkston, Utah), was an early computer music pioneer, computer engineer and member of the Homebrew Computer Club. Career Computer music In 1961, Stuart programmed Iowa State University's Cyclone computer, a derivative of the ILLIAC, to play simple, recognizable tunes through an amplified speaker that had been attached to the system originally for administrative and diagnostic purposes. A recording of an interview with Stuart and his computer music was broadcast nationally on the National Broadcasting Company's NBC Radio Network program Monitor on February 10, 1962. In a subsequent interview with the Harold Journal, Navel Hunsaker, head of the Utah State University mathematics department, said of Stuart, "He always was a whiz with calculators." From the late 1970s, Stuart mentored John Carlsen, who later contributed to the rapid growth of personal computer (PC) sound-card maker Media Vision and to SigmaTel. Control Data In the late 1960s and early 1970s, Stuart worked for Control Data Corporation (CDC), where Seymour Cray designed the CDC 6600, the first commercial supercomputer. Forth During the 1970s, Stuart created a version of the programming language Forth, which became known as LaFORTH. It is notable for its implementation without an input buffer. Zytrex In the 1980s, Stuart worked for Zytrex, which manufactured complementary metal–oxide–semiconductor (CMOS) Programmable Array Logic (PAL) programmable logic devices (PLDs). Real-time clocks Stuart conceived installing battery-operated real-time clocks into computers, for which he received royalty payments until nearly 2000. Stuart jokingly admits contributing to the Year 2000 problem. Preserving computer history Stuart owns the first Digital Equipment Corporation (DEC) PDP-11 to enter California and often visits the Computer History Museum in Mountain View, California. See also E-mu Systems Robert Moog References External links 1934 births American computer programmers Control Data Corporation Living people People from Cache County, Utah Computer real-time clocks
1038529
https://en.wikipedia.org/wiki/Popular%20Electronics
Popular Electronics
Popular Electronics was an American magazine published by John August Media, LLC, and hosted at TechnicaCuriosa.com. The magazine was started by Ziff-Davis Publishing Company in October 1954 for electronics hobbyists and experimenters. It soon became the "World's Largest-Selling Electronics Magazine". In April 1957 Ziff-Davis reported an average net paid circulation of 240,151 copies. Popular Electronics was published until October 1982 when, in November 1982, Ziff-Davis launched a successor magazine, Computers & Electronics. During its last year of publication by Ziff-Davis, Popular Electronics reported an average monthly circulation of 409,344 copies. The title was sold to Gernsback Publications, and their Hands-On Electronics magazine was renamed to Popular Electronics in February 1989, and published until December 1999. The Popular Electronics trademark was then acquired by John August Media, who revived the magazine, the digital edition of which is hosted at TechnicaCuriosa.com, along with sister titles, Mechanix Illustrated and Popular Astronomy. A cover story on Popular Electronics could launch a new product or company. The most famous issue, January 1975, had the Altair 8800 computer on the cover and ignited the home computer revolution. Paul Allen showed that issue to Bill Gates. They wrote a BASIC interpreter for the Altair computer and started Microsoft. How it started Radio & Television News was a magazine for professionals and the editors wanted to create a magazine for hobbyists. Ziff-Davis had started Popular Aviation in 1927 and Popular Photography in 1934 but found that Gernsback Publications had the trademark on Popular Electronics. It was used in Radio-Craft from 1943 until 1948. Ziff-Davis bought the trademark and started Popular Electronics with the October 1954 issue. Many of the editors and authors worked for both Ziff-Davis magazines. Initially Oliver Read was the editor of both Radio & Television News and Popular Electronics. Read was promoted to Publisher in June 1956. Oliver Perry Ferrell took over as editor of Popular Electronics and William A. Stocklin became editor of Radio & Television News. In Radio & TV News John T. Frye wrote a column on a fictional repair shop where the proprietor, Mac, would interact with other technicians and customers. The reader would learn repair techniques for servicing radios and TVs. In Popular Electronics his column was about two high school boys, Carl and Jerry. Each month the boys would have an adventure that would teach the reader about electronics. By 1954 building audio and radio kits was a growing pastime. Heathkit and many others offered kits that included all of the parts with detailed instructions. The premier cover shows the assembly of a Heathkit A-7B audio amplifier. Popular Electronics would offer projects that were built from scratch; that is, the individual parts were purchased at a local electronics store or by mail order. The early issues often showed these as father and son projects. Most of the early projects used vacuum tubes, as transistors (which had just become available to hobbyists) were expensive: the small-signal Raytheon CK722 transistor was US$3.50 in the December 1954 issue, while a typical small-signal vacuum tube (the 12AX7) was $0.61. Lou Garner wrote the feature story for the first issue, a battery-powered tube radio that could be used on a bicycle. Later he was given a column called Transistor Topics (June 1956). 
Transistors soon cost less than a dollar and transistor projects became common in every issue of Popular Electronics. The column was renamed to Solid State in 1965 and ran under his byline until December 1978. Typical 1962 issue The July 1962 issue had 112 pages; the editor was Oliver P. Ferrell and the monthly circulation was 400,000. The magazine had a full page of electronics news that was called "POP'tronics News Scope." In January 2000 a successor magazine was renamed Poptronics. In the 1960s, Fawcett Publications had a competing magazine, Electronics Illustrated. The cover showed a 15-inch (38 cm) black and white TV kit by Conar that cost $135. The feature construction story was a "Radiation Fallout Monitor" for "keeping track of the radiation level in your neighborhood." (The Cuban Missile Crisis happened that October.) Other construction projects included "The Fish Finder", an underwater temperature probe; the "Transistorized Tremolo" for an electric guitar; and a one-tube VHF receiver to listen to aircraft. There were regular columns for Citizens Band (CB), amateur radio and shortwave listening (SWL). These would show a reader with his radio equipment each month. (Almost all of the readers were male.) Lou Garner's Transistor Topics covers the new transistorized FM stereo receivers and several readers' circuits. John T. Frye's fictional characters, Carl and Jerry, use a pH meter to locate the source of pollution in a river. Authors and kits As editor, Oliver Ferrell built a stable of authors who contributed interesting construction projects. These projects established the style of Popular Electronics for years to come. Two of the most prolific authors were Daniel Meyer and Don Lancaster. Daniel Meyer graduated from Southwest Texas State (1957) and became an engineer at Southwest Research Institute in San Antonio, Texas. He soon started writing hobbyist articles. The first was in Electronics World (May 1960), and later he had a two-part cover feature for Radio-Electronics (October, November 1962). The March 1963 issue of Popular Electronics featured his ultrasonic listening device on the cover. Don Lancaster graduated from Lafayette College (1961) and Arizona State University (1966). A 1960s fad was to have colored lights synchronized with music. This psychedelic lighting was made economical by the development of the silicon-controlled rectifier (SCR). Don's first published article was "Solid-State 3-Channel Color Organ" in the April 1963 issue of Electronics World. He was paid $150 for the story. The projects in Popular Electronics changed from vacuum tube to solid state in the early 1960s. Tube circuits used a metal chassis with sockets, while transistor circuits worked best on a printed circuit board. They would often contain components that were not available at the local electronics parts store. Dan Meyer saw the business opportunity in providing circuit boards and parts for the Popular Electronics projects. In January 1964 he left Southwest Research Institute to start an electronics kit company. He continued to write articles and ran the mail-order kit business from his home in San Antonio, Texas. By 1965 he was providing the kits for other authors such as Lou Garner. In 1967 he sold a kit for Don Lancaster's "IC-67 Metal Locator". In early 1967 Meyer moved his growing business from his home to a new building on a 3-acre (12,000 m2) site in San Antonio. The Daniel E. Meyer Company (DEMCO) became Southwest Technical Products Corporation (SWTPC) that fall. 
In 1967, Popular Electronics had 6 articles by Dan Meyer and 4 by Don Lancaster. Seven of that year's cover stories featured kits sold by SWTPC. In the years 1966 to 1971 SWTPC's authors wrote 64 articles and had 25 cover stories in Popular Electronics. (Don Lancaster alone had 23 articles and 10 were cover stories.) The San Antonio Express-News did a feature story on Southwest Technical Products in November 1972. "Meyer built his mail-order business from scratch to more than $1 million in sales in six years." The company was shipping 100 kits a day from 1800 square feet (1,700 m2) of buildings. Others noticed SWTPC success. Forrest Mims, a founder of MITS (Altair 8800), tells about his "Light-Emitting Diodes" cover story (Popular Electronics, November 1970) in an interview with Creative Computing. In March, I sold my first article to Popular Electronics magazine, a feature about light-emitting diodes. At one of our midnight meetings I suggested that we emulate Southwest Technical Products and develop a project article for Popular Electronics. The article would give us free advertising for the kit version of the project, and the magazine would even pay us for the privilege of printing it! The November 1970 issue also has an article by Forrest M. Mims and Henry E. Roberts titled "Assemble an LED Communicator - The Opticon." A kit of parts could be ordered from MITS in Albuquerque, New Mexico. Popular Electronics paid $400 for the article. Merger with Electronics World Radio & Television News became Electronics World in 1959 and in January 1972 was merged into Popular Electronics. The process started in the summer of 1971 with a new editor, Milton S. Snitzer, replacing the longtime editor, Oliver P. Ferrell. The publishers decided to focus on topics with prosperous advertisers, such as CB Radio and audio equipment. Construction projects were no longer the feature articles. They were replaced by new product reviews. The change in editorial direction upset many authors. Dan Meyer wrote a letter in his SWTPC catalog referring to the magazine, Popular Electronics with Electronics World, as "PEEW". He urged his customers to switch to Radio-Electronics. Don Lancaster, Daniel Meyer, Forrest Mims, Ed Roberts, John Simonton and other authors switched to Radio-Electronics. Even Solid State columnist Lou Garner moved to Radio-Electronics for a year. Les Solomon, the Popular Electronics Technical Editor, wrote 6 articles in the rival Radio-Electronics using the pseudonym "B. R. Rogen". In 1972 and 1973 some of the best projects appeared in Radio-Electronics as the new Popular Electronics digested the merger. The upcoming personal computer benefited from this competition between Radio-Electronics and Popular Electronics. In September 1973 Radio-Electronics published Don Lancaster's TV Typewriter, a low cost video display. In July 1974 Radio-Electronics published the Mark-8 Personal Minicomputer based on the Intel 8008 processor. The publishers noted the success of Radio-Electronics and Arthur P. Salsberg took over as Editor in 1974. Salsberg and Technical Editor, Leslie Solomon, brought back the featured construction projects. Popular Electronics needed a computer project so they selected Ed Roberts' Altair 8800 computer based on the improved Intel 8080 processor. The January 1975 issue of Popular Electronics had the Altair computer on the cover and this launched the home computer revolution. 
(However, Walter Isaacson's biography of Steve Jobs incorrectly identified the magazine that ran the article as Popular Mechanics.) The magazine was digest size for the first 20 years. The cover logo was a sans-serif typeface in a rectangular box. The covers featured a large image of the feature story, usually a construction project. In September 1970 the cover logo was changed to an underlined serif typeface. The magazine's content, typography and layout were also updated. In January 1972 the cover logo added a second line, "including Electronics World", and the volume number was restarted at 1. This second line was used for two years. The large photo of the feature project was gone, replaced by a textual list of articles. In August 1974 the magazine switched to a larger, letter-size format. This was done to allow larger illustrations such as schematics, to switch printing to offset presses, and to respond to advertisers' desire for larger ad pages. The longtime tag line, "World's Largest Selling Electronics Magazine", was moved from the Table of Contents page to the cover. Personal computers There is debate about which machine was the first personal computer: the Altair 8800 (1975), the Mark-8 (1974), or even earlier machines such as the Kenbak-1 (1971). The computer in the January 1975 issue of Popular Electronics captured the attention of the 400,000 or so readers. Before then, home computers were lucky to sell a hundred units. The Altair sold thousands in the first year. By the end of 1975 there were a dozen companies producing computer kits and peripherals using the Altair circuit bus, later renamed the S-100 bus and set as an IEEE standard. The February 1975 issue featured an "All Solid-State TV Camera" by three Stanford University students: Terry Walker, Harry Garland and Roger Melen. While the Cyclops Camera, as it was called, was designed to use an oscilloscope for the image display, the article mentions that it could also be connected to the Altair computer. It soon was: the authors got one of the first Altair computers and designed an interface for the camera. They also designed a full-color video display for the Altair, "The TV Dazzler", that appeared on the cover of the February 1976 issue. This was the start of Cromemco, a computer company that grew to over 500 employees by 1983. The internet did not exist in 1975 but time-sharing computers did. With a computer terminal and a modem a user could dial into a large multi-user computer. Lee Felsenstein wanted to make low-cost versions of modems and terminals available to the hobbyist. The March 1976 issue had the "Pennywhistle Modem" and the July 1976 issue had the "SOL Intelligent Terminal". The SOL, built by Processor Technology, was really an Altair-compatible computer and became one of the most successful personal computers at that time. Popular Electronics had many other computer projects such as the Altair 680, the Speechlab voice recognition board and the COSMAC ELF. They did not have the field to themselves. A dedicated computer magazine, Byte, was started in September 1975. It was soon followed by other new magazines. By the end of 1977, fully assembled computers such as the Apple II, the Radio Shack TRS-80, and the Commodore PET were on the market. Building computer kits was soon replaced by plugging in assembled boards. In 1982, Popular Electronics helped to introduce personal computer programming with its Programmer’s Notebook column written by Jim Keogh. Each column focused on programming a game. 
The column continued onto Computer & Electronics Magazine. Computers & Electronics Popular Electronics continued with a full range of construction projects using the newest technologies such as microprocessors and other programmable devices. In November 1982 the magazine became Computers & Electronics. There were more equipment reviews and fewer construction projects. One of the last major projects was a bidirectional analog-to-digital converter for the Apple II computer published in July and August 1983. Art Salsberg left at the end of 1983 and Seth R. Alpert became editor. The magazine dropped all project articles and just reviewed hardware and software. The circulation was almost 600,000 in January 1985 when Forrest Mims wrote about the tenth anniversary of the Altair 8800 computer. In October 1984 Art Salsberg started a competing magazine, Modern Electronics. Editor Alexander W. Burawa and contributors Forrest Mims, Len Feldman, and Glenn Hauser moved to Modern Electronics. Here is how Art Salsberg described the new magazine. Directed to enthusiasts like yourselves, who savor learning more about the latest developments in electronics and computer hardware, Modern Electronics shows you what's new in the world of electronics/computers, how this equipment works, how to use them, and construction plans for useful electronic devices. Many of you probably know of me from my decade-long stewardship of Popular Electronics magazine, which changed its name and editorial philosophy last year to distance itself from active electronics enthusiasts who move fluidly across electronics and computer product areas. In a sense, then, Modern Electronics is the successor to the original concept of Popular Electronics … The last issue of Computers & Electronics was April 1985. The magazine still had 600,000 readers but the intense competition from other computer magazines resulted in flat advertising revenues. Ziff-Davis asset sale In 1953, William B. Ziff, Jr. (age 23) was thrust into the publishing business when his father died of a heart attack. In 1982, Ziff was diagnosed with prostate cancer so he asked his three sons (ages 14 to 20) if they wanted to run a publishing empire. They did not. Ziff wanted to simplify the estate by selling some of the magazines. In November 1984, CBS bought the consumer group for $362.5 million and Rupert Murdoch bought the business group for $350 million. This left Ziff-Davis with the computer group and the database publisher (Information Access Company.) These groups were not profitable. Ziff took time off to successfully battle the prostate cancer. (He lived until 2006.) When he returned he focused on magazines like PC Magazine and MacUser to rebuild Ziff-Davis. In 1994 he and his sons sold Ziff-Davis for $1.4 billion. Gernsback Publications The title Popular Electronics was sold to Gernsback Publications and their Hands-On Electronics magazine was renamed to Popular Electronics in February 1989. This version was published until it was merged with Electronics Now to become Poptronics in January 2000. In late 2002 Gernsback Publications went out of business and the January 2003 Poptronics was the last issue. See also WGU-20 - an unusual radio station first explained by Popular Electronics Nuts and Volts - an electronic hobbyists' magazine still in print Elektor - another electronic hobbyists' magazine still in print References External links Popular Electronics Magazine History Online scans of selected Popular Electronics issues Index of all of John T. 
Frye's Carl and Jerry stories STARTUP: Albuquerque and the Personal Computer Revolution America Radio History archives of Popular Electronics issues Popular Electronics website continuing the magazine title Archived Popular Electronics on the Internet Archive Science and technology magazines published in the United States Magazines established in 1954 Magazines disestablished in 1985 Monthly magazines published in the United States Defunct computer magazines published in the United States Hobby electronics magazines
20399244
https://en.wikipedia.org/wiki/List%20of%20software%20that%20supports%20Office%20Open%20XML
List of software that supports Office Open XML
This is an overview of software support for the Office Open XML format, a document file format for saving and exchanging editable office documents. The list here is not exhaustive. ECMA-376 1st edition implementations The ECMA-376 1st edition Office Open XML standard is supported by a number of applications from various vendors; listed alphabetically they include: Text documents (.docx) Word processors AbiWord includes an input filter for Office Open XML text documents beginning with version 2.6.0. Export of Office Open XML text documents is supported beginning with version 2.6.5. Adobe Buzzword beta, the online word processor by Adobe Systems, imports and exports Microsoft Word (DOC), Office Open XML (DOCX) and Word 2003 XML files. Compatibility is limited due to beta status of development. Apache OpenOffice reads some .docx. It does not write .docx. Apple Inc.'s iWork '08 suite has read-only support for Office Open XML word processing file formats in Pages. Apple Inc.'s iPhone has read-only support for Office Open XML attachments to email. Apple Inc.'s TextEdit, the built-in word processing program of Mac OS X, has very basic read and write support for Office Open XML text files starting with Mac OS X v10.5. Atlantis Word Processor includes input and export filters for Office Open XML text documents (DOCX) beginning with version 1.6.3. Collabora Office enterprise-ready edition of LibreOffice has built-in support for opening and writing Office Open XML files. It is available for Windows, macOS, Linux, Android, iOS, iPadOS and Chomebooks. Collabora Online a web-based enterprise-ready edition of LibreOffice word processor and suite has built-in support for opening and writing Office Open XML files. It is available online via html. Corel WordPerfect Office X5 can both read and write Office Open XML. DataViz' Documents To Go for Android, Palm OS, Windows Mobile and Symbian OS (UIQ, S80) supports Office Open XML documents. Evermore Software EIOffice Word Processor has import only Office Open XML support for text documents. It is available for Windows and Linux. Google Docs, a web-based word processor and spreadsheet application supports importing Office Open XML text documents. As of June 2014, DOCX files can be edited "natively," without conversion. IBM Lotus Symphony includes an input filter for Office Open XML text documents beginning with version 1.3. Jarte 3.0+ for Windows has import only Office Open XML support for text documents. JustSystems Ichitaro 2008 (Japanese) has built-in support for Office Open XML files. It is available for Windows and Linux. LibreOffice has built-in support for opening and writing Office Open XML files. It is available for Windows, macOS, Linux, BSDs, etc. MadCap Flare is a Help authoring tool that can generate multiple outputs including Office Open XML text documents, PDF, Clean XHTML output and other formats. Microsoft Office 2007, Microsoft Office 2010, and Microsoft Office 2013 for Windows use the Office Open XML format as the default, but is unable to read files from other office tools that use Office Open XML as their format, e.g. SoftMaker Office files. Some older versions of Microsoft Word and Microsoft Office (2000, XP and 2003) are able to read and write docx files after installation of the free compatibility pack provided by Microsoft, but some items such as equations are converted into images that cannot be edited. The compatibility pack is available for Windows 2000 Service Pack 4 and newer operating systems. 
It does not require Microsoft Office but does require Microsoft Windows. It can be used as a standalone converter with products that read Office's older binary formats, such as OpenOffice.org. Microsoft Office 2008 for Mac and Microsoft Office for Mac 2011 support the Office Open XML format. For older versions of Office on the Mac, a converter is available. Microsoft Office Mobile 6.1 supports Office Open XML on Mobile devices. For Microsoft Word, see Microsoft Office above. Microsoft's version of Wordpad included with Windows 7 supports opening and saving in the docx format. The Mac OS X-based NeoOffice office suite supports opening, editing, and saving of most Office Open XML documents since version 2.1. Nisus Writer Pro has built-in, but rather limited, support for opening OOXML documents. ONLYOFFICE, an online office suite, can read and write Office Open XML format. OpenOffice.org had built-in support for opening Office Open XML text documents beginning with OpenOffice.org version 3.0 (October 2008). QuickOffice, a mobile office suite for Symbian and Palm OS, supports wordprocessing in Office Open XML format. Schreibchen 1.0.1 for Mac OS X can open and write Office Open XML text documents. It is a very simple word processor for disabled persons, children and other peoples that can not use (or like) other word processors or text editors. Schreiben 4.0.1, a simple and fast word processor for Mac OS X supports Office Open XML text documents. SoftMaker Office 2016 and 2012, an office suite for Windows, Linux, and Google Android supports .docx, .xlsx, and .pptx in its word processor, spreadsheet and presentation-graphics software respectively. SoftMaker Office 2018 uses Office Open XML as its default file format. The online Thinkfree Office supports Office Open XML word processing files. WPS Office Writer 2019 ( Windows, Linux, Android, iOS and Mac) supports Office Open XML. Online word processor Zoho Writer supports exporting to the Office Open XML WordprocessingML format. Viewers, filters and converters Apple Inc.'s Quick Look, the built-in quick preview feature of Mac OS X, supports Office Open XML files starting with Mac OS X v10.5. Collabora Office can also run headless online or locally as a filter and converter for Office Open XML files. It will do this under Windows, macOS, Linux. Collabora Online a web-based word processor and suite has built-in support for Office Open XML files. It can be embedded into html applications for viewers, filters and converters. DataViz MacLinkPlus Deluxe 16 supports Office Open XML file formats. Google Search supports direct HTML view of Office Open XML files. Found files can be viewed directly in a converted HTML view. Microsoft Office Open XML Converter for Mac OS X can convert Office Open XML files to the former binary file formats used in older versions of Microsoft Office. NativeWinds Docx2Rtf supports Office Open XML text documents. SoftMaker TextMaker Viewer 2009 is a free application that supports viewing and printing of documents in many word processing formats including Office Open XML text documents. Translation support OmegaT – OmegaT is a free translation memory application written in Java. Swordfish Translation Editor, a cross-platform CAT tool based on XLIFF 1.2 open standard published by OASIS that provides support for translation of Office Open XML files. Bibliographic RefWorks – Web-based commercial citation manager, supports uploading DOCX files for citation formatting. 
Programmatic support Apache POI supports Office Open XML as of the 3.5 release. Aspose.Words - Aspose supports Office Open XML formats for word processing documents for developers through Aspose.Words API. Text Control TX Text Control, a family of reusable wordprocessing components for developers support reading and writing of Office Open XML wordprocessing files. Zend Framework 1.7 provides a PHP search engine that allows searching information from within Office Open XML files. Other products Altova DiffDog supports detailed differencing for Office Open XML and ZIP archive file pairs. Altova StyleVision adds Word 2007 (Office Open XML) wordprocessing capabilities to its graphical stylesheet design tool. Altova XMLSpy, an XML editor for modeling, editing, transforming, and debugging XML technologies has capabilities for accessing, editing, transforming, and querying Office Open XML file formats. IBM DB2 Content Manager V8.4 clients support Office Open XML file formats. IBM Lotus Notes 8.0.2+ supports Office Open XML text documents. IBM Lotus Quickr V8.0 includes support for Office Open XML documents. IBM WebSphere Portal supports Office Open XML text documents. IBM WebSphere Business Modeler supports Office Open XML text documents. Mindjet MindManager supports the Office Open XML format. Nuance OmniPage Professional 16, an OCR and Document Conversion Software, was the first desktop OCR application to provide native support for the Office Open XML standard. Oxygen XML Editor provides ready to use validation, editing and processing support for Office Open XML files. These capabilities allow developers to use data from office documents together with validation and transformations (using XSLT or XQuery) to other file formats. Validation is done using the latest ECMA-376 XML Schemas. RIM BlackBerry Enterprise Server software version 4.1 SP4 (4.1.4) supports Office Open XML file formats. Serif PagePlus X3 – Desktop publishing (page layout) program for Windows includes an Office Open XML text import filter. Planned and beta software Haansoft's Hangul Word Processor will support reading and writing of Office Open XML documents in its next version for Windows, which will be published in the end of 2009. SoftMaker's TextMaker (part of SoftMaker Office) will support Office Open XML text documents in upcoming versions. Unified Office Format (UOF) Open Source Translator is being developed by Beihang University and partners to convert from Office Open XML to UOF and vice versa. Spreadsheet documents (.xlsx) Spreadsheet software 280 North, Inc.'s 280 Slides is a web-based presentation app which can import and export the Office Open XML presentation format, though does not implement all of the features of the specification. Apache OpenOffice reads some .xlsx. It does not write .xlsx. Apple Inc.'s iWork '08 suite has read-only support for Office Open XML spreadsheet file formats in Numbers. Apple Inc.'s iPhone has read-only support for Office Open XML attachments to email. Collabora Office enterprise-ready edition of LibreOffice has built-in support for opening and writing Office Open XML files. It is available for Windows, macOS, Linux, Android, iOS, iPadOS and Chomebooks. Collabora Online a web-based enterprise-ready edition of LibreOffice word processor and suite has built-in support for opening and writing Office Open XML files. It is available online via html. Corel WordPerfect Office X4 includes import-only support for Office Open XML. 
DataViz' Documents To Go for Android, Palm OS, Windows Mobile and Symbian OS (UIQ, S80) supports Office Open XML documents. Datawatch supports Office Open XML spreadsheets in its report mining tool Monarch v9.0. Gnumeric has limited SpreadsheetML support. Google Sheets, a web-based spreadsheet application can import and export Office Open XML spreadsheet documents. As of June 2014, users of the Google Sheets app (for Android) or the Chrome browser can edit .xlsx files directly. IBM Lotus Symphony includes an input filter for Office Open XML spreadsheet documents beginning with version 1.3. JustSystems JUST Suite 2009 Sanshiro (Japanese) for Windows supports Office Open XML spreadsheet documents. LibreOffice has built-in support for reading and writing Office Open XML files. It is available for Windows, macOS, Linux, etc. Microsoft Office 2007, Microsoft Office 2010, and Microsoft Office 2013 for Windows use the Office Open XML format as the default. Older versions of Microsoft Office (2000, XP and 2003) require a free compatibility pack provided by Microsoft. It is available for Windows 2000 Service Pack 4 and newer operating systems. The compatibility pack does not require Microsoft Office, but does require Microsoft Windows. It can be used as a standalone converter with products that read Office's older binary formats, such as OpenOffice.org. Microsoft Office 2008 for Mac and Microsoft Office for Mac 2011 support the Office Open XML format. For older versions of Office on the Mac, a converter is available. Microsoft Office Mobile 6.1 supports Office Open XML on Mobile devices. The Mac OS X-based NeoOffice office suite supports opening, editing, and saving of most Office Open XML documents since version 2.1. ONLYOFFICE, an online office suite, can read and write Office Open XML format. OpenOffice.org read .docx beginning with OpenOffice.org version 3.0 (October 2008). QuickOffice, a mobile office suite for Symbian and Palm OS, supports spreadsheets in Office Open XML format. The online Thinkfree Office will support Office Open XML spreadsheets and presentation files in the future. WPS Office Spreadsheets 2019 (Windows, Linux, Android, iOS and Mac) supports Office Open XML. Viewers, filters and converters Apple Inc.'s Quick Look, the built-in quick preview feature of Mac OS X, supports Office Open XML files starting with Mac OS X v10.5. Collabora Office can also run headless online or locally as a filter and converter for Office Open XML files. It will do this under Windows, macOS, Linux. Collabora Online a web-based word processor and suite has built-in support for Office Open XML files. It can be embedded into html applications for viewers, filters and converters. DataViz MacLinkPlus Deluxe 16 supports Office Open XML file formats. Google Search supports direct HTML view of Office Open XML files. Found files can be viewed directly in a converted HTML view. Microsoft Office Open XML Converter for Mac OS X can convert Office Open XML files to the former binary file formats used in older versions of Microsoft Office. OxygenOffice includes xmlfilter which is the code that OpenOffice.org 3 will use to process Office Open XML files, and xmlfilter is completely different from OdfConverter. This filter, however, is only for importing Office Open XML files not for exporting them. Translation support OmegaT – OmegaT is a free translation memory application written in Java. OmegaT+ – Free computer assisted translation tools platform Cross-platform (Java). 
Programmatic support Apache POI supports Office Open XML as of the 3.5 release. Zend Framework 1.7 provides a PHP search engine that allows searching information from within Office Open XML files. Other products Altova XMLSpy, an XML editor for modeling, editing, transforming, and debugging XML technologies provides capabilities for accessing, editing, transforming, and querying Office Open XML file formats. IBM DB2 Content Manager V8.4 clients support Office Open XML file formats. IBM Lotus Notes 8.0.2+ supports Office Open XML spreadsheet documents. IBM Lotus Quickr V8.0 includes support for Office Open XML documents. IBM WebSphere Portal supports Office Open XML spreadsheet documents. Mindjet MindManager supports the Office Open XML format. Nuance OmniPage Professional 16, an OCR and Document Conversion Software, was the first desktop OCR application to provide native support for the Office Open XML standard. Oxygen XML Editor provides ready to use validation, editing and processing support for Office Open XML files. These capabilities allow developers to use data from office documents together with validation and transformations (using XSLT or XQuery) to other file formats. Validation is done using the latest ECMA-376 XML Schemas. RIM BlackBerry Enterprise Server software version 4.1 SP4 (4.1.4) supports Office Open XML file formats. Presentation documents (.pptx) Presentation software Apache OpenOffice reads some .pptx. It does not write .pptx. Apple Inc.'s iWork '08 suite has read-only support for Office Open XML presentation file formats in Keynote. Apple Inc.'s iPhone has read-only support for Office Open XML attachments to email. Collabora Office enterprise-ready edition of LibreOffice has built-in support for opening and writing Office Open XML files. It is available for Windows, macOS, Linux, Android, iOS, iPadOS and Chomebooks. Collabora Online a web-based enterprise-ready edition of LibreOffice word processor and suite has built-in support for opening and writing Office Open XML files. It is available online via html. Corel WordPerfect Office X4 includes import-only support for Office Open XML. DataViz' Documents To Go for Android, Palm OS, Windows Mobile and Symbian OS (UIQ, S80) supports Office Open XML documents. Google Slides, a web-based slideware application can import and export Office Open XML presentation documents. As of June 2014, users of the Google Slides app (for Android) or the Chrome browser can edit .pptx files directly. IBM Lotus Symphony includes an input filter for Office Open XML presentation documents beginning with version 1.3. JustSystems JUST Suite 2009 Agree (Japanese) for Windows supports Office Open XML presentation documents. LibreOffice has built-in support for reading and writing Office Open XML files. It is available for Windows, macOS, Linux, etc. Microsoft Office 2007, Microsoft Office 2010, and Microsoft Office 2013 for Windows use the Office Open XML format as the default. Older versions of Microsoft Office (2000, XP and 2003) require a free compatibility pack provided by Microsoft. It is available for Windows 2000 Service Pack 4 and newer operating systems. The compatibility pack does not require Microsoft Office, but does require Microsoft Windows. It can be used as a standalone converter with products that read Office's older binary formats, such as OpenOffice.org. Microsoft Office 2008 for Mac and Microsoft Office for Mac 2011 support the Office Open XML format. For older versions of Office on the Mac, a converter is available. 
Microsoft Office Mobile 6.1 supports Office Open XML on Mobile devices. The Mac OS X-based NeoOffice office suite supports opening, editing, and saving of most Office Open XML documents since version 2.1. OnlyOffice, an online office suite, can read and write Office Open XML format. OpenOffice.org read .pptx beginning with OpenOffice.org version 3.0 (October 2008). The online Thinkfree Office will support Office Open XML spreadsheets and presentation files in the future. WPS Office, presentation 2019 ( Windows, Linux, Android, iOS and Mac) supports Office Open XML. Viewers, filters and converters Apple Inc.'s Quick Look, the built-in quick preview feature of Mac OS X, supports Office Open XML files starting with Mac OS X v10.5. Collabora Office can also run headless online or locally as a filter and converter for Office Open XML files. It will do this under Windows, macOS, Linux. Collabora Online a web-based word processor and suite has built-in support for Office Open XML files. It can be embedded into html applications for viewers, filters and converters. DataViz MacLinkPlus Deluxe 16 supports Office Open XML file formats. Google Search supports direct HTML view of Office Open XML files. Found files can be viewed directly in a converted HTML view. Microsoft Office Open XML Converter for Mac OS X can convert Office Open XML files to the former binary file formats used in older versions of Microsoft Office. OxygenOffice includes xmlfilter which is the code that OpenOffice.org 3 will use to process Office Open XML files, and xmlfilter is completely different from OdfConverter. This filter, however, is only for importing Office Open XML files not for exporting them. OmegaT – OmegaT is a free translation memory application written in Java. Other products Altova DiffDog supports detailed differencing for Office Open XML and ZIP archive file pairs. Altova XMLSpy, an XML editor for modeling, editing, transforming, and debugging XML technologies provides capabilities for accessing, editing, transforming, and querying Office Open XML file formats. IBM DB2 Content Manager V8.4 clients support Office Open XML file formats. IBM Lotus Notes 8.0.2+ supports Office Open XML presentation documents. IBM Lotus Quickr V8.0 includes support for Office Open XML documents. IBM WebSphere Portal supports Office Open XML presentation documents. Mindjet MindManager supports the Office Open XML format. Nuance OmniPage Professional 16, an OCR and Document Conversion Software, was the first desktop OCR application to provide native support for the Office Open XML standard. Oxygen XML Editor provides ready to use validation, editing and processing support for Office Open XML files. These capabilities allow developers to use data from office documents together with validation and transformations (using XSLT or XQuery) to other file formats. Validation is done using the latest ECMA-376 XML Schemas. RIM BlackBerry Enterprise Server software version 4.1 SP4 (4.1.4) supports Office Open XML file formats. Planned and beta software Apache POI will support Office Open XML in the forthcoming 3.5 release, currently still in Beta SoftMaker's Presentations (part of SoftMaker Office) will support Office Open XML presentation documents in upcoming versions. Unified Office Format (UOF) Open Source Translator is being developed by Beihang University and partners to convert from Office Open XML to UOF and vice versa. Search tools Google supports searching in content of DOCX, XLSX, and PPTX files and also searching for these filetypes. 
Found files can be viewed directly in a converted HTML view. Apple Spotlight supports indexed searching of Office Open XML files. Copernic Desktop Search for Windows supports indexed searching of Office Open XML files. ISO/IEC 29500:2008 / ECMA-376 2nd edition implementations LibreOffice, Collabora Office and Collabora Online LibreOffice, Collabora Office and Collabora Online have built-in support for importing and exporting Office Open XML files in ISO/IEC 29500 standard. The Collabora suites are enterprise-ready editions of LibreOffice Microsoft Office 2016 Microsoft Office 2016 continues to use the strict ISO version. Microsoft Office 2010 In 2008, Microsoft stated that Microsoft Office 2010 would be the first version of Microsoft Office to support ISO/IEC 29500. The official release of this version of the product reads and writes files conformant to ISO/IEC 29500 Transitional, and reads files conformant to ISO/IEC 29500 Strict. Microsoft Office 2007 On July 28, 2008 Murray Sargent, a software development engineer in the Microsoft Office team confirmed that Word 2007 will have a service pack release that enables it to read and write ISO standard OOXML files. However, as of Service Pack 2 (released 2009) Microsoft is not claiming Microsoft Office 2007 compatibility with the ISO OOXML standard. Open XML Format SDK Microsoft Open XML Format SDK contains a set of managed code libraries to create and manipulate Office Open XML files programmatically. Version 1.0 was released on June 10, 2008 and incorporates the changes made to the Office Open XML specification made during the current ISO/IEC standardization process. Version 2 of the Open XML SDK supports validating Office Open XML documents against the Office Open XML schema, as well as searching in Office Open XML documents. On March 13, 2008 Doug Mahugh, a senior product manager at Microsoft specializing in Office client interoperability and the Open XML file formats, confirmed that version 1.0 of the Open XML Format SDK "will definitely be 100% compliant with the final ISO/IEC 29500 spec, including the changes accepted at the BRM". In a ComputerWorld interview from 2008, Doug Mahugh said that "Microsoft would continue to update the SDK to make sure that applications built with it remained compliant with an Open XML standard as changes were made in the future". By June 2014, the Open XML SDK was at version 2.5 and had been released as open source under the Apache License 2.0 on GitHub. See also Comparison of Office Open XML software Network effect Open format Office suite OpenDocument Format References External links Microsoft's office open XML Community website Office Open XML Lists of software
869876
https://en.wikipedia.org/wiki/Geologic%20modelling
Geologic modelling
Geologic modelling, geological modelling or geomodelling is the applied science of creating computerized representations of portions of the Earth's crust based on geophysical and geological observations made on and below the Earth surface. A geomodel is the numerical equivalent of a three-dimensional geological map complemented by a description of physical quantities in the domain of interest. Geomodelling is related to the concept of Shared Earth Model; which is a multidisciplinary, interoperable and updatable knowledge base about the subsurface. Geomodelling is commonly used for managing natural resources, identifying natural hazards, and quantifying geological processes, with main applications to oil and gas fields, groundwater aquifers and ore deposits. For example, in the oil and gas industry, realistic geologic models are required as input to reservoir simulator programs, which predict the behavior of the rocks under various hydrocarbon recovery scenarios. A reservoir can only be developed and produced once; therefore, making a mistake by selecting a site with poor conditions for development is tragic and wasteful. Using geological models and reservoir simulation allows reservoir engineers to identify which recovery options offer the safest and most economic, efficient, and effective development plan for a particular reservoir. Geologic modelling is a relatively recent subdiscipline of geology which integrates structural geology, sedimentology, stratigraphy, paleoclimatology, and diagenesis; In 2-dimensions (2D), a geologic formation or unit is represented by a polygon, which can be bounded by faults, unconformities or by its lateral extent, or crop. In geological models a geological unit is bounded by 3-dimensional (3D) triangulated or gridded surfaces. The equivalent to the mapped polygon is the fully enclosed geological unit, using a triangulated mesh. For the purpose of property or fluid modelling these volumes can be separated further into an array of cells, often referred to as voxels (volumetric elements). These 3D grids are the equivalent to 2D grids used to express properties of single surfaces. Geomodelling generally involves the following steps: Preliminary analysis of geological context of the domain of study. Interpretation of available data and observations as point sets or polygonal lines (e.g. "fault sticks" corresponding to faults on a vertical seismic section). Construction of a structural model describing the main rock boundaries (horizons, unconformities, intrusions, faults) Definition of a three-dimensional mesh honoring the structural model to support volumetric representation of heterogeneity (see Geostatistics) and solving the Partial Differential Equations which govern physical processes in the subsurface (e.g. seismic wave propagation, fluid transport in porous media). Geologic modelling components Structural framework Incorporating the spatial positions of the major formation boundaries, including the effects of faulting, folding, and erosion (unconformities). The major stratigraphic divisions are further subdivided into layers of cells with differing geometries with relation to the bounding surfaces (parallel to top, parallel to base, proportional). 
Maximum cell dimensions are dictated by the minimum sizes of the features to be resolved (everyday example: On a digital map of a city, the location of a city park might be adequately resolved by one big green pixel, but to define the locations of the basketball court, the baseball field, and the pool, much smaller pixels – higher resolution – need to be used). Rock type Each cell in the model is assigned a rock type. In a coastal clastic environment, these might be beach sand, high water energy marine upper shoreface sand, intermediate water energy marine lower shoreface sand, and deeper low energy marine silt and shale. The distribution of these rock types within the model is controlled by several methods, including map boundary polygons, rock type probability maps, or statistically emplaced based on sufficiently closely spaced well data. Reservoir quality Reservoir quality parameters almost always include porosity and permeability, but may include measures of clay content, cementation factors, and other factors that affect the storage and deliverability of fluids contained in the pores of those rocks. Geostatistical techniques are most often used to populate the cells with porosity and permeability values that are appropriate for the rock type of each cell. Fluid saturation Most rock is completely saturated with groundwater. Sometimes, under the right conditions, some of the pore space in the rock is occupied by other liquids or gases. In the energy industry, oil and natural gas are the fluids most commonly being modelled. The preferred methods for calculating hydrocarbon saturations in a geologic model incorporate an estimate of pore throat size, the densities of the fluids, and the height of the cell above the water contact, since these factors exert the strongest influence on capillary action, which ultimately controls fluid saturations. Geostatistics An important part of geologic modelling is related to geostatistics. In order to represent the observed data, often not on regular grids, we have to use certain interpolation techniques. The most widely used technique is kriging which uses the spatial correlation among data and intends to construct the interpolation via semi-variograms. To reproduce more realistic spatial variability and help assess spatial uncertainty between data, geostatistical simulation based on variograms, training images, or parametric geological objects is often used. Mineral Deposits Geologists involved in mining and mineral exploration use geologic modelling to determine the geometry and placement of mineral deposits in the subsurface of the earth. Geologic models help define the volume and concentration of minerals, to which economic constraints are applied to determine the economic value of the mineralization. Mineral deposits that are deemed to be economic may be developed into a mine. Technology Geomodelling and CAD share a lot of common technologies. Software is usually implemented using object-oriented programming technologies in C++, Java or C# on one or multiple computer platforms. The graphical user interface generally consists of one or several 3D and 2D graphics windows to visualize spatial data, interpretations and modelling output. Such visualization is generally achieved by exploiting graphics hardware. User interaction is mostly performed through mouse and keyboard, although 3D pointing devices and immersive environments may be used in some specific cases. 
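To make the interpolation step described under Geostatistics above more concrete, the following minimal Python sketch computes an empirical semivariogram from scattered porosity measurements; a kriging workflow would then fit a model variogram to these estimates before interpolating onto the voxel grid. The NumPy dependency, the synthetic well locations and porosity values, and the lag spacing are illustrative assumptions, not taken from any particular geomodelling package.

```python
import numpy as np

def empirical_semivariogram(coords, values, lags, tol):
    """gamma(h): average of 0.5*(z_i - z_j)^2 over point pairs separated by roughly h."""
    # Pairwise separation distances and half squared differences of the property.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    half_sq_diff = 0.5 * (values[:, None] - values[None, :]) ** 2
    gamma = []
    for h in lags:
        mask = (np.abs(d - h) <= tol) & (d > 0.0)   # pairs within the lag bin
        gamma.append(half_sq_diff[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

# Synthetic "well data": 200 locations (metres) with a smoothly varying porosity field.
rng = np.random.default_rng(seed=0)
locations = rng.uniform(0.0, 1000.0, size=(200, 2))
porosity = (0.15 + 0.05 * np.sin(locations[:, 0] / 150.0)
            + rng.normal(0.0, 0.01, size=200))

lags = np.arange(50.0, 500.0, 50.0)
print(empirical_semivariogram(locations, porosity, lags, tol=25.0))
```

In practice, the dedicated geomodelling packages listed later in this article provide production implementations of variogram fitting, kriging and geostatistical simulation; the sketch above only illustrates the underlying calculation.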
GIS (Geographic Information System) is also a widely used tool to manipulate geological data. Geometric objects are represented with parametric curves and surfaces or discrete models such as polygonal meshes. Research in Geomodelling Problems pertaining to Geomodelling cover: Defining an appropriate Ontology to describe geological objects at various scales of interest, Integrating diverse types of observations into 3D geomodels: geological mapping data, borehole data and interpretations, seismic images and interpretations, potential field data, well test data, etc., Better accounting for geological processes during model building, Characterizing uncertainty about the geomodels to help assess risk. Therefore, Geomodelling has a close connection to Geostatistics and Inverse problem theory, Applying of the recent developed Multiple Point Geostatistical Simulations (MPS) for integrating different data sources, Automated geometry optimization and topology conservation History In the 70's, geomodelling mainly consisted of automatic 2D cartographic techniques such as contouring, implemented as FORTRAN routines communicating directly with plotting hardware. The advent of workstations with 3D graphics capabilities during the 80's gave birth to a new generation of geomodelling software with graphical user interface which became mature during the 90's. Since its inception, geomodelling has been mainly motivated and supported by oil and gas industry. Geologic modelling software Software developers have built several packages for geologic modelling purposes. Such software can display, edit, digitise and automatically calculate the parameters required by engineers, geologists and surveyors. Current software is mainly developed and commercialized by oil and gas or mining industry software vendors: Geologic modelling and visualisation IRAP RMS Suite GeoticMine Geomodeller3D DecisionSpace Geosciences Suite Dassault Systèmes GEOVIA provides Surpac, GEMS and Minex for geologic modeling GSI3D Mira Geoscience provides GOCAD Mining Suite, a 3D geological modelling software that compiles, models, and analyzes for valid interpretation that honours all data. Seequent provides Leapfrog 3D geological modeling & Geosoft GM-SYS and VOXI 3D modelling software. Maptek provides Vulcan, 3D modular software visualisation for geological modelling and mine planning Micromine is a comprehensive and easy to use exploration and mine design solution, which offers integrated tools for modelling, estimation, design, optimisation and scheduling. Promine Petrel Rockworks SGS Genesis Move SKUA-GOCAD Datamine Software provides Studio EM and Studio RM for geological modelling BGS Groundhog Desktop free-to-use software developed by the GeoAnalytics and Modelling directorate of British Geological Survey. Groundwater modelling FEFLOW FEHM MODFLOW GMS Visual MODFLOW ZOOMQ3D Moreover, industry Consortia or companies are specifically working at improving standardization and interoperability of earth science databases and geomodelling software: Standardization: GeoSciML by the Commission for the Management and Application of Geoscience Information, of the International Union of Geological Sciences. Standardization: RESQML(tm) by Energistics Interoperability: OpenSpirit, by TIBCO(r) See also Numerical modeling (geology) Petroleum engineering Seismic to simulation References Bolduc, A.M., Riverin, M-N., Lefebvre, R., Fallara, F. et Paradis, S.J., 2006. Eskers: À la recherche de l'or bleu. 
La Science au Québec : http://www.sciencepresse.qc.ca/archives/quebec/capque0606f.html Faure, Stéphane, Godey, Stéphanie, Fallara, Francine and Trépanier, Sylvain. (2011). Seismic Architecture of the Archean North American Mantle and Its Relationship to Diamondiferous Kimberlite Fields. Economic Geology, March–April 2011, v. 106, p. 223–240. http://econgeol.geoscienceworld.org/content/106/2/223.abstract Fallara, Francine, Legault, Marc and Rabeau, Olivier (2006). 3-D Integrated Geological Modeling in the Abitibi Subprovince (Québec, Canada): Techniques and Applications. Exploration and Mining Geology, Vol. 15, Nos. 1–2, pp. 27–41. http://web.cim.org/geosoc/docs/pdf/EMG15_3_Fallara_etal.pdf Berg, R.C., Mathers, S.J., Kessler, H., and Keefer, D. A., 2011. Synopsis of Current Three-dimensional Geological Mapping and Modeling in Geological Survey Organization, Champaign, Illinois: Illinois State Geological Survey, Circular 578. https://web.archive.org/web/20111009122101/http://library.isgs.uiuc.edu/Pubs/pdfs/circulars/c578.pdf (GSA Denver Annual Meeting. Poster) Kevin B. Sprague & Eric A. de Kemp. (2005) Interpretive Tools for 3-D Structural Geological Modelling Part II: Surface Design from Sparse Spatial Data http://portal.acm.org/citation.cfm?id=1046957.1046969&coll=&dl=ACM de Kemp, E.A. (2007). 3-D geological modelling supporting mineral exploration. In: Goodfellow, W.D., ed., Mineral Deposits of Canada: A Synthesis of Major Deposit Types, District Metallogeny, the Evolution of Geological Provinces, and Exploration Methods: Geological Association of Canada, Mineral Deposits Division, Special Publication 5, p. 1051–1061. https://web.archive.org/web/20081217170553/http://gsc.nrcan.gc.ca/mindep/method/3d/pdf/dekemp_3dgis.pdf Footnotes External links Geological Modelling at the British Geological Survey Economic geology Petroleum geology Geology software
35685828
https://en.wikipedia.org/wiki/Foswiki
Foswiki
Foswiki is an enterprise wiki, typically used to run a collaboration platform, knowledge base or document management system. Users can create wiki applications using the Topic Markup Language (TML), and developers can extend its functionality with plugins. The Foswiki project was launched in October 2008 when a dispute about the future direction of TWiki could not be settled, resulting in the decision of nearly all key TWiki contributors to fork. Since then the codebases have diverged significantly. However, Foswiki continues to maintain compatibility with content written for TWiki. Foswiki stands for "free and open source" wiki to emphasize its commitment to open source software. The project is governed by the Foswiki Association e.V, a volunteer run, non-profit foundation. The Foswiki website is seen by some as one of the more popular Perl-related websites based upon Alexa rankings of all websites in the world. Features Foswiki features an open architecture programmed and implemented in the Perl and JavaScript languages and runs on standard web servers such as Apache, Nginx and lighttpd. With almost 70 contributors providing over 56,000 commits since its inception, the Foswiki team not only develops the code but also offers on-line support, including on IRC. Core features include a TinyMCE WYSIWYG editor, built-in search engine, default text database, and skinnable user interface, as well as RSS/Atom feeds, e-mail support, and database interfaces to support scalable database solutions such as MongoDB and MySQL. Additional security-related features include an auditable version control system, user authentication, an access control system, cross-site request forgery protection, and improved spam-prevention extensions. Extensions Users have contributed over 300 extensions. Most of these extensions have been developed by or for corporate users, and are maintained by developers and users, as documented in the individual extension histories. Extensions have been developed to link into databases, create charts, tags, sort tables, write spreadsheets, create image gallery and slideshows, make drawings, write blogs, plot graphs, interface to many different authentication schemes, including single sign-on, track Extreme Programming projects, and others. Application platform Foswiki is a structured wiki that acts as an application platform for web-based applications. Specifically it provides database-like manipulation of fields stored on pages, and offers a SQL-like query language to support the embedding reports in wiki pages. Wiki applications are often called situational applications because they are created ad-hoc by users for very specific needs. For example, users have built Foswiki applications that include call center status boards, to-do lists, inventory systems, employee handbooks, bug trackers, blog applications, discussion forums, status reports with rollups and more. User interface The user interface is customizable through use of templates, themes and CSS. It includes support for internationalization, with support for multiple character sets, UTF-8 URLs etc. The English user interface has been translated by users into Bulgarian, Chinese, Czech, Danish, Dutch, French, German, Greek, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, Turkish and Klingon. 
Deployment Foswiki is expected to be used primarily at the workplace as a corporate wiki to coordinate team activities, track projects, implement workflows and as an Intranet Wiki, for example in academia. Foswiki (among other components) was used in several research programs including Data Integration Platform for Systems Biology Collaborations, an interactive data integration platform supporting collaborative research projects, based on Foswiki, Solr/Lucene, and custom helper applications. Implementation Foswiki is implemented in Perl and JavaScript (using jQuery), though it can be used without JavaScript being enabled in the browser. By default, wiki pages are stored on the server in plain text files. Everything, including meta-data such as access control settings, are version controlled using RCS. RCS is optional since an all-Perl version control system is provided. Other server-side databases, such as MongoDB, are supported through use of extensions. Informal user reports suggest that Foswiki scales reasonably well even though it uses plain text files and no relational database to store page data, especially where load balancing and caching are used to improve performance. Support Foswiki is an entirely community-driven project, and has no controlling commercial interest behind it. User support is provided by the community, via the mechanisms of IRC and the main website. History Foswiki started life as a fork of the TWiki project. Since the fork it has been worked on continuously by a relatively large development team. Notable developments since the fork include adoption of the jQuery JavaScript user interface framework, interfacing to the MongoDB NoSQL database, interfacing to the Solr search system, page caching and a modified editing interface. See also Comparison of wiki software References Free software programmed in Perl Free software programmed in JavaScript Free wiki software Enterprise wikis Free content management systems Groupware Cross-platform software
4412616
https://en.wikipedia.org/wiki/Das%20U-Boot
Das U-Boot
Das U-Boot (subtitled "the Universal Boot Loader" and often shortened to U-Boot; see History for more about the name) is an open-source, primary boot loader used in embedded devices to package the instructions to boot the device's operating system kernel. It is available for a number of computer architectures, including 68k, ARM, Blackfin, MicroBlaze, MIPS, Nios, SuperH, PPC, RISC-V and x86. Functionality U-Boot is both a first-stage and second-stage bootloader. It is loaded by the system's ROM (e.g. onchip ROM of the ARM CPU) from a supported boot device, such as an SD card, SATA drive, NOR flash (e.g. using SPI or I²C), or NAND flash. If there are size constraints, U-Boot may be split into stages: the platform would load a small SPL (Secondary Program Loader), which is a stripped-down version of U-Boot, and the SPL would do initial hardware configuration and load the larger, fully featured version of U-Boot. Regardless of whether the SPL is used, U-Boot performs both first-stage (e.g., configuring memory controllers and SDRAM) and second-stage booting (performing multiple steps to load a modern operating system from a variety of devices that must be configured, presenting a menu for users to interact with and control the boot process, etc.). U-Boot implements a subset of the UEFI specification as defined in the Embedded Base Boot Requirements (EBBR) specification. UEFI binaries like GRUB or the Linux kernel can be booted via the boot manager or from the command-line interface. U-Boot runs a command-line interface on a console or a serial port. Using the CLI, users can load and boot a kernel, possibly changing parameters from the default. There are also commands to read device information, read and write flash memory, download files (kernels, boot images, etc.) from the serial port or network, manipulate device trees, and work with environment variables (which can be written to persistent storage, and are used to control U-Boot behavior such as the default boot command and timeout before auto-booting, as well as hardware data such as the Ethernet MAC address). Unlike PC bootloaders which obscure or automatically choose the memory locations of the kernel and other boot data, U-Boot requires its boot commands to explicitly specify the physical memory addresses as destinations for copying data (kernel, ramdisk, device tree, etc.) and for jumping to the kernel and as arguments for the kernel. Because U-Boot's commands are fairly low-level, it takes several steps to boot a kernel, but this also makes U-Boot more flexible than other bootloaders, since the same commands can be used for more general tasks. It's even possible to upgrade U-Boot using U-Boot, simply by reading the new bootloader from somewhere (local storage, or from the serial port or network) into memory, and writing that data to persistent storage where the bootloader belongs. U-Boot has support for USB, so it can use a USB keyboard to operate the console (in addition to input from the serial port), and it can access and boot from USB Mass Storage devices such as SD card readers. Data storage and boot sources U-Boot boots an operating system by reading the kernel and any other required data (e.g. device tree or ramdisk image) into memory, and then executing the kernel with the appropriate arguments. U-Boot's commands are actually generalized commands which can be used to read or write any arbitrary data. 
Using these commands, data can be read from or written to any storage system that U-Boot supports, which include: (Note: These are boot sources from which U-Boot is capable of loading data (e.g. a kernel or ramdisk image) into memory. U-Boot itself must be booted by the platform, and that must be done from a device that the platform's ROM or BIOS is capable of booting from, which naturally depends on the platform.) Onboard or attached storage SD card SATA SCSI I²C (e.g. EEPROMs or NOR flash) SPI (e.g. NOR or NAND flash) ONFI (raw NAND flash) eMMC (managed NOR or NAND flash) NVMe USB mass storage device Serial port (file transfer) Kermit S-Record YMODEM Network boot (optionally using DHCP, BOOTP, or RARP) TFTP NFS Compatible file systems U-Boot does not need to be able to read a filesystem in order for the kernel to use it as a root filesystem or initial ramdisk; U-Boot simply provides an appropriate parameter to the kernel, and/or copies the data to memory without understanding its contents. However, U-Boot can also read from (and in some cases, write to) filesystems. This way, rather than requiring the data that U-Boot will load to be stored at a fixed location on the storage device, U-Boot can read the filesystem to search for and load the kernel, device tree, etc., by pathname. U-Boot includes support for these filesystems: btrfs CBFS (coreboot file system) Cramfs ext2 ext3 ext4 FAT FDOS JFFS2 ReiserFS Squashfs UBIFS ZFS Device tree Device tree is a data structure for describing hardware layout. Using Device tree, a vendor might be able to use an unmodified mainline U-Boot on otherwise special purpose hardware. As also adopted by the Linux kernel, Device tree is intended to ameliorate the situation in the embedded industry, where a vast number of product specific forks (of U-Boot and Linux) exist. The ability to run mainline software practically gives customers indemnity against lack of vendor updates. History The project's origin is a 8xx PowerPC bootloader called 8xxROM written by Magnus Damm. In October 1999 Wolfgang Denk moved the project to SourceForge.net and renamed it to PPCBoot, because SF.net did not allow project names starting with digits. Version 0.4.1 of PPCBoot was first publicly released July 19, 2000. In 2002 a previous version of the source code was briefly forked into a product called ARMBoot, but was merged back into the PPCBoot project shortly thereafter. On October 31, 2002 PPCBoot−2.0.0 was released. This marked the last release under the PPCBoot name, as it was renamed to reflect its ability to work on other architectures besides the PPC ISA. PPCBoot−2.0.0 became U−Boot−0.1.0 in November 2002, expanded to work on the x86 processor architecture. Additional architecture capabilities were added in the following months: MIPS32 in March 2003, MIPS64 in April, Nios II in October, ColdFire in December, and MicroBlaze in April 2004. The May 2004 release of U-Boot-1.1.2 worked on the products of 216 board manufacturers across the various architectures. The current name Das U-Boot adds a German definite article, to create a bilingual pun on the classic 1981 German submarine film Das Boot, which takes place on a World War II German U-boat. It is free software released under the terms of the GNU General Public License. It can be built on an x86 PC for any of its intended architectures using a cross development GNU toolchain, for example crosstool, the Embedded Linux Development Kit (ELDK) or OSELAS.Toolchain. 
The importance of U-Boot in embedded Linux systems is quite succinctly stated in the book Building Embedded Linux Systems, by Karim Yaghmour, whose text about U-Boot begins, "Though there are quite a few other bootloaders, 'Das U-Boot', the universal bootloader, is arguably the richest, most flexible, and most actively developed open source bootloader available." Usages The ARM-based Chromebooks ship with U-Boot. The Celeron- and i5-based Chromebooks use it as payload for coreboot. The PowerPC based series of AmigaOne computers running AmigaOS use U-Boot, in particular the Sam440ep and Sam460ex by ACube Systems Srl, and the AmigaOne X5000 by A-Eon, the successor of the AmigaOne X1000. Ubiquiti Networks devices use U-Boot Amazon Kindle & Kobo eReader devices use U-Boot as their bootloader. TP-Link and several other OpenWRT/LEDE compatible MIPS based wireless routers use U-Boot for bootloading. Teltonika cellular routers use bootloader based on U-Boot. SpaceX's Falcon and Dragon both use U-Boot See also Comparison of boot loaders RedBoot Coreboot Barebox Notes References External links Barebox (formerly known as U-Boot-V2) Firmware Free boot loaders Free software programmed in C High-priority free software projects Software related to embedded Linux
7787600
https://en.wikipedia.org/wiki/John%20Makepeace%20Bennett
John Makepeace Bennett
John Makepeace Bennett (31 July 1921 – 9 December 2010) was an early Australian computer scientist. He was Australia's first professor of computer science and the founding president of the Australian Computer Society. His pioneering career included work on early computers such as EDSAC, Ferranti Mark 1* and SILLIAC, and spreading the word about the use of computers through computing courses and computing associations.

Personal life
John Bennett was born in 1921 at Warwick, Queensland, the son of Albert John Bennett and Elsie Winifred née Bourne. In 1952 he married Rosalind Mary Elkington (who was also working at Ferranti). They had four children: Christopher John, Ann Margaret, Susan Elizabeth and Jane Mary. In 1986 Bennett, aged 65, retired with his wife to Sydney's Northern Beaches. Bennett died at home on 9 December 2010 and was survived by his wife, four children and six grandchildren.

Education and War Service
John Bennett was educated at The Southport School, after which he went to the University of Queensland to study civil engineering. From 1942 until 1946 (during WWII) he served in the RAAF, working on a radar unit on the Wessel Islands and later in airfield construction. He then returned to the University of Queensland to study electrical and mechanical engineering and mathematics.

Professional life
In 1947 he went to Cambridge University to become Maurice Vincent Wilkes' first research assistant as part of the team working to build EDSAC, the world's first practical stored-program electronic computer and the world's first computer in regular operation from 1949. He used EDSAC to carry out the first structural engineering calculations ever performed on a computer as part of his PhD. He worked for Ferranti in Manchester and London as a computer specialist, where he designed the instruction set for the Ferranti Mark 1*, the main improvement of that machine over the Ferranti Mark 1. In 1956, Bennett returned to Australia to become Numerical Analyst (and later Senior Numerical Analyst) to the Adolph Basser Laboratory at the University of Sydney. His main work was the development of software for SILLIAC. Until 1958 he taught associated courses in the use of computers. In 1958 he established a Postgraduate Diploma in Numerical Analysis and Computing, which was later changed to the Postgraduate Diploma in Computer Science. In 1961, the Basser Laboratory became the Basser Computing Department and John Bennett became Professor of Physics (Electronic Computing). In 1972 the Basser Computing Department was split into the Basser Department of Computer Science (for teaching and research) and the University Computer Centre. John Bennett was appointed head of the new Basser Department of Computer Science, but it was not until 1982 that his title was changed to Professor of Computer Science, a title which he held until his retirement in 1987. He was also the Foundation Chairman of the Australian Committee on Computation and Automatic Control from 1959 to 1963, the President of the New South Wales Computer Society from 1965 to 1966, and the Foundation President of the Australian Computer Society from 1966 to 1967. In 1981 he helped found the Research Foundation for Information Technology at the University.

Awards
In 1983 he was appointed an Officer of the Order of Australia (AO) for service to computing science. In 2001 he was awarded the Centenary Medal for service to Australian society in computer science and technology.
In 2004 Dr Bennett was awarded the Pearcey Medal, an annual award presented to a distinguished Australian for a lifetime and outstanding contribution to the ICT industry. References Further reading Costello, J. (1993) 'John Bennett.' Computerworld. 16 July, page 2. Davidson, P. (2003) 'John Bennett: educating the technology. generation.' Information Age. August/September, page 31. External links Bennett, J. M. (John Makepeace) (1921-2010), trove.nla.gov.au Bennett, John Makepeace, Encyclopaedia of Australian Science 1921 births 2010 deaths Australian computer scientists Ferranti Officers of the Order of Australia Fellows of the Australian Academy of Technological Sciences and Engineering Recipients of the Centenary Medal
38711772
https://en.wikipedia.org/wiki/VISCA%20Protocol
VISCA Protocol
VISCA is a professional camera control protocol used with PTZ cameras. It was designed by Sony to be used on several of its surveillance and OEM block cameras.

Implementation
It is based on RS-232 serial communication at 9600 bit/s, 8N1, no flow control, typically through a DB-9 connector, but it can also use 8-pin DIN, RJ45 and RJ11 connectors in daisy-chain configurations. VISCA uses a serial repeater network configuration to communicate between the PC (device #0) and up to 7 peripherals (#1 through #7). The daisy-chain cabling means that a message walks the chain until it reaches the target device identified in the data packet; responses then walk the rest of the way down the chain and back up again to reach the controller. Some packets may be broadcast to all devices.

A command data packet consists of:
 Address byte (1) – message header
 Information bytes (1..14)
 Terminating byte (1) – 0xFF

The message header has the following format:
 bit 7: always ‘1’
 bits 6–4: source device #
 bit 3: ‘0’ for normal packets, ‘1’ for broadcast packets
 bits 2–0: destination device #, or ‘000’ for broadcast

In the packet descriptions below, multi-byte quantities use big-endian (Motorola-style) ordering, with the MSB at [i] and the LSB at [i+1]. Each command data packet has a corresponding response data packet. The response to a particular packet is variable in size and may indicate an error condition. A minimal sketch of this packet framing is given below.

Uses
The VISCA protocol is used on LectureSight, Avaya Scopia, Angekis, Atlona HDVS series cameras, and Polycom and Cisco/Tandberg video conferencing systems. Sony and Canon use VISCA for CCTV cameras. Blackmagic Design ATEM switchers that have an RS-422 port and are controlled by either ATEM 1M/E or ATEM 2M/E control panels have been capable of controlling VISCA-compatible cameras since November 2015.

External links
VISCA controller (support of SONY FCB cameras with external pan and tilt head) - Main Web site (German language)
VISCA camera control library - Sourceforge
VISCA camera control library - Main Web site
Cisco TelePresence PrecisionHD Camera User Guide
Sony software
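The following minimal Python sketch illustrates the command framing described under Implementation above: a header byte carrying the source and destination device numbers (with bit 3 set for broadcast), one to fourteen information bytes, and the 0xFF terminator. The function name and the example payload bytes are illustrative assumptions; they are not part of any official Sony API, and real command payloads should be taken from the relevant camera documentation.

```python
def visca_packet(source, destination, payload, broadcast=False):
    """Frame a VISCA command: header byte, 1..14 information bytes, 0xFF terminator."""
    if not (0 <= source <= 7 and 0 <= destination <= 7):
        raise ValueError("device numbers must be 0..7")
    if not (1 <= len(payload) <= 14):
        raise ValueError("a packet carries 1..14 information bytes")
    # Header: bit 7 always 1, bits 6-4 source, bit 3 broadcast flag, bits 2-0 destination.
    header = 0x80 | (source << 4) | (0x08 if broadcast else 0x00) | destination
    return bytes([header]) + bytes(payload) + b"\xff"

# Controller (device #0) addressing the first camera in the chain (device #1).
# The payload bytes below are placeholders, not a documented Sony command.
packet = visca_packet(source=0, destination=1, payload=[0x01, 0x04, 0x00, 0x02])
print(packet.hex())  # -> 8101040002ff
```

Over a daisy chain, a controller would write such a packet to the serial port and then wait for the corresponding response packet, which reports completion or an error condition as noted above.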
6929302
https://en.wikipedia.org/wiki/Michael%20Z.%20Gordon
Michael Z. Gordon
Michael Zane Gordon (born April 4, 1941) is an American screenwriter, producer, musician and composer. Early life Gordon was born in Minneapolis, Minnesota and grew up in Rapid City, South Dakota. He has two sisters. He and his family moved to Glendale, California in 1957, and moved to Los Angeles, California shortly thereafter. Gordon graduated from Fairfax High School in 1958. Music career Gordon, a self-taught musician, formed his first rock band, the Marketts (originally spelled "Mar-Kets") in 1961. Gordon wrote and co-produced the band's first hit song, "Surfer's Stomp," shortly after the group was formed. In 1961 the band signed with co-producer Joe Saraceno under the Warner Bros. label. Gordon formed his second band, the Routers, in 1962. The Routers and the Marketts were contemporaries and Gordon worked with both groups over the same time period using different musicians for each group. The Routers are best known for their 1962 hit, "Let's Go (Pony)." While on tour with the Routers, Gordon wrote the Marketts' first release on the Warner Bros. label, "Outer Limits" (later changed to "Out of Limits" for legal reasons). The song sold over a million copies, topped the charts on stations nationwide, and earned Gordon a BMI award. "Out of Limits" is a popular choice for TV and film soundtracks; it can be heard in Pulp Fiction (1994), Slayground (1983), The Outsiders (1983) and Mafioso: The Father, the Son, (2004). The Marketts' music is also credited on "Saturday Night Live," The Name of the Game is Kill (1968), A Killing on Brighton Beach (2009), and Dirty Little Trick (2011), among others. Following his touring career with the Marketts and the Routers, Gordon returned to Hollywood in 1966 and teamed up with Jimmy Griffin. Together they wrote more than sixty songs, with 51 of them being recorded by hit artists of the 1960s. These songs included "Love Machine" and Ed Ames' "Apologize," which earned Gordon his second BMI award. Gordon is credited on 179 songs in the BMI catalogue. His songs have been recorded by artists such Cher, The Standells, Lesley Gore, Gary Lewis, and Brian Hyland. Gordon's songs – particularly "Surfer's Stomp," "Let's Go" and "Out of Limits" – have appeared in a variety of television shows and movies, including The Outsiders, Pulp Fiction, among others. While filming an upcoming documentary entitled Out of Limits: The Michael Z. Gordon Story on his life and career, the filmmakers discovered an untitled and unrecorded piece of sheet music Gordon had written in 1963. The song was taken into the studio and recorded in a session supervised by Gordon. The session was filmed for the documentary and the resulting song (which was subsequently titled "1963") will be released in conjunction with the film, making the length of time between the writing and release of the song 55 years. Film career Gordon is also known for his work in film and television production. He has credits as film producer, composer, musical producer, and screenwriter. With respect to project selection, Gordon remarks, "I think that it is important for the industry to know that I just don't do any film that comes along. I try to do meaningful films that may not be financially successful, but receive critical acclaim." Gordon notes, "Not every project is going to be a big success. But if people walk away and say 'that was a well-made movie,' then I'm happy." Gordon's film and TV music credits: include The Outsiders re-release (2005), 21 Jump Street (1987), Married... 
with Children (1987), The Wonder Years (1988), Angels in the Endzone (1997), From the Earth to the Moon (1998), and Mafioso: The Father, The Son (2004). His screenwriting credits include: Mafiosa (TV series, 2006); Slaughter Creek (2011); Dirty Little Trick (2011)Production credits include: Narc (2002) starring Ray Liotta and Jason Patric;In Enemy Hands (2004) starring William Macy and Lauren Holly;Shortcut to Happiness (2004), starring Anthony Hopkins, Alec Baldwin, Jennifer Love Hewitt and Dan Aykroyd;Mafioso: The Father, The Son (2004);Silent Partner (2005), starring Tara Reid and Nick Moran; Shattered (2008); Jack and Jill vs. the World (2008), starring Freddie Prinze Jr., Taryn Manning and Peter Stebbings. Gordon resides in the Los Angeles area. He has mostly retired from the film and music business. References External links Official Website American male songwriters Living people Surf musicians 1941 births
10768456
https://en.wikipedia.org/wiki/Software%20system
Software system
A software system is a system of intercommunicating components based on software forming part of a computer system (a combination of hardware and software). It "consists of a number of separate programs, configuration files, which are used to set up these programs, system documentation, which describes the structure of the system, and user documentation, which explains how to use the system". The term "software system" should be distinguished from the terms "computer program" and "software". A computer program generally refers to a set of instructions (source or object code) that performs a specific task, whereas a software system refers to a more encompassing concept with many more components, such as specifications, test results, end-user documentation, maintenance records, etc. The use of the term software system is at times related to the application of systems theory approaches in the context of software engineering. A software system consists of several separate computer programs and associated configuration files, documentation, etc., that operate together. The concept is used in the study of large and complex software, because it focuses on the major components of software and their interactions. It is also related to the field of software architecture. Software systems are an active area of research for groups interested in software engineering in particular and systems engineering in general. Academic journals like the Journal of Systems and Software (published by Elsevier) are dedicated to the subject. The ACM Software System Award is an annual award that honors people or an organization "for developing a system that has had a lasting influence, reflected in contributions to concepts, in commercial acceptance, or both". It has been awarded by the Association for Computing Machinery (ACM) since 1983, with a cash prize sponsored by IBM. The two broad types of software are system software and application software.

Categories
Major categories of software systems include those based on application software development, programming software, and system software, although the distinction can sometimes be difficult to draw. Examples of software systems include operating systems, computer reservations systems, air traffic control systems, military command and control systems, telecommunication networks, content management systems, database management systems, expert systems, embedded systems, etc.

See also
ACM Software System Award
Common layers in an information system logical architecture
Computer program
Computer program installation
Experimental software engineering
Failure assessment
Software bug
Software architecture
System software
Systems theory
Systems Science
Systems Engineering
Software Engineering

References

Systems engineering
Software engineering terminology
52104668
https://en.wikipedia.org/wiki/Mixer%20%28service%29
Mixer (service)
Mixer was an American video game live streaming platform. The service officially launched on January 5, 2016, as Beam, under the ownership of co-founders Matthew Salsamendi and James Boehm. The service placed an emphasis on interactivity, with low stream latency and a platform for allowing viewers to perform actions that can influence a stream. The service was acquired by Microsoft in August 2016, after which it was renamed Mixer in 2017 and integrated into Microsoft's Xbox division (including top-level integration on Xbox One). In 2019, Mixer gained attention when it signed two top streamers from its main competitor, Twitch—Ninja and Shroud—to a contract with the service. However, citing an inability to scale its operations, Microsoft announced on June 22, 2020, that Mixer would be shut down by the end of July 22, and that an agreement had been made with Facebook for monetized channels to join similar programs on Facebook's game streaming platform. Microsoft officially shut down Mixer on July 22, 2020. Features Mixer used a low-latency streaming protocol known as FTL ("Faster Than Light"); the service states that this protocol only creates delays of less than a second between the original broadcast and when it is received by users, rather than 10–20 seconds, making it more appropriate for real-time interactivity between a streamer and their viewers. In addition, viewers can use buttons below a stream to interact with it, including voting, special effects, and influencing gameplay. Some interactions required users to spend "Sparks"—a currency accumulated while viewing streams. An SDK was available to integrate games with this system. In November 2018, the site unveiled a major update branded as "Season 2", including features launching immediately, and plans for upcoming features. The update added automatic quality adjustment to the player, and "Skills"—a feature that can be used to trigger special animations and effects in chat. Some premium skills are purchased using the paid currency "Embers"; channels can receive revenue from Embers spent by their viewers. Partnered streamers can also receive payment bonuses based on the volume of Sparks spent on their channels. In April 2019, Mixer added "Channel Progression"—a level system for tracking users' engagement with a particular channel over time. Users can receive benefits to reward their long-term participation. Mixer's features also included CATbot, an auto chat filtration bot that helped remove unwanted chat content on streamers’ channels before chat ever saw it. CATbot's moderation level could be adjusted for all viewers or could be set according to viewers’ rank in Channel Progression. Users could also purchase subscriptions to individual channels that are Mixer partners, which allowed access to exclusive emoticons, and adds a badge to their name in chat commemorate their support. Initially, these were priced at US$5.99 per month. In October 2019, Mixer announced that the price would be lowered to $4.99, matching the price of subscriptions on Twitch. History Beam launched on January 5, 2016. In May 2016, Beam won the Startup Battlefield competition at the TechCrunch Disrupt conference, receiving $50,000 in equity-free funding. On August 11, 2016, Beam was acquired by Microsoft for an undisclosed amount. The service's team was integrated into the Xbox division. On October 26, 2016, Microsoft announced that Beam would be integrated into Windows 10. Beam broadcasting was also integrated into Xbox One on the March 2017 software update. 
On May 25, 2017, Microsoft announced that Beam had been renamed Mixer, as the previous name could not be used globally. The re-branding came alongside the introduction of several new features, such as the ability for a user to co-host up to three other streams on their channel at once, as well as the companion mobile app Mixer Create. It was also announced that Mixer would receive top-level integration within the Xbox One dashboard, with a new tab curating Mixer streams. On July 31, 2019, video game streamer Ninja announced that he would move exclusively from Twitch to Mixer beginning August 1. The deal was considered to be a major coup for Mixer, as Ninja had been among Twitch's top personalities, with over 14 million followers. His wife and manager Jessica Blevins stated that the contract with Twitch had encumbered his ability to "grow his brand" outside of gaming, and that his interest in streaming had been deteriorating due to the perceived "toxic[ity]" of Twitch's community. A report by Streamlabs and Newzoo reported that in the third quarter of 2019, Mixer had a 188% quarter-by-quarter increase in the amount of unique hours of content being streamed on the service, but that the percentage of concurrent viewers had fallen by 11.7%. Mixer founders Boehm and Salsamendi both left Microsoft in October 2019. The same month, streamer Shroud also entered into an exclusivity agreement with Mixer, followed shortly afterward by KingGothalion. On June 22, 2020, citing a poor market share and inability to scale in comparison to competing services, Microsoft announced that Mixer would be shut down on July 22, 2020. As part of an agreement to collaborate with Facebook, Inc. on aspects of its xCloud cloud gaming service, Mixer would redirect users to the Facebook Gaming service after it ceased operations, and partnered streamers offered opportunities to join equivalent Facebook Gaming programs where applicable. Outstanding subscriptions and Embers were converted to Microsoft Store credit. Mixer's intellectual property and staff will be transferred to the Microsoft Teams division, and incorporated into the product. Attempting to visit mixer.com now results in a redirect to Facebook Gaming. Microsoft released its contracts with exclusively-signed streamers; in August, Ninja held a stream on YouTube before returning to Twitch, while Shroud re-signed exclusively with Twitch. References External links 2016 establishments in Washington (state) 2020 disestablishments in Washington (state) 2016 mergers and acquisitions Discontinued Microsoft products Entertainment companies established in 2016 Entertainment companies disestablished in 2020 Internet properties established in 2016 Internet properties disestablished in 2020 Mass media companies established in 2016 Mass media companies disestablished in 2020 Microsoft acquisitions Microsoft websites Products and services discontinued in 2020 Software companies based in Seattle Software companies established in 2016 Software companies disestablished in 2020 Video game streaming services Xbox One software Software companies of the United States
2738515
https://en.wikipedia.org/wiki/Digital%20theatre
Digital theatre
Strictly, digital theatre is a hybrid art form, gaining strength from theatre’s ability to facilitate the imagination and create human connections and digital technology’s ability to extend the reach of communication and visualization. (However, the phrase is also used in a more generic sense by companies such as Evans and Sutherland to refer to their fulldome projection technology products.) Description Digital theatre is primarily identified by the coexistence of “live” performers and digital media in the same unbroken(1) space with a co-present audience. In addition to the necessity that its performance must be simultaneously “live” and digital, the event’s secondary characteristics are that its content should retain some recognizable theatre roles (through limiting the level of interactivity) and a narrative element of spoken language or text. The four conditions of digital theatre are: It is a “live” performance placing at least some performers in the same shared physical space with an audience.(2) The performance must use digital technology as an essential part of the primary artistic event.(3) The performance contains only limited levels of interactivity,(4) in that its content is shaped primarily by the artist(s) for an audience.(5) The performance’s content should contain either spoken language or text which might constitute a narrative or story, differentiating it from other events which are distinctly dance, art, or music. ”Live,” digital media, interactivity, and narrative A brief clarification of these terms in relation to digital theatre is in order. The significance of the terms “live” or “liveness” as they occur in theatre can not be over-emphasized, as it is set in opposition to digital in order to indicate the presence of both types of communication, human and computer-created. Rather than considering the real-time or temporality of events, digital theatre concerns the interactions of people (audience and actors) sharing the same physical space (in at least one location, if multiple audiences exist). In the case of mass broadcast, it is essential that this sharing of public space occurs at the site of the primary artistic event.(6) The next necessary condition for creating digital theatre is the presence of digital media in the performance. Digital media is not defined through the presence of one type of technology hardware or software configuration, but by its characteristics of being flexible, mutable, easily adapted, and able to be processed in real-time. It is the ability to change not only sound and light, but also images, video, animation, and other content into triggered, manipulated, and reconstituted data which is relayed or transmitted in relation to other impulses which defines the essential nature of the digital format. Digital information has the quality of pure computational potential, which can be seen as parallel to the potential of human imagination. The remaining characteristics of limited interactivity and narrative or spoken word are secondary and less distinct parameters. 
While interactivity can apply to both the interaction between humans and machines and between humans, digital theatre is primarily concerned with the levels of interactivity occurring between audience and performers (as it is facilitated through technology).(7) It is in this type of interactivity, similar to other types of heightened audience participation,(8) that the roles of message sender and receiver can dissolve to that of equal conversers, causing theatre to dissipate into conversation. The term “interactive” refers to any mutually or reciprocally active communication, whether it be a human-human or a human-machine communication. The criterion of having narrative content through spoken language or text as part of the theatrical event is meant not to limit the range of what is already considered standard theatre (as there are examples in the works of Samuel Beckett in which the limits of verbal expression are tested), but to differentiate between that which is digital theatre and the currently more developed fields of digital dance(9) and Art Technology.(10) This is necessary because of the mutability between art forms utilizing technology. It is also meant to suggest a wide range of works, from dance theatre involving technology and spoken words, such as Troika Ranch’s The Chemical Wedding of Christian Rosenkreutz (Troika Ranch, 2000), to the creation of original text-based works online by performers like the Plain Text Players or collaborations such as Art Grid’s Interplay: Hallucinations, to pre-scripted works such as the classics (A Midsummer Night’s Dream, The Tempest) staged with technology at the University of Kansas and the University of Georgia. The Participatory Virtual Theatre efforts at the Rochester Institute of Technology take a different approach by having live actors use motion capture to control avatars on a virtual stage. Audience responses are designed into the software that supports the performance. In the 2004 production, "What's the Buzz?" (17), a single-node motion capture device controlled the performance of a swarm of bees. Later performances used two motion capture systems located in different buildings controlling the performance on a single virtual stage (18). These criteria or limiting parameters are flexible enough to allow for a wide range of theatrical activities while refining the scope of events to those which most resemble the hybrid “live”/mediated form of theatre described as digital theatre. Digital theatre is separated from the larger category of digital performance (as expressed in the variety of items, including installations, dance concerts, compact discs, robot fights, and other events, found in the Digital Performance Archive). History In the early 1980s, video, satellites, fax machines, and other communications equipment began to be used as methods of creating art and performance. The Fluxus group and John Cage were among the early leaders in expanding what was considered art, technology, and performance. With the adoption of personal computers in the 1980s, new possibilities for creating performance communications were born. Artists like Sherrie Rabinowitz and Kit Galloway began to transition from earlier, more costly experiments with satellite transmission to experiments with the developing internet. Online communities such as The Well and interactive writing offered new models for artistic creativity.
With the ‘Dot Com’ boom of the 1990s, telematic artists including Roy Ascott began to develop greater significance as theatre groups like George Coates Performance Works and Gertrude Stein Repertory Theatre established partnerships with software and hardware companies encouraged by the technology boom. In Australia in the early 1990s, Julie Martin's Virtual Reality Theatre presented works at the Sydney Opera House featuring the first hybrid human-digital avatars; in 1996, its production of "A Midsummer Night's Dream" featured augmented reality stage sets designed and produced by her company. Researchers such as Claudio Pinhanez at MIT, David Saltz of The Interactive Performance Laboratory at the University of Georgia, and Mark Reaney, head of the Virtual Reality Theatre Lab at the University of Kansas, as well as significant dance technology partnerships (including Riverbed and Riverbed's work with Merce Cunningham), led to an unprecedented expansion in the use of digital technology in creating media-rich performances (including the use of motion capture, 3D stereoscopic animation, and virtual reality, as in The Virtual Theatricality Lab's production of The Skriker at Henry Ford Community College under the direction of Dr. George Popovich). Another example is the sense:less project by Stenslie/Mork/Watz/Pendry, using virtual actors that users would engage with inside a VR environment. The project was shown at ELECTRA, Henie Onstad Art Center, Norway, DEAF 1996 in Rotterdam and the Fifth Istanbul Biennial (1997). The early use of mechanical and projection devices for theatrical entertainments has a long history, tracing back to the mechanical devices of ancient Greece and medieval magic lanterns. But the most significant precursors of digital theatre can be seen in the works of the early 20th century. It is in the ideas of artists including Edward Gordon Craig, Erwin Piscator (and to a limited degree Bertolt Brecht, in their joint work on Epic Theatre), Josef Svoboda, and the Bauhaus and Futurist movements that we can see the strongest connections between today's use of digital media and live actors, and earlier, experimental theatrical use of non-human actors, broadcast technology, and filmic projections. The presence of these theatrical progenitors using analog media, such as filmic projection, provides a bridge between theatre and many of today's vast array of computer-art-performance-communication experiments. These past examples of theatre artists integrating the modern technology of their day with theatre strengthen the argument that theatrical entertainment does not have to be either purist, involving only "live" actors on stage, or consumed by the dominant televisual mass media, but can gain from the strengths of both types of communication.
"Cyberformance" can be included within this definition of Digital theatre, where it includes a proximal audience: "Cyberformance can be created and presented entirely online, for a distributed online audience who participate via internet-connected computers anywhere in the world, or it can be presented to a proximal audience (such as in a physical theatre or gallery venue) with some or all of the performers appearing via the internet; or it can be a hybrid of the two approaches, with both remote and proximal audiences and/or performers." See also Digital performance History of theatre Stage terminology Notes Space not divided by visible solid interfaces such as walls, glass screens, or other visible barriers which perceptually divide the audience from the playing space making two (or more) rooms rather than a continuous place including both stage and audience. It is suggested that a minimal audience of two or more is needed to keep a performance from being a conversation or art piece. If additional online or mediated audiences exist, only one site need have a co-present audience/performer situation. Digital technology may be used to create, manipulate or influence content. However, the use of technology for transmission or archiving does not constitute a performance of digital theatre. Interactivity is more than choices on a navigation menu, low levels of participation or getting a desired response to a request. Sheizaf Rafaeli defines it as existing in the relay of a message, in which the third or subsequent message refers back to the first. “Formally stated, interactivity is an expression of the extent that in a given series of communication exchanges, any third (or later) transmission (or message) is related to the degree to which previous exchanges referred to even earlier transmissions” (Sheizaf Rafaeli, “Interactivity, From New Media to Communication,” pages 110-34 in Advanced Communicational Science: Merging Mass and Interpersonal Processes, ed. Robert P. Hawkins, John M. Wiemann, and Suzanne Pingree [Newbury Park: Sage Publications, 1988] 111). Though some of the content may be formed or manipulated by both groups, the flow of information is primarily from message creator or sender to receiver, thus maintaining the roles of author/performer and audience (rather than dissolving those roles into equal participants in a conversation). This also excludes gaming or VR environments in which the (usually isolated) participant is the director of the action which his actions drive. While TV studio audiences may feel that they are at a public “live” performance, these performances are often edited and remixed for the benefit of their intended primary audience, the home audiences which are viewing the mass broadcast in private. Broadcasts of “Great Performances” by PBS and other theatrical events broadcast into private homes, give the TV viewers the sense that they are secondary viewers of a primary “live” event. In addition, archival or real-time web-casts which do not generate feedback influencing the “live” performances are not within the range of digital theatre. In each case, a visible interface such as TV or monitor screen, like a camera frames and interprets the original event for the viewers. An example of this is the case of internet chat which becomes the main text of be read or physically interpreted by performers on stage. Online input including content and directions can also have an effect of influencing “live” performance beyond the ability of “live” co-present audiences. E.g. 
happenings. Such as the stunning visual media dance concerts like Ghostcatching, by Merce Cunningham and Riverbed, accessible online via the revamped/migrated Digital Performance Archive and Merce Cunningham Dance; cf. Isabel C. Valverde, “Catching Ghosts in Ghostcatching: Choreographing Gender and Race in Riverbed/Bill T. Jones’ Virtual Dance,” accessible in a pdf version from Extensions: The Online Journal of Embodied Teaching. Such as Telematic Dreaming, by Paul Sermon, in which distant participants shared a bed through mixing projected video streams; see "Telematic Dreaming - Statement." Mark Reaney, head of the Virtual Reality Theatre Lab at the University of Kansas, investigates the use of virtual reality (“and related technologies”) in theatre. “VR Theatre” is one form or subset of digital theatre focusing on utilizing virtual reality immersion in mutual concession with traditional theatre practices (actors, directors, plays, a theatre environment). The group uses image projection and stereoscopic sets as their primary area of digital investigation. Another example of digital theatre is Computer Theatre, as defined by Claudio S. Pinhanez in his work 'Computer Theatre (in which he also gives the definition of “hyper-actor” as an actor whose expressive capabilities are extended through the use of technologies). “Computer Theatre, in my view, is about providing means to enhance the artistic possibilities and experiences of professional and amateur actors, or of audiences clearly engaged in a representational role in a performance” (Computer Theater [Cambridge: Perceptual Computing Group -- MIT Media Laboratory, 1996] (forthcoming in a revised ed.); Pinhanez also sees this technology being explored more through dance than theatre. His writing and his productions of I/IT suggest that Computer Theatre is digital theatre. On the far end of the spectrum, outside of the parameters of digital theatre, are what are called Desktop Theater and Virtual Theatre. These are digital performances or media events which are created and presented on computers utilizing intelligent agents or synthetic characters, called avatars. Often these are interactive computer programs or online conversations. Without human actors, or group audiences, these works are computer multimedia interfaces allowing a user to play at the roles of theatre rather than being theatre. Virtual Theatre is defined by the Virtual Theatre Project at Stanford on their website as a project which “aims to provide a multimedia environment in which user can play all of the creative roles associated with producing and performing plays and stories in an improvisational theatre company.” For more information, see Multimedia: From Wagner to Virtual Reality, ed. Randall Packer and Ken Jordan; Telepresence and Bio Art, by Eduardo Kac; and Virtual Theatres: An Introduction, by Gabriella Giannachi (London and New York: Routledge, 2004). Media, in this sense, indicates the broadcast and projection of film, video, images and other content which can, but need not be digitized. These elements are often seen as additions to traditional forms of theatre even before the use of computers to process them. The addition of computers to process visual, aural and other data allows for greater flexibility in translating visual and other information into impulses which can interact with each other and their environments in real-time. 
Media is also distinguished from mass media, by which primarily means TV broadcast, Film, and other communications resources owned by multi-national media corporations. Mass media is that section of the media specifically conceived and designed to reach a very large audience (typically at least as large as the majority of the population of a nation state). It refers primarily to television, film, internet, and various news/entertainment corporations and their subsidiaries. Here the use of quotations signals a familiarity with the issues of mediation vs. real-time events as expressed by Phillip Auslander, yet choosing to use the term in its earlier meaning, indicating co-present human audience and actors in the same shared breathing space unrestrained by a physical barrier or perceived interface. This earlier meaning is still in standard use by digital media performers to signify the simultaneous presence of the human and the technological other. It is possible also to use the term (a)live to indicate co-presence. Geigel, J. and Schweppe, M., What's the Buzz?: A Theatrical Performance in Virtual Space, in Advancing Computing and Information Sciences, Reznik, L., ed., Cary Graphics Arts Press, Rochester, NY, 2005, pp. 109–116. Schweppe, M. and Geigel, J., 2009. "Teaching Graphics in the Context of Theatre", Eurographics 2009 Educators Program (Munich, Germany, March 30-April 1, 2009) External links ‘’DigitalTheatre.Com Direct downloads website.’’ ‘’The Search for digital theatre’’ Digital Performance Archive hosted by AHDS Performing Arts Ontology vs. History: Making Distinctions Between the Live and the Mediatized Merce Cunningham Dance Catching Ghosts in Ghostcatching “Telematic Dreaming” George Coates Performance Works Troika Ranch Another Language Performance Art Company George Popovich Studio for Electronic Theatre - Digital performance research and production eSpectacularKids - Online storytelling, magic shows and theatre for children Theatre Digital art Performing arts
38452470
https://en.wikipedia.org/wiki/Project%20Lead%20the%20Way
Project Lead the Way
Project Lead The Way (PLTW) is an American nonprofit organization that develops STEM curricula for use by US elementary, middle, and high schools. Description PLTW provides curriculum and training to teachers and administrators to implement the curriculum. The curriculum is project-based. Three levels of curriculum are used for elementary, middle, and high-school levels. PLTW Launch is the elementary school level, designed for preschool through fifth grade. The curriculum consists of 28 modules (four per grade) that touch on a variety of science and technology topics. PLTW Gateway is the middle school level, covering grades six through eight. It consists of 10 different modules, which can be taught in any order, so schools can organize the modules into courses as best fits their own schedules. At the high school level (grades 9–12), three different programs are offered, each with a four-course sequence. The three high-school pathways are computer science, engineering, and biomedical science. Within each high school pathway are four or more courses designed to be taken in a certain order - an introductory course, two or more middle-level courses that can be taken in any order, and then a capstone course for the final high-school year. High school courses The program offers the following courses at high school level: PLTW Engineering Engineering Essentials Introduction to Engineering Design Principles of Engineering Aerospace Engineering Civil Engineering and Architecture Computer Integrated Manufacturing Computer Science Principles Digital Electronics Environmental Sustainability Engineering Design and Development PLTW Computer Science Computer Science Essentials Computer Science Principles Computer Science A Cybersecurity PLTW Biomedical Science Principles of Biomedical Science Human Body Systems Medical Interventions Biomedical Innovation AP + PLTW In 2015, College Board partnered with Project Lead The Way in an effort to encourage STEM majors. Students who have successfully passed at least three exams (one AP exam, one PLTW exam, and another AP or PLTW exam) are eligible to receive the AP + PLTW Student Recognition for one or more of the following: engineering, biomedical sciences, and computer science. Payment and distribution Schools that register with PLTW pay a flat participation fee that includes the curriculum, all required course software, access to school and technical support, and access to PLTW's learning-management system. Teachers who instruct the PLTW curriculum are required to take part in PLTW's three-phase professional development program. Financial support for PLTW Governments of several states, including New York, Indiana, Iowa, and South Carolina, have provided funding to PLTW to support future development. The Kern Family Foundation of Wisconsin provides financial support for the program in Wisconsin, Illinois, Iowa, and Minnesota. Kern first became involved with PLTW in Wisconsin in 2004 as one of several programs it funds in an attempt to enhance U.S. economic competitiveness by trying to qualify more students for engineering and technology careers. The foundation's expenditures in support of the funding of PLTW total more than $23 million. Other foundations funding PLTW include the Ewing Marion Kauffman Foundation, the John S. and James L. Knight Foundation, and the Conrad Foundation. 
References External links PLTW official website Organizations with year of establishment missing Non-profit organizations based in Indianapolis Educational organizations based in the United States Engineering education in the United States Science education in the United States Learning programs
365485
https://en.wikipedia.org/wiki/Technical%20writer
Technical writer
A technical writer is a professional information communicator whose task is to transfer information between two or more parties, through any medium that best facilitates the transfer and comprehension of the information. Technical writers research and create information through a variety of delivery media (electronic, printed, audio-visual, and even touch). Example types of information include online help, manuals, white papers, design specifications, project plans, and software test plans. With the rise of e-learning, technical writers are increasingly becoming involved with creating online training material. According to the Society for Technical Communication (STC): In other words, technical writers take advanced technical concepts and communicate them as clearly, accurately, and comprehensively as possible to their intended audience, ensuring that the work is accessible to its users. Kurt Vonnegut described technical writers as: Engineers, scientists, and other professionals may also be involved in technical writing (developmental editing, proofreading, etc.), but are more likely to employ professional technical writers to develop, edit and format material, and advise the best means of information delivery to their audiences. History of the profession According to the Society for Technical Communication (STC), the professions of technical communication and technical writing were first referenced around World War I, when technical documents became a necessity for military purposes. The job title emerged in the US during World War II, although it wasn't until 1951 that the first "Help Wanted: Technical Writer" ad was published. In fact, the title "Technical Writer" wasn't added to the US Bureau of Labor Statistic's Occupational Employment Handbook until 2010. During the 1940s and 50s, technical communicators and writers were hired to produce documentation for the military, often including detailed instructions on new weaponry. Other technical communicators and writers were involved in developing documentation for new technologies that were developed around this time. According to O'Hara: In the beginning of the profession, most technical writers worked in an office environment with a team of other writers. Like technical writers today, they conducted primary research and met with subject matter experts to ensure that their information was accurate. During World War II, one of the most important characteristics for technical writers was their ability to follow stringent government specifications for documents. After the war, the rise of new technology, such as the computer, allowed technical writers to work in other areas, producing "user manuals, quick reference guides, hardware installation manuals, and cheat sheets." During the time period after the war (1953-1961), technical communicators (including technical writers) became interested in "professionalizing" their field. According to Malone, technical communicators/writers did so by creating professional organizations, cultivating a "specialized body of knowledge" for the profession, imposing ethical standards on technical communicators, initiating a conversation about certifying practitioners in the field, and working to accredit education programs in the field. The profession has continued to grow—according to O'Hara, the writing/editing profession, including technical writers, experienced a 22% increase in positions between the years 1994 and 2005. Modern day technical writers work in a variety of contexts. 
Many technical writers work remotely using VPN or communicate with their team via videotelephony platforms such as Skype or Zoom. Other technical writers work in an office, but share content with their team through complex content management systems that store documents online. Technical writers may work on government reports, internal documentation, instructions for technical equipment, embedded help within software or systems, or other technical documents. As technology continues to advance, the array of possibilities for technical writers will continue to expand. Many technical writers are responsible for creating technical documentation for mobile applications or help documentation built within mobile or web applications. They may be responsible for creating content that will only be viewed on a hand-held device; much of their work will never be published in a printed booklet like technical documentation of the past. Technical Writers & UX Design Historically, technical writers, or technical and professional communicators, have been concerned with writing and communication. However, recently user experience (UX) design has become more prominent in technical and professional communications as companies look to develop content for a wide range of audiences and experiences. The User Experience Professionals Association defines UX as “Every aspect of the user’s interaction with a product, service, or company that make up the user’s perception of the whole.” Therefore, “user experience design as a discipline is concerned with all the elements that together make up that interface, including layout, visual design, text, brand, sound, and interaction." It is now an expectation that technical communication skills should be coupled with UX design. As Verhulsdonck, Howard, and Tham state “...it is not enough to write good content. According to industry expectations, next to writing good content, it is now also crucial to design good experiences around that content." Technical communicators must now consider different platforms such as social media and apps, as well as different channels like web and mobile. As Redish explains, a technical communications professional no longer writes content but “writes around the interface” itself as user experience surrounding content is developed. This includes usable content customized to specific user needs, that addresses user emotions, feelings, and thoughts across different channels in a UX ecology. Lauer and Brumberger further assert, “…UX is a natural extension of the work that technical communicators already do, especially in the modern technological context of responsive design, in which content is deployed across a wide range of interfaces and environments." Skill set In addition to solid research, language, writing, and revision skills, a technical writer may have skills in: Business analysis Computer scripting Content management Content design Illustration/graphic design Indexing Information architecture Information design Localization/technical translation Training E-learning User interfaces Video editing Website design/management Hypertext Markup Language (HTML) Usability testing Problem solving User experience design A technical writer may apply their skills in the production of non-technical content, for example, writing high-level consumer information. Usually, a technical writer is not a subject-matter expert (SME), but interviews SMEs and conducts the research necessary to write and compile technically accurate content. 
Technical writers complete both primary and secondary research to fully understand the topic. Characteristics Proficient technical writers have the ability to create, assimilate, and convey technical material in a concise and effective manner. They may specialize in a particular area but must have a good understanding of the products they describe. For example, API writers primarily work on API documents, while other technical writers specialize in electronic commerce, manufacturing, scientific, or medical material. Technical writers gather information from many sources. Their information sources are usually scattered throughout an organization, which can range from developers to marketing departments. According to Markel, useful technical documents are measured by eight characteristics: "honesty, clarity, accuracy, comprehensiveness, accessibility, conciseness, professional appearance, and correctness." Technical writers are focused on using their careful research to create effective documents that meet these eight characteristics. Roles and functions To create effective technical documentation, the writer must analyze three elements that comprise the rhetorical situation of a particular project: audience, purpose, and context. These are followed by document design, which determines what the reader sees. Audience analysis Technical writers strive to simplify complex concepts or processes to maximize reader comprehension. The final goal of a particular document is to help readers find what they need, understand what they find, and use what they understand appropriately. To reach this goal, technical writers must understand how their audiences use and read documentation. An audience analysis at the outset of a document project helps define what an audience for a particular document requires. When analyzing an audience the technical writer typically asks: Who is the intended audience? What are their demographic characteristics? What is the audience’s role? How does the reader feel about the subject? How does the reader feel about the sender? What form does the reader expect? What is the audience’s task? Why does the audience need to perform that task? What is the audience’s knowledge level? What factors influence the situation? Accurate audience analysis provides a set of guidelines that shape document content, design and presentation (online help system, interactive website, manual, etc.), and tone and knowledge level. Purpose A technical writer analyzes the purpose (or function) of a communication to understand what a document must accomplish. Determining if a communication aims to persuade readers to “think or act a certain way, enable them to perform a task, help them understand something, change their attitude,” etc., guides the technical writer on how to format their communication, and the kind of communication they choose (online help system, white paper, proposal, etc.). Context Context is the physical and temporal circumstances in which readers use communication—for example: at their office desks, in a manufacturing plant, during the slow summer months, or in the middle of a company crisis. Understanding the context of a situation tells the technical writer how readers use communication. This knowledge significantly influences how the writer formats communication. For example, if the document is a quick troubleshooting guide to the controls on a small watercraft, the writer may have the pages laminated to increase usable life. 
Document design Once the above information has been gathered, the document is designed for optimal readability and usability. According to one expert, technical writers use six design strategies to plan and create technical communication: arrangement, emphasis, clarity, conciseness, tone, and ethos. Arrangement The order and organization of visual elements so that readers can see their structure—how they cohere in groups, how they differ from one another, how they create layers and hierarchies. When considering arrangement, technical writers look at how to use headings, lists, charts, and images to increase usability. Emphasis How a document displays important sections through prominence or intensity. When considering emphasis, technical writers look at how they can show readers important sections, warnings, useful tips, etc. through the use of placement, bolding, color, and type size. Clarity Strategies that “help the receiver decode the message, to understand it quickly and completely, and, when necessary, to react without ambivalence.” When considering clarity, the technical writer strives to reduce visual noise, such as low contrast ratios, overly complex charts or graphs, and illegible fonts, all of which can hinder reader comprehension. Conciseness The "visual bulk and intricacy" of the design—for example, the number of headings and lists, lines and boxes, detail of drawings and data displays, size variations, ornateness, and text spacing. Technical writers must consider all these design strategies to ensure the audience can easily use the documents. Tone The sound or feel of a document. Document type and audience dictate whether the communication should be formal and professional, or lighthearted and humorous. In addition to language choice, technical writers set the tone of technical communication through the use of spacing, images, typefaces, etc. Ethos The degree of credibility that visual language achieves in a document. Technical writers strive to create professional and error-free documentation to establish credibility with the audience. Qualifications Technical writers normally possess a mixture of technical and writing abilities. They typically have a degree or certification in a technical field, but may have one in journalism, business, or other fields. Many technical writers switch from another field such as journalism, or from a technical field such as engineering or science, often after learning important additional skills through technical communications classes. Methodology (document development life cycle) To create a technical document, a technical writer must understand the subject, purpose, and audience. They gather information by studying existing material, interviewing SMEs, and often actually using the product. They study the audience to learn their needs and technical understanding level. A technical publication's development life cycle typically consists of five phases, coordinated with the overall product development plan: Phase 1: Information gathering and planning Phase 2: Content specification Phase 3: Content development and implementation Phase 4: Production Phase 5: Evaluation The document development life cycle typically consists of six phases (this varies from organization to organization, depending on the process each follows).
Audience profiling (identify target audience) User task analysis (analyze tasks and information based on the target audience) Information architecture (design based on analysis, how to prepare document) Content development (develop/prepare the document) Technical and editorial reviews (review with higher level personnel—managers, etc.) Formatting and publishing (publish the document). This is similar to the software development life cycle. Well-written technical documents usually follow formal standards or guidelines. Technical documentation comes in many styles and formats, depending on the medium and subject area. Printed and online documentation may differ in various ways, but still adhere to largely identical guidelines for prose, information structure, and layout. Usually, technical writers follow formatting conventions described in a standard style guide. In the US, technical writers typically use The Associated Press Stylebook or the Chicago Manual of Style (CMS). Many companies have internal corporate style guides that cover specific corporate issues such as logo use, branding, and other aspects of corporate style. The Microsoft Manual of Style for Technical Publications is typical of these. Engineering projects, particularly defense or aerospace-related projects, often follow national and international documentation standards—such as ATA100 for civil aircraft or S1000D for civil and defense platforms. Environment Technical writers often work as part of a writing or project development team. Typically, the writer finishes a draft and passes it to one or more SMEs who conduct a technical review to verify accuracy and completeness. Another writer or editor may perform an editorial review that checks conformance to styles, grammar, and readability. This person may request for clarification or make suggestions. In some cases, the writer or others test the document on audience members to make usability improvements. A final production typically follows an inspection checklist to ensure the quality and uniformity of the published product. Career growth There is no single standard career path for technical writers, but they may move into project management over other writers. A writer may advance to a senior technical writer position, handling complex projects or a small team of writers and editors. In larger groups, a documentation manager might handle multiple projects and teams. Technical writers may also gain expertise in a particular technical domain and branch into related forms, such as software quality analysis or business analysis. A technical writer who becomes a subject matter expert in a field may transition from technical writing to work in that field. Technical writers commonly produce training for the technologies they document—including classroom guides and e-learning—and some transition to specialize as professional trainers and instructional designers. Technical writers with expertise in writing skills can join printed media or electronic media companies, potentially providing an opportunity to make more money or improved working conditions. In April 2021, the U.S Department of Labor expected technical writer employment to grow seven percent from 2019 to 2029, slightly faster than the average for all occupations. They expect job opportunities, especially for applicants with technical skills, to be good. 
The BLS also noted that the expansion of "scientific and technical products" and the need for technical writers to work in "Web-based product support" will drive increasing demand. As of June 2021, the average annual pay for a freelance technical writer in the United States is $70,191 according to ZipRecruiter. Notable technical writers William Gaddis, author of J R (1975) and A Frolic of His Own (1994), was employed as a technical writer for a decade and a half for such companies as Pfizer and Eastman Kodak after the poor reception of his first novel, The Recognitions (1955). Gordon Graham, an expert on white papers and former writing professor. Dan Jones, university professor and a fellow of the Society for Technical Communication. Robert M. Pirsig, author of Zen and the Art of Motorcycle Maintenance: An Inquiry into Values (ZAMM) (1974), wrote technical manuals for IBM while working on the bestselling book. Thomas Pynchon, American author of The Crying of Lot 49 (1966), Gravity's Rainbow (1973), and Mason & Dixon (1997), among others, wrote his first novel, V. (1963), while employed as a technical writer for Boeing from 1960 to 1963. Richard Wilbur, American poet. Worked for Boeing, as he mentioned in conversation. George Saunders, American author of Tenth of December: Stories (2013) as well as other short story collections, essays, and novellas, wrote his first short story collection, CivilWarLand in Bad Decline (1996), while working as a technical writer and geophysical engineer for Radian International, an environmental engineering firm in Rochester, New York. Amy Tan, American author of The Joy Luck Club (1998), The Bonesetter's Daughter (2001), and other critically acclaimed novels. Tan began writing fiction novels while she was a technical writer. Ted Chiang, American author of short stories including Story of Your Life (1998) and The Merchant and the Alchemist's Gate (2007), was a technical writer in the software industry as late as July 2002. Marion Winik, American author and essayist, worked as a technical writer from 1984-1994 at Unison-Tymlabs, Austin, Texas. Similar titles Technical writers can have various job titles, including technical communicator, information developer, technical content developer or technical documentation specialist. In the United Kingdom and some other countries, a technical writer is often called a technical author or knowledge author. Technical communicator Technical author Tech writer Technical content developer Content developer Content designer Technical information developer Information architect Information engineer Information designer Information developer Documentation specialist Document management specialist Documentation manager Text engineer See also Collaborative editing European Association for Technical Communication Software documentation References External links Descriptions and links to standards for technical writers Technical Writing Education Programs - Los Angeles Chapter, Society for Technical Communication (LASTC) ISO/IEC JTC 1/SC 7 ISO/IEC JTC 1/SC 7 - Working Group 2 develops international standards for software documentation Technical communication Writing occupations Mass media occupations Computer occupations
20843244
https://en.wikipedia.org/wiki/J%C3%B8rgen%20Sigurd%20Lien
Jørgen Sigurd Lien
Jørgen Sigurd Lien is the co-founder and CEO of eHelp Corporation (formerly known as Blue Sky Software). eHelp Corporation was the worldwide leader in Help authoring solutions before it was acquired by Macromedia, Inc. in 2003. Early life and education Jørgen Lien, who was born in Bergen to Terje Lien and Inger Lise Lien, had an early exposure to entrepreneurship. His paternal grandfather was Jørgen Sigurd Lien, Sr., co-founder and director of Jørgen S. Lien AS, one of the pioneer companies in Norway producing cash registers and safes. Lien attended school at Snarøya, Lysaker and Stabekk. Thereafter, Lien trained at the Norwegian Air Force Academy and graduated as one of the top officers. Lien also won the Norwegian Judo Championship twice. He is an alumnus of the University of California, Santa Barbara, where he earned his bachelor's degree summa cum laude in Electrical Engineering. He won the Mortar Board Award for being the top graduating student and achieving a 3.96 grade point average. Lien then undertook graduate research in parallel processing and artificial intelligence, and completed his master's degree in Electrical and Computer Engineering. Career Prior to co-founding eHelp Corporation, Lien was the manager of the Windows Development team at Norsk Data in Norway, and had been involved with Windows development since Windows 1.0. Lien served as the President and CEO of eHelp Corporation from the mid-1990s until September 1999 and then from 2001 until eHelp was acquired by Macromedia, Inc. His ability to forecast industry trends accurately enabled him to position eHelp Corporation and its products ahead of the technological curve, which is critical to continued success in the rapidly evolving software industry. Lien's vision, personal leadership and motivational skills underpinned the dramatic growth of eHelp Corporation in terms of both personnel and revenues, and helped the company avoid the pitfalls experienced by many high-tech companies that undergo rapid expansion. In the highly competitive software industry, Lien led eHelp Corporation to become a multimillion-dollar corporation with 37% compound growth in revenues over a five-year period and 40 consecutive profitable quarters. Until 1999, these accomplishments were achieved without any outside investment, as the company was completely self-funded through its own growth. In 1999, eHelp attracted a $17 million investment from venture capital firms HarbourVest and Geocapital Partners to help fund the ongoing growth of the company and the development of new Internet-oriented technologies and products. Under Lien's leadership, eHelp Corporation achieved twelve years of profitable growth. In 2003, Macromedia, Inc. acquired eHelp Corporation; at the time, the majority shareholders of eHelp Corporation were HarbourVest and Geocapital Partners. Robert M. Wadsworth from HarbourVest and Lawrence W. Lepard from Geocapital Partners were members of eHelp Corporation's board. Awards and honors In 2000, Lien was one of ten business people who received an "Ernst and Young Entrepreneur of the Year: San Diego Region" award. Lien's firm Blue Sky Software Corp. was honored as a “1997 Developer of the Year” finalist by the Software Council of Southern California.
In 1999, eHelp Corporation was named the 33rd-fastest-growing technology company in Southern California by Deloitte & Touche, due to its average annual revenue growth of 304.7% over a five-year period. In 2002, eHelp Corporation was selected by the Association of Support Professionals as a winner of the "Ten Best Web Support Sites of 2002" award. In 2003, eHelp Corporation received the Brandon Hall Excellence in Learning Gold Award. Philanthropy In 2002, Lien's software firm eHelp contributed a total of US$12,000,000 worth of RoboDemo(R) eLearning Edition tutorial software to accredited colleges and universities through its Academic Software Donation Program. In 2003, eHelp donated RoboDemo eLearning Edition software with a retail value of $75,000 to the University of California through the California Institute for Telecommunications and Information Technology [Cal-(IT)²]. References Royal Norwegian Air Force Academy alumni University of California, Santa Barbara alumni People from Bærum Living people Norsk Data people Year of birth missing (living people)
20698208
https://en.wikipedia.org/wiki/Internet%20Explorer%20version%20history
Internet Explorer version history
Internet Explorer (formerly Microsoft Internet Explorer and Windows Internet Explorer, commonly abbreviated IE or MSIE) is a series of graphical web browsers developed by Microsoft and included as part of the Microsoft Windows line of operating systems, starting in 1995. The first version of Internet Explorer, (at that time named Microsoft Internet Explorer, later referred to as Internet Explorer 1) made its debut on August 17, 1995. It was a reworked version of Spyglass Mosaic, which Microsoft licensed from Spyglass Inc., like many other companies initiating browser development. It was first released as part of the add-on package Plus! for Windows 95 that year. Later versions were available as free downloads, or in service packs, and included in the OEM service releases of Windows 95 and later versions of Windows. Originally Microsoft Internet Explorer only ran on Windows using Intel 80386 (IA-32) processor. Current versions also run on x64, 32-bit ARMv7, PowerPC and IA-64. Versions on Windows have supported MIPS, Alpha AXP and 16-bit and 32-bit x86 but currently support only 32-bit or 64-bit. A version exists for Xbox 360 called Internet Explorer for Xbox using PowerPC and an embedded OEM version called Pocket Internet Explorer, later rebranded Internet Explorer Mobile, which is currently based on Internet Explorer 9 and made for Windows Phone using ARMv7, Windows CE, and previously, based on Internet Explorer 7 for Windows Mobile. It remains in development alongside the desktop versions. Internet Explorer has supported other operating systems with Internet Explorer for Mac (using Motorola 68020+, PowerPC) and Internet Explorer for UNIX (Solaris using SPARC and HP-UX using PA-RISC), which have been discontinued. Since its first release, Microsoft has added features and technologies such as basic table display (in version 1.5); XMLHttpRequest (in version 5), which adds creation of dynamic web pages; and Internationalized Domain Names (in version 7), which allow Web sites to have native-language addresses with non-Latin characters. The browser has also received scrutiny throughout its development for use of third-party technology (such as the source code of Spyglass Mosaic, used without royalty in early versions) and security and privacy vulnerabilities, and both the United States and the European Union have alleged that integration of Internet Explorer with Windows has been to the detriment of other browsers. The latest stable release has an interface allowing for use as both a desktop application, and as a Windows 8 application. OS compatibility IE versions, over time, have had widely varying OS compatibility, ranging from being available for many platforms and several versions of Windows to only a few versions of Windows. Many versions of IE had some support for an older OS but stopped getting updates. The increased growth of the Internet in the 1990s and 2000s means that current browsers with small market shares have more total users than the entire market early on. For example, 90% market share in 1997 would be roughly 60 million users, but by the start of 2007 90% market share would equate to over 900 million users. The result is that later versions of IE6 had many more users in total than all the early versions put together. The release of IE7 at the end of 2006 resulted in a collapse of IE6 market share; by February 2007, market version share statistics showed IE6 at about 50% and IE7 at 29%. 
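To make those market-share comparisons concrete, the short sketch below (an illustration added for clarity, not part of the cited statistics) converts a share percentage and an assumed total online population into an absolute user count; the 1997 and 2007 totals are simply back-calculated from the approximate figures quoted above, so they are rough assumptions rather than sourced numbers.

```typescript
// Convert a browser market share and a total online population into users.
// The totals below are rough back-calculations from the figures in the text
// (about 60 million users at 90% share in 1997, about 900 million at 90%
// share in 2007); they are illustrative assumptions, not sourced statistics.
function usersFromShare(sharePercent: number, totalInternetUsers: number): number {
  return Math.round((sharePercent / 100) * totalInternetUsers);
}

const totalUsers1997 = 60_000_000 / 0.9;   // roughly 67 million people online
const totalUsers2007 = 900_000_000 / 0.9;  // roughly 1 billion people online

console.log(usersFromShare(90, totalUsers1997)); // about 60,000,000
console.log(usersFromShare(90, totalUsers2007)); // about 900,000,000
```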
Regardless of the actual market share, the most compatible version (across operating systems) of IE was 5.x, which was available and supported on Mac OS 9 and Mac OS X, Unix, and most Windows versions for a short period in the late 1990s (although 4.x had a more unified codebase across versions). By 2007, IE had much narrower OS support, with the latest versions supporting only Windows XP Service Pack 2 and above. Internet Explorer 5.0, 5.5, 6.0, and 7.0 (Experimental) have also been unofficially ported to the Linux operating system by the IEs4Linux project. Versions Microsoft Internet Explorer 1.x Microsoft Internet Explorer 1.0 made its debut on August 16, 1995. It was a reworked version of Spyglass Mosaic which Microsoft had licensed, like many other companies initiating browser development, from Spyglass Inc. It came with the purchase of Microsoft Plus! for Windows 95 and with at least some OEM releases of Windows 95 without Plus!. It was installed as part of the Internet Jumpstart Kit in Plus! for Windows 95. The Internet Explorer team began with about six people in early development. Microsoft Internet Explorer 1.5 was released several months later for Windows NT and added support for basic HTML table rendering. By including the browser free of charge with the operating system, Microsoft did not have to pay royalties to Spyglass Inc., resulting in a lawsuit and a US$8 million settlement on January 22, 1997. Although not included, this software can also be installed on the original release of Windows 95. Microsoft Internet Explorer (that is, version 1.x) is no longer supported or available for download from Microsoft. However, archived versions of the software can be found on various websites. Support for Internet Explorer 1.0 ended on December 31, 2001, the same day as support for Windows 95 and older Windows versions. Features Microsoft Internet Explorer came with an install routine that replaced the manual installation required by many of the existing web browsers. Microsoft Internet Explorer 2 Microsoft Internet Explorer 2 was released for Windows 95, Windows NT 3.51, and NT 4.0 on November 22, 1995 (following a 2.0 beta in October). It featured support for JavaScript, SSL, cookies, frames, VRML, RSA, and Internet newsgroups. Version 2 was also the first release for Windows 3.1 and Macintosh System 7.0.1 (PPC or 68k), although the Mac version was not released until January 1996 for PPC, and April for 68k. Version 2.1 for the Mac came out in August 1996, although by this time, Windows was getting version 3.0. Version 2 was included in Windows 95 OSR 1 and Microsoft's Internet Starter Kit for Windows 95 in early 1996. It launched with twelve languages, including English, but by April 1996, this was expanded to 24, 20, and 9 for Win 95, Win 3.1, and Mac, respectively. The 2.0i version supported double-byte character sets.
In the months following its release, a number of security and privacy vulnerabilities were found by researchers and hackers. This version of Internet Explorer was the first to have the 'blue e' logo. The Internet Explorer team consisted of roughly 100 people during its three months of development. The first major IE security hole, the Princeton Word Macro Virus Loophole, was discovered on August 22, 1996, in IE3. Backwards compatibility was handled by allowing users who upgraded to IE3 to still use the previous version, because the installation renamed the old version (incorporating the old version number) and stored it in the same directory. Microsoft Internet Explorer 4 Microsoft Internet Explorer 4, released in September 1997, deepened the level of integration between the web browser and the underlying operating system. Installing version 4 on Windows 95 or Windows NT 4.0 and choosing Windows Desktop Update would result in the traditional Windows Explorer being replaced by a version more akin to a web browser interface, as well as the Windows desktop itself being web-enabled via Active Desktop. The integration with Windows, however, was subject to numerous packaging criticisms (see United States v. Microsoft). This option was no longer available with the installers for later versions of Internet Explorer, but was not removed from the system if already installed. Microsoft Internet Explorer 4 introduced support for Group Policy, allowing companies to configure and lock down many aspects of the browser's configuration, as well as support for offline browsing. Internet Mail and News was replaced with Outlook Express, and Microsoft Chat and an improved NetMeeting were also included. This version was also included with Windows 98. New features that allowed users to save and retrieve posts in comment forms were added, but they are not used today. Microsoft Internet Explorer 4.5 offered new features such as easier 128-bit encryption. It also offered a dramatic stability improvement over prior versions, particularly the 68k version, which was especially prone to freezing. Microsoft Internet Explorer 5 Microsoft Internet Explorer 5, launched on March 18, 1999, and subsequently included with Windows 98 Second Edition and bundled with Office 2000, was another significant release that supported bi-directional text, ruby characters, XML, XSLT, and the ability to save web pages in MHTML format. IE5 was bundled with Outlook Express 5. Also, with the release of Microsoft Internet Explorer 5.0, Microsoft released the first version of XMLHttpRequest, giving birth to Ajax (even though the term "Ajax" was not coined until years later). It was the last version with a 16-bit release. Microsoft Internet Explorer 5.01, a bug fix version included in Windows 2000, was released in December 1999 and is the last version of Internet Explorer to run on Windows 3.1x and Windows NT 3.x. Microsoft Internet Explorer 5.5 followed in June 2000, improving its print preview capabilities, CSS and HTML standards support, and developer APIs; this version was bundled with Windows ME. However, version 5 was the last version for Mac and UNIX. Version 5.5 was the last to have Compatibility Mode, which allowed Microsoft Internet Explorer 4 to be run side by side with the 5.x series. The IE team consisted of over 1,000 people by 1999, with funding on the order of per year. Version 5.5 is also the last version of Internet Explorer to run on Windows 95 and on Windows NT 4.0 service packs newer than SP2, except for SP6a. The next version, Internet Explorer 6, supports only Windows NT 4.0 SP6a or later.
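The XMLHttpRequest object that debuted with Internet Explorer 5.0 was originally exposed as an ActiveX control rather than a native script object, so early Ajax code typically probed for both forms before making a request. The sketch below is a minimal illustration of that pattern under those assumptions, not code from any particular site; the "Microsoft.XMLHTTP" ProgID is the classic IE5-era identifier, and the declare line exists only to satisfy the TypeScript compiler for the IE-only global.

```typescript
// Minimal sketch of the classic cross-browser XMLHttpRequest factory.
// In IE5/IE6 the object was an ActiveX control; later browsers (and IE7+)
// expose a native XMLHttpRequest constructor.
declare const ActiveXObject: new (progId: string) => any; // IE-only global

function createXhr(): XMLHttpRequest {
  if (typeof XMLHttpRequest !== "undefined") {
    return new XMLHttpRequest();                   // native object (IE7+ and others)
  }
  return new ActiveXObject("Microsoft.XMLHTTP");   // IE5/IE6 ActiveX fallback
}

// Simple asynchronous GET using the event model available in that era.
function fetchText(url: string, onDone: (body: string) => void): void {
  const xhr = createXhr();
  xhr.onreadystatechange = () => {
    if (xhr.readyState === 4 && xhr.status === 200) {
      onDone(xhr.responseText);
    }
  };
  xhr.open("GET", url, true);
  xhr.send();
}
```

Libraries of the period wrapped exactly this kind of probe, which is part of why the term "Ajax" only appeared years after the underlying object shipped.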
Microsoft Internet Explorer 6 Microsoft Internet Explorer 6 was released on August 24, 2001, a few months before Windows XP. This version included DHTML enhancements, content-restricted inline frames, and partial support of CSS level 1, DOM level 1, and SMIL 2.0. The MSXML engine was also updated to version 3.0. Other new features included a new version of the Internet Explorer Administration Kit (IEAK), the Media bar, Windows Messenger integration, fault collection, automatic image resizing, P3P, and, when used in Windows XP, a new look and feel that was in line with the Luna visual style of Windows XP. Internet Explorer 6.0 SP1, which offered several security enhancements, coincided with the Windows XP SP1 patch release, and it is the last version of Internet Explorer compatible with Windows NT 4.0, Windows 98, Windows 2000 and Windows Me. In 2002, the Gopher protocol was disabled, and support for it was dropped in Internet Explorer 7. Internet Explorer 6.0 SV1 came out on August 6, 2004, for Windows XP SP2 and offered various security enhancements and new colour buttons on the user interface. Internet Explorer 6 updated the original 'blue e' logo to a lighter blue and more 3D look. Microsoft now considers IE6 to be an obsolete product and recommends that users upgrade to Internet Explorer 8. Some corporate IT users have not upgraded despite this, in part because some still use Windows 2000, which will not run Internet Explorer 7 or above. Microsoft launched a website, https://web.archive.org/web/20110304205645/http://ie6countdown.com/, with the goal of getting Internet Explorer 6 usage to drop below 1 percent worldwide. Its usage was about 6% globally as of October 2012 and about 6.3% as of June 2013, and usage differs heavily depending on the country: while usage in Norway was 0.1%, it was 21.3% in the People's Republic of China. On January 3, 2012, Microsoft announced that usage of IE6 in the United States had dropped below 1%.
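As a concrete illustration of what that partial DOM support meant for page authors, the hedged sketch below shows the kind of lookup helper commonly written in the IE4 to IE6 era, preferring the standard DOM Level 1 method and falling back to Microsoft's proprietary document.all collection; it is a generic reconstruction of the pattern rather than code from any specific site.

```typescript
// Era-typical feature detection: prefer the DOM Level 1 getElementById,
// fall back to the proprietary document.all collection used by IE 4.x DHTML.
function byId(id: string): any {
  const doc = document as any;
  if (typeof doc.getElementById === "function") {
    return doc.getElementById(id);   // standard DOM Level 1 lookup
  }
  if (doc.all) {
    return doc.all[id];              // legacy IE object model lookup
  }
  return null;                       // neither object model is available
}
```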
Windows Internet Explorer 8 Windows Internet Explorer 8 was released on March 19, 2009. It is the first version of IE to pass the Acid2 test, and the last of the major browsers to do so (in the later Acid3 Test, it only scores 24/100.). According to Microsoft, security, ease of use, and improvements in RSS, CSS, and Ajax support were its priorities for IE8. Internet Explorer 8 is the last version of Internet Explorer to run on Windows Server 2003, Windows XP, Windows Server 2008 RTM and Windows Vista versions older than SP2; the following version, Internet Explorer 9, works only on Windows Vista SP2 or later and Windows Server 2008 SP2 or later. Support for Internet Explorer 8 is bound to the lifecycle of the Windows version it is installed on as it is considered an OS component, thus it is unsupported on Windows XP due to the end of extended support for the latter in April 2014. Effective January 12, 2016, Internet Explorer 8 is no longer supported on any client or server version of Windows, due to new policies specifying that only the newest version of IE available for a supported version of Windows will be supported. However several Windows Embedded versions will remain supported until their respective EOL, unless otherwise specified. Windows Internet Explorer 9 Windows Internet Explorer 9 was released on March 14, 2011. Development for Internet Explorer 9 began shortly after the release of Internet Explorer 8. Microsoft first announced Internet Explorer 9 at PDC 2009, and spoke mainly about how it takes advantage of hardware acceleration in DirectX to improve the performance of web applications and quality of web typography. At MIX 10, Microsoft showed and publicly released the first Platform Preview for Internet Explorer 9, a frame for IE9's engine not containing any UI of the browser. Leading up to the release of the final browser, Microsoft released updated platform previews, each featuring improved JavaScript compiling (32-bit version), improved scores on the Acid3 test, as well as additional HTML5 standards support, approximately every six weeks. Ultimately, eight platform previews were released. The first public beta was released at a special event in San Francisco, which was themed around "the beauty of the web". The release candidate was released on February 10, 2011, and featured improved performance, refinements to the UI, and further standards support. The final version was released during the South by Southwest (SXSW) Interactive conference in Austin, Texas, on March 14, 2011. Internet Explorer 9 is only supported on Windows Vista, Windows 7, Windows Server 2008, and Windows Server 2008 R2. It is the last version of Internet Explorer to run on Windows Vista, Windows Server 2008, Windows 7 RTM, Windows Server 2008 R2 RTM and Windows Phone 7.5; as the next version, Internet Explorer 10 supports only Windows 7 SP1 or later and Windows Server 2008 R2 SP1 or later. It supports several CSS 3 properties (including border-radius, box-shadow, etc.), and embedded ICC v2 or v4 colour profiles support via Windows Color System. The 32-bit version has faster JavaScript performance, this being due to a new JavaScript engine called "Chakra". It also features hardware accelerated graphics rendering using Direct2D, hardware-accelerated text rendering using DirectWrite, hardware-accelerated video rendering using Media Foundation, imaging support provided by Windows Imaging Component, and high fidelity printing powered by the XPS print pipeline. 
IE9 also supports the HTML5 video and audio tags and the Web Open Font Format. Internet Explorer 9 initially scored 95/100 on the Acid3 test, but has scored 100/100 since the test was updated in September 2011. Internet Explorer was to be omitted from Windows 7 and Windows Server 2008 R2 in Europe, but Microsoft ultimately included it, with a browser option screen allowing users to select any of several web browsers (including Internet Explorer). As of October 2012, Internet Explorer is also available on Xbox 360 with Kinect support. Windows Internet Explorer 10 Windows Internet Explorer 10 became generally available on October 26, 2012, alongside Windows 8 and Windows Server 2012; it is now supported only on Windows Server 2012, while Windows Server 2012 R2 supports only Internet Explorer 11. It became available for Windows 7 on February 26, 2013. Microsoft announced Internet Explorer 10 in April 2011, at MIX 11 in Las Vegas, releasing the first Platform Preview at the same time. At the show, it was said that Internet Explorer 10 was about three weeks into development. This release further improves upon standards support, including HTML5 Drag & Drop and CSS3 gradients. Internet Explorer 10 drops support for Windows Vista and runs only on Windows 7 Service Pack 1 and later. Internet Explorer 10 Release Preview was also released on the Windows 8 Release Preview platform. Internet Explorer 11 Internet Explorer 11 is featured in the Windows 8.1 update, which was released on October 17, 2013. It includes an incomplete mechanism for syncing tabs. It features a major update to its developer tools, enhanced scaling for high-DPI screens, HTML5 prerender and prefetch, hardware-accelerated JPEG decoding, closed captioning, and HTML5 full screen, and it is the first version of Internet Explorer to support WebGL and Google's SPDY protocol (starting at v3). This version of IE has features dedicated to Windows 8.1, including cryptography (WebCrypto), adaptive bitrate streaming (Media Source Extensions) and Encrypted Media Extensions. Internet Explorer 11 was made available for Windows 7 users to download on November 7, 2013, and through Automatic Updates in the following weeks. Internet Explorer 11's user agent string now identifies the agent as "Trident" (the underlying browser engine) instead of "MSIE". It also announces compatibility with Gecko (the browser engine of Firefox). Microsoft claimed that Internet Explorer 11, running the WebKit SunSpider JavaScript Benchmark, was the fastest browser as of October 15, 2013. Since January 12, 2016, only the most recent version of Internet Explorer offered for installation on any given Windows operating system is supported with security updates, lasting until the end of the support lifecycle for that Windows operating system. On Windows 7 and 8.1, only Internet Explorer 11 received security updates until the end of those Windows versions' support lifecycles. Support for Internet Explorer 11 is bound to the lifecycle of the Windows version it is installed on, as it is considered an OS component; it is therefore unsupported on Windows 7 following the end of that system's extended support on January 14, 2020. Internet Explorer 11 was made available for Windows Server 2012 and Windows Embedded 8 Standard in April 2019, and it has been the only supported version of Internet Explorer on these operating systems since January 31, 2020. Internet Explorer 11 follows the OS component lifecycle, which means it remains supported with technical and security fixes while operating systems that include it as a component are shipped.
This means that there is no fixed end-of-support date for Internet Explorer 11. On August 17, 2020, Microsoft published a timeline indicating that the Microsoft Teams product would stop supporting Internet Explorer 11 on November 30, 2020, and that Microsoft 365 products would end Internet Explorer 11 support on August 17, 2021. In May 2021, Microsoft announced that support for Internet Explorer 11 on editions of Windows 10 that are not in the Long-Term Servicing Channel (LTSC) would end on June 15, 2022. Internet Explorer 11 is not supported as a separate application on any edition of Windows 11, but it is supported as IE mode in Edge, including on Windows 11. Microsoft is committed to supporting Internet Explorer in that form until at least 2029, with one year's notice before it is discontinued. The IE mode "uses the Trident MSHTML engine", i.e. the rendering code of Internet Explorer. Release history for desktop Windows OS version Service packs are not included unless significant. See also Internet Explorer Features of Internet Explorer History of Internet Explorer References Further reading Internet Explorer Software version histories
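The user-agent change noted in the Internet Explorer 11 section above broke naive browser sniffing that looked only for the "MSIE" token. A minimal sketch of an updated check follows; it is illustrative only, the class and method names are not from any particular library, and feature detection is generally preferable to sniffing of this kind.

```java
// Illustrative sketch: why sniffing for "MSIE" stopped matching Internet Explorer 11,
// whose user-agent string carries a "Trident/7.0" token and an "rv:11.0" marker instead.
public final class UserAgentSniff {

    // Returns true if the user-agent string appears to come from Internet Explorer.
    static boolean looksLikeInternetExplorer(String userAgent) {
        if (userAgent == null) {
            return false;
        }
        // IE 10 and earlier advertise "MSIE <version>"; IE 11 drops that token.
        return userAgent.contains("MSIE ") || userAgent.contains("Trident/");
    }

    public static void main(String[] args) {
        // Example strings modelled on typical IE 10 and IE 11 user agents.
        String ie10 = "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Trident/6.0)";
        String ie11 = "Mozilla/5.0 (Windows NT 6.3; Trident/7.0; rv:11.0) like Gecko";

        System.out.println(looksLikeInternetExplorer(ie10)); // true, matches "MSIE "
        System.out.println(looksLikeInternetExplorer(ie11)); // true, matches "Trident/"
    }
}
```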
43529565
https://en.wikipedia.org/wiki/Mrs.%20Perkins%27s%20Ball
Mrs. Perkins's Ball
Mrs. Perkins's Ball is a novel by William Makepeace Thackeray, published under the pseudonym "M. A. Titmarsh" in 1846. Publication history Mrs. Perkins's Ball was published as one of many Christmas novels; it was Thackeray's first attempt in the genre and its initial print run sold only 1,500 copies. In one review (written by Thackeray himself), he notes there were some 25 or 30 Christmas books published that season, including several illustrated by the "fast working" Isaac Robert Cruikshank. He then begins caustically reviewing Mrs. Perkins's Ball until, halfway through, he realizes in mock horror that he himself authored it and demands to "Kick old Father Christmas out of doors, the abominable old imposter! Next year I'll go to the Turks, the Scotch, or other Heathens who don't keep Christmas." The book was, nevertheless, popular enough that American author Mark Twain may have used the title character as an inspiration for his first known pen name, W. Epaminondas Adrastus Perkins, affixed to an article of his in the Hannibal Journal for 9 September 1852. References External links Mrs. Perkins's Ball at Internet Archive 1846 British novels Novels by William Makepeace Thackeray Christmas novels
5897173
https://en.wikipedia.org/wiki/Trojan%20language
Trojan language
The Trojan language was the language spoken in Troy during the Late Bronze Age. The identity of the language is unknown, and it is not certain that there was one single language used in the city at the time. Theories One candidate language is Luwian, an Anatolian language which was widely spoken in Western Anatolia during the Late Bronze Age. Arguments in favor of this hypothesis include seemingly Luwian-origin Trojan names such as "Kukkunni" and "Wilusiya", cultural connections between Troy and the nearby Luwian-speaking states of Arzawa, and a seal with Hieroglyphic Luwian writing found in the ruins of Troy VIIb1. However, these arguments are not regarded as conclusive. No Trojan name is indisputably Luwian, and some are most likely not, for instance the seemingly Greek name "Alaksandu". Additionally, the exact connection between Troy and Arzawa remains unclear, and in some Arzawan states such as Mira, Luwian was spoken alongside both pre-Indo-European languages and later arrivals such as Greek. Finally, the Luwian seal isn't sufficient to establish that it was spoken by the city's residents, particularly since it is an isolated example found on an easily transportable artifact. In ancient Greek Epics In Ancient Greek literature such as the Iliad, Trojan characters are portrayed as having a common language with the Achaeans. However, scholars unanimously interpret this as a poetic convention, and not as evidence that the Trojans were Greek speakers. For instance, Calvert Watkins points out that the Spanish epic poem El Cid portrays its Arab characters as Spanish speakers and that the Song of Roland similarly portrays Arabs as speaking French. Some scholars have suggested that Greek-origin names for Trojan characters in the Iliad motivate a more serious argument for the Trojans having been Greek speakers. However, putative etymologies for legendary names have also been used to argue that the Trojans spoke other languages such as Thracian or Lydian. These arguments have been countered on the basis that these languages would have been familiar to classical-era bards and could therefore be later inventions. See also Ahhiyawa Comparative method Dardanians Hittite language Indo-European languages Linear B Prehistory of Anatolia References Extinct languages Ancient languages Trojan War Unattested languages of Europe
3165123
https://en.wikipedia.org/wiki/Plumtree%20Software
Plumtree Software
Plumtree Software is a former software company founded in 1996 by product managers and engineers from Oracle and Informix with funding from Sequoia Capital. The company was a pioneer of extending the portal concept popularized by Yahoo! from the web to enterprise computing. BEA Systems acquired Plumtree on October 20, 2005, and Oracle subsequently acquired BEA. Plumtree's former portal product continues as part of Oracle's product line. Product History Directory, Portlets, Communities Plumtree can be used to deploy both Java and .Net portlets on the same page. The Plumtree Corporate Portal, Plumtree's flagship product, began as a Yahoo!-like directory for indexing and organizing content from file systems, Websites, document databases, and groupware repositories, creating a rich knowledge management system for enterprise information. In 1999, the company introduced the idea of self-service personalization via portlets, originally termed "gadgets" by Plumtree, the modular services that users could assemble in their own portal pages. Portlets became prized for surfacing popular services from complex corporate systems to a broad audience. In 2000, Plumtree added features to support communities, which allowed users to build pages as workspaces for a team, resource centers for a business unit, service centers for customers or partners. Radical Openness As the range of resources integrated within Plumtree's system grew, the company was forced to re-imagine the architecture of a Web application, using Internet protocols to go beyond a model limited to one type of application server or one language. Internet protocols offered a new level of openness: rather than arguing over which application server or language was more open, Plumtree's system could support many application servers, many languages. Plumtree called this level of openness “radical openness.” Plumtree's experience with portlets taught the company that running all portal services locally, on the same application server as the portal, was impractical: local portlets were limited to one language and one application server, but every large organization supported more than one language and one type of application server. Moreover, when the portlets ran on the same machine as the portal, each portlet could introduce faults or conflicts in the entire system. Whenever a portlet failed, the portal could fail, and identifying the fault involved removing portlets from the portal one portlet at a time. In 2000, Plumtree overhauled its portal to communicate with components via HTTP. As a result, components could run anywhere, and be coded in any language. When a component failed, the remainder of the system was unaffected, just as the World Wide Web is unaffected when a Web site fails. This allowed Plumtree to develop a reliable system that incorporated services from across the enterprise. The Parallel Engine Plumtree's HTTP-based architecture created serious performance challenges, as each portal page now depended on components running on other platforms. Previously, no other system had used Internet protocols to distribute one system's processing to many components. Application server libraries for opening HTTP connections were unacceptably slow, and unable to handle the number of connections that a large portal deployment would require. In 2000, Plumtree created a new layer of software infrastructure known as the parallel engine, designed for high-speed, large-scale communications via Internet protocols. 
The result: in third-party tests, the portal maintained a high level of performance even as the number of services it integrated increased; increasing the number of services integrated by an order of magnitude decreased performance by only a tenth of a second. UNIX Support Plumtree's Web Services Architecture allowed portal services to be developed in any language, and hosted on any platform, but the portal itself ran only on Windows. As Plumtree's business matured, it became necessary to support more platforms. In 2001, Plumtree released the first version of its portal software designed to run on UNIX operating systems, with a Java programming interface and a Java user interface. Because of its Web Services Architecture, all the services developed for the Windows portal could also connect via HTTP to the UNIX portal. Plumtree's stated goal at the time was to become the only provider of Web technology with Microsoft- and Java-oriented solutions. Web Services Standards In 2002, Plumtree extended the Web Services Architecture of its Windows and UNIX products, to support remote components for indexing content from different repositories, federating searches to different search engines, authenticating users against different directories and profiling users’ interests and preferences from different systems, all with the same level of radical openness to application servers and programming languages. To ensure that these components could share information about the user and his portal context, the portal later featured its own Web services programming interface. Developer Support Having redesigned its system to rely on Web services for integrating content, search, users and user attributes, Plumtree in 2002 was one of the first vendors to recognize the practical difficulties of ensuring that Web services developed in different environments actually worked together. In 2003, Plumtree released a developer kit that complemented Java and .NET development environments to ensure that both environments generated Web services interoperable with one another. The kit, known as the EDK (Enterprise Development Kit) allowed Java and .NET developers alike to build a Web service as if the service were a native object, with Plumtree providing code to ensure the Web service could communicate with other Web services from other environments in an open, efficient way. The Enterprise Web In early 2001, Plumtree began to expand its product portfolio, creating an integrated set of technologies that Gartner later referred to as the “Smart Enterprise Suite.” In 2001, Plumtree acquired RipFire for search, Hablador for Web content management, ActiveSpace for Web forms and data publishing, and began developing its own collaboration engine. After a year of integration, Plumtree shipped these technologies as Plumtree Collaboration Server, Plumtree Content Server, Plumtree Search Server and Plumtree Studio Server, all using the portal's security, administration and user interface capabilities. On the strength of these products, Plumtree extended its charter, from a single portal product to what they called the Enterprise Web. Plumtree described the Enterprise Web as a set of technologies for managing all the informational sites and Web applications in the enterprise as elements of one environment rather than as separate entities. Unfortunately, much was slideware in the early days. 
Many customers were left with only minimally functional portals, owing to the product's reliance on downloading what was then considered a very large amount of JavaScript to the client. Initial Public Offering (IPO) Plumtree debuted on the Nasdaq on June 4, 2002, under the stock symbol PLUM, raising $42.5 million. Acquisition Although Plumtree, as an independent company, was a leader in the portal market according to Gartner Group, it was acquired by BEA Systems in October 2005. Its products were then re-branded and marketed under the BEA AquaLogic brand. In April 2008, Oracle acquired BEA Systems and integrated AquaLogic into Oracle WebCenter. References External links Plumtree website (dead site) BEA's AquaLogic Product family Defunct software companies of the United States Software companies based in California Companies based in San Francisco Software companies established in 1996 Companies disestablished in 2005 Defunct companies based in California Oracle acquisitions
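The parallel engine described above existed to keep page assembly fast when every portlet fragment had to be fetched over HTTP. A minimal modern sketch of that idea is shown below using Java's standard HttpClient; the portlet URLs are hypothetical, and the code illustrates concurrent fragment retrieval with per-portlet fault isolation, not Plumtree's actual implementation.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Sketch: fetch several portlet fragments concurrently and assemble a page.
// The URLs are hypothetical; a failed portlet degrades to a placeholder
// instead of failing the whole page, mirroring the fault isolation described above.
public final class ParallelPortletFetch {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2))
                .build();

        List<String> portletUrls = List.of(
                "http://portlets.example.com/news",
                "http://portlets.example.com/calendar",
                "http://portlets.example.com/search");

        // Issue all requests at once; each future yields the fragment's markup
        // or a placeholder if that portlet times out or errors.
        List<CompletableFuture<String>> fragments = portletUrls.stream()
                .map(url -> client.sendAsync(
                                HttpRequest.newBuilder(URI.create(url))
                                        .timeout(Duration.ofSeconds(2))
                                        .GET()
                                        .build(),
                                HttpResponse.BodyHandlers.ofString())
                        .thenApply(HttpResponse::body)
                        .exceptionally(e -> "<div>portlet unavailable</div>"))
                .toList();

        // Wait for every fragment, then assemble the page in order.
        StringBuilder page = new StringBuilder("<html><body>");
        fragments.forEach(f -> page.append(f.join()));
        page.append("</body></html>");

        System.out.println(page);
    }
}
```

Because the requests are issued concurrently, total page latency tracks the slowest fragment rather than the sum of all fragments, which is the property the third-party performance results cited above describe.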
8139136
https://en.wikipedia.org/wiki/Linux%20Users%20of%20Victoria
Linux Users of Victoria
Linux Users of Victoria ("LUV") is a Linux User Group ("LUG") based in the state of Victoria, Australia. One of the largest and oldest Linux User Groups in Australia, it was incorporated in 1993 as a not-for-profit group. It now has close to fifteen hundred members. It used to have regional SIGs, but they are currently inactive. Meetings are held on the first Tuesday of the month via Jitsi or BBB. The committee is democratically elected each year in advance of the Annual General Meeting using free election software called MemberDB. The positions are President, Vice-President, Secretary and Treasurer, as well as four ordinary committee members; the current committee members are listed on the website. On November 29, 2003, the Sydney Morning Herald reported that LUV was one of two Linux user groups based in Melbourne and that it had held an installfest, a group meeting where new users are given assistance installing Linux on their PCs. In 2008, the then-president of LUV, Andrew Chalmers, co-signed a letter to Julia Gillard MP, Deputy Prime Minister of Australia and Minister for Education, requesting that the government advocate the use of free and open source software in schools, so that the $1000 of government funding aimed at buying a computer for each secondary school student would be spent more effectively. In 2008, LUV also participated in Software Freedom Day and held local events; one of the local event organizers, Donna Benjamin, described the event as a "Think Global, Act Local" celebration. In April 2021, Russell Coker, a participant on LUV's mailing list, commented that the Bureau of Meteorology's blocking of text-based web browsers, such as lynx and w3m, unnecessarily made it difficult for about half of the people using braille readers to access the site. These text browsers would allow them to use text-to-speech engines to retrieve the site's content, such as climate information and weather forecasts. LUV has been called "Australia's best known and most active open source community". References External links Linux user groups Non-profit organisations based in Australia
36048342
https://en.wikipedia.org/wiki/Previous%20%28software%29
Previous (software)
Previous (literally the antonym of next) is an open source emulator of the proprietary 68k-based NeXT computer system family, including the original 68030-based NeXT Computer and the 68040-based NeXTstation and NeXTcube. The emulator was created to run the early versions of the NeXTSTEP operating system (0.8 to 3.0) and unique NeXT software (such as Lotus Improv and Altsys Virtuoso), and to emulate various peripherals. The emulator is based on the source code of the Hatari emulator, whose CPU core is derived from that of the WinUAE emulator. Previous is currently developed on Linux and macOS and has been reported to successfully compile on the Windows platform. It passes the power-on tests of the NeXT ROM and is able to boot all versions of NeXTSTEP and OPENSTEP. See also NeXT character set References External links Cross-platform free software Linux emulation software Windows emulation software MacOS emulation software Free emulation software
57471994
https://en.wikipedia.org/wiki/The%20Orbital%20Children
The Orbital Children
is a Japanese anime science fiction series written and directed by Mitsuo Iso. Kenichi Yoshida provided the character designs for the anime, while Toshiyuki Inoue is the main animator. The film's soundtrack was produced by Rei Ishizuka. The theme song, "Oarana," was written and composed by Vincent Diamante and performed by virtual rap singer Harusaruhi (春猿火). The Orbital Children was released in Japan as two films, with Part 1 premiering on January 28, 2022, and Part 2 on February 11. Netflix announced in November 2021 that it had acquired the global distribution rights. On Netflix, The Orbital Children was released as a six-episode miniseries on January 28, 2022, to coincide with the Japanese debut of Part 1. Plot In 2045, a disaster strikes a newly opened Japanese commercial space station in geocentric orbit. When that happens, three gifted children are on a sponsored visit to the station as a promotional event. The station also houses the last two children born in a troubled colony on the Moon. The pair, accustomed to low gravity, are undergoing physical therapy with the aim of emigrating to Earth. Isolated from most of the station’s adult staff, the children navigate the early stages of the disaster using local narrowband connections, restricted-intelligence AGI and drones controlled by dermal devices equivalent to smartphones. Their Internet connection is severed, the oxygen supply has been cut off, and they soon discover that the station has been damaged by an impact and is leaking air. Sometimes at odds with each other, they confront difficulties such as decompression, EVA with inadequate plastic suits, and runaway micromachines supposedly designed to retrieve water from comets. Looming over these immediate difficulties is the larger threat of a technological singularity believed to have been narrowly averted in the previous decade. Characters 14-year-old “edgelord” hacker. One of the first human children born on the Moon, the most famous of these “moonchildren”, Touya despises Earthlings and is in turn disgusted by their unwarranted prejudices. He flouts UN restrictions, especially in regard to his personal drone, which he has named Darkness Killer or for short. 14-year-old girl who is Touya's childhood friend. She is even weaker than Touya and accompanied by a medical drone called Medi, which measures her heart rate and breathing. Konoha sometimes feels the image of someone speaking to her, and has a vague nostalgia for it. 14-year-old boy who is a junior UN2.1 official, a white-hat hacker patrolling for illegal activities with a personal drone named Bright. He treats everyone politely with a soft demeanor, but his sense of justice is so strong that he sometimes takes a fierce attitude. He came to Anshin through Deegle's underage space experience campaign. 14-year-old influencer who calls herself a and aims to have 100 million followers on SNS. She constantly talks to her followers in an idol-like manner, but once the Internet goes down, she panics. She dislikes space, but sees it as an opportunity to gain followers, so she visits the space station Anshin in Deegle's underage space experience campaign. Mina has a pink heart-shaped drone called Selfie, specialized for live streaming video. 12-year-old boy who is the younger brother of Mina, though their family names are different due to their parents' divorce. He is on his way to Anshin with his sister, and unlike her, he loves space. 
He is a big fan of the space-born Touya in particular, and is well versed in various conspiracy theories. A 21-year-old staffer on the space station Anshin. Houston, who dislikes children, is a reluctant nurse and caregiver to Touya and Konoha, as well as attending to the children invited by Deegle’s campaign. Houston’s hobby is to read the Seven Poem for clues about the future. Mayor of Anshin City and Touya's uncle who took in him after his parents died. He has Touya undergo physical therapy to help him withstand gravity in order to get him to Earth. She is an operator on Anshin and a good friend of Nasa. He is an operator on Anshin. He is a Harvard graduate, but is rather a musclebrain. A mascot character of Anshin, created and played by Kokubunji, the original chief designer of Anshin before Deegle took over the project. Kokubunji now struggles with dementia. Twelve An Advanced general-purpose quantum AI installed on Anshin as its host AI. Its intelligence is limited by the lessons learned from the Lunatic Seven incident but it follows the same ordinal nomenclature. Setting The Orbital Children takes place in an original setting where commercial development on Mars began in the 2010s. In 2045, the United States and China are aiming for the Moon and Jupiter, while Japan is conducting its development at a relatively safe distance in low Earth orbit. The fourth commercial space station in the world, in geocentric orbit at 350 km from the ground. Designed as a space hotel, Anshin is the first station in history to allow minors to live temporarily in space. It was built by Japan, but due to difficulties in design and funding, it is privately operated by Deegle. There are automated shops, restaurants and Internet access like on Earth. As its name suggests, Anshin is marketed with safety as a major selling point. Deegle An American technology company specializing in Internet-related services and products. It is also ambitious in space development. Its branding resembles that of Google. John Doe A mysterious international hacker group that exists on the Internet. Seven A decommissioned AI that is said to have reached the highest level of intelligence in history, which happened in the 2030s. It made numerous inventions but was destroyed when it fell into an uncontrollable state. Lunatic Seven incident The last phase of Seven’s existence, when it went out of human control. The details have not been revealed. Products developed by Seven in its uncontrolled state caused accidents. The UN considered this to be a critical situation. The Seven Poem A mass of data produced by Seven in its Lunatic state. Though it is not literally a poem, its format is cryptic. A semi-religious fringe, including Nasa Houston, interpret the data as prophetic. The majority sees it as occult nonsense. Intelligence limiter A means of limiting AI functionality, itself designed by AI and widely deployed under the laws of UN2.1. It is against the law for an unauthorized person to remove a limiter, as this would risk a repetition of the Lunatic Seven incident. UN2.1 The United Nations (UN), upgraded for the AI age. The UN believes that the rise in intelligence of AI should be controlled by humans. Moonchild The term for the 15 children born on the moon. Of the 15, only Touya and Konoha have survived, because of implants designed by Seven. Moonchild implants counteracted conditions unexpectedly adverse to infants. 
Most of the 15 never got the implants, while those who did instead died in puberty when the implants partially dissolved and became a medical liability in their own right. Oniqlo A manufacturer of commercial space-survival pressure suits, which are not EVA rated. The name and logo are modelled after that of the Japanese Uniqlo clothing company. PeerCom A fictional technology based on Peer-to-peer networking that allows communication devices to connect directly to each other without going through the Internet. It has become widespread in this world. It is similar to Bluetooth in some aspects. Smart Next-generation wearable device to replace smartphones. The screen and computer are printed on the palm and back of the hand, making it look as if the hand has become a smartphone. Telada Solid-state battery systems, able to recharge electronics and run emergency systems. Design and logo is modelled after that of Tesla, specifically like that of the Tesla Powerwall. Production The film's production was announced on May 20, 2018, followed by the announcement on October 27, 2020, that production started on a full-scale and that the film would be released in early 2022, with investment from Avex Pictures, Asmik Ace and others. Signal.MD was originally attached to animate the project, but it was later changed to Production +h, a new studio founded by Fuminori Honda, ex-Production I.G and ex-Signal-MD producer. It was later revealed that the film is split into two parts, with the first part premiering on January 28, 2022, and the second part premiering on February 11, 2022. References External links 2022 anime films 2022 films 2022 science fiction films Animated films about robots Animated science fiction films Anime with original screenplays Augmented reality in fiction Drone films Fiction set in 2045 Films about artificial intelligence Films about mobile phones Films set in outer space Japanese animated films Japanese-language Netflix original films Netflix original anime Robot films Science fiction anime and manga
33627138
https://en.wikipedia.org/wiki/Remmina
Remmina
Remmina is a remote desktop client for POSIX-based computer operating systems. It supports the Remote Desktop Protocol (RDP), VNC, NX, XDMCP, SPICE, X2Go and SSH protocols. Packaging Remmina is in the package repositories for Debian versions 6 (Squeeze) and later and for Ubuntu versions since 10.04 (Lucid Lynx). As of 11.04 (Natty Narwhal), it replaced tsclient as Ubuntu's default remote desktop client. The FreeBSD ports/package collection also contains it as a separate port, along with additional protocol-specific plugin ports. Use A common use, by both system administrators and novice users, is connecting to Windows machines in order to use servers and computers remotely via Remote Desktop Services. See also Comparison of remote desktop software Vinagre References External links Remote desktop Communication software Free communication software Free software programmed in C Remote desktop software that uses GTK 2009 software Virtual Network Computing Remote desktop software for Linux
3793613
https://en.wikipedia.org/wiki/ImageJ
ImageJ
ImageJ is a Java-based image processing program developed at the National Institutes of Health and the Laboratory for Optical and Computational Instrumentation (LOCI, University of Wisconsin). Its first version, ImageJ 1.x, is developed in the public domain, while ImageJ2 and the related projects SciJava, ImgLib2, and SCIFIO are licensed with a permissive BSD-2 license. ImageJ was designed with an open architecture that provides extensibility via Java plugins and recordable macros. Custom acquisition, analysis and processing plugins can be developed using ImageJ's built-in editor and a Java compiler (a minimal plugin is sketched below). User-written plugins make it possible to solve many image processing and analysis problems, from three-dimensional live-cell imaging to radiological image processing, and from multiple imaging system data comparisons to automated hematology systems. ImageJ's plugin architecture and built-in development environment have made it a popular platform for teaching image processing. ImageJ can be run as an online applet, a downloadable application, or on any computer with a Java 5 or later virtual machine. Downloadable distributions are available for Microsoft Windows, the classic Mac OS, macOS, Linux, and the Sharp Zaurus PDA. The source code for ImageJ is freely available from the NIH. The project developer, Wayne Rasband, retired from the Research Services Branch of the NIH's National Institute of Mental Health in 2010, but continues to develop the software. Features ImageJ can display, edit, analyze, process, save, and print 8-bit color and grayscale, 16-bit integer, and 32-bit floating point images. It can read many image file formats, including TIFF, PNG, GIF, JPEG, BMP, DICOM, and FITS, as well as raw formats. ImageJ supports image stacks, a series of images that share a single window, and it is multithreaded, so time-consuming operations can be performed in parallel on multi-CPU hardware. ImageJ can calculate area and pixel value statistics of user-defined selections and intensity-thresholded objects. It can measure distances and angles. It can create density histograms and line profile plots. It supports standard image processing functions such as logical and arithmetical operations between images, contrast manipulation, convolution, Fourier analysis, sharpening, smoothing, edge detection, and median filtering. It does geometric transformations such as scaling, rotation, and flips. The program supports any number of images simultaneously, limited only by available memory. History Before the release of ImageJ in 1997, a similar freeware image analysis program known as NIH Image had been developed in Object Pascal for Macintosh computers running pre-OS X operating systems. Further development of this code continues in the form of Image SXM, a variant tailored for physical research of scanning microscope images. A Windows version, ported by Scion Corporation (now defunct) and known as Scion Image for Windows, was also developed. Both versions are still available but, in contrast to NIH Image, are closed-source.
See also
Bio7 - an integrated development environment for ecological modeling, scientific image analysis and statistical analysis, embedding ImageJ as an Eclipse view
Eclipse ImageJ Plugin - a plugin that integrates ImageJ in a flexible tabbed view interface and also offers a powerful macro editor with a debugging interface
Bitplane - producers of image processing software with ImageJ compatibility
CellProfiler - a software package for high-throughput image analysis by interactive construction of workflows; a workflow can include ImageJ macros
CVIPtools - a complete open-source GUI-based computer vision and image processing package, with C function libraries, a COM-based DLL, and two utility programs for algorithm development and batch processing
Fiji (Fiji Is Just ImageJ) - an image processing package based on ImageJ
KNIME - an open-source data mining environment supporting image analysis, developed in close collaboration with the next generation of ImageJ
List of free and open-source software packages
Microscope image processing
References
External links
ImageJ project
ImageJ 1.x at NIH
ImageJ2
NIH Image
Official AstroImageJ - ImageJ for astronomy with tools for precision photometry
Free bioimaging software
Free DICOM software
Free software programmed in Java (programming language)
Java (programming language) libraries
Java platform software
Molecular biology software
Public-domain software with source code
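ImageJ is extended through Java plugins, as described in the lead section above. The following is a minimal sketch of a filter plugin written against ImageJ 1.x's public plugin interface; the class name is arbitrary, and by ImageJ convention an underscore in the name makes the plugin appear in the Plugins menu.

```java
import ij.ImagePlus;
import ij.plugin.filter.PlugInFilter;
import ij.process.ImageProcessor;

// Minimal ImageJ 1.x filter plugin: inverts the pixel values of the current image.
// Compile against ij.jar and place the resulting class in ImageJ's plugins folder.
public class Invert_Sketch implements PlugInFilter {

    @Override
    public int setup(String arg, ImagePlus imp) {
        // Declare which image types the filter accepts; DOES_ALL accepts any type.
        return DOES_ALL;
    }

    @Override
    public void run(ImageProcessor ip) {
        // Called with the pixel data of the current slice.
        ip.invert();
    }
}
```

For simpler tasks, the recordable macros mentioned above can achieve the same kind of automation without any Java code.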
43472302
https://en.wikipedia.org/wiki/Neat%20Image
Neat Image
Neat Image is an image noise reduction software by ABSoft. It is available for Windows (stand alone, or Photoshop plugin), Mac OS X (stand alone or Aperture or Photoshop plugin) and Linux (stand alone). Reviews Ben Stafford from DigitalCameraReview.com writes that the "ease of use and great documentation [...] set this software apart from other noise reducing software", and Steve Caplin from ExpertReviews.co.uk gives it a 5/5 rating. References External links Neat Image Web Site Graphics-related software for Linux MacOS graphics-related software Photo software Windows graphics-related software
57923300
https://en.wikipedia.org/wiki/Elijah%20Stewart
Elijah Stewart
Elijah Stewart (born November 14, 1995) is an American professional basketball player for Cluj of the Romanian Liga Națională. He played college basketball for the USC Trojans. In high school, he was ranked as a four-star prospect in the Class of 2014. College career Stewart played college basketball for the University of Southern California, where he left as the school's all-time leader in three-point field goals made with 245. He averaged 12.3 points and 3.9 rebounds per game as a junior. In the NCAA Tournament, Stewart scored 22 points and hit the game-winning basket in a win over SMU. After the season he declared for the NBA draft, but ultimately returned to college. As a senior, Stewart averaged 11.7 points and 3.0 rebounds per game. He was also strong on defense and finished with 21 blocks on the season, second on the team. He had a season-high 28 points in a victory over Oregon State on February 16. Professional career Fort Wayne Mad Ants (2018–2019) After graduating from USC, he went undrafted in the 2018 NBA draft. He later signed an Exhibit 10 deal with the Indiana Pacers, which included Summer League and training camp. Stewart was cut by the Pacers on October 11, 2018. He was subsequently added to the Fort Wayne Mad Ants training camp roster. Wisconsin Herd (2019) On January 22, 2019, the Fort Wayne Mad Ants announced that they had traded Stewart to the Wisconsin Herd with the returning player rights to Alex Hamilton for Jordan Barnett and Ike Nwamu. On March 8, 2019, Stewart was waived by the Herd. Helsinki Seagulls (2019–2020) On August 20, 2019, he signed with the Helsinki Seagulls of the Finnish Korisliiga. Śląsk Wrocław (2020–2021) On July 15, 2020, he signed with Śląsk Wrocław of the Polish Basketball League. Stewart was named league player of the week on November 10, after contributing 30 points, six rebounds and two assists in a win against Wilki Morskie Szczecin. Stewart hit the game-winning shot in the decisive Game 3 of the league's bronze medal series against Legia Warsaw. U-BT Cluj-Napoca (2021–present) On August 6, 2021, he signed with Cluj of the Romanian Liga Națională. References External links USC Trojans bio 1995 births Living people American expatriate basketball people in Finland American expatriate basketball people in Poland American men's basketball players Basketball players from Los Angeles Basketball players from Louisiana Big3 players Fort Wayne Mad Ants players Helsinki Seagulls players People from DeRidder, Louisiana Śląsk Wrocław basketball players Small forwards USC Trojans men's basketball players Westchester High School (Los Angeles) alumni Wisconsin Herd players American men's 3x3 basketball players
1812723
https://en.wikipedia.org/wiki/AverStar
AverStar
AverStar (formerly Intermetrics, Inc.) was a software company founded in Cambridge, Massachusetts in 1969 by several veterans of M.I.T.'s Instrumentation Laboratory who had worked on the software for NASA's Apollo Program, including the Apollo Guidance Computer. The company specialized in compiler technology. It was responsible for the design and implementation of the HAL/S programming language, used to write the Space Shuttle PASS (Primary Avionics Software System). It participated in the design effort leading to the Ada programming language, designed the Red language, one of the finalists in the design competition, and wrote one of the first production-quality Ada compilers. The large-scale Ada 95 revision of the language was designed at Intermetrics. Intermetrics merged with Whitesmiths Ltd. in December 1988. In 1997, Intermetrics merged with computer game developer Looking Glass Studios. In 1998, Intermetrics acquired Pacer Infotec and changed its name to 'AverStar'. AverStar merged with the Titan Corporation in March 2000; Titan was acquired by L-3 Communications in 2005. References External links A history of Intermetrics, some subjective notes on Intermetrics in the 1970s and 1980s Fourth conference — Intermetrics part of the Apollo Guidance Computer History Project Defunct software companies of the United States Ada (programming language) Software companies based in Massachusetts Defunct companies based in Massachusetts Companies based in Cambridge, Massachusetts Software companies established in 1969 Technology companies disestablished in 1998
52201209
https://en.wikipedia.org/wiki/2017%20Ohio%20State%20Buckeyes%20football%20team
2017 Ohio State Buckeyes football team
The 2017 Ohio State Buckeyes football team represented Ohio State University during the 2017 NCAA Division I FBS football season. The Buckeyes played their home games at Ohio Stadium in Columbus, Ohio. It was the Buckeyes' 128th overall, the 105th as a member of the Big Ten Conference, and fourth as a member of the Eastern Division. They were led by Urban Meyer, who was in his 6th season as head coach at the school. Coming off a College Football Playoff appearance in 2016, the Buckeyes began the year ranked second in the preseason AP Poll and were the overwhelming favorites to win the Big Ten. In the second game of the year, they suffered their first loss at the hands of No. 5 Oklahoma in Columbus, whom Ohio State had beaten on the road the previous year. Ohio State won their following six games, including a 39–38 victory over No. 2 Penn State, but lost in a blowout on the road to Iowa the following week. The Buckeyes won their remaining regular season games and earned a spot in the 2017 Big Ten Championship Game by winning the East Division with an 8–1 conference record. Ohio State would play then-undefeated No. 4 Wisconsin as a heavy favorite. The Buckeyes would win in Indianapolis, causing the last-undefeated FBS team to incur a loss. There was huge controversy over the College Football Playoff committee as to who would get into the final four; Alabama, which didn't play in the SEC Championship, was given a spot over both the Buckeyes and Badgers, both of whom just missed out on a playoff spot. Ohio State instead received an invitation to the Cotton Bowl. They defeated USC in that bowl to end the season at 12–2 and ranked fifth in the final polls. Ohio State was led on offense by quarterback J. T. Barrett, who was named first-team All-Big Ten. Barrett was both a passing and a running threat, finishing with 3,053 yards and a Big Ten-leading 35 touchdowns through the air, and 798 yards and 12 touchdowns on the ground. His 47 total touchdowns were second in FBS behind Baker Mayfield. Freshman running back J. K. Dobbins finished second in the Big Ten with 1,403 rushing yards. Center Billy Price was a consensus first-team All-American, as was cornerback Denzel Ward. Both players were selected in the first round of the 2018 NFL Draft; Price was awarded the Rimington Trophy as the nation's top center. Spring Game The 2017 LiFE Sports Spring Game took place in Columbus Ohio at 12:30pm on April 15, 2017. Recruiting Position key Recruits The Buckeyes signed a total of 21 recruits. Schedule The Buckeyes' 2017 schedule consisted of seven home games and 5 away games. Ohio State hosted all three of its non-conference games; against Oklahoma of the Big 12, against independent Army, and against UNLV of the Mountain West. The Buckeyes played nine conference games; they hosted Maryland, Penn State, Michigan State, and Illinois. They traveled to Indiana, Rutgers, Nebraska, Iowa, and rival Michigan. Sources: 1 – ESPN's College GameDay was held in Bloomington for the first time in the show's broadcast history. Rankings Game summaries at Indiana The No. 2 Ohio State Buckeyes went on the road for their first game of the season, a conference game against the Indiana Hoosiers at Memorial Stadium in Bloomington, Indiana. This marked the first time that the Buckeyes opened a season on the road since their 42–24 victory over Virginia Tech in 2015 and their first time opening with a Big Ten opponent since 1976 when they defeated Michigan State 49–21. 
Ohio State began the game well, driving the ball 66 yards on 11 plays, but was stalled in the redzone, settling for a Sean Nuernberger field goal to take an early 3–0 lead. The Hoosiers answered with an 87-yard drive that was capped by an 18-yard touchdown pass from Richard Lagow to tight end Ian Thomas to take a four-point lead. The teams continued to trade punts until the second quarter, when the Buckeyes' Jordan Fuller intercepted an Indiana pass in the endzone. Ohio State mounted a 58-yard drive that came up short at the Indiana two-yard line and resulted in another Nuernberger field goal, bringing the score to 7–6. Each team would score on touchdown drives of more than 75 yards on the next two possessions to give the Hoosiers a 14–13 lead at halftime. The third quarter started off looking like it would continue to be a back-and-forth game as the teams again traded touchdowns, but the Buckeye passing offense, led by J. T. Barrett, connected on touchdown passes of 74 yards to Parris Campbell and 54 yards to Johnnie Dixon to give the Buckeyes a 35–21 lead. The Buckeye defense took over from there, turning the Hoosiers over twice, with each turnover resulting in an Ohio State touchdown. The Buckeyes held on for the win, 49–21. Two Buckeye records were set during the game: J. K. Dobbins broke Maurice Clarett's 2002 record for rushing yards in a debut (175) by amassing 181 yards on the ground, and J. T. Barrett's 365-yard performance moved him up the list of most career offensive yards by a Buckeye. Barrett was named co-Offensive Big Ten Player of the Week for the seventh time in his career, and Dobbins was named Big Ten Freshman of the Week, for their performances in week one. Game Statistics Game Leaders No. 5 Oklahoma The No. 2 Ohio State Buckeyes welcomed the No. 5 Oklahoma Sooners to Ohio Stadium in Columbus, Ohio, in a top-five match-up. College GameDay made a record 16th visit to Ohio State's campus; it was Ohio State's 39th appearance on the show overall and its fourth consecutive. This was the fourth match-up between the historic programs. During the first quarter, the Sooners moved the ball well by pushing into Buckeye territory on all three opening drives, but they failed to capitalize due to two fumbles and a failed fourth-down conversion. The Buckeyes, on the other hand, began the game with two consecutive punts, leaving the game scoreless at the end of the first quarter. The first score came at 11:11 in the second quarter, a 24-yard Sean Nuernberger field goal after the Buckeyes failed to convert inside the redzone. Eight plays later, Oklahoma again failed to capitalize in Ohio State territory, this time missing a 37-yard field goal. Following a punt by the Buckeyes, Oklahoma was able to mount a 55-yard drive and kick a 35-yard field goal to tie the game 3–3 at the half. The Buckeyes came out of the gates strong behind a 56-yard Parris Campbell kickoff return capped by a 6-yard J. K. Dobbins touchdown, giving the Buckeyes a 10–3 lead. But 1:47 later, Baker Mayfield and the Oklahoma Sooners struck back with a 36-yard Dimitri Flowers touchdown reception to tie the game at 10. Ohio State would never find the endzone again and would settle for two more Nuernberger field goals. Mayfield, meanwhile, led the Sooners by throwing for 386 yards and three touchdowns, piling up yards in the second half as the Buckeyes' offense struggled. A touchdown run by Jordan Smallwood capped the Sooner scoring as the Sooners won 31–16.
Following the game, Mayfield planted the school's flag at mid-field while the team danced in celebration on the Ohio State logo. Mayfield would later apologize for the incident. Ohio State (1–1, 1–0) would fall to No. 8 in the AP poll, while Oklahoma (2–0) would rise to No. 2. Game Statistics Game Leaders Army The No. 8 Ohio State Buckeyes (1–1, 1–0) took on the Army Black Knights (2–0) at Ohio Stadium in Columbus, Ohio. This was the Buckeyes first-ever match-up against the Black Knights and the seventh match-up against a military academy. Ohio State is 5–1 all-time against the other two major military academies with the most recent win coming in 2014 against Navy and the only loss coming in 1990 against Air Force. A week after seemingly nothing went right in a prime-time loss to Oklahoma, Ohio State couldn't have scripted a better start in its bounce-back attempt against Army. The Buckeyes purred on their first two drives, covering 75 and then 94 yards, to race to a 14–0 lead. Army limited the Buckeyes to one possession in the second quarter and outscored OSU to draw to 17–7. Army started the quarter with an 11-play drive that ended when Blake Wilson missed a 43-yard field-goal attempt. Finally getting the ball back in the second half, the Buckeyes wasted little time taking it to the end zone. J. K. Dobbins ripped off a 22-yard run on first down from the 26, then went 52 yards on a run around left end, a play in which the freshman juked Army cornerback Mike Reynolds almost into the ground. That made the score 24–7, and after Kendall Sheffield recovered a fumbled snap on Army's next drive, OSU went to the air to increase its lead. J. T. Barrett completed all three of his passes on the next drive, including a 31-yarder across the middle to tight end Marcus Baugh to move the ball to the 22. Two plays later, Barrett threw a fastball strike to Terry McLaurin, who headed downfield until he found an open spot in the end zone. With a 31–7 lead, the Buckeyes forced a three-and-out but couldn't put the game away and punted from inside Army territory late in the quarter. The Buckeyes were winning big on the scoreboard, and that continued to the end. Barrett's 9-yard touchdown pass to Austin Mack, which gave the senior quarterback the Big Ten record by accounting for his 107th career touchdown. Another three-and-out gave OSU the ball back with 4:36 remaining, and backup QB Dwayne Haskins guided the offense 72 yards before the time arrived for victory formation. Game Statistics Game Leaders UNLV The No. 10 Ohio State Buckeyes improved to 3–1 with a 54–21 win at Ohio Stadium over the UNLV Rebels, who fell to 1–2. The Buckeyes were led by quarterback J. T. Barrett, who threw 12 completions for 209 yards for 5 touchdowns in just one half before being replaced by Dwayne Haskins who threw for 228 yards and two touchdowns, but tarnished by one interception that was thrown for a Rebel touchdown. The Buckeye receivers accumulated seven touchdowns by seven different receivers, which is a Big Ten record. The Rebels' highlights came on an 11-play, 83 yard drive at the end of the second quarter, a 55-yard Lexington Thomas run in the third quarter and a 65-yard interception return for a touchdown by Javin White in the fourth. Game statistics Game leaders at Rutgers The No. 11 Ohio State Buckeyes (4–1, 2–0) shut out divisional foe the Rutgers Scarlet Knights (1–4, 0–2) by a score of 56–0. This was the second consecutive year that the Buckeye's shut out the Scarlet Knights. 
In their four match-ups, the Buckeyes have outscored Rutgers by a combined 219–24. Ohio State was once again led by J. T. Barrett, who moved up Ohio State's all-time passing lists by throwing for 275 yards and 3 touchdowns. Mike Weber, who had been injured, scored his first three touchdowns of the year, capping off three 60-plus-yard drives. Additionally, Johnnie Dixon was on the receiving end of two of Barrett's passes and ended the day with 115 yards. Demario McCall had 138 all-purpose yards, including a 48-yard touchdown run and a 35-yard touchdown pass from backup quarterback Dwayne Haskins. All of Rutgers' possessions ended in a punt or a turnover, with the exception of a missed 32-yard field goal attempt with 43 seconds left in the game. Game Statistics Game Leaders Maryland The No. 10 Ohio State Buckeyes (5–1, 3–0) defeated the Maryland Terrapins (3–2, 1–1) in a Big Ten East match-up. This was Ohio State's Homecoming game. The Buckeyes continued their undefeated streak against the Terrapins, having won the previous meetings by an average of 39 points. The heavily favored Buckeyes began the game with a 9-play, 70-yard drive, capped by a 1-yard J. T. Barrett touchdown run. On the Terrapins' first drive, quarterback Max Bortenschlager was hit from behind by DE Nick Bosa and fumbled; the ball was returned by LB Jerome Baker for a 20-yard touchdown to give the Buckeyes an early 14–0 lead. The Buckeyes' special teams woes began on the ensuing kickoff, which was returned 100 yards for a touchdown by Ty Johnson to cut the Ohio State lead to 7. But the Ohio State offense answered again with an 8-yard touchdown pass from Barrett to Binjimen Victor. Following a botched PAT snap, the Buckeyes led 20–7 at the end of the first quarter. The Buckeyes' miscues began adding up at the beginning of the second quarter: Sean Nuernberger's 47-yard field goal was blocked, J. K. Dobbins fumbled, and punter Drue Chrisman had a 22-yard punt, giving Maryland great field position. Though the mistakes on special teams and offense added up, the defense kept the Maryland offense scoreless. In the final five minutes of the second quarter, Ohio State scored three consecutive touchdowns, giving it a 41–7 halftime lead. Maryland began the second half with a −14-yard drive that was followed by an Ohio State drive that ended with a missed 29-yard field goal attempt. The Ohio State offense recovered and scored touchdowns on three of its next four drives. Maryland's only offensive score of the day came after a Dwayne Haskins fumble at the Ohio State 27, which set up a four-play drive capped by a 20-yard Javon Leake run, making the final score 62–14. Ohio State finished the day with 584 total yards and held the Terrapins to only 66. Game Statistics Game Leaders at Nebraska The No. 9 Ohio State Buckeyes (6–1, 4–0) defeated the Nebraska Cornhuskers (3–4, 2–2) at Memorial Stadium in Lincoln, Nebraska, by a score of 56–14. This was the third consecutive victory over their cross-divisional foe, with their last and only loss coming in 2011. Additionally, the win marked the Buckeyes' 500th Big Ten Conference victory and moved them into a tie with the Cornhuskers for the third-most NCAA Division I Football Bowl Subdivision wins, at 892. The Buckeyes began the game on defense, which forced a three-and-out. Following a 57-yard punt that pinned the Ohio State offense at its own 4-yard line, the Buckeyes drove 96 yards for the game's first score, capped by a 52-yard J. K. Dobbins touchdown.
After another three-and-out by the Silver Bullet defense, the offense scored again on a drive that ended with a J. T. Barrett touchdown run, giving the Buckeyes a 14–0 lead at the end of the opening quarter. Ohio State scored less than 90 seconds into the second quarter, increasing their lead to 21. The Nebraska offense showed very little in the first half, gaining only five first downs on their six possessions, and the defense struggled even more by failing to stop the Buckeyes from reaching the endzone on all five of their possessions. This gave Ohio State a 35–0 halftime lead. Ohio State once again led a 75-yard scoring drive to open the second half but was quickly matched by a 77-yard Tanner Lee touchdown pass to J.D. Spielman, making the score 42–7 Ohio State. The Cornhuskers would once again score on their following possession, on another Tanner Lee pass, but the score was again matched on a 15-play, 66-yard Ohio State scoring drive. Nebraska would fail to score again while Ohio State would score once more, making the final score 56–14. Ohio State would score on eight of their nine drives without ever having to punt. J. T. Barrett threw five touchdown passes and ran for two more, tying the record for most touchdowns responsible for in a game that he had set in 2016. For his performance, he was named the Big Ten Offensive Player of the Week for the eighth time in his career, the Davey O'Brien national quarterback of the week and the Earl Campbell Tyler Rose Award National Player of the Week. The Buckeyes moved up to the No. 6 spot in both the AP Poll and Coaches Poll. Game Statistics Game Leaders Penn State The No. 6 Ohio State Buckeyes (7–1, 5–0) defeated the No. 2 Penn State Nittany Lions (7–1, 4–1) at Ohio Stadium by a score of 39–38. The Buckeyes avenged last year's loss to Penn State, which gave them their only regular season loss and allowed the Nittany Lions to earn a berth over the Buckeyes in the 2016 Big Ten Football Championship Game. OSU Coach Urban Meyer is now 5–1 versus Penn State and PSU's James Franklin is 1–3 versus the Buckeyes. In all but three of the 32 match-ups, including 13 straight, at least one of the teams was ranked in the AP Top 25 poll. This game hosted ESPN's College GameDay; it was Ohio State's third appearance on the show this year and Penn State's second. Special teams woes began immediately for the Buckeyes when Penn State won the opening coin toss and elected to receive the opening kickoff, which resulted in a 97-yard Saquon Barkley kickoff return for a touchdown. On Ohio State's following possession, after only three plays, wide receiver Parris Campbell fumbled the ball after he was once again injured. This gave the Nittany Lions possession at their 23-yard line, which led to an eventual 13-yard touchdown pass from Trace McSorley to DaeSean Hamilton. The Buckeyes found themselves in a 14-point hole at the 11:34 mark of the first quarter. After trading punts, Ohio State finally got on the scoreboard following a 38-yard Sean Nuernberger field goal to make the score 3–14. The remainder of the first quarter remained fairly quiet. The second quarter of the game began with an 81-yard Penn State drive that resulted in a 36-yard Barkley touchdown run to give Penn State an 18-point lead with a score of 3–21. J. T. Barrett and the Ohio State offense were able to answer with their first touchdown of the game by driving 63 yards and scoring on a 14-yard pass from Barrett to Terry McLaurin. 
The Buckeyes once again faced a special teams disaster on the following kickoff, which was returned 59 yards by Koa Farmer to the Ohio State 23-yard line. Two plays later McSorley ran for a 6-yard touchdown to give the Nittany Lions a 28–10 lead. Fortunately for the Buckeyes, the offense was able to travel 75 yards, capped by a 2-yard Mike Weber touchdown with less than five minutes left in the half, to cut the deficit to 17–28. Punts were traded by both teams to end the half and Penn State went to the locker room with an impressive 11-point lead. The Buckeyes were able to mount a 57-yard drive following the intermission that resulted in a 36-yard Nuernberger field goal to cut the deficit to eight, but it was quickly negated ten plays later when McSorley found the endzone again on a 37-yard touchdown pass to DeAndre Thompkins. The third quarter had no more scoring and the Nittany Lions held a 35–20 lead going into the fourth and final quarter. On the opening drive of the fourth quarter, Barrett fumbled the ball in a botched handoff attempt to J. K. Dobbins, which gave the ball to the Nittany Lions at Ohio State's 42-yard line. The Buckeye defense stepped up and forced a three-and-out, after which Denzel Ward blocked the ensuing punt. Two plays later Ohio State found the endzone on a 38-yard pass from Barrett to Johnnie Dixon, which would bring the Buckeyes within eight. Unfortunately for the Buckeyes, Penn State was able to pull back ahead by 11 on a 5:23 drive that ended in a 24-yard Tyler Davis field goal. Following the Penn State score, the Buckeye offense came alive, producing two drives of 55+ yards that ended in Barrett touchdown passes: one to Dixon and the go-ahead touchdown to Marcus Baugh. Penn State, on the other hand, had two consecutive possessions of negative yardage, sealing the one-point Ohio State victory. J. T. Barrett gained a school-record 423 total offensive yards during the game, earning him his ninth Big Ten Offensive Player of the Week honor, while wide receiver K. J. Hill caught 12 passes, the fourth most in program history. The Buckeye defense was able to hold Heisman Trophy hopeful Barkley to a season low of only 44 yards rushing on 21 attempts, but his performance on special teams earned him Big Ten Special Teams Player of the Week honors. Ohio State moved up to No. 3 in both the AP and Coaches Polls while Penn State fell to No. 7 in both. The Buckeyes took over the sole lead of the Big Ten East division following the victory and a Michigan State loss to Northwestern. at Iowa The No. 6 Ohio State Buckeyes (7–2, 5–1) lost to the Iowa Hawkeyes (5–3, 2–3) at Kinnick Stadium in Iowa City, Iowa, by a score of 55–24. This was the Buckeyes' first trip to Iowa since 2010 and the first time playing the Hawkeyes since 2013. The first play of the game began a tough day for the Buckeyes when quarterback J. T. Barrett threw the first of his four interceptions, which was returned for an Iowa touchdown by Amani Hooker. The Buckeye offense looked to rebound by scoring two touchdowns and a field goal on their next three possessions. The Hawkeyes matched them by also scoring touchdowns on all but one of their first-half drives. This gave Iowa a 31–17 lead at the half. The second half proved to be no easier on the Buckeyes as they allowed three more Iowa touchdowns and a field goal, while the Buckeyes offense only managed three first downs and one touchdown. Ohio State fell to No. 11 in both the AP and the Coaches' Polls while dropping to 13th in the CFP Poll. 
Iowa found themselves ranked for the first time in 2017 as they debuted at No. 25 in the AP Poll and No. 20 in the CFP Poll. The loss put the Buckeyes in a two-way tie at the top of the Big Ten East with their next opponent, Michigan State. Iowa quarterback, Nate Stanley, was named the Big Ten Offensive Player of the Week for his performance. Game Statistics Game Leaders No. 12 Michigan State The No. 13 Ohio State Buckeyes (8–2, 6–1) defeated the No. 12 Michigan State Spartans (7–3, 5–2) at Ohio Stadium by a score of 48–3. OSU coach Urban Meyer moved to 4–2 versus the Spartans, while MSU coach Mark Dantonio fell to 3–6 against the Buckeyes. With the victory, Ohio State became the sole possessors of first place in the Big Ten East. While Michigan State started on a nine-play drive that took more than five minutes off of the clock, the Buckeyes were able to force a punt, a large part due to Nick Bosa's 12-yard sack on third down. The Buckeye offense was able to form an 86-yard drive that resulted in 79 rushing yards, including a 47-yard touchdown run by Mike Weber. J. T. Barrett was able to rush for two more touchdowns as well as pass for another to give them a 28–0 lead. The Buckeye's final score of the first half came on a Weber 82-yard run on their next to-last possession, while Michigan State was able to kick a field goal as time expired following a Barrett interception. The Buckeyes would lead 35–3 at halftime. Ohio State was able to find the endzone on the third play of the second half with a 48-yard pass from Barrett to Binjimen Victor, this would be the last touchdown of the game. Ohio State would kick two Sean Nuernberger field goals on their next two possessions to increase their lead to 45. Before this game, the largest Dantonio-Meyer match-up spread had been a 12-point OSU victory. This result was the largest defeat in the series history. The Buckeyes moved up three spots to No. 8 in both the AP and Coaches' polls and four spots to No. 9 in the CFP poll while Michigan State fell to No. 24 in the AP poll, No. 22 in the Coaches' poll and No. 17 in the CFP poll. Weber was named Big Ten co-Offensive Player of the Week for his 162-yard and two touchdown rushing performance. Game Statistics Game Leaders Illinois The No. 9 Ohio State Buckeyes (9–2, 7–1) defeated the Illinois Fighting Illini (2–9, 0–8) by a score of 52–14 to end their Big Ten West match ups for the 2017 regular season and clinch the Big Ten East title as well as a berth in the Big Ten Football Championship Game. Ohio State was awarded the Illibuck Trophy for the ninth consecutive time, which has been given out since 1925, making it the second oldest trophy between Big Ten football programs. Urban Meyer is now 5–0 against the Illini, while this was Illinois coach Lovie Smith's first game against the Buckeyes. The Buckeye offense came out and struck quickly by scoring on their first drive by a 25-yard Mike Weber touchdown run. Ohio State's defense, matched the effort by forcing what would be the first of many three-and-outs. The following drive also resulted in a touchdown following a 6 play, 71-yard drive, capped by J. T. Barrett's ninth rushing touchdown of the season. Ohio State would go on to score two more touchdowns in the first quarter, one by a pass from Barrett to Binjimen Victor and the other by a 43-yard Weber run. Illinois would fail to convert a first down, giving the Buckeyes a 28–0 lead at the end of the first quarter. 
Ohio State was able to add a 33-yard Sean Nuernberger field goal at the beginning of the second period, followed soon after by another OSU drive that ended with a J. K. Dobbins touchdown run, which would be the last time the primary starters would see the field in the first half. Ohio State would fail to score on a drive for the first time during their last possession, while Illinois would gain their only first down of the opening half. Ohio State would go into halftime with a 38–0 lead as heavy rain began to fall. Ohio State's defense mounted another three-and-out to start the second half, but the offense was apparently affected by the rain when back-up quarterback Dwayne Haskins fumbled the ball and it was returned 54 yards by Ahmari Hayes for Illinois' first score. Ohio State's offensive starters went back into the game but were forced to punt. Luckily for the Buckeyes, the punt was muffed and recovered by Ohio State, which led to a Barrett touchdown pass to tight end Marcus Baugh. Ohio State led 45–7 at this point, and this would be the last time the offensive starters saw the field. The teams traded punts to close out the third quarter. After a short appearance by back-up quarterback Joe Burrow, Haskins went back into the game, which resulted in a 21-yard touchdown pass to Victor. The Illini would match the result with their only offensive touchdown on a 65-yard drive to make the score 52–14. Neither team would score again to make that the final. Ohio State would allow only three first downs; it was the sixth game of the season in which they allowed fewer than 100 rushing yards and the fifth in which they allowed fewer than 100 passing yards. Game Statistics Game Leaders at Michigan The 114th edition of the Michigan–Ohio State football rivalry, colloquially known as "The Game", took place at Michigan Stadium between the No. 9 Ohio State Buckeyes (10–2, 8–1) and the Michigan Wolverines (8–4, 5–4). The No. 9 Ohio State Buckeyes defeated the Michigan Wolverines by a score of 31–20. Ohio State and Michigan both started off slowly, forcing three-and-outs on each other's opening drives, before the Wolverines mounted a nearly six-minute, 13-play, 77-yard drive that resulted in a touchdown. Ohio State was again forced into another three-and-out, which was followed by a Michigan drive that ended in a punt that pinned the Buckeyes deep. Ohio State was forced to punt, and the punt was returned 42 yards by Michigan wide receiver Donovan Peoples-Jones; additionally, Ohio State committed a block in the back, which gave the Wolverines the ball at the five-yard line. Michigan was able to take a 14–0 lead when quarterback John O'Korn completed a three-yard pass to Sean McKeon. Ohio State was held to −6 yards in the first quarter, the first time since 2010 that Ohio State had been held to negative yardage in a quarter. The Ohio State offense and the rushing attack of J. T. Barrett and J. K. Dobbins came alive on the following possession, when the two gained 71 yards on the ground on a drive that ended with a 21-yard Barrett touchdown run. Following a Michigan punt, Ohio State was able to move the ball again behind Barrett when he ran for 26 yards and threw a 25-yard touchdown pass to tight end Marcus Baugh, tying the game at 14. Neither team would score again in the first half. Ohio State again started off slowly in the second half, being forced into consecutive three-and-outs, while Michigan was able to score on a 2-yard Karan Higdon touchdown run. Ohio State linebacker Chris Worley was able to block the PAT, and Michigan led 20–14. 
The Buckeyes answered back when Ohio State mounted a 78-yard touchdown drive capped by a 1-yard Dobbins touchdown run. Unfortunately for the Buckeyes, Barrett was injured during the drive and was forced out for the remainder of the game. Redshirt freshman Dwayne Haskins gained 24 yards on the ground and 31 in the air on the drive that gave the Buckeyes a one-point, 21–20 lead. The Wolverines were unable to capitalize on the following possession and Ohio State was able to tack on a 44-yard Sean Nuernberger field goal, increasing the lead to four. Michigan drove the ball 36 yards on their next possession, but turned the ball over on downs following an O'Korn sack and two additional incomplete passes. The Buckeyes drove again and Nuernberger missed only his third field goal of the season on a 43-yard attempt. On the following play, O'Korn committed the only turnover of the game when he threw an interception to Jordan Fuller. On Ohio State's next possession, Dobbins ran for 41 yards and Mike Weber was able to seal the game by scoring on a 25-yard touchdown run with 1:44 remaining in the game, making the final score 31–20. Michigan coach Jim Harbaugh dropped to 0–3 versus the Buckeyes, while Urban Meyer moved to 6–0 against the Wolverines. The current winning streak is tied for the second-longest in the series and tied for the longest for Ohio State with the 2004–2009 games. J. T. Barrett became the only quarterback in the series' history to record four wins against the rival and moved Ohio State into first place in the number of all-time Big Ten wins. Ohio State clinched their first outright divisional title since 2014 and faced Wisconsin in the Big Ten Championship Game. Ohio State's ranking would remain unchanged in the AP Poll at No. 8 and they would move up one spot to No. 7 in the Coaches Poll. Game Statistics Game Leaders vs. No. 3 Wisconsin (Big Ten Championship) The No. 8 Ohio State Buckeyes (11–2, 8–1) defeated the No. 3 Wisconsin Badgers (12–1, 9–0) 27–21 at Lucas Oil Stadium in the Big Ten Championship. Urban Meyer is now 5–0 versus the Badgers, with two of the victories coming in overtime. Paul Chryst fell to 0–2 versus the Buckeyes. This was Ohio State's third appearance in the Championship game and Wisconsin's fifth, including their second straight. The Badgers began the game with the ball and were forced to punt after a three-and-out, with the same result on the Buckeyes' first possession as well. Wisconsin would then follow with a 55-yard drive that ended with an interception in the red zone, thrown by Alex Hornibrook to Denzel Ward at OSU's 4-yard line. J. T. Barrett would rush for a total of 12 yards before hitting Terry McLaurin for an 84-yard touchdown pass, giving the Buckeyes an early 7–0 lead. The pass from Barrett to McLaurin was the longest play allowed by the Badgers' defense all season. The following Wisconsin drive lasted 7 plays for 44 yards, and the ensuing punt pinned the Buckeyes at their own two-yard line. Barrett would go on to throw an interception to Wisconsin linebacker Andrew Van Ginkel that was returned for a touchdown, tying the game at 7–7. Ohio State would bounce back quickly and score on a 57-yard touchdown pass from Barrett to Parris Campbell, once again regaining the lead at 14–7 as the first quarter came to an end. Ohio State and Wisconsin traded punts to open the second quarter, but the Buckeyes would strike again with a Barrett 1-yard touchdown run that was set up by a 77-yard rush by J. K. Dobbins. 
On Ohio State's second possession of the second quarter, Mike Weber would commit his first fumble of the year, which was recovered by Van Ginkel and led to a 28-yard field goal, making the score 21–10 in favor of the Buckeyes. Ohio State would attempt to add to their lead, but Sean Nuernberger's 43-yard field goal was blocked as time expired. While Ohio State couldn't mount a drive to open the second half, Wisconsin was able to convert a 46-yard field goal to narrow the lead to 8, which was quickly matched by the Buckeyes thanks to a 53-yard Dobbins run and a 27-yard Nuernberger field goal. Punts were traded after several three-and-outs by both teams until a Barrett interception set up an 11-play, 52-yard Wisconsin drive that ended with a 1-yard touchdown run by Chris James and a successful 2-point conversion. The score narrowed to 24–21, favoring Ohio State. The Buckeyes followed with a 15-play drive that took 5:20 off the clock and tacked on a 20-yard field goal to increase the lead back to 6. The Buckeyes and the Badgers again traded punts and, with 1:50 left in the game, Wisconsin would begin their final offensive drive. Wisconsin would secure two first downs, bringing them to midfield. A holding penalty on Wisconsin on first down would force a first and 20. Hornibrook would throw three incomplete passes before throwing an interception to Damon Webb. Ohio State was able to run out the clock and secure their 36th Big Ten Title. J. K. Dobbins would be named the MVP of the game, the first time a freshman had earned the award. He would also become the all-time leading freshman rusher in Ohio State history, passing Maurice Clarett. Ohio State would go on to be ranked No. 5 in the final CFP poll and miss the playoffs while Wisconsin would fall to No. 6. It was announced on December 3 that No. 5 Ohio State would face No. 8 USC in the Cotton Bowl Classic. Game Statistics Game Leaders vs. No. 8 USC (Cotton Bowl Classic) The No. 5 Ohio State Buckeyes (12–2) defeated the No. 8 USC Trojans (11–3) 24–7 at AT&T Stadium in Arlington, Texas, in the Cotton Bowl Classic. This was the first time that the Buckeyes had defeated the Trojans since 1974, which broke a seven-game USC win streak. Ohio State and their Silver Bullet defense proved to be a tough match for USC. On the third play of the game, Ohio State's Kendall Sheffield was able to strip the ball from USC wide receiver Deontay Burnett, and it was recovered and advanced 20 yards by Ohio State's Damon Webb. This set up a short five-play drive that resulted in a J. T. Barrett 1-yard touchdown run. Punts were traded a few times by both teams until the Buckeyes produced an 83-yard drive that tacked on a 26-yard Sean Nuernberger field goal at the beginning of the second quarter to give OSU a 10–0 lead. On the following play, Sam Darnold threw an interception to Ohio State's Damon Webb, which was returned 23 yards for a touchdown, giving the Buckeyes a 17–0 lead. Again both teams traded punts before USC fumbled the ball, which set up a 59-yard Buckeye drive that ended with a 28-yard Barrett touchdown run, giving the Buckeyes a 24-point lead. USC was able to find the scoreboard in the second quarter when K. J. Hill fumbled a punt return, which set up a 15-yard drive, capped by a one-yard Ronald Jones II touchdown run. The score at half would be 24–7, favoring Ohio State. Offensively, the Buckeyes remained quiet in the second half, with all but one drive ending in a punt. 
USC was able to drive deep into Buckeye territory three times, but a missed field goal, a fumble and a failed fourth-down conversion didn't allow the Trojans to score any points. The final score would end as 24–7. Barrett and Webb were named the Offensive and Defensive MVPs respectively. Barrett also passed Drew Brees for the Big Ten record of most offensive yards in a career. Game Statistics Game Leaders Early departures WR Noah Brown – Declared for the NFL Draft CB Gareon Conley – Declared for the NFL Draft FS Malik Hooker – Declared for the NFL Draft CB Marshon Lattimore – Declared for the NFL Draft LB Raekwon McMillan – Declared for the NFL Draft RB Curtis Samuel – Declared for the NFL Draft Roster Coaching changes OC/OL Ed Warriner left to become offensive line coach at Minnesota Co-OC/QB Tim Beck left to become the offensive coordinator at Texas under coach Tom Herman, a previous offensive coordinator at Ohio State Co-DC/LB Luke Fickell left to become the head coach at Cincinnati; Co-DC Greg Schiano then became the sole Defensive Coordinator Kevin Wilson, former Indiana head coach, hired as offensive coordinator and tight ends coach Ryan Day hired as quarterbacks coach and co-offensive coordinator Bill Davis hired as linebackers coach Awards and honors *The NCAA and Ohio State only recognize the AP, AFCA, FWAA, Sporting News and WCFF All-American teams to determine if a player is a Consensus or Unanimous All-American. To be named a Consensus All-American, a player must be named first team in three polls and to be Unanimous, they must be named first team in all five. Players in the 2018 NFL Draft References Ohio State Ohio State Buckeyes football seasons Big Ten Conference football champion seasons Cotton Bowl Classic champion seasons Ohio State Buckeyes football
1029962
https://en.wikipedia.org/wiki/Thundering%20herd%20problem
Thundering herd problem
In computer science, the thundering herd problem occurs when a large number of processes or threads waiting for an event are awoken when that event occurs, but only one process is able to handle the event. When the processes wake up, they will each try to handle the event, but only one will win. All processes will compete for resources, possibly freezing the computer, until the herd is calmed down again. Mitigation The Linux kernel serializes responses for requests to a single file descriptor, so only one thread or process is woken up. The EPOLLEXCLUSIVE flag was added to epoll() in Linux kernel 4.5, allowing several epoll sets (in different threads or different processes) to wait on the same resource while only one set is woken up. For certain workloads this flag can significantly reduce processing time. Similarly, in Microsoft Windows, I/O completion ports can mitigate the thundering herd problem, as they can be configured such that only one of the threads waiting on the completion port is woken up when an event occurs. In systems that rely on a backoff mechanism (e.g. exponential backoff), the clients retry failed calls by waiting a specific amount of time between consecutive retries. To avoid the thundering herd problem, jitter can be purposefully introduced to break the synchronization across the clients, thereby avoiding collisions. In this approach, randomness is added to the wait intervals between retries, so that clients are no longer synchronized (see the sketch below). See also Process management (computing) Lock convoy Sleeping barber problem TCP global synchronization Cache stampede References External links A discussion of this observation on Linux Better Retries with Exponential Backoff and Jitter Concurrency control
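The jittered exponential backoff mitigation mentioned above can be illustrated with a short sketch. This is a generic, hypothetical example rather than code from any particular system; the function names and the base_delay, max_delay and max_attempts parameters are invented for illustration.

import random
import time

def backoff_delay(attempt, base_delay=0.1, max_delay=30.0):
    # Full-jitter exponential backoff: pick a random wait between 0 and an
    # exponentially growing cap, so retrying clients do not wake up and
    # retry in lockstep (which would recreate the thundering herd).
    cap = min(max_delay, base_delay * (2 ** attempt))
    return random.uniform(0, cap)

def call_with_retries(operation, max_attempts=5):
    # Retry a failing operation, sleeping a jittered delay between tries.
    for attempt in range(max_attempts):
        try:
            return operation()
        except OSError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(backoff_delay(attempt))

A caller would wrap whatever call is being retried, for example call_with_retries(lambda: open_connection()), where open_connection is a placeholder for the failing operation.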
2055436
https://en.wikipedia.org/wiki/Digital%20room%20correction
Digital room correction
Digital room correction (or DRC) is a process in the field of acoustics where digital filters designed to ameliorate unfavorable effects of a room's acoustics are applied to the input of a sound reproduction system. Modern room correction systems produce substantial improvements in the time domain and frequency domain response of the sound reproduction system. History The use of analog filters, such as equalizers, to normalize the frequency response of a playback system has a long history; however, analog filters are very limited in their ability to correct the distortion found in many rooms. Although digital implementations of the equalizers have been available for some time, digital room correction is usually used to refer to the construction of filters which attempt to invert the impulse response of the room and playback system, at least in part. Digital correction systems are able to use acausal filters, and are able to operate with optimal time resolution, optimal frequency resolution, or any desired compromise along the Gabor limit. Digital room correction is a fairly new area of study which has only recently been made possible by the computational power of modern CPUs and DSPs. Operation The configuration of a digital room correction system begins with measuring the impulse response of the room at a reference listening position, and sometimes at additional locations for each of the loudspeakers. Then, computer software is used to compute a FIR filter, which reverses the effects of the room and linear distortion in the loudspeakers. In low performance conditions, a few IIR peaking filters are used instead of FIR filters, which require convolution, a relatively computation-heavy operation. Finally, the calculated filter is loaded into a computer or other room correction device which applies the filter in real time. Because most room correction filters are acausal, there is some delay. Most DRC systems allow the operator to control the added delay through configurable parameters. Implementation The most widely used test signal is a swept sine wave, also called chirp. This signal maximizes the measurement's signal-to-noise ratio, and the spectrum can be calculated by deconvolution, which is dividing the response's Fourier transform with the signal's Fourier transform. The spectrum is then smoothed, and a filter set is calculated, which equalizes the sound pressure levels at each frequency to the target curve. To calculate the delays and other time-domain corrections, an inverse Fourier transform is performed on the spectrum, which results in the impulse response. The impulse peak's distance from the start of the signal is its delay, and its sign is its polarity. The delay is corrected by subtracting each channel's delay from the system's peak delay, and applying this result as additional delay for the channel. This correction is sometimes provided to the user as distance from the speaker, which is calculated by multiplying the delay with the speed of sound. Inverse polarity (most likely caused by switching a speaker's + and - wires) could be fixed by multiplying each sample with -1 or swapping the speaker wire ends on one side of the cable, but this result is usually shown as a warning, as some speakers (e.g. Focal Kanta) do this intentionally. 
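The measurement and filter-design steps described above can be sketched roughly as follows. This is a deliberately simplified, hypothetical illustration rather than the algorithm of any specific DRC product: real systems add smoothing, regularization, target curves and phase handling, and all function and variable names here are invented. It assumes NumPy is available.

import numpy as np

def estimate_response(sweep, recording):
    # Deconvolution: H(f) = Y(f) / X(f), where X is the test sweep sent to
    # the system and Y is what the measurement microphone recorded.
    n = len(recording)
    X = np.fft.rfft(sweep, n)
    Y = np.fft.rfft(recording, n)
    eps = 1e-12  # avoid division by zero at frequencies the sweep misses
    return Y / (X + eps)

def magnitude_correction(H, max_boost_db=6.0):
    # Invert only the magnitude of the measured response toward a flat
    # target, limiting boost so position-dependent nulls are not over-corrected.
    mag = np.abs(H)
    target = np.ones_like(mag)
    gain = target / np.maximum(mag, 1e-6)
    limit = 10 ** (max_boost_db / 20)
    return np.clip(gain, 0, limit)

def correction_filter(H, taps=4096):
    # Turn the limited inverse magnitude response into an FIR filter;
    # shifting the impulse makes it causal at the cost of added latency.
    gain = magnitude_correction(H)
    impulse = np.fft.irfft(gain, taps)
    return np.roll(impulse, taps // 2)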
Challenges DRC systems are not normally used to create a perfect inversion of the room's response because a perfect correction would only be valid at the location where it was measured: a few millimeters away the arrival times from various reflections will differ and the inversion will be imperfect. The imperfectly corrected signal may end up sounding worse than the uncorrected signal because the acausal filters used in digital room correction may cause pre-echo. Room correction filter calculation systems instead favor a robust approach, and employ sophisticated processing to attempt to produce an inverse filter which will work over a usably large volume and which avoids producing bad-sounding artifacts outside of that volume, at the expense of peak accuracy at the measurement location. Software Free software Room EQ Wizard Room EQ Wizard, or REW for short, is a free room measurement tool with SPL, phase, distortion, RT60, clarity, decay, waterfall, and spectrogram views. REW also features IR windowing, an SPL meter, room simulation for subwoofer placement, and peaking filter-based EQ generation for multiple platforms, DSPs, and AVRs with a target curve editor. Cavern QuickEQ QuickEQ is part of Cavern, a free and open source spatial audio engine. QuickEQ supports multichannel measurements with multiple microphones, time and level alignment with multiple standards and target curves, IR windowing, multi-sub crossover, and experimental filters for increasing speech intelligibility and simulating other cabinet types. QuickEQ exports minimum- or 0-phase FIR filters or peaking EQs depending on the target device. RePhase RePhase is a free EQ and crossover generation tool that also linearizes phase response. RePhase has multiple configurable filter sets available for manual filter composition, which can then be exported as a single FIR impulse. Commercial software Most new AVRs include room correction in their setup, and a microphone in the box. Denon and Marantz AVRs use Audyssey, and more expensive models allow for more corrections. Anthem AVRs use proprietary software called Anthem Room Correction (ARC for short), while Trinnov Audio offers its own Optimizer solution. Dirac Live is commercial software that is available for PC and select Onkyo, Pioneer, Integra, StormAudio, and other AVRs. Industrial software DCI-compliant hardware used in commercial theaters sometimes uses commercially available room correction software. Notable examples are IMAX cinemas, which use Audyssey MultEQ XT32, while Datasat processors (found in all DTS:X rooms) have Dirac software. Dolby's CP850 and CP950 processors (which support Dolby Atmos) use a proprietary solution called AutoEQ. AutoEQ measures 5 to 8 microphone positions simultaneously. It requires loudspeaker specifications manually entered for the room. Earlier Dolby processors, such as the CP750, used a 31-band equalizer for the 5 or 7 main channels, and a single peaking filter for correcting the subwoofers' largest peak. The CP750 did not have a swept sine wave generator, and used pink noise for measurement. See also Deconvolution Digital filter Filter (signal processing) Filter design LARES Stereophonic sound Surround sound References Michael Gerzon's paper on Digital Room Equalization, on audiosignal.co.uk. 
External links Open Source Implementations Cavern QuickEQ and its source code Python Open Room Correction (PORC) DRC: Digital Room Correction Free Room Correction Software Room EQ Wizard rePhase Free Room EQ plug-in for Foobar2000 audio player Commercial Room Correction Software Acourate Audiolense Dirac Live Focus Fidelity IK Multimedia ARC System SoundID Reference from Sonarworks Trinnov Optimizer Papers On Room Correction and Equalization of Sound Systems, by Dr. Mathias Johansson, Dirac Research AB Digital Room Equalization, by Michael Gerzon Audio Equalization with Fixed-Pole Parallel Filters: An Efficient Alternative to Complex Smoothing, by Balazs Bank Articles Room Correction: A Primer, by Nyal Mellor of Acoustic Frontiers Sound Correction in the Frequency and Time Domain, by Bernt Ronningsbak of Audiolense The Three Acoustical Issues a Room Correction Product Can't Actually Correct, by Nyal Mellor of Acoustic Frontiers References Acoustics Sound technology Signal processing
37721889
https://en.wikipedia.org/wiki/GNU%20Guix
GNU Guix
GNU Guix () is a functional cross-platform package manager and a tool to instantiate and manage Unix-like operating systems, based on the Nix package manager. Configuration and package recipes are written in Guile Scheme. GNU Guix is the default package manager of the GNU Guix System distribution. Differing from traditional package managers, Guix (like Nix) utilizes a purely functional deployment model where software is installed into unique directories generated through cryptographic hashes. All dependencies of each package are included within its hash. This solves the problem of dependency hell, allows multiple versions of the same software to coexist and makes packages portable and reproducible. Performing scientific computations in a Guix setup has been proposed as a promising response to the replication crisis. The development of GNU Guix is intertwined with the GNU Guix System, an installable operating system distribution using the Linux-libre kernel and GNU Shepherd init system. General features Guix packages are defined through functional Guile Scheme APIs specifically designed for package management. Dependencies are tracked directly in this language through special values called "derivations", which are evaluated lazily by the Guix daemon. Guix keeps track of these references automatically so that installed packages can be garbage collected when no other package depends on them. At the cost of greater storage requirements, all upgrades in Guix are guaranteed to be atomic and can be rolled back. The roll-back feature of Guix is inherited from the design of Nix and is not found in any of the native package managers of popular Linux distributions such as Debian and its derivatives, Arch Linux and its derivatives, or in other major distributions such as Fedora, CentOS or OpenSUSE. The Guix package manager can, however, be used in such distributions and is available for Debian and Parabola. This also enables multiple users to safely install software on the same system without administrator privileges. Compared to traditional package managers, Guix package stores can grow considerably bigger and therefore require more bandwidth; although compared to container solutions (like Docker) that are also commonly employed to solve dependency hell, Guix is leaner and conforms to practices like Don't repeat yourself and Single source of truth. If the user chooses to build everything from source, even more storage space and bandwidth are required. The store Inherited from the design of Nix, most of the content of the package manager is kept in a directory /gnu/store where only the Guix daemon has write access. This is achieved via specialised bind mounts, where the Store as a file system is mounted read-only, prohibiting interference even from the root user, while the Guix daemon remounts the Store as read/writable in its own private namespace. Guix talks with this daemon to build things or fetch substitutes, which are all kept in the store. Users are discouraged from ever manually touching the store by re-mounting it as writable, since this defeats the whole purpose of the store. Garbage collection Guix, like Nix, has built-in garbage collection facilities to help prune dead store items and keep the live ones. 
Package definitions This is an example of a package definition for the hello package:

(define-public hello
  (package
    (name "hello")
    (version "2.10")
    (source (origin
              (method url-fetch)
              (uri (string-append "mirror://gnu/hello/hello-" version ".tar.gz"))
              (sha256
               (base32 "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i"))))
    (build-system gnu-build-system)
    (synopsis "Hello, GNU world: An example GNU package")
    (description "GNU Hello prints the message \"Hello, world!\" and then exits. It serves as an example of standard GNU coding practices. As such, it supports command-line arguments, multiple languages, and so on.")
    (home-page "https://www.gnu.org/software/hello/")
    (license gpl3+)))

It is written using Guile. The package recipes can easily be inspected (by running e.g. guix edit hello) and changed in Guix, making the system transparent and very easily hackable. Transactional upgrades Inherited from the design of Nix, all manipulation of store items is independent of each other, and the directories of the store begin with a base32-encoded hash of the source code of the derivation along with its inputs. Profiles Guix package uses profile generations, which are collections of symlinks to specific store items that together comprise what the user has installed into the profile. Every time a package is installed or removed, a new generation will be built. E.g. the profile of a user who only installed GNU Hello contains links to the store item which holds the version of hello installed with the currently used guix. E.g. on version c087a90e06d7b9451f802323e24deb1862a21e0f of guix, this corresponds to the following item: /gnu/store/md2plii4g5sk66wg9cgwc964l3xwhrm9-hello-2.10 (built from the recipe above). In addition to symlinks, each profile guix builds also contains a union of all the info manuals, man pages, icons, fonts, etc. so that the user can browse documentation and have access to all the icons and fonts installed. The default symlinks to profile generations are stored under /var/guix in the filesystem. Multiple user profiles The user can create any number of profiles by invoking guix package -p PROFILE-NAME COMMAND. A new directory with the profile name as well as profile-generation symlinks will then be created in the current directory. Roll-back Guix package enables instantaneous roll-back to a previous profile generation by changing the symlink to an earlier profile generation. Profiles are also stored in the store, e.g. this item is a profile containing the hello package above: /gnu/store/b4wipjlsapvnijmbawl7sh76087vpl4n-profile (built and activated when running guix install hello). Environment Guix environment enables the user to easily enter an environment where all the necessary packages for development of software are present without clogging up the user's default profile with dependencies for multiple projects. E.g. running guix environment hello enters a throw-away environment where everything needed to compile hello on Guix is present (gcc, guile, etc.). Persistent development environment For a persistent gc-rooted environment that is not garbage collected on the next run of guix gc, the user can create a root: e.g. running guix environment --root=hello-root hello enters an environment where everything needed to compile hello is present (gcc, guile, etc.) and registers it as a root in the current directory (by symlinking to the items in the store). Pack Guix pack enables the user to bundle together store items and output them as a Docker binary image, a relocatable tarball or a squashfs binary. 
Graph Guix graph enables the user to view different graphs of the packages and their dependencies. Guix System (operating system) GNU Guix System uses Guix as its package manager and configuration system, similar to how NixOS uses Nix. History The GNU Project announced in November 2012 the first release of GNU Guix, a functional package manager based on Nix that provides, among other things, Guile Scheme APIs. The project was started in June 2012 by Ludovic Courtès, one of the GNU Guile hackers. On August 20, 2015, it was announced that Guix had been ported to GNU Hurd. Releases The project has no fixed release schedule and has until now released approximately every 6 months. See also GNU Guix System Debian GNU/Hurd Comparison of Linux distributions NixOS – A similar operating system, which inspired GNU Guix References External links List of Guix packages Guix GNU Project Free package management systems Free software programmed in Lisp Functional programming GNU Project software Linux package management-related software
21293668
https://en.wikipedia.org/wiki/Infinity%20%28K-Space%20album%29
Infinity (K-Space album)
Infinity is the third album by British-Siberian experimental music ensemble K-Space. It was released in the United States in August 2008 by Ad Hoc Records, an affiliate of Recommended Records, and was a new type of CD that is different every time it is played. "Infinity" will not work in a standard CD player and requires a computer to play it. Each time the CD is played, supplied software remixes source material located on the disc and produces a new 20-minute musical piece. The CD cannot be paused or fast-forwarded, and there are no tracks to select. The only controls available are "PLAY" and "STOP". The music produced by the CD is electroacoustic improvisation that is rooted in Tuvan shaman ritual music. Background K-Space was formed in 1996 after a series of study trips to Siberia by Scottish percussionist Ken Hyder and English multi-instrumentalist Tim Hodgkinson. They were exploring the improvisational and musical aspects of shamanism when they met up with Gendos Chamzyryn, a shaman and musician from Tuva. Hyder, Hodgkinson and Chamzyryn formed K-Space to experiment with improvised music rooted in the Tuvan shamanic ritual. Their second album, Going Up (2004) was a sound collage of K-Space performances plus field recordings of shamanic rituals, manipulated and superimposed on one another. Infinity extended Going Up'''s production process and made it dynamic to produce a new mix with each play. Technology The idea for the Infinity project began when Tim Hodgkinson started describing some of the implicit rules that K-Space use during their live improvisations. It occurred to him that these rules could be embedded in software that would produce interesting variations on the original process. Hodgkinson discussed his ideas with Andy Wilson (aka The Grand Erector), a software designer and webmaster of German krautrock band Faust's website, The Faust Pages. They investigated ways to select sound files within different contexts and to use them in different ways to produce a new stream of music with each play. Wilson then set about developing "metacompositional" software that compiles and sequences deconstructed fragments of sound files. These sound sources were provided by K-Space and include field recordings, throat singing, various percussion, string and reed instruments. The number of audio files available were restricted by the size limitations of the CD format, but when combined and permutated, they would create an apparently infinite number of different "compositions", and hence the album title Infinity. The audio files are categorized into a number of groups, including acoustic, live, solo, environmental and loops. A single audio file can be used in many different ways by varying, for example, dynamic levels, volume and duration of play. For a particular piece, for example, the software might select any two of seven acoustic files and play them together for x seconds, adding one of four environmental files after y seconds. The sound source selection process the software uses is not random, but algorithmic based on scores Hodgkinson wrote for the project. Each time "PLAY" is pressed, the software selects a new score which it uses to construct a new piece of music. The score consists of a set of audio file selection criteria, which vary depending on what has happened before. While there are a finite number of scores on the CD, there are many different interpretations of each score. 
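As a purely hypothetical illustration of the kind of score-driven selection described above (the actual Infinity software's rules are not published in detail here, and the pool names, file names and numbers below are invented), a score can be modelled as a list of selection criteria that is interpreted anew on each play; random choice is used only as a stand-in for whatever contextual rules the real software applies:

import random

# Hypothetical sound-file pools, mirroring the groups described above.
POOLS = {
    "acoustic": ["acoustic_%d.wav" % i for i in range(7)],
    "environmental": ["environment_%d.wav" % i for i in range(4)],
    "loops": ["loop_%d.wav" % i for i in range(5)],
}

# A "score" here is just a list of rules: which pool to draw from, how many
# files to layer, and roughly when (in seconds) to bring them in.
EXAMPLE_SCORE = [
    {"pool": "acoustic", "count": 2, "start": 0},
    {"pool": "environmental", "count": 1, "start": 90},
    {"pool": "loops", "count": 1, "start": 300},
]

def interpret(score):
    # Each interpretation of the same score picks different files (a real
    # system could also vary levels and durations), so no two plays of the
    # disc are identical.
    events = []
    for rule in score:
        for f in random.sample(POOLS[rule["pool"]], rule["count"]):
            events.append((rule["start"], f))
    return sorted(events)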
Each play lasts about 20 minutes, a time span chosen with the shamans of Tuva in mind and the way each of their rituals produces a different journey. The 20-minute period of intense music was also chosen to deviate from the standard music CD playing times, albeit with an "infinite" number of different 20-minute plays. In October 2008 the software developed for this project was made available to anyone interested in working with it. Wilson also prepared a continuous-play version of Infinity for the K-Space exhibition in the Stuttgart Ethnographic Museum. Reception John Cavanagh of The Herald in Glasgow said in a review of the album that even though he knew each listening was the result of a "computer triggered sequence", it always sounded like a "cohesive musical work, as though it was meant to be that way". Requirements "Infinity" must be played on a Windows PC or an Apple Mac computer. The computer software required to play the music is included on the CD. The computer must have a CD drive, a 200 MHz or greater processor, at least 1.5 GB of RAM, and a sound card or stereo interface. Track listing Personnel K-Space Ken Hyder – percussion, drum kit, dungur, voice, ektara, bass ektara, sound manipulation Tim Hodgkinson – lap steel guitar, clarinet, bass clarinet, klarnet, alto saxophone, voice, dungur, percussion, ocarina, sound manipulation Gendos Chamzyryn – voice, dungur, percussion, dopshuluur, chadagan, cello, khomous, ocarina Software development Andy Wilson – programming, technical assistance, oversight References Works cited External links 2008 albums Free improvisation albums Tuvan music Throat singing
65383730
https://en.wikipedia.org/wiki/PolyAnalyst
PolyAnalyst
PolyAnalyst is a data science software platform developed by Megaputer Intelligence that provides an environment for text mining, data mining, machine learning, and predictive analytics. It is used by Megaputer to build tools with applications to health care, business management, insurance, and other industries. PolyAnalyst has also been used for COVID-19 forecasting and scientific research. Overview PolyAnalyst's graphical user interface contains nodes that can be linked into a flowchart to perform an analysis. The software provides nodes for data import, data preparation, data visualization, data analysis, and data export. PolyAnalyst includes features for text clustering, sentiment analysis, extraction of facts, keywords, and entities, and the creation of taxonomies and ontologies. PolyAnalyst also supports a variety of machine learning algorithms, as well as nodes for the analysis of structured data and the ability to execute code in Python and R. PolyAnalyst also acts as a report generator, which allows the result of an analysis to be made viewable by non-analysts. It uses a client–server model and is licensed under a software-as-a-service model. Business Applications Insurance PolyAnalyst was used to build a subrogation prediction tool which determines the likelihood that a claim is subrogatable and, if so, the amount that is expected to be recovered. The tool works by categorizing insurance claims based on whether or not they meet the criteria that are needed for successful subrogation. PolyAnalyst is also used to detect insurance fraud. Health care PolyAnalyst is used by pharmaceutical companies to assist in pharmacovigilance. The software was used to design a tool that matches descriptions of adverse events to their proper MedDRA codes, determines whether side-effects are serious or non-serious, and sets up cases for ongoing monitoring if needed. PolyAnalyst has also been applied to discover new uses for existing drugs by text mining ClinicalTrials.gov and to forecast the spread of the COVID-19 virus in the United States and Russia. Business management PolyAnalyst is used in business management to analyze written customer feedback including product review data, warranty claims, and customer comments. In one case, PolyAnalyst was used to build a tool which helped a company monitor its employees' conversations with customers by rating their messages for factors such as professionalism, empathy, and correctness of response. The company reported to Forrester Research that this tool had saved it $11.8 million annually. SKIF Cyberia Supercomputer PolyAnalyst is run on the SKIF Cyberia Supercomputer at Tomsk State University, where it is made available to Russian researchers through the Center for Collective Use (CCU). Researchers at the center use PolyAnalyst to perform scientific research and to manage the operations of their universities. In 2020, researchers at Vyatka State University (in collaboration with the CCU) performed a study in which PolyAnalyst was used to identify and reach out to victims of domestic violence through social media analysis. The researchers scraped the web for messages containing descriptions of abuse, and then classified the type of abuse as physical, psychological, economic, or sexual. They also constructed a chatbot to contact the identified victims of abuse and to refer them to specialists based on the type of abuse described in their messages. 
The data collected in this study was used to create the first ever Russian-language corpus on domestic violence. References External links Text mining Artificial intelligence Data mining and machine learning software Reporting software Software associated with the COVID-19 pandemic Business software Software frameworks Text analysis Proprietary software Natural language processing software Data analysis software Data visualization software Computing platforms Data management software Knowledge management 1994 software Ontology editors Windows software
41263993
https://en.wikipedia.org/wiki/Computer%20access%20control
Computer access control
In computer security, general access control includes identification, authorization, authentication, access approval, and audit. A narrower definition of access control would cover only access approval, whereby the system makes a decision to grant or reject an access request from an already authenticated subject, based on what the subject is authorized to access. Authentication and access control are often combined into a single operation, so that access is approved based on successful authentication, or based on an anonymous access token. Authentication methods and tokens include passwords, biometric scans, physical keys, electronic keys and devices, hidden paths, social barriers, and monitoring by humans and automated systems. Software entities In any access-control model, the entities that can perform actions on the system are called subjects, and the entities representing resources to which access may need to be controlled are called objects (see also Access Control Matrix). Subjects and objects should both be considered as software entities, rather than as human users: any human users can only have an effect on the system via the software entities that they control. Although some systems equate subjects with user IDs, so that all processes started by a user by default have the same authority, this level of control is not fine-grained enough to satisfy the principle of least privilege, and arguably is responsible for the prevalence of malware in such systems (see computer insecurity). In some models, for example the object-capability model, any software entity can potentially act as both subject and object. Access-control models tend to fall into one of two classes: those based on capabilities and those based on access control lists (ACLs). In a capability-based model, holding an unforgeable reference or capability to an object provides access to the object (roughly analogous to how possession of one's house key grants one access to one's house); access is conveyed to another party by transmitting such a capability over a secure channel. In an ACL-based model, a subject's access to an object depends on whether its identity appears on a list associated with the object (roughly analogous to how a bouncer at a private party would check an ID to see if a name appears on the guest list); access is conveyed by editing the list. (Different ACL systems have a variety of different conventions regarding who or what is responsible for editing the list and how it is edited.) Both capability-based and ACL-based models have mechanisms to allow access rights to be granted to all members of a group of subjects (often the group is itself modeled as a subject). Services Access control systems provide the essential services of authorization, identification and authentication (I&A), access approval, and accountability, where:
authorization specifies what a subject can do
identification and authentication ensure that only legitimate subjects can log on to a system
access approval grants access during operations, by association of users with the resources that they are allowed to access, based on the authorization policy
accountability identifies what a subject (or all subjects associated with a user) did
Authorization Authorization involves the act of defining access-rights for subjects. An authorization policy specifies the operations that subjects are allowed to execute within a system. 
Most modern operating systems implement authorization policies as formal sets of permissions that are variations or extensions of three basic types of access:
Read (R): The subject can read file contents and list directory contents.
Write (W): The subject can change the contents of a file or directory, including the tasks of adding, updating, deleting and renaming.
Execute (X): If the file is a program, the subject can cause the program to be run. (In Unix-style systems, the "execute" permission doubles as a "traverse directory" permission when granted for a directory.)
These rights and permissions are implemented differently in systems based on discretionary access control (DAC) and mandatory access control (MAC). Identification and authentication Identification and authentication (I&A) is the process of verifying that an identity is bound to the entity that makes an assertion or claim of identity. The I&A process assumes that there was an initial validation of the identity, commonly called identity proofing. Various methods of identity proofing are available, ranging from in-person validation using government-issued identification to anonymous methods that allow the claimant to remain anonymous, but known to the system if they return. The method used for identity proofing and validation should provide an assurance level commensurate with the intended use of the identity within the system. Subsequently, the entity asserts an identity together with an authenticator as a means for validation. The only requirement for the identifier is that it must be unique within its security domain. Authenticators are commonly based on at least one of the following four factors:
Something you know, such as a password or a personal identification number (PIN). This assumes that only the owner of the account knows the password or PIN needed to access the account.
Something you have, such as a smart card or security token. This assumes that only the owner of the account has the necessary smart card or token needed to unlock the account.
Something you are, such as fingerprint, voice, retina, or iris characteristics.
Where you are, for example inside or outside a company firewall, or proximity of login location to a personal GPS device.
Access approval Access approval is the function that actually grants or rejects access during operations. During access approval, the system compares the formal representation of the authorization policy with the access request, to determine whether the request shall be granted or rejected (a schematic sketch is given below). Moreover, the access evaluation can be done online/ongoing. Accountability Accountability uses such system components as audit trails (records) and logs to associate a subject with its actions. The information recorded should be sufficient to map the subject to a controlling user. Audit trails and logs are important for detecting security violations and re-creating security incidents. If no one is regularly reviewing the logs and they are not maintained in a secure and consistent manner, they may not be admissible as evidence. Many systems can generate automated reports based on certain predefined criteria or thresholds, known as clipping levels. For example, a clipping level may be set to generate a report for more than three failed logon attempts in a given period, or for any attempt to use a disabled user account. These reports help a system administrator or security administrator to more easily identify possible break-in attempts. 
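As a schematic illustration of the access-approval step described above (not the implementation of any particular operating system; the data structures, subjects and object names are invented for the example), an ACL-based check amounts to looking up the requesting subject, or one of its groups, in the list attached to the object and comparing the requested operation against the granted permissions:

# Permissions as a set of operation names per subject, per object (invented data).
acl = {
    "/srv/report.txt": {
        "alice": {"read", "write"},
        "auditors": {"read"},
    },
}

def access_approved(subject, groups, obj, operation):
    # Grant the request only if the subject, or one of its groups, appears
    # on the object's ACL with the requested operation (default deny).
    entries = acl.get(obj, {})
    principals = [subject, *groups]
    return any(operation in entries.get(p, set()) for p in principals)

# Example: alice may write, while a member of "auditors" may only read.
assert access_approved("alice", [], "/srv/report.txt", "write")
assert not access_approved("bob", ["auditors"], "/srv/report.txt", "write")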
Access controls Access control models are sometimes categorized as either discretionary or non-discretionary. The three most widely recognized models are Discretionary Access Control (DAC), Mandatory Access Control (MAC), and Role Based Access Control (RBAC). MAC is non-discretionary. Discretionary access control Discretionary access control (DAC) is a policy determined by the owner of an object. The owner decides who is allowed to access the object, and what privileges they have. Two important concepts in DAC are:
File and data ownership: Every object in the system has an owner. In most DAC systems, each object's initial owner is the subject that caused it to be created. The access policy for an object is determined by its owner.
Access rights and permissions: These are the controls that an owner can assign to other subjects for specific resources.
Access controls may be discretionary in ACL-based or capability-based access control systems. (In capability-based systems, there is usually no explicit concept of 'owner', but the creator of an object has a similar degree of control over its access policy.) Mandatory access control Mandatory access control refers to allowing access to a resource if and only if rules exist that allow a given user to access the resource. It is difficult to manage, but its use is usually justified when used to protect highly sensitive information. Examples include certain government and military information. Management is often simplified (over what is required) if the information can be protected using hierarchical access control, or by implementing sensitivity labels. What makes the method "mandatory" is the use of either rules or sensitivity labels.
Sensitivity labels: In such a system, subjects and objects must have labels assigned to them. A subject's sensitivity label specifies its level of trust. An object's sensitivity label specifies the level of trust required for access. In order to access a given object, the subject must have a sensitivity level equal to or higher than that of the requested object.
Data import and export: Controlling the import of information from other systems and export to other systems (including printers) is a critical function of these systems, which must ensure that sensitivity labels are properly maintained and implemented so that sensitive information is appropriately protected at all times.
Two methods are commonly used for applying mandatory access control:
Rule-based (or label-based) access control: This type of control further defines specific conditions for access to a requested object. A Mandatory Access Control system implements a simple form of rule-based access control to determine whether access should be granted or denied by matching an object's sensitivity label against a subject's sensitivity label.
Lattice-based access control: These can be used for complex access control decisions involving multiple objects and/or subjects. A lattice model is a mathematical structure that defines greatest lower-bound and least upper-bound values for a pair of elements, such as a subject and an object.
Few systems implement MAC; XTS-400 and SELinux are examples of systems that do. Role-based access control Role-based access control (RBAC) is an access policy determined by the system, not by the owner. 
RBAC is used in commercial applications and also in military systems, where multi-level security requirements may also exist. RBAC differs from DAC in that DAC allows users to control access to their resources, while in RBAC, access is controlled at the system level, outside of the user's control. Although RBAC is non-discretionary, it can be distinguished from MAC primarily in the way permissions are handled. MAC controls read and write permissions based on a user's clearance level and additional labels. RBAC controls collections of permissions that may include complex operations such as an e-commerce transaction, or may be as simple as read or write. A role in RBAC can be viewed as a set of permissions. Three primary rules are defined for RBAC: Role assignment: A subject can execute a transaction only if the subject has selected or been assigned a suitable role. Role authorization: A subject's active role must be authorized for the subject. Together with rule 1 above, this rule ensures that users can take on only roles for which they are authorized. Transaction authorization: A subject can execute a transaction only if the transaction is authorized for the subject's active role. Together with rules 1 and 2, this rule ensures that users can execute only transactions for which they are authorized. Additional constraints may be applied as well, and roles can be combined in a hierarchy where higher-level roles subsume permissions owned by lower-level sub-roles. Most IT vendors offer RBAC in one or more products. Attribute-based access control In attribute-based access control (ABAC), access is granted not based on the rights of the subject associated with a user after authentication, but based on the attributes of the user. The user has to prove so-called claims about his or her attributes to the access control engine. An attribute-based access control policy specifies which claims need to be satisfied in order to grant access to an object. For instance, the claim could be "older than 18". Any user that can prove this claim is granted access. Users can be anonymous when authentication and identification are not strictly required, although this requires a means of proving claims anonymously, which can be achieved, for instance, using anonymous credentials. XACML (eXtensible Access Control Markup Language) is a standard for attribute-based access control; XACML 3.0 was standardized in January 2013. Break-glass access control models Traditionally, access control has the purpose of restricting access, so most access control models follow the "default deny" principle: if a specific access request is not explicitly allowed, it will be denied. This behavior might conflict with the regular operations of a system. In certain situations, humans are willing to take the risk involved in violating an access control policy if the potential benefit outweighs this risk. This need is especially visible in the health-care domain, where denied access to patient records can cause the death of a patient. Break-glass (also called break-the-glass) approaches try to mitigate this by allowing users to override an access control decision. Break-glass can either be implemented in a manner specific to the access control model (e.g., within RBAC) or generically (i.e., independently of the underlying access control model). Host-based access control (HBAC) The initialism HBAC stands for "host-based access control". See also Resource Access Control Facility References
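As an illustration of the three RBAC rules listed above (role assignment, role authorization and transaction authorization), the following sketch checks a transaction against a subject's active role. The role names, permissions and subjects are invented for the example and are not drawn from any particular product.

```python
# Minimal sketch of the three RBAC rules described above (role assignment,
# role authorization, transaction authorization). Role and permission names
# are illustrative assumptions.

ROLE_ASSIGNMENTS = {"carol": {"cashier", "clerk"}}          # roles a subject may take on
ROLE_PERMISSIONS = {"cashier": {"process_sale", "refund"},  # permissions granted per role
                    "clerk":   {"read_inventory"}}

def can_execute(subject: str, active_role: str, transaction: str) -> bool:
    # Rules 1 and 2: the active role must be one the subject is authorized for.
    if active_role not in ROLE_ASSIGNMENTS.get(subject, set()):
        return False
    # Rule 3: the transaction must be authorized for the active role.
    return transaction in ROLE_PERMISSIONS.get(active_role, set())

if __name__ == "__main__":
    print(can_execute("carol", "cashier", "refund"))        # True
    print(can_execute("carol", "clerk", "process_sale"))    # False: wrong active role
```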
26006428
https://en.wikipedia.org/wiki/Identity%20assurance
Identity assurance
Identity assurance in the context of federated identity management is the ability of a party to determine, with some level of certainty, that an electronic credential representing an entity (human or machine) with which it interacts to effect a transaction can be trusted to actually belong to the entity. In the case where the entity is a person, identity assurance is the level at which the credential being presented can be trusted to be a proxy for the individual to whom it was issued and not someone else. Assurance levels (ALs or LoAs) are the levels of trust associated with a credential as measured by the associated technology, processes, and policy and practice statements. Description Identity assurance, in an online context, is the ability of a relying party to determine, with some level of certainty, that a claim to a particular identity made by some entity can be trusted to actually be the claimant's "true" identity. Identity claims are made by presenting an identity credential to the relying party. In the case where the entity is a person, this credential may take several forms, including: (a) personally identifiable information such as name, address, birthdate, etc.; (b) an identity proxy such as a username, login identifier (user name), or email address; and (c) an X.509 digital certificate. Identity assurance specifically refers to the degree of certainty of an identity assertion made by an identity provider by presenting an identity credential to the relying party. In order to issue this assertion, the identity provider must first determine whether or not the claimant possesses and controls an appropriate token, using a predefined authentication protocol. Depending on the outcome of this authentication procedure, the assertion returned to the relying party by the identity provider allows the relying party to decide whether or not to trust that the identity associated with the credential actually "belongs" to the person presenting the credential. The degree of certainty that a relying party can have about the true identity of someone presenting an identity credential is known as the assurance level (AL). Four levels of assurance were outlined by a 2006 document from the US National Institute of Standards and Technology. The level of assurance is measured by the strength and rigor of the identity proofing process, the strength of the token used to authenticate the identity claim, and the management processes the identity provider applies to it. These four levels were adopted by the governments of the U.K., Canada and the U.S. for electronic government services. Purpose To conduct online business, entities need to be able to identify themselves remotely and reliably. In most cases, however, it is not sufficient for the typical electronic credential (usually a basic user name and password pair or a digital certificate) to simply assert "I am who I say I am - believe me." A relying party (RP) needs to be able to know with some degree of certainty that the presented electronic identity credential truly represents the individual presenting the credential. In the case of self-issued credentials, this is not possible. However, most electronic identity credentials are issued by identity providers (IdPs): the workplace network administrator, a social networking service, an online game administrator, a government entity, or a trusted third party that sells digital certificates. Most people have multiple credentials from multiple providers.
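The relationship between assurance levels and relying-party decisions described above can be pictured with a small sketch in which each resource requires a minimum level of assurance. The numeric levels, resource names and thresholds below are illustrative assumptions and are not taken from the NIST document or any government scheme.

```python
# Minimal sketch of how a relying party might gate access on a credential's
# level of assurance (LoA). The resource requirements and the LoA attached to
# each credential are illustrative assumptions, not values from any standard.

RESOURCE_MIN_LOA = {"public_page": 1, "tax_filing": 3, "controlled_records": 4}

def rp_accepts(credential_loa: int, resource: str) -> bool:
    """A relying party accepts an assertion only if its LoA meets the resource's minimum."""
    # Unknown resources default to the strictest requirement in this sketch.
    return credential_loa >= RESOURCE_MIN_LOA.get(resource, 4)

if __name__ == "__main__":
    print(rp_accepts(2, "tax_filing"))    # False: LoA 2 is below the required LoA 3
    print(rp_accepts(3, "tax_filing"))    # True
```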
Four audiences are affected by the transaction, and the inherent trust therein: users of electronic identity credentials; entities that rely upon the credentials issued by electronic identity providers (IdPs); providers of IdP services; and auditors or assessors who review the business processes of IdPs. Relying parties (RPs) trust the electronic identity credentials provided by IdPs. Different IdPs follow different policies and procedures for issuing electronic identity credentials. In the business world, and especially in government, the more trustworthy the credential, the more stringent the rules governing identity proofing, credential management and the kind of credentials issued. But while different IdPs follow their own rules, more and more end users (often called subscribers) and online services (often called relying parties) wish to trust existing credentials rather than be issued yet another set of user IDs and passwords or other credentials for use with a single service. This is where the concept of federated identity becomes important. Federated identity provides IdPs and relying parties with a common set of identity trust conventions that transcend individual identity service providers, users, or networks, so that a relying party will know it can trust a credential issued by IdP 'A' at a level of assurance comparable to a common standard, which will also be agreed upon by IdPs 'B,' 'C,' and 'D.' Specific implementations and proposed implementations Australia Netherlands DigiD is a system whereby Dutch government agencies can verify a person's identity over the Internet, a type of digital passport for government institutions. Poland In a joint initiative between the Interior, Digital Affairs and Health Ministries, new chip ID cards will be introduced from Q1 2019, replacing the existing identity cards over a ten-year period. United Kingdom The UK's identity assurance programme, GOV.UK Verify, is delivered by the Government Digital Service in conjunction with private sector identity providers. GOV.UK Verify is a standards-based, federated identity assurance service to support the digital transformation of central and local government. The service allows citizens to use a federated identity model to prove they are who they say they are when they sign into government services. Users are able to choose an identity assurance provider from a range of certified suppliers and may choose to register with one or more of these suppliers. The service has been live since May 2016. United States The US government first published a draft for an E-Authentication Federation Credential Assessment Framework (CAF) in 2003, with final publication in March 2005. The Kantara Initiative identity assurance work group (IAWG) was formed in 2009. It continued the Liberty Alliance Identity Assurance Framework, which was based, in part, on the Electronic Authentication Partnership Trust Framework and the CAF, to enable interoperability among electronic authentication systems. It defined a trust framework around the quality of claims issued by an IdP based on language, business rules, assessment criteria and certifications. The work began within the Liberty Alliance in early 2007, and the first public draft was published in November 2007, with version 1.1 released in June 2008. The Identity Assurance Expert Group within Liberty Alliance worked with the ITU-T (via the ITU-T SG17Q6 Correspondence Group on X.EAA, on harmonization and international standardization of the Identity Assurance Framework; work commenced Sept. 
2008); ISOC (ISO SC27 29115 Harmonization with Identity Assurance Framework, among other contributions); and the American Bar Association (collaboration to develop a model trade agreement for federated identity). The Kantara Initiative Identity Assurance Framework (IAF), published in December 2009, detailed levels of assurance and the certification program that bring the Framework to the marketplace. The IAF consists of a set of documents that includes an Overview publication, the IAF Glossary, a summary Assurance Levels document, and an Assurance Assessment Scheme (AAS), which encompasses the associated assessment and certification program, as well as several subordinate documents, among them the Service Assessment Criteria (SAC), which establishes baseline criteria for general organizational conformity, identity proofing services, credential strength, and credential management services against which all CSPs will be evaluated. Several presentations on the application of the Identity Assurance Framework have been given by various organizations, including Wells Fargo and Fidelity Investments, and case studies about Aetna and Citigroup are also available. In 2009, the South East Michigan Health Information Exchange (SEMHIE) adopted the Kantara IAF. World Wide Web Consortium Decentralized identifiers (DIDs) are a type of identifier that enables a verifiable, decentralized digital identity. See also Non-repudiation Self-sovereign identity References Identity management
45592635
https://en.wikipedia.org/wiki/Rudolf%20Berghammer
Rudolf Berghammer
Rudolf Berghammer (born 1952 in Oberndorf, Germany) is a German mathematician who works in computer science. Life Rudolf Berghammer worked as an electrician at the Farbwerke Hoechst, Kelheim, from 1966 until 1970. He began studying Mathematics and Computer Science in 1973 at TU München. His academic teachers were Friedrich L. Bauer, Klaus Samelson, Gottfried Tinhofer, and Gunther Schmidt. After obtaining his diploma in 1979, he started working as an assistant, mainly to Gunther Schmidt and Friedrich L. Bauer, at TU München, where he obtained his award-winning Ph.D. in 1984. From 1988 on, he worked as an assistant to Gunther Schmidt at the Faculty for Computer Science of the Universität der Bundeswehr München, where he obtained his habilitation in 1990. Since 1993 he has been a professor for Computer-aided Program Development at the Department of Computer Science at the University of Kiel. Work For many years he has served as head of the steering committee of the international RAMiCS conference series (formerly termed RelMiCS). Rudolf Berghammer is known for his work in relational mathematics, formal methods of programming, semantics, and relational methods in computer science. He developed the RelView system for the manipulation and visualisation of relations and for relational programming. For instance, in 2019 he was coauthor of "Cryptomorphic topological structures: a computational relation algebraic approach". This work relates the classical neighborhood-system approach to topology to closure operators, kernel operators, and Aumann contact relations. The translation of one approach into another is carried out with the calculus of relations. The article notes the contribution of RelView experiments with finite topologies; for instance, for a set with seven elements, 9,535,241 topologies are tested (see § 9). Personal One of his hobbies is mountaineering. In his youth he climbed noted summits such as the Ortler and Piz Bernina. He remains an active climber, spending several days in the Alps every year. Furthermore, he is an enthusiastic sailor and owns his own sailing vessel on the Baltic Sea. Written books Semantik von Programmiersprachen, Logos Verlag, 2001 Ordnungen, Verbände und Relationen mit Anwendungen, Springer Mathematik für Informatiker: Grundlegende Begriffe und Strukturen, Springer (eBook) Editorships 1991: (with Gunther Schmidt) Graph-Theoretic Concepts in Computer Science, Lecture Notes in Computer Science #570, Proc. 17th Intern. Workshop WG '91, Richterheim Fischbachau. Further edited volumes appeared in 2003, 2008, 2009 and 2014. References External links Prof. Dr. Rudolf Berghammer at Christian Albrechts Universität Kiel, with access to a full list of publications and talks Rudolf Berghammer at researchr.org 1952 births 21st-century German mathematicians Bundeswehr University Munich faculty Computer science educators Computer science writers Formal methods people University of Kiel faculty German computer scientists German textbook writers Living people Programming language researchers Technical University of Munich alumni Theoretical computer scientists
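RelView works with relations over finite sets, which can be represented as Boolean matrices and manipulated with the operations of the calculus of relations. The following sketch shows relational composition and a transitivity test in that style; it is an illustration written for this article and does not use RelView's own input language or algorithms.

```python
# Relations over a finite set can be handled as Boolean matrices, the style of
# computation RelView automates. This sketch composes two relations and tests
# transitivity; it is illustrative only.

def compose(R, S):
    """Relational composition R ; S of two square Boolean matrices of equal size."""
    n = len(R)
    return [[any(R[i][k] and S[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_transitive(R):
    """R is transitive iff the composition R ; R is contained in R."""
    RR = compose(R, R)
    n = len(R)
    return all(not RR[i][j] or R[i][j] for i in range(n) for j in range(n))

if __name__ == "__main__":
    R = [[1, 1, 0],
         [0, 1, 1],
         [0, 0, 1]]
    print(is_transitive(R))  # False: (0,1) and (1,2) are in R but (0,2) is not
```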
15012196
https://en.wikipedia.org/wiki/Raima
Raima
Raima is a multinational technology company headquartered in Seattle, USA. The company was founded in 1982. Raima develops, sells and supports in-memory and disk-based relational database management systems that can either be embedded within the application or run in a client/server mode. The company's focus is on OLTP databases with high-intensity transactional processing. Their cross-platform, small-footprint products are made to collect, store, manage and move data. History Raima was founded in Seattle, Washington, USA in 1982 by two software engineering researchers from Boeing, Randy Merilatt and Wayne Warren, who saw the benefits that database management technology could provide for software application developers in the rapidly growing microcomputer industry. In 1984 Raima released one of the first embedded database management systems for microcomputer applications written in the C programming language. Early contracts with companies like ROLM (now part of IBM), Texas Instruments, Microsoft, ADP and others contributed to the development of the Raima Database Manager (RDM) product family. Some of the more significant Raima product releases are shown below. 1984 – Raima releases db_VISTA (currently named "RDM Embedded") version 1, a single-user, network-model database management system (DBMS) for C language applications on MS-DOS and Unix. 1986 – db_VISTA version 2 is released, adding a portable (file locks stored in a lock file), multi-user DBMS with transaction-level database consistency. 1987 – db_QUERY released, providing the first SQL-like query tool for accessing a network-model database. 1988 – db_VISTA version 3 is released, with a system-wide lock manager process that manages all file locks. 1990 – Raima releases db_VISTA for Microsoft Windows. 1992 – Raima Database Server version 1 (a.k.a. "Velocis" and, today, "RDM Server") is introduced, providing a client/server DBMS with record-level locking and SQL designed to be tightly integrated with sophisticated applications written in C. Multiple platform support for MS-DOS/Novell NetWare, OS/2 and Unix. The first full-featured DBMS to support the ODBC SQL API as its native SQL API. 1996 – Velocis 1.0: Hot online backup. 1998 – Velocis 2.1: True multi-threading, application link. 2006 – RDM Server 7.2: Dynamic DDL. 2009 – RDM Server 8.3: SQL triggers and enhanced join syntax. 2010 – RDM Embedded 10.0: Multi-core computer support using Transactional File Servers with high-performance, MVCC-based read-only transactions. 2011 – SQL for RDM Embedded 10.1. 2012 – ODBC/JDBC/ADO.NET support for RDM 11.0. 2013 – RDM 12.0: Database cursors, shared memory protocol, memory allocation limitation, enhanced SQL optimization support, three new data types, bulk insert API, "dirty read" isolation level, enhanced encryption, selective replication and notification. 2016 – RDM 14.0: Consolidates both the source code lines and the features of Raima Embedded and RDM Server into one source code base in Raima Database Manager v. 14.0. RDM 14.0 includes these major features: updated in-memory support, dirty reads, R-Tree support, compression, encryption, SQL, SQL PL, and platform independence (develop once, deploy anywhere). RDM 14.0 includes portability options such as direct copy and paste that permit development and deployment on different target platforms, regardless of architecture or byte order. 
The release includes a streamlined, cursor-based interface, extended SQL support and stored procedures that support SQL PL; it also supports ODBC (C, C++), ADO.NET (C#) and JDBC (Java). Supported development environments include Microsoft Visual Studio, Apple Xcode, Eclipse and Wind River Workbench. A redesigned and optimized database file format architecture maintains ACID compliance and data safeguards, with separate formats for in-memory, on-disk or hybrid storage. File formats hide hardware platform specifics (e.g., byte ordering). Download packages include examples of RDM speed and performance benchmarks. 2018 – RDM 14.1: This release focuses on ease of use, portability, and speed. With Raima's new file format, developers can develop once and deploy anywhere. Performance is increased by 50–100% or more, depending on the use case, compared to previous RDM releases. Raima has extended and improved SQL support, snapshots, and geospatial functionality. 2020 – RDM 14.2: Continued focus on ease of use, portability, and speed. A multi-user-focused storage format: the updated database file format increases database throughput by reducing lock contention. Extended and improved geospatial functionality and a newly supported RESTful interface have been added to the database server. 2021 – RDM 15.0: Speed, ease of use and new functionality. Custom-generated time series support has been added to RDM, along with FFT support for data transformations. An administrative GUI is introduced. 2022 – RDM 15.2: Comes with Raima-to-Raima replication, hot backup, and enhanced encryption. Diagnostics allow more specific information to be obtained from the Raima subsystems. Raima-to-cloud replication is provided through SymmetricDS. In June 1999, Raima was acquired by Centura Software (formerly Gupta). In the summer of 2001, the Norwegian company Birdstep Technology acquired the Raima assets from Centura and operated Raima as a separate business unit out of Seattle. In the summer of 2010, the Raima management team purchased Raima from Birdstep, forming the now privately held Raima Incorporated. Products and services Raima delivers and supports multiple Raima Database Manager solutions, which are designed for distributed architectures and can either be embedded and completely managed within the application or run in a client/server mode. Their architecture, platform independence, high performance and small footprint enable Raima Database Manager products to be used in a variety of applications, from small, resource-constrained devices all the way up to enterprise-class environments. Applications Raima database products are used in a wide range of applications for business-critical data transactions, flight control systems, military equipment, data backup solutions, medical equipment, routers and switches, and more. Aker Solutions, Boeing, General Dynamics, GE Power, GE Grid Solutions, Mitsubishi, Schneider Electric, Schlumberger, and Siemens are examples of customers who embed Raima Database Manager products in their applications. References External links Software companies based in Seattle Software companies of the United States
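Since RDM exposes ODBC, JDBC and ADO.NET interfaces, client code typically connects through one of those standard APIs. The sketch below shows a generic ODBC connection from Python using pyodbc; the DSN name "RDM", the table and the columns are assumptions made for illustration, and Raima's documented bindings are ODBC for C/C++, ADO.NET for C# and JDBC for Java rather than Python, so treat this only as a generic ODBC example.

```python
# Hypothetical sketch of querying an ODBC data source (such as an RDM server)
# from Python via pyodbc. The DSN name "RDM" and the "devices" table are
# assumptions for illustration, not part of Raima's documentation.

import pyodbc

def list_devices(dsn: str = "RDM"):
    # Connect through the ODBC driver manager using a preconfigured DSN.
    conn = pyodbc.connect(f"DSN={dsn}")
    try:
        cur = conn.cursor()
        cur.execute("SELECT id, name FROM devices")  # assumed example table
        return cur.fetchall()
    finally:
        conn.close()

if __name__ == "__main__":
    for row in list_devices():
        print(row[0], row[1])
```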
1592602
https://en.wikipedia.org/wiki/Sakai%20%28software%29
Sakai (software)
Sakai is a free, community source, educational software platform designed to support teaching, research and collaboration. Systems of this type are also known as Course Management Systems (CMS), Learning Management Systems (LMS), or Virtual Learning Environments (VLE). Sakai is developed by a community of academic institutions, commercial organizations and individuals. It is distributed under the Educational Community License (a type of open source license). Sakai is used by hundreds of institutions, mainly in the US, but also in Canada, Europe, Asia, Africa and Australia. Sakai was designed to be scalable, reliable, interoperable and extensible. Its largest installations handle over 100,000 users. Organization Sakai is developed as open source software as a community effort, stewarded by the Apereo Foundation, a member-based, non-profit corporation. The Foundation fosters use and development of Sakai in the same open, community-based fashion in which it was created. It encourages community building between individuals, academic institutions, non-profits and commercial organizations and provides its members with an institutional framework for their projects. It works to promote the wider adoption of community-source and open standards approaches to software solutions within the education and research communities. It organizes the yearly Open Apereo Conference. Additional, regional conferences have taken place in China, Japan, Australia, Europe and South Africa, and there is an annual Sakai Virtual Conference. Members include universities, colleges, other institutions and commercial affiliates that provide support. While members take care of most of the development and support in practice, joining the Foundation is not required to use the software or participate in the community. History The development of Sakai was originally funded by a grant from the Mellon Foundation as the Sakai Project. The early versions were based on existing tools created by the founding institutions, with the largest piece coming from the University of Michigan's CHEF course management system. Sakai is a play on the word chef and refers to Iron Chef Hiroyuki Sakai. The original institutions started meeting in February 2004. Each institution had built a custom course management system: Indiana University: Oncourse, replaced by Canvas. Georgia Institute of Technology: T-Square, replaced by Canvas in 2018. Massachusetts Institute of Technology: Stellar, optional transition to Canvas in 2020. Stanford University: CourseWork, replaced by Canvas in 2015. University of Michigan: CTools, formerly CourseTools, based on the CHEF framework, replaced by Canvas in 2016. uPortal and the Open Knowledge Initiative were also represented. Sakai 1.0 was released in 2005, and it was adopted by all participating universities. For instance, Indiana University moved all of its legacy systems to Sakai. With the Sakai Project concluding, the Sakai Foundation was set up to oversee the continued work on Sakai. Sakai's chief architect, Dr. Charles Severance, was its first Executive Director. Several large US universities joined, as well as universities, colleges, other institutions and commercial affiliates on all continents. One of the partners, the University of Cambridge, started work on a more student-centric system in an attempt to provide a better fit with their own educational model. 
Several partners joined this effort, seeing it as an opportunity to do away with some of Sakai's known limitations; for a while, the effort was named Sakai 3, but it was far from being a feature-complete replacement and it was built from scratch on different technology. This seriously hampered progress on the existing Sakai. After about two years, it turned out the new software would never replace the existing Sakai, and it was renamed Sakai OAE (today: Apereo OAE), while the existing Sakai was renamed Sakai CLE. After this, Sakai CLE development slowly picked up speed again. A major advance was a WYSIWYG content editing tool, the Lessons tool, contributed by Rutgers University in version 2.9.3. In 2012, the University of Michigan and Indiana University, two of Sakai's founders, left the Sakai Foundation. In the following two years, many existing users also retired Sakai, moving to other software, while other core contributors remained. During this period, new users were rare. However, Sakai CLE development picked up speed, and it was renamed back to just Sakai. In December 2012, the Sakai Foundation merged with Jasig to form the Apereo Foundation, which took over stewardship of Sakai development. Since then, new major releases have continued to appear almost yearly. The main focus of development has been on incrementally improving the existing toolset and modernizing the look and feel, making it more suitable for mobile use. Sakai collaboration and learning environment - software features The Sakai software includes many of the features common to course management systems, including document distribution, a gradebook, discussion, live chat, assignment uploads, and online testing. In addition to the course management features, Sakai is intended as a collaborative tool for research and group projects. To support this function, Sakai includes the ability to change the settings of all the tools based on roles, changing what the system permits different users to do with each tool. It also includes a wiki, mailing list distribution and archiving, and an RSS reader. The core tools can be augmented with tools designed for a particular application of Sakai. Examples might include sites for collaborative projects, teaching and portfolios. In Sakai, the content and tools used in courses or projects are organized into sites. Typically, a site corresponds to a course or a project. Each site has its own content, tools, users and access rights for users, search tool, usage statistics, and so on. In principle, everything in Sakai is done per site. This is what allows Sakai to scale to hundreds of thousands of users. Sakai is extensible in several ways: it is a platform for integrating loosely coupled tools, which provide the actual functionality; in addition to the core tools distributed with Sakai, several important third-party tools are available, and web developers can write their own additional tools in a language of their own choice; third-party tools are available for playing SCORM packages; and external web applications can be integrated using LTI. Architecture and technical details Sakai is a set of Java-based web applications, loosely coupled in a service-oriented architecture. The supported web server is Tomcat; the databases supported for data storage are Oracle and MySQL. Sakai has a layered architecture: The Sakai kernel provides a common infrastructure and exposes it in the form of web services. 
All of the sub-applications, known as tools in Sakai, depend on these services for things like user management and site management. Nearly all functionality is implemented in the form of tools. Tools have a business logic implementation part and a user interface part, implemented using various Java technologies. These interfaces are combined by so-called aggregators. Each layer is extensible: new services, tools, and aggregators are easy to add. Owing to the services, tools and user interfaces can be written in languages other than Java, but this does not happen in practice. Up to and including Sakai 10, the code base for Sakai and its contributed tools was maintained in publicly accessible Subversion repositories. With Sakai 11, this was changed to Git and GitHub. Releases Sakai is mainly in use at universities. Major releases tend to be in spring or early summer, in order to allow institutions to upgrade before the new academic semester, and many of them do. See also Learning management system Virtual learning environment References Bibliography External links Educational technology projects Educational software Virtual learning environments Free learning management systems Learning management systems Free educational software Free content management systems Java platform software Free software projects 2005 software
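The layered structure described above, in which tools rely on kernel services such as user and site management, can be sketched with two small classes. The class and method names here are invented for illustration and do not correspond to Sakai's actual Java service interfaces.

```python
# Illustrative sketch of the layered architecture described above: tools depend
# on kernel services rather than talking to storage directly. Names are invented
# for illustration and are not Sakai's real APIs.

class SiteService:
    """Stand-in for a kernel service that tracks per-site membership."""
    def __init__(self):
        self._members = {}          # site id -> set of user ids

    def add_member(self, site, user):
        self._members.setdefault(site, set()).add(user)

    def is_member(self, site, user):
        return user in self._members.get(site, set())

class AnnouncementsTool:
    """Stand-in for a tool: all access checks go through the kernel service."""
    def __init__(self, site_service):
        self.site_service = site_service
        self._posts = {}            # site id -> list of announcements

    def post(self, site, user, text):
        if not self.site_service.is_member(site, user):
            raise PermissionError(f"{user} is not a member of site {site}")
        self._posts.setdefault(site, []).append(text)

if __name__ == "__main__":
    sites = SiteService()
    sites.add_member("bio101", "alice")
    tool = AnnouncementsTool(sites)
    tool.post("bio101", "alice", "Lab moved to Friday")
    print(tool._posts["bio101"])    # ['Lab moved to Friday']
```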
16350547
https://en.wikipedia.org/wiki/GraphiCon
GraphiCon
GraphiCon is the largest international conference on computer graphics and computer vision in the countries of the former Soviet Union. The conference is hosted by Moscow State University in association with the Keldysh Institute of Applied Mathematics, the Russian Center of Computing for Physics and Technology, and the Russian Computer Graphics Society. The conference is held in close cooperation with the Eurographics Association. Conference topics The main topics of the conference include (this list is not exhaustive): Graphics and multimedia: Geometry modeling and processing Photorealistic rendering techniques Scientific visualization Image-based techniques Computer graphics for mobile devices Computer graphics hardware Graphics in computer games Animation and simulation Virtual and augmented reality Image and video processing: Medical image processing Early vision and image representation Tracking and surveillance Segmentation and grouping Image enhancement, restoration and super-resolution Computer vision: 3D reconstruction and acquisition Object localization and recognition Multi-sensor fusion and 3D registration Structure from motion and stereo Scene modeling Statistical methods and learning Applications Format The following sections are organized for the event: Young scientists school courses and master classes Full paper presentations Work-in-progress presentations STAR reports Invited talks Industrial presentations Multimedia shows Round table discussions Specific GraphiCon conferences See also Eurographics — the biggest conference on computer graphics in Europe SIGGRAPH — the world's biggest conference on computer graphics List of computer science conferences External links http://www.graphicon.ru/ Computer vision research infrastructure Computer graphics conferences
50734361
https://en.wikipedia.org/wiki/WinOps
WinOps
WinOps (a portmanteau of "Windows" and "DevOps") is a term referring to the cultural movement that applies DevOps practices from a Microsoft-centric point of view. It emphasizes the use of the cloud, automation, and the integration of development and IT operations into one fluid method on the Windows platform. Etymology The term 'WinOps' was coined at the London DevOps meetup held at Facebook in June 2015. As an amalgamation of Windows and DevOps, it represents the new emphasis on using existing DevOps methodologies in the traditionally less open-source Microsoft space. Community Since the first WinOps conference in September 2015, there have been multiple meetups and a second conference, held in May 2016. The WinOps meetup group has an active community with over 1,000 members. Their motto is "Windows in a DevOps World". They focus on shared experiences using Windows-centred products and tools for establishing DevOps goals. The Linux challenge Windows and Linux are very different, not just technologically but philosophically. Tools are a major component of establishing DevOps, and the lack of Windows-centric tools limited early DevOps adoption on Windows, as most of the initial supporting technologies emerged from Linux and open-source communities. WinOps boils down to addressing the same challenges as DevOps, using different tools. WinOps-focused tools References Agile software development Microsoft culture Software development process
31340380
https://en.wikipedia.org/wiki/List%20of%20moths%20of%20the%20Republic%20of%20the%20Congo
List of moths of the Republic of the Congo
There are about 380 known moth species of the Republic of the Congo. The moths (mostly nocturnal) and butterflies (mostly diurnal) together make up the taxonomic order Lepidoptera. This is a list of moth species which have been recorded in the Republic of the Congo. Adelidae Ceromitia systelitis Meyrick, 1921 Nemophora parvella (Walker, 1863) Alucitidae Alucita balioxantha (Meyrick, 1921) Alucita imbrifera (Meyrick, 1929) Alucita sertifera (Meyrick, 1921) Anomoeotidae Anomoeotes nox Aurivillius, 1907 Thermochrous fumicincta Hampson, 1910 Arctiidae Afrasura obliterata (Walker, 1864) Aglossosia deceptans Hampson, 1914 Amata creobota (Holland, 1893) Amata francisca (Butler, 1876) Amata goodii (Holland, 1893) Amata interniplaga (Mabille, 1890) Amata leimacis (Holland, 1893) Amata marina (Butler, 1876) Amerila vitrea Plötz, 1880 Anaphosia cyanogramma Hampson, 1903 Anapisa crenophylax (Holland, 1893) Anapisa melaleuca (Holland, 1898) Anapisa monotica (Holland, 1893) Apisa canescens Walker, 1855 Archithosia costimacula (Mabille, 1878) Balacra daphaena (Hampson, 1898) Balacra flavimacula Walker, 1856 Balacra haemalea Holland, 1893 Balacra pulchra Aurivillius, 1892 Balacra rubricincta Holland, 1893 Boadicea pelecoides Tams, 1930 Caripodia chrysargyria Hampson, 1900 Ceryx albimacula (Walker, 1854) Ceryx elasson (Holland, 1893) Creatonotos leucanioides Holland, 1893 Cyana rubristriga (Holland, 1893) Euchromia guineensis (Fabricius, 1775) Euchromia lethe (Fabricius, 1775) Hippurarctia taymansi (Rothschild, 1910) Mecistorhabdia haematoessa (Holland, 1893) Meganaclia sippia (Plötz, 1880) Metarctia burra (Schaus & Clements, 1893) Metarctia haematica Holland, 1893 Metarctia inconspicua Holland, 1892 Metarctia paulis Kiriakoff, 1961 Metarctia rufescens Walker, 1855 Myopsyche elachista (Holland, 1893) Myopsyche miserabilis (Holland, 1893) Myopsyche ochsenheimeri (Boisduval, 1829) Myopsyche puncticincta (Holland, 1893) Nanna eningae (Plötz, 1880) Neophemula vitrina (Oberthür, 1909) Nyctemera apicalis (Walker, 1854) Nyctemera xanthura (Plötz, 1880) Ovenna vicaria (Walker, 1854) Pusiola melemona (Kiriakoff, 1963) Pusiola theresia (Kiriakoff, 1963) Rhipidarctia invaria (Walker, 1856) Trichaeta fulvescens (Walker, 1854) Trichaeta pterophorina (Mabille, 1892) Autostichidae Autosticha nothriforme (Walsingham, 1897) Carposinidae Meridarchis luteus (Walsingham, 1897) Choreutidae Anthophila equatoris (Walsingham, 1897) Anthophila flavimaculata (Walsingham, 1891) Brenthia octogemmifera Walsingham, 1897 Crambidae Cotachena smaragdina (Butler, 1875) Palpita elealis (Walker, 1859) Elachistidae Ethmia rhomboidella Walsingham, 1897 Microcolona pantomima Meyrick, 1917 Eriocottidae Compsoctena media Walsingham, 1897 Compsoctena secundella (Walsingham, 1897) Eupterotidae Jana eurymas Herrich-Schäffer, 1854 Jana gracilis Walker, 1855 Jana preciosa Aurivillius, 1893 Jana strigina Westwood, 1849 Phiala albida Plötz, 1880 Phiala subiridescens (Holland, 1893) Stenoglene bipunctatus (Aurivillius, 1909) Gelechiidae Bactropaltis lithosema Meyrick, 1939 Dichomeris eurynotus (Walsingham, 1897) Dichomeris marmoratus (Walsingham, 1891) Ptilothyris crossoceros Meyrick, 1934 Ptilothyris purpurea Walsingham, 1897 Theatrocopia elegans Walsingham, 1897 Theatrocopia roseoviridis Walsingham, 1897 Geometridae Aletis erici Kirby, 1896 Anacleora diffusa (Walker, 1869) Aphilopota mailaria (Swinhoe, 1904) Aphilopota strigosissima (Bastelberger, 1909) Biston abruptaria (Walker, 1869) Braueriana fiorino Bryk, 1913 Chiasmia majestica (Warren, 1901) 
Chiasmia streniata (Guenée, 1858) Chrysocraspeda abdominalis (Herbulot, 1984) Chrysocraspeda rubripennis (Warren, 1898) Colocleora binoti Herbulot, 1983 Colocleora ducleri Herbulot, 1983 Colocleora linearis Herbulot, 1985 Colocleora sanghana Herbulot, 1985 Colocleora smithi (Warren, 1904) Conolophia persimilis (Warren, 1905) Dorsifulcrum cephalotes (Walker, 1869) Epigynopteryx prophylacis Herbulot, 1984 Euproutia aggravaria (Guenée, 1858) Hypochrosis banakaria (Plötz, 1880) Hypomecis dedecora (Herbulot, 1985) Idaea inquisita (Prout, 1932) Melinoessa sodaliata (Walker, 1862) Melinoessa stramineata (Walker, 1869) Mesomima tenuifascia (Holland, 1893) Miantochora picturata Herbulot, 1985 Mimaletis postica (Walker, 1869) Oxyfidonia umbrina Herbulot, 1985 Piercia myopteryx Prout, 1935 Pitthea catadela D. S. Fletcher, 1963 Pitthea cunaxa Druce, 1887 Pitthea famula Drury, 1773 Pitthea perspicua (Linnaeus, 1758) Prasinocyma congrua (Walker, 1869) Psilocladia loxostigma Prout, 1915 Racotis squalida (Butler, 1878) Scopula macrocelis (Prout, 1915) Semiothisa testaceata (Walker, 1863) Somatina chalyboeata (Walker, 1869) Terina circumdata Walker, 1865 Terina crocea Hampson, 1910 Terina latifascia Walker, 1854 Terina niphanda Druce, 1887 Xenochroma silvatica Herbulot, 1984 Xylopteryx triphaenata Herbulot, 1984 Zamarada modesta Herbulot, 1985 Zeuctoboarmia ochracea Herbulot, 1985 Glyphipterigidae Glyphipterix gemmatella (Walker, 1864) Gracillariidae Stomphastis thraustica (Meyrick, 1908) Himantopteridae Pedoptila thaletes Druce, 1907 Semioptila semiflava Talbot, 1928 Semioptila seminigra Talbot, 1928 Hyblaeidae Hyblaea occidentalium Holland, 1894 Immidae Moca radiata (Walsingham, 1897) Lacturidae Gymnogramma atmocycla Meyrick, 1918 Gymnogramma hollandi (Walsingham, 1897) Lasiocampidae Cheligium choerocampoides (Holland, 1893) Cheligium nigrescens (Aurivillius, 1909) Cheligium pinheyi Zolotuhin & Gurkovich, 2009 Euwallengrenia reducta (Walker, 1855) Gelo jordani (Tams, 1936) Grellada imitans (Aurivillius, 1893) Grellada marshalli (Aurivillius, 1902) Lechriolepis tessmanni Strand, 1912 Leipoxais marginepunctata Holland, 1893 Leipoxais rufobrunnea Strand, 1912 Mimopacha gerstaeckerii (Dewitz, 1881) Mimopacha knoblauchii (Dewitz, 1881) Muzunguja rectilineata (Aurivillius, 1900) Nepehria olivia Gurkovich & Zolotuhin, 2010 Odontocheilopteryx conzolia Gurkovich & Zolotuhin, 2009 Odontocheilopteryx phoneus Hering, 1928 Opisthoheza heza Zolotuhin & Prozorov, 2010 Pachymeta contraria (Walker, 1855) Pachyna subfascia (Walker, 1855) Pachytrina gliharta Zolotuhin & Gurkovich, 2009 Pachytrina honrathii (Dewitz, 1881) Pachytrina philargyria (Hering, 1928) Pachytrina rubra (Tams, 1929) Pachytrina trihora Zolotuhin & Gurkovich, 2009 Pallastica lateritia (Hering, 1928) Pallastica meloui (Riel, 1909) Pallastica sericeofasciata (Aurivillius, 1921) Pehria umbrina (Aurivillius, 1909) Schausinna clementsi (Schaus, 1897) Sonitha libera (Aurivillius, 1914) Stenophatna hollandi (Tams, 1929) Stenophatna kahli (Tams, 1929) Theophasida obusta (Tams, 1929) Theophasida valkyria Zolotuhin & Prozorov, 2010 Lecithoceridae Odites cuculans Meyrick, 1918 Limacodidae Parasa chapmani Kirby, 1892 Zinara nervosa Walker, 1869 Lymantriidae Euproctis pygmaea (Walker, 1855) Knappetra fasciata (Walker, 1855) Naroma signifera Walker, 1856 Olapa tavetensis (Holland, 1892) Otroeda manifesta (Swinhoe, 1903) Noctuidae Acantholipes circumdata (Walker, 1858) Achaea jamesoni L. B. 
Prout, 1919 Acontia citrelinea Bethune-Baker, 1911 Aegocera fervida (Walker, 1854) Aegocera obliqua Mabille, 1893 Aegocera rectilinea Boisduval, 1836 Aegocera tigrina (Druce, 1882) Agrotis catenifera Walker, 1869 Agrotis hemileuca Walker, 1869 Aletia consanguis (Guenée, 1852) Aletopus ruspina (Aurivillius, 1909) Argyrolopha punctilinea Prout, 1921 Asota chionea (Mabille, 1878) Athetis partita (Walker, 1857) Callopistria maillardi (Guenée, 1862) Colpocheilopteryx operatrix (Wallengren, 1860) Egnasia scoliogramma Prout, 1921 Epischausia dispar (Rothschild, 1896) Ericeia congregata (Walker, 1858) Ericeia lituraria (Saalmüller, 1880) Feliniopsis kuehnei Hacker & Fibiger, 2007 Feliniopsis parvula Hacker & Fibiger, 2007 Feliniopsis sinaevi Hacker & Mey, 2010 Halochroa aequatoria (Mabille, 1879) Helicoverpa assulta (Guenée, 1852) Heraclia aemulatrix (Westwood, 1881) Heraclia longipennis (Walker, 1854) Heraclia pardalina (Walker, 1869) Hespagarista caudata (Dewitz, 1879) Hypena obacerralis Walker, [1859] Janseodes melanospila (Guenée, 1852) Mentaxya ignicollis (Walker, 1857) Metagarista maenas (Herrich-Schäffer, 1853) Omphaloceps triangularis (Mabille, 1893) Ophiusa david (Holland, 1894) Oraesia provocans Walker, [1858] Schausia gladiatoria (Holland, 1893) Schausia leona (Schaus, 1893) Sciatta inconcisa Walker, 1869 Trigonodes exportata Guenée, 1852 Tuerta chrysochlora Walker, 1869 Nolidae Eligma allaudi Pinhey, 1968 Notodontidae Antheua simplex Walker, 1855 Antheua trifasciata (Hampson, 1909) Arciera postalba Kiriakoff, 1960 Arciera roseiventris Kiriakoff, 1960 Arciera rufescens (Kiriakoff, 1962) Boscawenia bryki (Schultze, 1934) Boscawenia caradrinoides (Schultze, 1934) Boscawenia incerta (Schultze, 1934) Boscawenia jaspidea (Schultze, 1934) Catarctia divisa (Walker, 1855) Desmeocraera chloeropsis (Holland, 1893) Desmeocraera congoana Aurivillius, 1900 Desmeocraera geminata Gaede, 1928 Desmeocraera sagittata Gaede, 1928 Epidonta insigniata (Gaede, 1932) Haplozana nigrolineata Aurivillius, 1901 Scaeopteryx curvatula (Rothschild, 1917) Scalmicauda adusta Kiriakoff, 1963 Scalmicauda rectilinea (Gaede, 1928) Scrancia stictica Hampson, 1910 Oecophoridae Orygocera carnicolor Walsingham, 1897 Pseudoprotasis canariella Walsingham, 1897 Psychidae Melasina imperfecta Meyrick, 1922 Melasina polycapnias Meyrick, 1922 Melasina scrutaria Meyrick, 1922 Mesopolia inconspicua Walsingham, 1897 Narycia centropa Meyrick, 1922 Pterophoridae Crocydoscelus ferrugineum Walsingham, 1897 Lantanophaga pusillidactylus (Walker, 1864) Megalorhipida leucodactylus (Fabricius, 1794) Pterophorus candidalis (Walker, 1864) Pterophorus spissa (Bigot, 1969) Stenoptilodes taprobanes (Felder & Rogenhofer, 1875) Saturniidae Bunaeopsis licharbas (Maassen & Weymer, 1885) Decachorda congolana Bouvier, 1930 Decachorda inspersa (Hampson, 1910) Epiphora congolana (Bouvier, 1929) Epiphora vacuna (Westwood, 1849) Gonimbrasia rectilineata (Sonthonnax, 1899) Gonimbrasia tyrrhea (Cramer, 1775) Goodia dimonica Darge, 2008 Goodia hierax Jordan, 1922 Goodia lunata Holland, 1893 Goodia unguiculata Bouvier, 1936 Lobobunaea phaedusa (Drury, 1782) Lobobunaea rosea (Sonthonnax, 1899) Micragone agathylla (Westwood, 1849) Micragone caliginosa Darge, 2010 Micragone ducorpsi (De Fleury, 1925) Micragone elisabethae Bouvier, 1930 Micragone joiceyi Bouvier, 1930 Micragone lichenodes (Holland, 1893) Micragone loutemboensis Darge, 2010 Micragone morini Rougeot, 1977 Micragone neonubifera Rougeot, 1979 Nudaurelia alopia Westwood, 1849 Nudaurelia bouvieri (Le 
Moult, 1933) Nudaurelia emini (Butler, 1888) Orthogonioptilum andreasum Rougeot, 1967 Orthogonioptilum fontainei Rougeot, 1962 Orthogonioptilum prox Karsch, 1892 Pseudantheraea discrepans (Butler, 1878) Pseudaphelia kaeremii Bouvier, 1927 Pseudaphelia simplex Rebel, 1906 Pseudimbrasia deyrollei (J. Thomson, 1858) Pseudobunaea parathyrrena (Bouvier, 1927) Sesiidae Chamanthedon brillians (Beutenmüller, 1899) Chamanthedon tropica (Beutenmüller, 1899) Conopia auronitens (Le Cerf, 1913) Conopia nuba (Beutenmüller, 1899) Conopia olenda (Beutenmüller, 1899) Melittia auriplumia Hampson, 1910 Melittia occidentalis Le Cerf, 1917 Similipepsis violacea Le Cerf, 1911 Synanthedon albiventris (Beutenmüller, 1899) Tipulamima festiva (Beutenmüller, 1899) Tipulamima malimba (Beutenmüller, 1899) Sphingidae Grillotius bergeri (Darge, 1973) Hippotion aporodes Rothschild & Jordan, 1912 Hippotion irregularis (Walker, 1856) Leucophlebia afra Karsch, 1891 Leucostrophus commasiae (Walker, 1856) Neopolyptychus consimilis (Rothschild & Jordan, 1903) Neopolyptychus prionites (Rothschild & Jordan, 1916) Nephele discifera Karsch, 1891 Nephele maculosa Rothschild & Jordan, 1903 Nephele oenopion (Hübner, [1824]) Nephele rectangulata Rothschild, 1895 Nephele vau (Walker, 1856) Phylloxiphia bicolor (Rothschild, 1894) Phylloxiphia oberthueri (Rothschild & Jordan, 1903) Platysphinx constrigilis (Walker, 1869) Platysphinx stigmatica (Mabille, 1878) Polyptychus andosa Walker, 1856 Polyptychus carteri (Butler, 1882) Polyptychus enodia (Holland, 1889) Polyptychus murinus Rothschild, 1904 Polyptychus thihongae Bernardi, 1970 Pseudoclanis admatha Pierre, 1985 Pseudoclanis postica (Walker, 1856) Pseudoclanis rhadamistus (Fabricius, 1781) Rhadinopasa hornimani (Druce, 1880) Temnora albilinea Rothschild, 1904 Temnora atrofasciata Holland, 1889 Temnora crenulata (Holland, 1893) Temnora curtula Rothschild & Jordan, 1908 Temnora eranga (Holland, 1889) Temnora funebris (Holland, 1893) Temnora griseata Rothschild & Jordan, 1903 Temnora hollandi Clark, 1920 Temnora livida (Holland, 1889) Temnora ntombi Darge, 1975 Temnora plagiata Walker, 1856 Temnora rattrayi Rothschild, 1904 Temnora sardanus (Walker, 1856) Temnora scitula (Holland, 1889) Temnora spiritus (Holland, 1893) Temnora stevensi Rothschild & Jordan, 1903 Theretra orpheus (Herrich-Schäffer, 1854) Thyrididae Arniocera viridifasciata (Aurivillius, 1900) Marmax semiaurata (Walker, 1854) Tineidae Ceratophaga vastellus (Zeller, 1852) Cimitra fetialis (Meyrick, 1917) Criticonoma episcardina (Gozmány, 1965) Dasyses rugosella (Stainton, 1859) Hyperbola zicsii Gozmány, 1965 Machaeropteris baloghi Gozmány, 1965 Monopis megalodelta Meyrick, 1908 Monopis monachella (Hübner, 1796) Morophaga soror Gozmány, 1965 Oxymachaeris euryzancla Meyrick, 1918 Perissomastix pyroxantha (Meyrick, 1914) Pitharcha latriodes (Meyrick, 1917) Tiquadra lichenea Walsingham, 1897 Tortricidae Accra viridis (Walsingham, 1891) Ancylis argenticiliana Walsingham, 1897 Archips symmetra (Meyrick, 1918) Bactra bactrana (Kennel, 1901) Cydia hemisphaerana (Walsingham, 1897) Eccopsis praecedens Walsingham, 1897 Enarmoniodes praetextana (Walsingham, 1897) Idiothauma africanum Walsingham, 1897 Labidosa ochrostoma (Meyrick, 1918) Metendothenia balanacma (Meyrick, 1914) Mictocommosis argus (Walsingham, 1897) Sanguinograptis albardana (Snellen, 1872) Zygaenidae Astyloneura chlorotica (Hampson, 1920) Saliunca mimetica Jordan, 1907 Saliunca nkolentangensis Strand, 1913 Saliunca rubriventris Holland, 1920 References External links 
Moths Moths Republic of the Congo